Ivory
https://en.wikipedia.org/wiki/Ivory
Ivory is a hard, white material from the tusks (traditionally elephants') and teeth of animals that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same regardless of the species of origin, but ivory contains structures of mineralised collagen. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, "ivory" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed. Ivory can also be produced synthetically, which, unlike natural ivory, does not require retrieving material from animals. Tagua nuts can also be carved like ivory.

The trade in finished ivory goods has its origins in the Indus Valley, where ivory was abundant and used for trade in the Harappan civilization. Finished ivory products found at Harappan sites include kohl sticks, pins, awls, hooks, toggles, combs, game pieces, dice, inlay and other personal ornaments.

Ivory has been valued since ancient times in art and manufacturing for making a range of items, from ivory carvings to false teeth, piano keys, fans, and dominoes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, orca, narwhal and warthog is used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors.

The national and international trade in natural ivory of threatened species such as African and Asian elephants is illegal. The word ivory ultimately derives, through the Latin ebur, from an ancient Egyptian word for 'elephant'.

Uses

Both the Greek and Roman civilizations practiced ivory carving to make large quantities of high-value works of art, precious religious objects, and decorative boxes for costly objects. Ivory was often used to form the whites of the eyes of statues.

There is some evidence of either whale or walrus ivory used by the ancient Irish. Solinus, a Roman writer of the 3rd century, claimed that the Celtic peoples in Ireland would decorate their sword-hilts with the 'teeth of beasts that swim in the sea'. Adomnan of Iona wrote a story about St Columba giving a sword decorated with carved ivory as a gift that a penitent would bring to his master so he could redeem himself from slavery.

The Syrian and North African elephant populations were reduced to extinction, probably due to the demand for ivory in the Classical world.

The Chinese have long valued ivory for both art and utilitarian objects. Early reference to the Chinese export of ivory is recorded after the Chinese explorer Zhang Qian ventured to the west to form alliances enabling the eventual free movement of Chinese goods to the west; as early as the first century BC, ivory was moved along the Northern Silk Road for consumption by western nations. Southeast Asian kingdoms included tusks of the Indian elephant in their annual tribute caravans to China. Chinese craftsmen carved ivory to make everything from images of deities to the pipe stems and end pieces of opium pipes.

In Japan, ivory carvings became popular in the 17th century during the Edo period, and many netsuke and kiseru, on which animals and legendary creatures were carved, and inro, on which ivory was inlaid, were made.
From the mid-1800s, the new Meiji government's policy of promoting and exporting arts and crafts led to the frequent display of elaborate ivory crafts at world's fairs. The finest works were purchased by Western museums, wealthy collectors, and the Japanese Imperial Family.

The Buddhist cultures of Southeast Asia, including Myanmar, Thailand, Laos and Cambodia, traditionally harvested ivory from their domesticated elephants. Ivory was prized for containers due to its ability to keep an airtight seal. It was also commonly carved into elaborate seals utilized by officials to "sign" documents and decrees by stamping them with their unique official seal. In Southeast Asian countries where Muslim Malay peoples live, such as Malaysia, Indonesia and the Philippines, ivory was the material of choice for making the handles of kris daggers. In the Philippines, ivory was also used to craft the faces and hands of Catholic icons and images of saints prevalent in the Santero culture.

Tooth and tusk ivory can be carved into a vast variety of shapes and objects. Examples of modern carved ivory objects are okimono, netsuke, jewelry, flatware handles, furniture inlays, and piano keys. Additionally, warthog tusks and teeth from sperm whales, orcas and hippos can also be scrimshawed or superficially carved, thus retaining their morphologically recognizable shapes.

As trade with Africa expanded during the first part of the 1800s, ivory became readily available. At one time, up to 90 percent of the ivory imported into the United States was processed in Connecticut, where Deep River and Ivoryton became the centers of ivory milling in the 1860s, in particular due to the demand for ivory piano keys.

Ivory usage in the last thirty years has moved towards mass production of souvenirs and jewelry. In Japan, the increase in wealth sparked consumption of solid ivory hanko (name seals), which before this time had been made of wood. These hanko can be carved in a matter of seconds using machinery and were partly responsible for the massive decline of the African elephant in the 1980s, when the population went from 1.3 million to around 600,000 in ten years.

Consumption before plastics

Before plastics were introduced, ivory had many ornamental and practical uses, mainly because of the white color it presents when processed. It was formerly used to make cutlery handles, billiard balls, piano keys, Scottish bagpipes, buttons and a wide range of ornamental items. Synthetic substitutes for ivory in most of these uses have been developed since 1800: the billiard industry challenged inventors to come up with an alternative material that could be manufactured, and the piano industry abandoned ivory as a key covering material in the 1970s.

Ivory can be taken from dead animals; however, most ivory came from elephants that were killed for their tusks. For example, acquiring 40 tons of ivory in 1930 required the killing of approximately 700 elephants. Other animals which are now endangered were also preyed upon, for example hippos, which have very hard white ivory prized for making artificial teeth. In the first half of the 20th century, Kenyan elephant herds were devastated by the demand for ivory to be used for piano keys.

During the Art Deco era, from 1912 to 1940, dozens (if not hundreds) of European artists used ivory in the production of chryselephantine statues.
Two of the most frequent users of ivory in their sculptured artworks were Ferdinand Preiss and Claire Colinet.

Mechanical characteristics

While many uses of ivory are purely ornamental, it often must be carved and manipulated into different shapes to achieve the desired form. Other applications, such as ivory piano keys, introduce repeated wear and surface handling of the material. It is therefore essential to consider the mechanical properties of ivory when designing alternatives.

Elephant tusks are the animal's incisors, so the composition of ivory is unsurprisingly similar to that of teeth in several other mammals. It is composed of dentine, a biomineral composite constructed from collagen fibers mineralized with hydroxyapatite. This composite lends ivory the impressive mechanical properties (high stiffness, strength, hardness, and toughness) required for its use in the animal's day-to-day activities. Ivory has a measured hardness of 35 on the Vickers scale, exceeding that of bone. It also has a flexural modulus of 14 GPa, a flexural strength of 378 MPa, and a fracture toughness of 2.05 MPa·m^1/2. These measured values indicate that ivory mechanically outperforms most of its common alternatives, including celluloid plastic and polyethylene terephthalate.

Ivory's mechanical properties result from the microstructure of the dentine tissue. It is thought that the structural arrangement of mineralized collagen fibers could contribute to the checkerboard-like Schreger pattern observed in polished ivory samples, which is often used as an attribute in ivory identification. As well as being an optical feature, the Schreger pattern could point towards a micropattern well suited to preventing crack propagation by dispersing stresses. Additionally, this intricate microstructure lends a strong anisotropy to ivory's mechanical characteristics. Separate hardness measurements on three orthogonal tusk directions indicated that circumferential planes of tusk had up to 25% greater hardness than radial planes of the same specimen. During hardness testing, inelastic and elastic recovery was observed on circumferential planes while the radial planes displayed plastic deformation, implying that ivory has directional viscoelasticity. These anisotropic properties can be explained by the reinforcement of collagen fibers in the composite oriented along the circumference.
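These figures can be set side by side with those of the substitute materials named above. The following is a minimal Python sketch: the ivory values are the ones quoted in this section, while the celluloid and PET figures are rough, assumed ballparks included only to make the comparison concrete.

```python
# Illustrative comparison of ivory against two historical substitutes.
# Ivory values are quoted in the text above; the plastic values are
# rough literature ballparks, assumed here for illustration only.
materials = {
    "ivory":     {"E_GPa": 14.0, "sigma_MPa": 378.0},
    "celluloid": {"E_GPa": 2.0,  "sigma_MPa": 60.0},   # assumed ballpark
    "PET":       {"E_GPa": 2.8,  "sigma_MPa": 80.0},   # assumed ballpark
}

for name, p in materials.items():
    print(f"{name:10s} flexural modulus = {p['E_GPa']:5.1f} GPa, "
          f"flexural strength = {p['sigma_MPa']:6.1f} MPa")

# Ratio of ivory's stiffness to each substitute's:
ivory_E = materials["ivory"]["E_GPa"]
for name, p in materials.items():
    if name != "ivory":
        print(f"ivory is ~{ivory_E / p['E_GPa']:.0f}x stiffer than {name}")
```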
Availability

Owing to the rapid decline in the populations of the animals that produce it, the importation and sale of ivory in many countries is banned or severely restricted. In the ten years preceding the 1989 decision by CITES to ban international trade in African elephant ivory, the population of African elephants declined from 1.3 million to around 600,000. Investigators from the Environmental Investigation Agency (EIA) found that CITES sales of stockpiles from Singapore and Burundi (270 tonnes and 89.5 tonnes respectively) had created a system that increased the value of ivory on the international market, thus rewarding international smugglers and giving them the ability to control the trade and continue smuggling new ivory.

Since the ivory ban, some southern African countries have claimed their elephant populations are stable or increasing, and have argued that ivory sales would support their conservation efforts. Other African countries oppose this position, stating that renewed ivory trading puts their own elephant populations under greater threat from poachers reacting to demand.

In 1997, CITES allowed the sale of 49 tonnes of ivory from Zimbabwe, Namibia and Botswana to Japan. In 2007, under pressure from the International Fund for Animal Welfare (IFAW), eBay banned all international sales of elephant-ivory products. The decision came after several mass slaughters of African elephants, most notably the 2006 Zakouma elephant slaughter in Chad. The IFAW found that up to 90% of the elephant-ivory transactions on eBay violated eBay's own wildlife policies and could potentially be illegal. In October 2008, eBay expanded the ban, disallowing any sales of ivory on eBay.

In 2008, a further sale of 108 tonnes from the three countries and South Africa was made to Japan and China. The inclusion of China as an "approved" importing country created enormous controversy, despite being supported by CITES, the World Wide Fund for Nature and Traffic, who argued that China had controls in place and that the sale might depress prices. However, the price of ivory in China skyrocketed. Some believe this may be due to deliberate price fixing by those who bought the stockpile, echoing the warnings from the Japan Wildlife Conservation Society about price-fixing after the 1997 sales to Japan, and the monopoly given to traders who bought stockpiles from Burundi and Singapore in the 1980s.

A 2019 peer-reviewed study reported that the rate of African elephant poaching was in decline, with the annual poaching mortality rate peaking at over 10% in 2011 and falling to below 4% by 2017. The study found that the "annual poaching rates in 53 sites strongly correlate with proxies of ivory demand in the main Chinese markets, whereas between-country and between-site variation is strongly associated with indicators of corruption and poverty." Based on these findings, the study's authors recommended action both to reduce demand for ivory in China and other main markets and to decrease corruption and poverty in Africa.

In 2006, nineteen African countries signed the "Accra Declaration" calling for a total ivory trade ban, and in 2007 twenty range states attended a meeting in Kenya calling for a 20-year moratorium.

Methods of obtaining ivory can be divided into:

- Shooting the elephant to take its tusks: this is the method of concern here.
- Taking tusks from an elephant which has died of natural causes.
- Taking tusks from an elephant which has had to be put down for another reason, for example severe arthritis, or because its last molar teeth are worn out and it can no longer chew its food.

Among working elephants which use their tusks to carry logs, there is a best length for their tusks. In former times in India, their tusks were often cut back to this length (and the shortened tusks' ends were often bound in copper). This periodically freed pieces of ivory for the carving trade.

Controversy and conservation issues

The use and trade of elephant ivory have become controversial because they have contributed to seriously declining elephant populations in many countries. It is estimated that consumption in Great Britain alone in 1831 amounted to the deaths of nearly 4,000 elephants. In 1975, the Asian elephant was placed on Appendix I of the Convention on International Trade in Endangered Species (CITES), which prevents international trade between member states of species that are threatened by trade. The African elephant was placed on Appendix I in January 1990.
Since then, some southern African countries have had their populations of elephants "downlisted" to Appendix II, allowing the domestic trade of non-ivory items; there have also been two "one-off" sales of ivory stockpiles.

In June 2015, more than a ton of confiscated ivory was crushed in New York City's Times Square by the Wildlife Conservation Society to send a message that the illegal trade will not be tolerated. The ivory, confiscated in New York and Philadelphia, was sent up a conveyor belt into a rock crusher. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the slaughter of up to 35,000 elephants a year in Africa. In June 2018, Conservative MEPs' deputy leader Jacqueline Foster urged the EU to follow the UK's lead and introduce a tougher ivory ban across Europe.

China was the biggest market for poached ivory but announced in May 2015 that it would phase out the legal domestic manufacture and sale of ivory products. In September of the same year, China and the U.S. announced they would "enact a nearly complete ban on the import and export of ivory." The Chinese market has a high degree of influence on the elephant population.

Alternatives

Fossil mammoth tusks

Trade in the ivory from the tusks of dead woolly mammoths frozen in the tundra has occurred for 300 years and continues to be legal. Mammoth ivory is used today to make handcrafted knives and similar implements. Mammoth ivory is rare and costly because mammoths have been extinct for millennia and scientists are hesitant to sell museum-worthy specimens in pieces. Some estimates suggest that 10 million or more mammoths are still buried in Siberia.

Fossil walrus ivory

Fossil walrus ivory from animals that died before 1972 is legal to buy and sell in the United States, unlike many other types of ivory.

Elk ivory

The ancestors of elk had protruding teeth, also known as elk ivory, similar to the tusks of other animals. These served as protection from predators and for asserting dominance during the mating season. These early elk had much smaller antlers than modern species. As antlers evolved to become bigger, the use of the tusks diminished and they shrank over time, becoming nothing more than teeth in the elk's mouth. These teeth have the same chemical composition as the ivory of the heavily used and poached elephant tusks, making them another good alternative source of ivory, as the teeth can potentially be removed without harming the elk. Among American Indian tribes, elk teeth have major significance in jewelry. Worn by both women and men, in bracelets, earrings, and chokers, they carried a deeper meaning within the tribes: for women, they were believed to bring good luck and good health, while for men they marked the wearer as a good hunter.

Synthetic ivory

Ivory can also be produced synthetically.

Nuts

A species of hard nut is gaining popularity as a replacement for ivory, although its size limits its usability. Sometimes called vegetable ivory, or tagua, it is the seed endosperm of the ivory nut palm, commonly found in coastal rainforests of Ecuador, Peru and Colombia.
Infantry fighting vehicle
https://en.wikipedia.org/wiki/Infantry%20fighting%20vehicle
An infantry fighting vehicle (IFV), also known as a mechanized infantry combat vehicle (MICV), is a type of armoured fighting vehicle and armoured personnel carrier used to carry infantry into battle and provide direct-fire support. The 1990 Treaty on Conventional Armed Forces in Europe defines an infantry fighting vehicle as "an armoured combat vehicle which is designed and equipped primarily to transport a combat infantry squad, and which is armed with an integral or organic cannon of at least 20 millimeters calibre and sometimes an antitank missile launcher". IFVs often serve both as the principal weapons system and as the mode of transport for a mechanized infantry unit.

Infantry fighting vehicles are distinct from general armored personnel carriers (APCs), which are transport vehicles armed only for self-defense and not specifically engineered to fight on their own. IFVs are designed to be more mobile than tanks and are equipped with a rapid-firing autocannon or a large conventional gun; they may include side ports for infantrymen to fire their personal weapons while on board.

The IFV rapidly gained popularity with armies worldwide due to a demand for vehicles with higher firepower than APCs that were less expensive and easier to maintain than tanks. Nevertheless, it did not supersede the APC concept altogether, due to the latter's continued usefulness in specialized roles. Some armies continue to maintain fleets of both IFVs and APCs.

History

Early Cold War

The infantry fighting vehicle concept evolved directly out of that of the armored personnel carrier. During the Cold War (1947-1991), armies fitted increasingly heavy weapons systems on APC chassis to deliver suppressive fire for infantry debussing from the vehicle's troop compartment. With the growing mechanization of infantry units worldwide, some armies also came to believe that the embarked personnel should fire their weapons from inside the protection of the APC and only fight on foot as a last resort. These two trends led to the IFV, with firing ports in the troop compartment and a crew-operated weapons system. The IFV established a new niche between those combat vehicles which functioned primarily as armored weapons-carriers and those which functioned as APCs.

During the 1950s, the Soviet, US, and most European armies had adopted tracked APCs. In 1958, however, the Federal Republic of Germany's newly organized Bundeswehr adopted the Schützenpanzer Lang HS.30 (also known simply as the SPz 12-3), which resembled a conventional tracked APC but carried a turret-mounted 20 mm autocannon that enabled it to engage other armored vehicles. The SPz 12-3 was the first purpose-built IFV. The Bundeswehr's doctrine called for mounted infantry to fight and maneuver alongside tank formations rather than simply being ferried to the edge of the battlefield before dismounting. Each SPz 12-3 could carry five troops in addition to a three-man crew. Despite this, the design lacked firing ports, forcing the embarked infantry to expose themselves through open hatches to return fire.

As the SPz 12-3 was being inducted into service, the French and Austrian armies adopted new APCs, the AMX-VCI and the Saurer 4K respectively, which possessed firing ports allowing embarked infantry to observe and fire their weapons from inside the vehicle. Austria subsequently introduced an IFV variant of the Saurer 4K which carried a 20 mm autocannon, making it the first vehicle of this class to possess both firing ports and a turreted weapons system.
In the early to mid-1960s, the Swedish Army adopted two IFVs armed with 20 mm autocannon turrets and roof firing hatches, the Pansarbandvagn 301 and Pansarbandvagn 302, having already experimented with the IFV concept during World War II in the Terrängbil m/42 KP, a wheeled, machine-gun-armed proto-IFV. Following the trend towards converting preexisting APCs into IFVs, the Dutch, US, and Belgian armies experimented with a variety of modified M113s during the late 1960s; these were collectively identified as the AIFV (Armored Infantry Fighting Vehicle). The first US M113-based IFV appeared in 1969; known as the XM765, it had a sharply angled hull, ten vision blocks, and a cupola-mounted 20 mm autocannon. The XM765 design, though rejected for service, later became the basis for the very similar Dutch YPR-765. The YPR-765 had five firing ports and a 25 mm autocannon with a co-axial machine gun.

The Soviet Army fielded its first tracked APC, the BTR-50, in 1957. Its first wheeled APC, the BTR-152, had been designed as early as the late 1940s. Early versions of both these lightly armored vehicles were open-topped and carried only general-purpose machine guns for armament. As Soviet strategists became more preoccupied with the possibility of a war involving weapons of mass destruction, they became convinced of the need to deliver mounted troops to a battlefield without exposing them to the radioactive fallout from an atomic weapon. The IFV concept was received favorably because it would enable a Soviet infantry squad to fight from inside its vehicle when operating in contaminated environments.

Soviet design work on a new tracked IFV began in the late 1950s, and the first prototype appeared as the Obyekt 765 in 1961. After evaluating and rejecting a number of other wheeled and tracked prototypes, the Soviet Army accepted the Obyekt 765 for service; it entered serial production as the BMP-1 in 1966. The BMP-1 was heavily armed and armored, combining the qualities of a light tank with those of the traditional APC. In addition to being amphibious and superior in cross-country mobility to its predecessors, the BMP-1 carried a 73 mm smoothbore cannon, a co-axial PKT machine gun, and a launcher for 9M14 Malyutka anti-tank missiles. Its hull had sufficiently heavy armor to resist .50 caliber armor-piercing ammunition along its frontal arc. Eight firing ports and vision blocks allowed the embarked infantry squad to observe and engage targets with rifles or machine guns.

The BMP-1's use of a relatively large-caliber main gun marked a departure from the Western trend of fitting IFVs with automatic cannon, which were more suitable for engaging low-flying aircraft, light armor, and dismounted personnel. The Soviet Union produced about 20,000 BMP-1s from 1966 to 1983, at which time it was considered the most widely adopted IFV design in the world. In Soviet service, the BMP-1 was ultimately superseded by the more sophisticated BMP-2 (in service from 1980) and the BMP-3 (in service from 1987). A similar vehicle known as the BMD-1 was designed to accompany Soviet airborne infantry and was for a number of years the world's only airborne IFV.

In 1971, the Bundeswehr adopted the Marder, which became increasingly heavily armored through its successive marks and, like the BMP, was later fitted as standard with a launcher for anti-tank guided missiles. Between 1973 and 1975, the French and Yugoslav armies developed the AMX-10P and BVP M-80 respectively, the first amphibious IFVs to appear outside the Soviet Union.
The Marder, AMX-10P, and M-80 were all armed with similar 20 mm autocannon and carried seven to eight passengers. They could also be armed with various anti-tank missile configurations.

Late Cold War

Wheeled IFVs did not begin appearing until 1976, when the Ratel was introduced in response to a South African Army specification for a wheeled combat vehicle suited to the demands of rapid offensives combining maximum firepower and strategic mobility. Unlike European IFVs, the Ratel was not designed to allow mounted infantrymen to fight in concert with tanks, but rather to operate independently across vast distances. South African officials chose a very simple, economical design because it helped reduce the significant logistical commitment necessary to keep heavier combat vehicles operational in undeveloped areas. Excessive track wear was also an issue in the region's abrasive, sandy terrain, making the Ratel's wheeled configuration more attractive. The Ratel was typically armed with a 20 mm autocannon featuring what was then a unique twin-linked ammunition feed, allowing its gunner to rapidly switch between armor-piercing and high-explosive ammunition. Other variants were fitted with mortars, a bank of anti-tank guided missiles, or a 90 mm cannon. Most notably, the Ratel was the first mine-protected IFV; it had a blastproof hull and was built to withstand the explosive force of the anti-tank mines favored by local insurgents.

Like the BMP-1, the Ratel proved to be a major watershed in IFV development, albeit for different reasons: until its debut, wheeled IFV designs had been evaluated unfavorably because they lacked the weight-carrying capacity and off-road mobility of tracked vehicles, and their wheels were more vulnerable to hostile fire. However, improvements during the 1970s in power trains, suspension technology, and tires had increased their potential strategic mobility. Reduced production, operation, and maintenance costs also helped make wheeled IFVs attractive to several nations.

During the late 1960s and early 1970s, the United States Army gradually abandoned its attempts to utilize the M113 as an IFV and refocused on creating a dedicated IFV design able to match the BMP. Although considered reliable, the M113 chassis did not meet the necessary requirements for protection or stealth. The US also considered the M113 too heavy and slow to serve as an IFV capable of keeping pace with tanks. Its MICV-65 program produced a number of unique prototypes, none of which were accepted for service owing to concerns about speed, armor protection, and weight. US Army evaluation staff were sent to Europe to review the AMX-10P and the Marder, both of which were rejected due to high cost, insufficient armor, or lackluster amphibious capabilities.

In 1973, the FMC Corporation developed and tested the XM723, a 21-ton tracked chassis which could accommodate three crew members and eight passengers. It initially carried a single 20 mm autocannon in a one-man turret, but in 1976 a two-man turret was introduced; this carried a 25 mm autocannon, such as the M242 or Oerlikon KBA, a co-axial machine gun, and a TOW anti-tank missile launcher. The XM723 possessed amphibious capability, nine firing ports, and spaced laminate armor on its hull. It was accepted for service with the US Army in 1980 as the Bradley Fighting Vehicle. Successive variants have been retrofitted with improved missile systems, gas particulate filter systems, Kevlar spall liners, and increased stowage.
The amount of space taken up by the hull and stowage modifications has reduced the number of passengers to six.

By 1982, 30,000 IFVs had entered service worldwide, and the IFV concept appeared in the doctrines of 30 national armies. The popularity of the IFV was increased by the growing trend on the part of many nations to mechanize armies previously dominated by light infantry. However, contrary to expectation, the IFV did not render the APC obsolete. The US, Russian, French, and German armies have all retained large fleets of IFVs and APCs, finding the APC more suitable for multi-purpose or auxiliary roles.

The British Army was one of the few Western armies which had neither recognized a niche for IFVs nor adopted a dedicated IFV design by the late 1970s. In 1980, it made the decision to adopt a new tracked armored vehicle, the FV510 Warrior. British doctrine holds that a vehicle should carry troops under protection to the objective and then give firepower support once they have disembarked. While normally classified as an IFV, the Warrior therefore fills the role of an APC in British service, and infantrymen do not remain embarked during combat.

Doctrine

The role of the IFV is closely linked to mechanized infantry doctrine. While some IFVs are armed with a direct-fire gun or anti-tank guided missiles for close infantry support, they are not intended to assault armored and mechanized forces on their own, with any type of infantry, mounted or not. Rather, the IFV's role is to give an infantry unit battlefield, tactical, and operational mobility during combined arms operations. Most IFVs complement tanks as part of an armored battalion, brigade, or division; others perform traditional infantry missions supported by tanks.

Early development of IFVs in a number of Western nations was promoted primarily by armor officers who wanted to integrate tanks with supporting infantry in armored divisions. There were a few exceptions to the rule: for example, the Bundeswehr's decision to adopt the SPz 12-3 was largely due to the experiences of Wehrmacht Panzergrenadiers who had been inappropriately ordered to undertake combat operations better suited to armor. Hence, the Bundeswehr concluded that infantry should only fight while mounted in their own armored vehicles, ideally supported by tanks. This doctrinal trend was later subsumed into the armies of other Western nations, including the US, leading to the widespread conclusion that IFVs should be confined largely to assisting the forward momentum of tanks.

The Soviet Army granted more flexibility to its IFV doctrine, allowing mechanized infantry to occupy terrain that compromised an enemy defense, carry out flanking movements, or lure armor into ill-advised counterattacks. While they still performed an auxiliary role to tanks, the notion of using IFVs in these types of engagements dictated that they be heavily armed, which was reflected in the BMP-1 and its successors. Additionally, Soviet airborne doctrine used the BMD series of IFVs to operate in concert with paratroops rather than traditional mechanized or armored formations.

IFVs assumed a new significance after the 1973 Arab-Israeli War. In addition to heralding the combat debut of the BMP-1, that conflict demonstrated the newfound significance of anti-tank guided missiles and the obsolescence of independent armored attacks. More emphasis was placed on combined arms offensives, and the importance of mechanized infantry to support tanks reemerged.
As a result of the 1973 Arab-Israeli War, the Soviet Union attached more infantry to its armored formations and the US accelerated its long-delayed IFV development program. An IFV capable of accompanying tanks in order to suppress anti-tank weapons and the hostile infantry operating them was seen as necessary to avoid the devastation wreaked on purely armored Israeli formations.

Design

The US Army defines vehicles classed as IFVs as having three essential characteristics: they are armed with at least a medium-caliber cannon or automatic grenade launcher, they are at least sufficiently protected against small arms fire, and they possess off-road mobility. It also identifies all IFVs as having some characteristics of an APC and a light tank. The United Nations Register of Conventional Arms (UNROCA) simply defines an IFV as any armored vehicle "designed to fight with soldiers on board" and "to accompany tanks". UNROCA makes a clear distinction between IFVs and APCs, as the former's primary mission is combat rather than general transport.
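Read together, the CFE Treaty's 20 mm cannon threshold and the US Army's three characteristics amount to a simple decision rule. The Python sketch below encodes that rule for illustration only; the field names and the classifier itself are assumptions of this sketch, not any official classification tool.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    """Illustrative fields drawn from the definitions quoted above."""
    transports_infantry_squad: bool   # designed to carry a combat squad
    cannon_calibre_mm: float          # integral/organic cannon calibre
    protected_small_arms: bool        # at least proof against small arms
    off_road: bool                    # cross-country mobility

def classify(v: Vehicle) -> str:
    """Rough IFV-vs-APC split per the CFE Treaty's 20 mm cannon
    threshold and the US Army's three characteristics."""
    if (v.transports_infantry_squad and v.cannon_calibre_mm >= 20
            and v.protected_small_arms and v.off_road):
        return "IFV"
    if v.transports_infantry_squad:
        return "APC"  # transport armed only for self-defense
    return "other"

# Example: a BMP-1-like vehicle (73 mm gun) classifies as an IFV,
# while a machine-gun-armed carrier falls on the APC side.
print(classify(Vehicle(True, 73.0, True, True)))   # -> IFV
print(classify(Vehicle(True, 12.7, True, True)))   # -> APC
```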
Protection

All IFVs possess armored hulls protected against rifle and machine gun fire, and some are equipped with active protection systems. Most have lighter armor than main battle tanks in order to preserve mobility; armies have generally accepted the risk of reduced protection in order to capitalize on an IFV's mobility, weight and speed. Fully enclosed hulls offer protection from artillery fragments and residual environmental contaminants, and limit the mounted infantry's exposure time during extended movements over open ground. Many IFVs also have sharply angled hulls that offer a relatively high degree of protection for their armor thickness. The BMP, Boragh, BVP M-80, and their respective variants all possess steel hulls whose distribution of armor and steep angling protect them during frontal advances. The BMP-1 was vulnerable to heavy machine guns at close range on its flanks and rear, leading to a variety of more heavily armored marks from 1979 onward. The Bradley possessed a lightweight aluminum alloy hull, which in most successive marks has been bolstered by the addition of explosive reactive and slat armor, spaced laminate belts, and steel track skirts. Over its life cycle, an IFV is expected to gain 30% more weight from armor additions.

As asymmetric conflicts become more common, an increasing concern with regard to IFV protection has been adequate countermeasures against land mines and improvised explosive devices. During the Iraq War, inadequate mine protection in US Bradleys forced their crews to resort to makeshift strategies such as lining the hull floors with sandbags. A few IFVs, such as the Ratel, have been specifically engineered to resist mine explosions.

Armament

IFVs may be equipped with turrets carrying autocannons of various calibers, low- or medium-velocity tank guns, anti-tank guided missiles, or automatic grenade launchers. With a few exceptions, such as the BMP-1 and the BMP-3, designs such as the Marder and the BMP-2 have set the trend of arming IFVs with an autocannon suitable for use against lightly armored vehicles, low-flying aircraft, and dismounted infantry. This reflected the growing inclination to view IFVs as auxiliaries of armored formations: a small- or medium-caliber autocannon was perceived as an ideal suppressive weapon to complement large-caliber tank fire.

IFVs armed with miniature tank guns did not prove popular, because many of the roles they were expected to perform were better performed by accompanying tanks. The BMP-1, the first IFV to carry a relatively large cannon, came under criticism during the 1973 Arab-Israeli War for its mediocre accuracy, due in part to the low velocities of its projectiles. During the Soviet-Afghan War, BMP-1 crews also complained that their armament lacked the elevation necessary to engage insurgents in mountainous terrain. The effectiveness of large-caliber, low-velocity guns like the 2A28 Grom on the BMP-1 and BMD-1 was also much reduced by the appearance of Chobham armor on Western tanks. The Ratel, which included a variant armed with a 90 mm low-velocity gun, was utilized in South African combat operations against Angolan and Cuban armored formations during the South African Border War, with mixed results. Although the Ratels succeeded in destroying a large number of Angolan tanks and APCs, they were hampered by many of the same problems as the BMP-1: mediocre standoff range, inferior fire control, and the lack of a stabilized main gun. The Ratels' heavy armament also tempted South African commanders to utilize them as light tanks rather than in their intended role of infantry support.

Another design feature of the BMP-1 proved more successful in establishing a precedent for future IFVs: its inclusion of an anti-tank missile system. This consisted of a rail launcher firing 9M14 Malyutka missiles, which had to be reloaded manually from outside the BMP's turret. Crew members had to expose themselves to enemy fire to reload the missiles, and they could not guide them effectively from inside the confines of the turret. The BMP-2 and later variants of the BMP-1 made use of semi-automatic guided missile systems. In 1978, the Bundeswehr became the first Western army to embrace this trend when it retrofitted all its Marders with launchers for MILAN anti-tank missiles. The US Army added a launcher for TOW anti-tank missiles to its fleet of Bradleys, despite the fact that this greatly reduced the interior space available for seating the embarked infantry. This was justified on the basis that the Bradley needed not only to engage and destroy other IFVs, but also to support tanks in destroying other tanks during combined arms operations.

Mobility

IFVs are designed to have the strategic and tactical mobility necessary to keep pace with tanks during rapid maneuvers. Some, like the BMD series, have airborne and amphibious capabilities. IFVs may be either wheeled or tracked; tracked IFVs are usually more heavily armored and possess greater carrying capacity, while wheeled IFVs are cheaper and simpler to produce, maintain, and operate. From a logistical perspective, wheeled IFVs are also ideal for an army without widespread access to transporters or a developed rail network for deploying its armor.
ICQ
https://en.wikipedia.org/wiki/ICQ
ICQ was a cross-platform instant messaging (IM) and VoIP client, created in June 1996 by Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father, Yossi Vardi. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group (now VK) in 2010.

The ICQ client application and service were initially released in November 1996, freely available to download. The business did not have traditional marketing and relied mostly on word-of-mouth advertising, with customers telling their friends about it, who then informed their friends, and so on. ICQ was among the first stand-alone instant messenger applications: while real-time chat was not in itself new (Internet Relay Chat [IRC] being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform.

At its peak around 2001, ICQ had more than 100 million registered accounts. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. In 2022, ICQ had about 11 million monthly users. The service was shut down on June 26, 2024, following an announcement on ICQ's website in May 2024 that the service would be discontinued.

Features of ICQ New

The last version of the service, launched in 2020 as "ICQ New", featured a number of messaging functions:

- Private chats: conversations between two users, with history synchronized to the cloud. A user could delete a sent message at any time, and a notification would be shown indicating that the message had been deleted.
- A chat with oneself, which could be used to save messages from group or private chats, or to upload media content as a form of cloud storage.
- Group chats with up to 25,000 simultaneous participants, which any user could create. Users could hide their phone number from other participants, see which group members had read a message, and switch off notifications for messages from specific group members.
- Audio and video calls with up to five people.
- Sending and receiving of audio messages, with automatic transcription to text.
- Channels, where authors could publish posts as text messages and attach media files, similar to a blog. Once a post was published, subscribers received a notification as they would from regular and group chats. The channel author could remain anonymous.
- Polls inside group chats.
- A bot API, which anyone could use to create a bot that performed specific actions and interacted with users.
- "Stickers": small images or photos expressing some form of emotion, selected from a provided sticker library or uploaded by users themselves. Machine learning was used to recommend stickers automatically.
- "Masks": images that could be superimposed onto the camera feed in real time during video calls, or onto photos to be sent to other users.
- Nicknames, which users could set to use in place of a phone number for others to search for and contact them.
- "Smart answers": short phrases appearing above the message box which could be used to answer messages. ICQ New analyzed the contents of a conversation and suggested a few pre-set answers.
UIN

ICQ users were identified and distinguished from one another by UINs, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis as the user name assigned to each user upon registration. Issued UINs started at 10,000 (5 digits), and every user received a UIN when first registering with ICQ. As of ICQ6, users were also able to log in using the e-mail address they had associated with their UIN during the initial registration process.

Unlike in other instant messaging software or web applications, on ICQ the only permanent user info was the UIN, although it was possible to search for other users using their associated e-mail address or any other detail they had made public in their account's public profile. In addition, users could change all of their personal information, including screen name and e-mail address, without having to re-register. Since 2000, ICQ and AIM users were able to add each other to their contact lists without the need for any external clients.

As a response to UIN theft and the sale of attractive UINs, ICQ started to store the e-mail addresses previously associated with a UIN. Stolen UINs could therefore sometimes be reclaimed if a valid primary e-mail address had been entered into the user profile.
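As a minimal illustration of the allocation scheme described above, the following Python sketch hands out identifiers sequentially from 10,000 and keeps an optional recovery e-mail per UIN. The class and method names are hypothetical; ICQ's actual server internals were never public.

```python
# Minimal sketch of sequential UIN allocation with an optional recovery
# e-mail, as described above. All names are hypothetical; this is an
# illustration of the scheme, not ICQ's actual implementation.
class UinRegistry:
    FIRST_UIN = 10_000  # issued UINs started at 10,000 (5 digits)

    def __init__(self):
        self._next_uin = self.FIRST_UIN
        self._accounts = {}  # uin -> recovery e-mail (or None)

    def register(self, email=None):
        """Assign the next sequential UIN; the e-mail is optional
        but enables later reclaiming of the account."""
        uin = self._next_uin
        self._next_uin += 1
        self._accounts[uin] = email
        return uin

    def reclaim(self, uin, email):
        """A stolen UIN could be reclaimed only if a valid primary
        e-mail was on file for it."""
        on_file = self._accounts.get(uin)
        return on_file is not None and on_file == email

registry = UinRegistry()
first = registry.register("alice@example.com")
print(first)                                         # -> 10000
print(registry.reclaim(first, "alice@example.com"))  # -> True
```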
History

The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. ICQ was one of the first text-based messengers to reach a wide range of users, and the technology Mirabilis developed for it was distributed free of charge. The technology's success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. In 2002, AOL successfully patented the technology.

After the purchase, the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who in 2007 also assumed responsibility for all of AOL's messaging business, ICQ resumed its growth; it was not only a highly profitable company, but one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director.

In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the US and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries and an important part of online culture. Popular UINs commanded prices of over 11,000 rubles in 2010. In September of that year, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.Ru invested in turning ICQ from a desktop client into a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014 the number of users began growing for the first time since the purchase. In March 2016, the source code of the client was released under the Apache license on GitHub.

In 2020, Mail.Ru Group launched a new version, "ICQ New", based on the original ICQ. The updated software was presented to the general public on April 6, 2020. During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred by the controversy over WhatsApp's privacy policy update: the number of downloads of the application increased 35-fold in the region.

On May 24, 2024, the main page of ICQ's website announced that the service would be shutting down on June 26, 2024. ICQ recommended that users migrate to VK Messenger and VK WorkSpace.

Development history

- ICQ 99a/b: the first releases that were widely available.
- ICQ 2000: incorporated into
Interdisciplinarity
https://en.wikipedia.org/wiki/Interdisciplinarity
Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several fields such as sociology, anthropology, psychology, and economics. It is related to an interdiscipline or an interdisciplinary field, an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station, mobile phone, or other such project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings.

The term interdisciplinary is applied within education and training pedagogies to describe studies that use the methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies, along with their specific perspectives, in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming, for example, requires an understanding of diverse disciplines to solve complex problems. An interdisciplinary approach may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, as with women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.

The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. Interdisciplinary education fosters cognitive flexibility and prepares students to tackle complex, real-world problems by integrating knowledge from multiple fields. This approach emphasizes active learning, critical thinking, and problem-solving skills, equipping students with the adaptability needed in an increasingly interconnected world. For example, the subject of land use may appear differently when examined by different disciplines, such as biology, chemistry, economics, geography, and politics.

Development

Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth-century terms, the concept has historical antecedents, most notably in Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, materials science, logistics and several other disciplines. Any broad-minded humanist project involves interdisciplinarity, and history offers many examples, such as seventeenth-century Leibniz's project to create a system of universal justice, which required linguistics, economics, management, ethics, legal philosophy, politics, and even sinology.
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. Interdisciplinary programs may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, which combines molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across the economic, social and environmental spheres, often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of the health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in interdisciplinary studies.

At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study; without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, disciplinarians and interdisciplinarians may be seen as standing in a complementary relation to one another.

Barriers

Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differing perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from other disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines, so interdisciplinary researchers may experience difficulty getting funding for their research.
In addition, untenured researchers know that, when they seek promotion and tenure, some of the evaluators will likely lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure. Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work.

Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar's or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as competition for diminishing funds.

Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions, and in so doing they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as for organizational and social entities concerned with education, in practice they face complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles.

Interdisciplinary studies and studies of interdisciplinarity

An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), and two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009).
The US's research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but closed as of 1 September 2014 as a result of administrative decisions at the University of North Texas.

An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge.

In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity: the former identifies a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, while the latter points toward a philosophical practice that is sometimes called 'field philosophy'.

Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis; that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions.

While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works, and does not, in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account.

Politics of interdisciplinary studies

Since 1998, there has been an ascendancy in the value of interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005, according to data from the National Center for Education Statistics (NCES).
Leshner, CEO of the American Association for the Advancement of Science, have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than as single-researcher, single-discipline ones. At the same time, many longstanding bachelor's programs in interdisciplinary studies, some in existence for 30 or more years, have been closed down in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others, such as the Department of Interdisciplinary Studies at Appalachian State University and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry, driven by perceptions of threat arising from the ascendancy of interdisciplinary studies over traditional academia. Examples Communication science: Communication studies draws on theories, models, and concepts from independent disciplines such as sociology, political science and economics, and develops them further. Environmental science: Environmental science is an interdisciplinary earth science aimed at addressing environmental issues such as global warming and pollution, and involves the use of a wide range of scientific disciplines including geology, chemistry, physics, ecology, and oceanography. Faculty members of environmental programs often collaborate in interdisciplinary teams to solve complex global environmental problems. Those who study areas of environmental policy such as environmental law, sustainability, and environmental justice may also seek knowledge in the environmental sciences to better develop their expertise and understanding in their fields. Knowledge management: The knowledge management discipline exists as a cluster of divergent schools of thought under an overarching knowledge management umbrella, building on work in computer science, economics, human resource management, information systems, organizational behavior, philosophy, psychology, and strategic management. Liberal arts education: A select realm of disciplines that cut across the humanities, social sciences, and hard sciences, initially intended to provide a well-rounded education. Several graduate programs exist in some form of Master of Arts in Liberal Studies to continue to offer this interdisciplinary course of study. Materials science: Field that combines the scientific and engineering aspects of materials, particularly solids. It covers the design, discovery and application of new materials by incorporating elements of physics, chemistry, and engineering. Permaculture: A holistic design science that provides a framework for making design decisions in any sphere of human endeavor, but especially in land use and resource security. Provenance research: Interdisciplinary research comes into play when clarifying the path of artworks into public and private art collections, and also in relation to human remains in natural history collections.
Sports science: Sport science is an interdisciplinary science that studies the problems and phenomena of sport and movement in cooperation with a number of other sciences, such as sociology, ethics, biology, medicine, biomechanics and pedagogy. Transport sciences: Transport science is a field that deals with the problems and phenomena of the world of transport, cooperating with legal, ecological, technical, psychological and pedagogical disciplines to study the movement of people, goods, and messages. (Hendrik Ammoser, Mirko Hoppe: Glossary of Transport and Transport Sciences, published in the series Discussion Papers from the Institute of Economics and Transport, Technische Universität Dresden, Dresden 2006.) Venture research: Venture research is an interdisciplinary research area located in the human sciences that deals with the conscious entering into and experiencing of borderline situations. For this purpose, the findings of evolutionary theory, cultural anthropology, the social sciences, behavioral research, differential psychology, ethics and pedagogy are cooperatively processed and evaluated. (Siegbert A. Warwitz: Vom Sinn des Wagens. Why people take on dangerous challenges. In: German Alpine Association (ed.): Berg 2006. Tyrolia Publishing House, Munich-Innsbruck-Bolzano, pp. 96-111.) Historical examples There are many examples of a particular idea arising in different disciplines at almost the same time. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective) to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in painting (with cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to an era shaped by the instant speed of electricity, which brought simultaneity. Efforts to simplify and defend the concept An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity. In turn, the interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: the number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration. Interdisciplinary knowledge and research are important because: "Creativity often requires interdisciplinary knowledge. Immigrants often make important contributions to their new field. Disciplinarians often commit errors which can be best detected by people familiar with two or more disciplines. Some worthwhile topics of research fall in the interstices among the traditional disciplines. Many intellectual, social, and practical problems require interdisciplinary approaches. Interdisciplinary knowledge and research serve to remind us of the unity-of-knowledge ideal. Interdisciplinarians enjoy greater flexibility in their research.
More so than narrow disciplinarians, interdisciplinarians often treat themselves to the intellectual equivalent of traveling in new lands. Interdisciplinarians may help breach communication gaps in the modern academy, thereby helping to mobilize its enormous intellectual resources in the cause of greater social rationality and justice. By bridging fragmented disciplines, interdisciplinarians might play a role in the defense of academic freedom."
Physical sciences
Science basics
Basics and measurement
15215
https://en.wikipedia.org/wiki/Internet%20Explorer
Internet Explorer
Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated as IE or MSIE) is a retired series of graphical web browsers developed by Microsoft and used in the Windows line of operating systems. While IE has been discontinued on most Windows editions, it remains supported on certain editions of Windows, such as Windows 10 LTSB/LTSC. It was first released in 1995 as part of the add-on package Plus! for Windows 95. Later versions were available as free downloads or in service packs, and were included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. New feature development for the browser was discontinued in 2016, and support ended on June 15, 2022, for the Windows 10 Semi-Annual Channel (SAC), in favor of its successor, Microsoft Edge. Internet Explorer was once the most widely used web browser, attaining a peak of 95% usage share by 2003, and has fallen out of general use since its retirement. Its dominance came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share declined with the launches of Firefox (2004) and Google Chrome (2008) and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox; versions for platforms Microsoft no longer supports, Internet Explorer for Mac and Internet Explorer for UNIX (Solaris and HP-UX); and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE, Windows Phone, and, previously, Windows Phone 7 (based on Internet Explorer 7). The browser has been scrutinized throughout its development for its use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and for security and privacy vulnerabilities, and the United States and the European Union have determined that the integration of Internet Explorer with Windows was to the detriment of fair browser competition. The core of Internet Explorer 11 will continue to be shipped and supported until at least 2029 as IE Mode, a feature of Microsoft Edge, enabling Edge to display web pages using Internet Explorer 11's Trident layout engine and other components. Through IE Mode, the underlying technology of Internet Explorer 11 partially exists on versions of Windows that do not support IE11 as a proper application, including newer versions of Windows 10, as well as Windows 11, Windows Server Insider Build 22463 and Windows Server Insider Build 25110. History Internet Explorer 1 The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to former project lead Ben Slivka, used source code from Spyglass, Inc. Mosaic, an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software.
Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code only sparingly. The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. Because Microsoft included the browser free of charge with its operating system, it owed Spyglass only the base quarterly fee rather than a percentage of revenues; the resulting dispute led to a lawsuit and a US$8 million settlement on January 22, 1997. Microsoft was also sued by SyNet Inc. in 1996 for trademark infringement, SyNet claiming that it owned the rights to the name "Internet Explorer"; the case ended with Microsoft paying $5 million to settle the lawsuit. Internet Explorer 2 Internet Explorer 2 is the second major version of Internet Explorer, released on November 28, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1. Internet Explorer 3 Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996, for Microsoft Windows and on January 8, 1997, for Apple Mac OS. Internet Explorer 4 Internet Explorer 4 is the fourth major version of Internet Explorer, released in September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX. It was the first version of Internet Explorer to use the Trident web engine. Internet Explorer 5 Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999, for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1). Internet Explorer 6 Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001, for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003. Internet Explorer 7 Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006, for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009. IE7 introduced tabbed browsing. Internet Explorer 8 Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (where the later default was Internet Explorer 11) and Windows Server 2008 R2. Internet Explorer 9 Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011, for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update. Internet Explorer 10 Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012, and is the default web browser for Windows 8 and Windows Server 2012. It became available for Windows 7 SP1 and Windows Server 2008 R2 SP1 in February 2013. Internet Explorer 11 Internet Explorer 11 is featured in Windows 8.1, Windows Server 2012 R2 and Windows RT 8.1, which were released on October 17, 2013. It includes an incomplete mechanism for syncing tabs.
It features a major update to its developer tools, enhanced scaling for high DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning and HTML5 full screen, and is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions. Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks. Internet Explorer 11's user agent string identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE". It also announces compatibility with Gecko (the browser engine of Firefox); a short detection sketch appears below. Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013. Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard in April 2019. End of life Microsoft Edge [Legacy] was officially unveiled on January 21, 2015, as "Project Spartan". On April 29, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser in Windows 10. However, Internet Explorer remained the default web browser on the Windows 10 Long Term Servicing Channel (LTSC) and on Windows Server until 2021, primarily for enterprise purposes. Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other legacy web technologies. The browser's MSHTML rendering engine also remains for compatibility reasons. Additionally, Microsoft Edge (Chromium) shipped with the "Internet Explorer mode" feature, which enables support for legacy internet applications; this is possible through use of the Trident MSHTML engine, the rendering code of Internet Explorer. Microsoft has committed to supporting Internet Explorer mode at least through 2029, with a one-year notice before it is discontinued. With the release of Microsoft Edge [Legacy], the development of new features for Internet Explorer ceased. Internet Explorer 11 was the final release, and Microsoft began the process of deprecating Internet Explorer, during which it is still maintained as part of Microsoft's support policies. Since January 12, 2016, only the latest version of Internet Explorer available for each version of Windows has been supported; at the time, nearly half of Internet Explorer users were using an unsupported version. In February 2019, Microsoft Chief of Security Chris Jackson recommended that users stop using Internet Explorer as their default browser. Various websites have dropped support for Internet Explorer. On June 1, 2020, the Internet Archive removed Internet Explorer from its list of supported browsers, due to the browser's dated nature. Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021. WordPress also dropped support for the browser in July 2021. Microsoft disabled the normal means of launching Internet Explorer in Windows 11 and later versions of Windows 10, but it is still possible for users to launch the browser from the Control Panel's browser toolbar settings or via PowerShell.
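As noted above, IE11 dropped the "MSIE" token from its user agent string, leaving the Trident engine token and an rv: version. The following TypeScript sketch shows the detection logic sites historically used; the detectIE helper is hypothetical, written here purely for illustration and not an API of the browser.

    // Hypothetical helper: return the IE major version advertised in a
    // user agent string, or null if the string is not from Internet Explorer.
    function detectIE(userAgent: string): number | null {
      const msie = /MSIE (\d+)/.exec(userAgent); // IE 10 and earlier
      if (msie) return parseInt(msie[1], 10);
      const trident = /Trident\/.+rv:(\d+)/.exec(userAgent); // IE 11
      if (trident) return parseInt(trident[1], 10);
      return null;
    }
    // IE11 on Windows 10 reports a string like
    // "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko",
    // so detectIE returns 11 even though the "MSIE" token is absent.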
On June 15, 2022, Internet Explorer 11 support ended for the Windows 10 Semi-Annual Channel (SAC). Users on these versions of Windows 10 were redirected to Microsoft Edge starting on February 14, 2023, and visual references to the browser (such as icons on the taskbar) were to be removed on June 13, 2023. However, various organizations disapproved, and on May 19, 2023, Microsoft withdrew the change. Other versions of Windows that were still supported at the time were unaffected: Windows 7 ESU, Windows 8.x, Windows RT; Windows Server 2008/R2 ESU, Windows Server 2012/R2 and later; and Windows 10 LTSB/LTSC continued to receive updates until their respective end-of-life dates, and Internet Explorer remains supported on them until those dates. IE7 was supported until October 10, 2023, alongside the end of support for Windows Embedded Compact 2013, while IE9 is supported until January 13, 2026, alongside the end of (paid and grandfathered) Premium Assurance support for customers on Windows Server 2008. Barring additional changes to the support policy, Internet Explorer 11 will be supported until January 13, 2032, concurrent with the end of support for Windows 10 IoT Enterprise LTSC 2021. Features Internet Explorer was designed to view a broad range of web pages and to provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time. Standards support Internet Explorer, using the MSHTML (Trident) browser engine: Supports HTML 4.01, parts of HTML5, CSS Level 1, Level 2, and Level 3, XML 1.0, and DOM Level 1, with minor implementation gaps. Fully supports XSLT 1.0 as well as an obsolete Microsoft dialect of XSLT often referred to as WD-xsl, which was loosely based on the December 1998 W3C Working Draft of XSL. Support for XSLT 2.0 never shipped: semi-official Microsoft bloggers indicated that development was underway, but no dates were announced. Almost full conformance to CSS 2.1 was added in the Internet Explorer 8 release; in 2011, the MSHTML browser engine in Internet Explorer 9 scored highest of all major browsers in the official W3C conformance test suite for CSS 2.1. Supports XHTML in Internet Explorer 9 (MSHTML Trident version 5.0); prior versions can render XHTML documents authored with HTML compatibility principles and served with a text/html MIME-type. Supports a subset of SVG in Internet Explorer 9 (MSHTML Trident version 5.0), excluding SMIL, SVG fonts and filters. Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript. Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by the W3C. Non-standard extensions Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in a number of web pages that appear broken in standards-compliant web browsers, and has introduced the need for a "quirks mode" in those browsers to allow for rendering improper elements meant for Internet Explorer.
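Which mode DOCTYPE sniffing actually selected for a page can be checked from script: the standard document.compatMode property reports "BackCompat" in quirks mode and "CSS1Compat" in standards mode. A minimal TypeScript sketch follows; the renderingMode helper is illustrative, not part of any browser API.

    // Report the rendering mode chosen by DOCTYPE sniffing for a document.
    function renderingMode(doc: Document): "quirks" | "standards" {
      return doc.compatMode === "BackCompat" ? "quirks" : "standards";
    }
    // A page with a modern <!DOCTYPE html> declaration logs "standards";
    // a page with no DOCTYPE at all logs "quirks".
    console.log(renderingMode(document));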
Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers. These include the innerHTML property, which provides access to the HTML string within an element (part of IE 5, and standardized as part of HTML5 roughly 15 years later, after all other browsers had implemented it for compatibility); the XMLHttpRequest object, which allows the sending of HTTP requests and receiving of HTTP responses, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich text editing of HTML documents (a short script sketch appears below). Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML. Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects HTML elements with JScript behaviors (known as HTML Components, HTC); the HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL); and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats in use on the web, which IE did not support until version 9. Other non-standard behaviors include support for vertical text (but in a syntax different from the W3C CSS3 candidate recommendation), support for a variety of image effects and page transitions not found in W3C CSS, support for obfuscated script code (in particular JScript.Encode), and support for embedding EOT fonts in web pages. Favicon Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files. Usability and accessibility Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to those of Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added in Internet Explorer 6 and Internet Explorer 7 respectively. Tabbed browsing can also be added to older versions by installing the MSN Search Toolbar or Yahoo Toolbar. Cache Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file known as Index.dat. Multiple Index.dat files exist which index different content: visited content, web feeds, visited URLs, cookies, etc. Prior to IE7, clearing the cache cleared the index, but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes. Caching has been improved in IE9.
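A minimal TypeScript sketch of the innerHTML and designMode extensions described above, both of which were later standardized; the iframe lookup is an assumption about page structure, made here only to keep the example self-contained.

    // innerHTML: parse an HTML string into child nodes of an element.
    const el = document.createElement("div");
    el.innerHTML = "<em>hello</em>";
    console.log(el.textContent); // "hello"

    // designMode: make a framed document editable, the basis of
    // early in-browser rich text editors.
    const frame = document.querySelector("iframe"); // assumes the page contains an iframe
    if (frame && frame.contentDocument) {
      frame.contentDocument.designMode = "on";
    }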
Group Policy Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or of the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication. Architecture Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe: WinInet.dll is the protocol handler for HTTP, HTTPS, and FTP, and handles all network communication over these protocols. URLMon.dll is responsible for MIME-type handling and download of web content, and provides a thread-safe wrapper around WinInet.dll and other protocol implementations. MSHTML.dll houses the MSHTML (Trident) browser engine introduced in Internet Explorer 4, which is responsible for displaying pages on-screen and handling the Document Object Model (DOM) of web pages. MSHTML.dll parses the HTML/CSS file and creates the internal DOM tree representation of it; it also exposes a set of APIs for runtime inspection and modification of the DOM tree. The DOM tree is further processed by a rendering engine, which then draws the internal representation on screen. IEFrame.dll contains the user interface and window of IE in Internet Explorer 7 and above. ShDocVw.dll provides the navigation, local caching and history functionalities for the browser. BrowseUI.dll is responsible for rendering the browser user interface such as menus and toolbars. Internet Explorer does not include any native scripting functionality; rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third-party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime, which allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting. Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level, and each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process, as it will not be constrained by protected mode. Extensibility Internet Explorer exposes a set of Component Object Model (COM) interfaces that allow add-ons to extend the functionality of the browser.
Extensibility is divided into two types: browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHOs). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose those features in the user interface. Content extensibility adds support for non-native content formats, allowing Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls, which run on Windows only but have vast potential to extend content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally or directly by a web site. Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component, the Add-on Performance Advisor, which shows a notification in the Notification Bar when the browser is launched if one or more installed add-ons exceed a pre-set performance threshold. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all. In addition, Windows RT cannot download or install ActiveX controls at all, although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer. Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed browser functionality inside a computer program or to create Internet Explorer shells. Security Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether a site is Internet- or intranet-based, as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions. Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, since executables originating from the Internet can be unsafe; this helps prevent the accidental installation of malware. Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected. In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted: it cannot make any system-wide changes. This mode can optionally be turned off, but this is not recommended. Protected Mode also effectively restricts the privileges of any add-ons, so that even if the browser or any add-on is compromised, the damage the security breach can cause is limited.
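ActiveX controls were also visible to page script, which is the origin of one of the era's most copied patterns: before IE7 exposed a native XMLHttpRequest, pages obtained one through the IE-only ActiveXObject constructor. A minimal TypeScript sketch of that pattern follows; the declare line stands in for the global that IE's JScript engine provided, and the ProgID shown is the commonly used MSXML identifier.

    // IE-only global provided by the JScript engine; declared here so the
    // sketch type-checks outside Internet Explorer.
    declare const ActiveXObject: new (progId: string) => XMLHttpRequest;

    function createXhr(): XMLHttpRequest {
      if (typeof XMLHttpRequest !== "undefined") {
        return new XMLHttpRequest(); // IE7+ and all modern browsers
      }
      return new ActiveXObject("Msxml2.XMLHTTP"); // IE6 and earlier
    }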
Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase. On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords." Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update. In 2010, Germany's Federal Office for Information Security, known by its German initials BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs. In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox. A 2017 browser security white paper by X41 D-Sec comparing Google Chrome, Microsoft Edge [Legacy], and Internet Explorer 11 came to similar conclusions, also based on sandboxing and support of legacy web technologies. Security vulnerabilities Internet Explorer has been subject to so many security vulnerabilities and concerns that the volume of criticism of IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet is made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than the viewing of a malicious web page for them to install themselves; this is known as a "drive-by install". There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert. A number of security flaws affecting IE originated not in the browser itself, but in the ActiveX-based add-ons it uses. Because these add-ons have the same privileges as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities. In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year. According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability, dating back to 2008, that had not been fixed for at least six hundred days.
Microsoft said that it had known about this vulnerability but that it was of exceptionally low severity, as the victim web site must be configured in a peculiar way for this attack to be feasible at all. In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer. Vulnerability exploited in attacks on U.S. firms In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2). The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above. The Australian and French governments issued similar warnings a few days later. Major vulnerability across versions On April 26, 2014, Microsoft issued a security advisory relating to a use-after-free vulnerability in Internet Explorer versions 6 through 11 that could allow "remote code execution". On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system. US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug was fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and advising users to take the additional step of ensuring their antivirus software was up to date. Symantec, a cybersecurity firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP." The vulnerability was resolved on May 1, 2014, with a security update. Market adoption and usage share The adoption rate of Internet Explorer seems to be closely related to that of Microsoft Windows, as it was the default web browser that came with Windows. Since the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, adoption accelerated greatly: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the infamous "first browser war" against Netscape. Netscape Navigator was the dominant browser from 1995 until 1997, but rapidly lost share to IE starting in 1998, eventually slipping behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. The case was eventually won by AOL, but by then it was too late, as Internet Explorer had already become the dominant browser. Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, itself an offshoot of Netscape. Approximate usage over time, based on various usage share counters, is averaged for the year overall, for the fourth quarter, or for the last month in the year, depending on the availability of references. Internet Explorer's market share fell below 50% in September 2010.
In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter. Industry adoption Browser Helper Objects are also used by many search engine companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications. Removal While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that shipped with an operating system remains a controversial one. The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including the Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE, because the service used ActiveX technology, which no other web browser supported. Impersonation by malware The popularity of Internet Explorer led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembled the real Internet Explorer but had fewer buttons and no search bar. If a user attempted to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this fake browser would be loaded instead. It also displayed a fake error message, claiming that the computer was infected with malware and that Internet Explorer had entered "Emergency Mode". It blocked access to legitimate sites such as Google if the user tried to access them.
Technology
Browsers
null
15223
https://en.wikipedia.org/wiki/Invertebrate
Invertebrate
Invertebrates are animals that neither develop nor retain a vertebral column (commonly known as a spine or backbone), which evolved from the notochord. It is a paraphyletic grouping including all animals excluding the chordate subphylum Vertebrata, i.e. vertebrates. Well-known phyla of invertebrates include arthropods, mollusks, annelids, echinoderms, flatworms, cnidarians, and sponges. The majority of animal species are invertebrates; one estimate puts the figure at 97%. Many invertebrate taxa have a greater number and diversity of species than the entire subphylum of Vertebrata. Invertebrates vary widely in size, from 10 μm (0.0004 in) myxozoans to the 9–10 m (30–33 ft) colossal squid. Some so-called invertebrates, such as the Tunicata and Cephalochordata, are actually sister chordate subphyla to Vertebrata, being more closely related to vertebrates than to other invertebrates. This makes the "invertebrates" paraphyletic, so the term has little significance in taxonomy. Etymology The word "invertebrate" comes from the Latin word vertebra, which means a joint in general, and sometimes specifically a joint from the spinal column of a vertebrate. The jointed aspect of vertebra is derived from the concept of turning, expressed in the root verto or vorto, to turn. The prefix in- means "not" or "without". Taxonomic significance The term invertebrates does not describe a taxon in the same way that Arthropoda, Vertebrata or Manidae do. Each of those terms describes a valid taxon, phylum, subphylum or family. "Invertebrata" is a term of convenience, not a taxon; it has very little circumscriptional significance except within the Chordata. The Vertebrata as a subphylum comprises such a small proportion of the Metazoa that to speak of the kingdom Animalia in terms of "Vertebrata" and "Invertebrata" has limited practicality. In a more formal taxonomy of Animalia, other attributes should logically precede the presence or absence of the vertebral column in constructing a cladogram, for example, the presence of a notochord. That would at least circumscribe the Chordata. However, even the notochord would be a less fundamental criterion than aspects of embryological development and symmetry, or perhaps the Bauplan. Despite this, the concept of invertebrates as a taxon of animals has persisted for over a century among the laity, and within the zoological community and in its literature it remains in use as a term of convenience for animals that are not members of the Vertebrata. The following text reflects earlier scientific understanding of the term and of those animals which have constituted it. According to this understanding, invertebrates do not possess a skeleton of bone, either internal or external. They include hugely varied body plans. Many have fluid-filled, hydrostatic skeletons, like jellyfish or worms. Others have hard exoskeletons, outer shells like those of insects and crustaceans. The most familiar invertebrates include the Protozoa, Porifera, Coelenterata, Platyhelminthes, Nematoda, Annelida, Echinodermata, Mollusca and Arthropoda. Arthropoda include insects, crustaceans and arachnids. Number of extant species By far the largest number of described invertebrate species are insects. Estimates of the number of described extant species for major invertebrate groups are given in version 2014.3 of the IUCN Red List of Threatened Species.
The IUCN estimates that 66,178 extant vertebrate species have been described, which means that over 95% of the described animal species in the world are invertebrates. Characteristics The trait that is common to all invertebrates is the absence of a vertebral column (backbone): this creates a distinction between invertebrates and vertebrates. The distinction is one of convenience only; it is not based on any clear biologically homologous trait, any more than the common trait of having wings functionally unites insects, bats, and birds, or than not having wings unites tortoises, snails and sponges. Being animals, invertebrates are heterotrophs, and require sustenance in the form of the consumption of other organisms. With a few exceptions, such as the Porifera, invertebrates generally have bodies composed of differentiated tissues. There is also typically a digestive chamber with one or two openings to the exterior. Morphology and symmetry The body plans of most multicellular organisms exhibit some form of symmetry, whether radial, bilateral, or spherical. A minority, however, exhibit no symmetry. One example of asymmetric invertebrates is the gastropods. This is easily seen in snails and sea snails, which have helical shells. Slugs appear externally symmetrical, but their pneumostome (breathing hole) is located on the right side. Other gastropods develop external asymmetry, such as Glaucus atlanticus, which develops asymmetrical cerata as it matures. The origin of gastropod asymmetry is a subject of scientific debate. Other examples of asymmetry are found in fiddler crabs and hermit crabs, which often have one claw much larger than the other. If a male fiddler crab loses its large claw, it will grow another on the opposite side after moulting. Further examples of asymmetry include sessile animals such as sponges; coral colonies (with the exception of the individual polyps, which exhibit radial symmetry); Alpheidae claws that lack pincers; and some copepods, polyopisthocotyleans, and monogeneans, which parasitize by attachment or residency within the gill chambers of their fish hosts. Nervous system Invertebrate neurons differ from mammalian cells. Invertebrate cells fire in response to similar stimuli as mammals, such as tissue trauma, high temperature, or changes in pH. The first invertebrate in which a neuron was identified was the medicinal leech, Hirudo medicinalis. Learning and memory using nociceptors have been described in the sea hare, Aplysia. Mollusk neurons are able to detect increasing pressures and tissue trauma. Neurons have been identified in a wide range of invertebrate species, including annelids, molluscs, nematodes and arthropods. Respiratory system One type of invertebrate respiratory system is the open respiratory system, composed of spiracles, tracheae, and tracheoles, which terrestrial arthropods use to transport metabolic gases to and from tissues. The distribution of spiracles can vary greatly among the many orders of insects, but in general each segment of the body can have only one pair of spiracles, each of which connects to an atrium and has a relatively large tracheal tube behind it. The tracheae are invaginations of the cuticular exoskeleton that branch (anastomose) throughout the body, with diameters from only a few micrometres up to 0.8 mm. The smallest tubes, tracheoles, penetrate cells and serve as sites of diffusion for water, oxygen, and carbon dioxide.
Gas may be conducted through the respiratory system by means of active ventilation or passive diffusion. Unlike vertebrates, insects do not generally carry oxygen in their haemolymph. A tracheal tube may contain ridge-like circumferential rings of taenidia in various geometries such as loops or helices. In the head, thorax, or abdomen, tracheae may also be connected to air sacs. Many insects, such as grasshoppers and bees, which actively pump the air sacs in their abdomen, are able to control the flow of air through their body. In some aquatic insects, the tracheae exchange gas through the body wall directly, in the form of a gill, or function essentially as normal via a plastron. Despite being internal, the tracheae of arthropods are shed during moulting (ecdysis). Reproduction Like vertebrates, most invertebrates reproduce at least partly through sexual reproduction. They produce specialized reproductive cells that undergo meiosis to produce smaller, motile spermatozoa or larger, non-motile ova. These fuse to form zygotes, which develop into new individuals. Others are capable of asexual reproduction, or sometimes both methods of reproduction. Extensive research with model invertebrate species such as Drosophila melanogaster and Caenorhabditis elegans has contributed much to our understanding of meiosis and reproduction. However, beyond the few model systems, the modes of reproduction found in invertebrates show incredible diversity. In one extreme example, it is estimated that 10% of oribatid mite species have reproduced asexually, without sexual reproduction, for more than 400 million years. Social interaction Social behavior is widespread in invertebrates, including cockroaches, termites, aphids, thrips, ants, bees, Passalidae, Acari, spiders, and more. Social interaction is particularly salient in eusocial species but applies to other invertebrates as well. Insects recognize information transmitted by other insects. Phyla The term invertebrates covers several phyla. One of these is the sponges (Porifera). They were long thought to have diverged from other animals early. They lack the complex organization found in most other phyla. Their cells are differentiated, but in most cases not organized into distinct tissues. Sponges typically feed by drawing in water through pores. Some speculate that sponges are not so primitive, but may instead be secondarily simplified. The Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Both have distinct tissues, but they are not organized into organs. There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, they are sometimes called diploblastic. The Echinodermata are radially symmetric and exclusively marine, including starfish (Asteroidea), sea urchins (Echinoidea), brittle stars (Ophiuroidea), sea cucumbers (Holothuroidea) and feather stars (Crinoidea). The largest animal phylum is also included within invertebrates: the Arthropoda, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. In addition, they possess a hardened exoskeleton that is periodically shed during growth.
Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share some traits with them, though not the hardened exoskeleton. The Nematoda, or roundworms, are perhaps the second largest animal phylum, and are also invertebrates. Roundworms are typically microscopic, and occur in nearly every environment where there is water. A number are important parasites. Smaller phyla related to them are the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Other invertebrates include the Nemertea, or ribbon worms, and the Sipuncula. Another phylum is Platyhelminthes, the flatworms. These were originally considered primitive, but it now appears they developed from more complex ancestors. Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha. The Rotifera, or rotifers, are common in aqueous environments. Invertebrates also include the Acanthocephala, or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and the Cycliophora. Also included are two of the most successful animal phyla, the Mollusca and Annelida. The former, which is the second-largest animal phylum by number of described species, includes animals such as snails, clams, and squids, and the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long been considered close relatives because of the common presence of trochophore larvae, but the annelids were considered closer to the arthropods because they are both segmented. Now, this is generally considered convergent evolution, owing to many morphological and genetic differences between the two phyla. Among lesser phyla of invertebrates are the Hemichordata, or acorn worms, and the Chaetognatha, or arrow worms. Other phyla include Acoelomorpha, Brachiopoda, Bryozoa, Entoprocta, Phoronida, and Xenoturbellida. Classification Invertebrates can be classified into several main categories, some of which are taxonomically obsolescent or debatable but still used as terms of convenience; each, however, is treated in its own article. Sponges (Porifera) Comb jellies (Ctenophora) Medusozoans and corals (Cnidaria) Acoels (Xenacoelomorpha) Flatworms (Platyhelminthes) Bristleworms, earthworms and leeches (Annelida) Insects, springtails, crustaceans, myriapods, chelicerates (Arthropoda) Chitons, snails, slugs, bivalves, tusk shells, cephalopods (Mollusca) Roundworms or threadworms (Nematoda) Rotifers (Rotifera) Tardigrades (Tardigrada) Scalidophores (Scalidophora) Lophophorates (Lophophorata) Velvet worms (Onychophora) Arrow worms (Chaetognatha) Gordian worms or horsehair worms (Nematomorpha) Ribbon worms (Nemertea) Placozoa Loricifera Starfishes, sea urchins, sea cucumbers, sea lilies and brittle stars (Echinodermata) Acorn worms, cephalodiscids and graptolites (Hemichordata) Lancelets (Amphioxiformes) Salps, pyrosomes, doliolids, larvaceans and sea squirts (Tunicata) Cycliophora (currently a monogeneric phylum) History The earliest animal fossils are of invertebrates. 665-million-year-old fossils in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia, have been interpreted as being early sponges. Some paleontologists suggest that animals appeared much earlier, possibly as early as 1 billion years ago, though they probably became multicellular in the Tonian.
Trace fossils such as tracks and burrows found in the late Neoproterozoic Era indicate the presence of triploblastic worms, roughly as large (about 5 mm wide) and complex as earthworms. Around 453 MYA, animals began diversifying, and many of the important groups of invertebrates diverged from one another. Fossils of invertebrates are found in various types of sediment from the Phanerozoic and are commonly used in stratigraphy. Classification Carl Linnaeus divided these animals into only two groups, the Insecta and the now-obsolete Vermes (worms). Jean-Baptiste Lamarck, who was appointed to the position of "Curator of Insecta and Vermes" at the Muséum National d'Histoire Naturelle in 1793, both coined the term "invertebrate" to describe such animals and divided the original two groups into ten, by splitting Arachnida and Crustacea from the Linnean Insecta, and Mollusca, Annelida, Cirripedia, Radiata, Coelenterata and Infusoria from the Linnean Vermes. They are now classified into over 30 phyla, from simple organisms such as sea sponges and flatworms to complex animals such as arthropods and molluscs. Significance Invertebrates are animals without a vertebral column. This has led to the conclusion that invertebrates are a group that deviates from the norm, the vertebrates. This has been said to be because researchers in the past, such as Lamarck, viewed vertebrates as a "standard": in Lamarck's theory of evolution, he believed that characteristics acquired through the evolutionary process involved not only survival, but also progression toward a "higher form", to which humans and vertebrates were closer than invertebrates were. Although goal-directed evolution has been abandoned, the distinction between invertebrates and vertebrates persists to this day, even though the grouping has been noted to be "hardly natural or even very sharp." Another reason cited for this continued distinction is that Lamarck created a precedent through his classifications that is now difficult to escape. It is also possible that some humans, being vertebrates themselves, believe that the group deserves more attention than invertebrates do. In any event, the 1968 edition of Invertebrate Zoology notes that "division of the Animal Kingdom into vertebrates and invertebrates is artificial and reflects human bias in favor of man's own relatives." The book also points out that the group lumps a vast number of species together, so that no one characteristic describes all invertebrates. In addition, some species included are only remotely related to one another, with some more related to vertebrates than to other invertebrates (see Paraphyly). In research For many centuries, invertebrates were neglected by biologists, in favor of big vertebrates and "useful" or charismatic species. Invertebrate biology was not a major field of study until the work of Linnaeus and Lamarck in the 18th century. During the 20th century, invertebrate zoology became one of the major fields of natural sciences, with prominent discoveries in the fields of medicine, genetics, palaeontology, and ecology. The study of invertebrates has also benefited law enforcement, as arthropods, and especially insects, were discovered to be a source of information for forensic investigators. Two of the most commonly studied model organisms nowadays are invertebrates: the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans.
They have long been the most intensively studied model organisms, and were among the first life-forms to be genetically sequenced. This was facilitated by the severely reduced state of their genomes, although many genes, introns, and linkages have been lost in those lineages. Analysis of the starlet sea anemone genome has emphasised the importance of sequencing the genomes of sponges, placozoans, and choanoflagellates as well, in explaining the arrival of the 1,500 ancestral genes unique to animals. Invertebrates are also used by scientists in the field of aquatic biomonitoring to evaluate the effects of water pollution and climate change.
Biology and health sciences
General classification
null
15250
https://en.wikipedia.org/wiki/Indigo
Indigo
Indigo is a term used for a number of hues in the region of blue. The word comes from the ancient dye of the same name. The term "indigo" can refer to the color of the dye, various colors of fabric dyed with indigo dye, a spectral color, one of the seven colors of the rainbow as described by Newton, or a region on the color wheel, and can include various shades of blue, ultramarine, and green-blue. Since the web era, the term has also been used for various purple and violet hues identified as "indigo", based on use of the term "indigo" in HTML web page specifications. The word "indigo" comes from the Latin word indicum, meaning "Indian", as the naturally based dye was originally exported to Europe from India. The Early Modern English word indigo referred to the dye, not to the color (hue) itself, and indigo is not traditionally part of the basic color-naming system. The first known recorded use of indigo as a color name in English was in 1289. Isaac Newton regarded indigo as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between blue and violet; however, sources differ as to its actual position in the electromagnetic spectrum. Later scientists have concluded that what Newton called "blue" was what is now called cyan or blue-green, and what Newton called "indigo" was what is now called blue. In the 1980s, programmers produced a somewhat arbitrary list of color names for the X Window System, resulting in the HTML and CSS specifications issued in the 1990s using the term "indigo" for a dark purple hue. This has resulted in violet and purple hues also being associated with the term "indigo" since that time. Because of the Abney effect, pinpointing indigo to a specific hue value in the HSV color wheel is elusive, as a higher HSV saturation value shifts the hue towards blue. However, on the newer CIECAM16 standard, hue values around 290° may be thought of as indigo, depending on the observer. History Indigo as a dye Indigo dye is a blue color, obtained from several different types of plants. The indigo plant (Indigofera tinctoria), often called "true indigo", probably produces the best results, although several others are close in color: Japanese indigo (Polygonum tinctorium), Natal indigo (Indigofera arrecta), Guatemalan indigo (Indigofera suffruticosa), Chinese indigo (Persicaria tinctoria), and woad (Isatis tinctoria). Indigofera tinctoria and related species were cultivated in East Asia, Egypt, India, Bangladesh and Peru in antiquity. The earliest direct evidence for the use of indigo dates to around 4000 BC and comes from Huaca Prieta, in contemporary Peru. Pliny the Elder mentions India as the source of the dye, after which it was named. It was imported from there in small quantities via the Silk Road. The Ancient Greek term for the dye was indikon pharmakon ("Indian dye"), which, adapted into Latin as indicum (a second-declension noun) or indico (oblique case) and passing via Portuguese, gave rise to the modern word indigo. In early Europe the main source was the woad plant Isatis tinctoria, also known as pastel. For a long time, woad was the main source of blue dye in Europe. Woad was replaced by "true indigo" as trade routes opened up. Plant sources have now been largely replaced by synthetic dyes. Spanish explorers discovered an American species of indigo and began to cultivate the product in Guatemala. The English and French subsequently began to encourage indigo cultivation in their colonies in the West Indies.
In North America, indigo was introduced by Eliza Lucas into colonial South Carolina, where it became the colony's second-most important cash crop (after rice). Before the Revolutionary War, indigo accounted for more than one-third of the value of exports from the American colonies. Isaac Newton's classification of indigo as a spectral color Isaac Newton introduced indigo as one of the seven base colors of his work. In the mid-1660s, when Newton bought a pair of prisms at a fair near Cambridge, the East India Company had begun importing indigo dye into England, supplanting the homegrown woad as the source of blue dye. In a pivotal experiment in the history of optics, the young Newton shone a narrow beam of sunlight through a prism to produce a rainbow-like band of colors on the wall. In describing this optical spectrum, Newton acknowledged that the spectrum had a continuum of colors, but named seven: "The originall or primary colours are Red, yellow, Green, Blew, & a violet purple; together with Orang, Indico, & an indefinite varietie of intermediate gradations." He linked the seven prismatic colors to the seven notes of a western major scale, as shown in his color wheel, with orange and indigo as the semitones. Having decided upon seven colors, he asked a friend to repeatedly divide up the spectrum that was projected from the prism onto the wall: I desired a friend to draw with a pencil lines cross the image, or pillar of colours, where every one of the seven aforenamed colours was most full and brisk, and also where he judged the truest confines of them to be, whilst I held the paper so, that the said image might fall within a certain compass marked on it. And this I did, partly because my own eyes are not very critical in distinguishing colours, partly because another, to whom I had not communicated my thoughts about this matter, could have nothing but his eyes to determine his fancy in making those marks. Indigo is therefore counted as one of the traditional colors of the rainbow, the order of which is given by the mnemonics "Richard of York gave battle in vain" and Roy G. Biv. James Clerk Maxwell and Hermann von Helmholtz accepted indigo as an appropriate name for the color flanking violet in the spectrum. Later scientists concluded that Newton named the colors differently from current usage. According to Gary Waldman, "A careful reading of Newton's work indicates that the color he called indigo, we would normally call blue; his blue is then what we would name blue-green or cyan." If this is true, Newton's seven spectral colors would correspond to the modern red, orange, yellow, green, cyan, blue, and violet. The human eye does not readily differentiate hues in the wavelengths between what are now called blue and violet. If this is where Newton meant indigo to lie, most individuals would have difficulty distinguishing indigo from its neighbors. According to Isaac Asimov, "It is customary to list indigo as a color lying between blue and violet, but it has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes, it seems merely deep blue." 1800s In 1821, Patrick Syme published a new edition of Werner's Nomenclature of Colours, based on Abraham Gottlob Werner's color scheme, in which indigo, called indigo blue, is classified as a blue hue and is not listed among the violet hues. The entry describes the color as composed of "Berlin blue, a little black, and a small portion of apple green," and indicates that it is the color of blue copper ore, with Berlin blue described as the color of a blue jay's wing, a hepatica flower, or a blue sapphire.
According to an article, "Definition of the Color Indigo", published in Nature magazine in the late 1800s, Newton's use of the term "indigo" referred to a spectral color between blue and violet. However, the article states that Wilhelm von Bezold, in his treatise on color, disagreed with Newton's use of the term, on the basis that the pigment indigo was a darker hue than the spectral color; and furthermore, Professor Ogden Rood points out that indigo pigment corresponds to the cyan-blue region of the spectrum, lying between blue and green, although darker in hue. Rood considers that artificial ultramarine pigment is closer to the point of the spectrum described as "indigo", and proposed renaming that spectral point as "ultramarine". The article goes on to state that comparison of the pigments, both dry and wet, with Maxwell's discs and with the spectrum shows that indigo is almost identical to Prussian blue, stating that it "certainly does not lie on the violet side of 'blue.'" When scraped, a lump of indigo pigment appears more violet, and if powdered or dissolved, becomes greenish. Modern spectral classification Several modern sources place indigo in the electromagnetic spectrum between 420 and 450 nanometers, which lies on the short-wave side of color wheel (RGB) blue, towards (spectral) violet. The correspondence of this definition with colors of actual indigo dyes, though, is disputed. Optical scientists Hardy and Perrin list indigo as between 445 and 464 nm wavelength, which occupies a spectrum segment from roughly the color wheel (RGB) blue extending to the long-wave side, towards azure. Other modern color scientists, such as Bohren and Clothiaux (2006), and J.W.G. Hunt (1980), divide the spectrum between violet and blue at about 450 nm, with no hue specifically named indigo. Web era Origin of "Indigo" as a name for purple in web pages Towards the end of the 20th century, purple colors also became referred to as "indigo". In the 1980s, computer programmers Jim Gettys, Paul Ravelling, John C. Thomas and Jim Fulton produced a list of colors for the X Window System. The color identified as "indigo" was not the color indigo (as generally understood at the time), but was actually a dark purple hue; the programmers assigned it the hex code #4B0082. This collection of color names was somewhat arbitrary: Thomas used a box of 72 Crayola crayons as a standard, whereas Ravelling used color swatches from the now-defunct Sinclair Paints company, resulting in the color list for version X11 of the system containing fanciful color names such as "papaya whip", "blanched almond" and "peach puff". The database was also criticised for its many inconsistencies, such as "dark gray" being lighter than "gray", and for the color distribution being uneven, tending towards reds and greens at the expense of blues. In the 1990s, this list, which shipped with X11, became the basis of the HTML and CSS color rendition used in websites and web design. This resulted in the name "indigo" being associated with purple and violet hues in web page design and graphic design. Physics author John Spacey writes on the website Simplicable that the X11 programmers did not have any background in color theory, and that, because these names are used by web designers and graphic designers, the name indigo has since that time been strongly associated with purple or violet. Spacey writes, "As such, a few programmers accidentally repurposed a color name that was known to civilisations for thousands of years."
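As a concrete illustration of the hex notation discussed above, the short sketch below (written for this article; the helper function is a hypothetical name, not part of any particular library) decodes the X11/web hex code #4B0082 into its 8-bit red, green and blue components, which makes it easy to see why the result reads as a dark purple rather than a spectral blue:

```python
# Decode the X11/web "indigo" hex code #4B0082 into 8-bit RGB components.
def hex_to_rgb(hex_code):
    """Convert a '#RRGGBB' string to an (r, g, b) tuple of 0-255 integers."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb("#4B0082")
print(r, g, b)  # 75 0 130: red mixed with blue and no green at all -- a dark purple
```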
Crayola crayon colors The Crayola company released an indigo crayon in 1999, with the Crayola website using the hex code #4F49C6 to approximate the crayon color. The 2001 iron indigo crayon is portrayed using hex code #184FA1. The 2004 indigo crayon color is depicted by #5D76CB, and the 2019 iridescent indigo is portrayed by #3C32CD. Distinction among tones of indigo Like many other colors (orange, rose, and violet are the best-known), indigo gets its name from an object in the natural world—the plant named indigo once used for dyeing cloth (see also Indigo dye). The color pigment indigo is equivalent to the web color indigo and approximates the color indigo that is usually reproduced in pigments and colored pencils. The color of indigo dye is a different color from either spectrum indigo or pigment indigo. This is the actual color of the dye. A vat full of this dye is a darker color, approximating the web color midnight blue. The color "electric indigo" is a bright and saturated color between the traditional indigo and violet. This is the brightest color indigo that can be approximated on a computer screen; it is a color located between the (primary) blue and the color violet of the RGB color wheel. The web color blue violet or deep indigo is a tone of indigo brighter than pigment indigo, but not as bright as electric indigo. Listed below are several indigo hues, some of which have included the word "indigo", with the adoption of HTML color names in the World Wide Web era. Indigo dye color Indigo dye is a greenish dark blue color, obtained from either the leaves of the tropical indigo plant (Indigofera), or from woad (Isatis tinctoria), or the Chinese indigo (Persicaria tinctoria). Many societies make use of the Indigofera plant for producing different shades of blue. When cloth is repeatedly boiled in an indigo dye bath solution (boiled and left to dry, boiled and left to dry, and so on), the blue pigment on the cloth becomes progressively darker. After dyeing, the cloth is hung in the open air to dry. A Native American woman described the process used by the Cherokee when extracting the dye: We raised our indigo which we cut in the morning while the dew was still on it; then we put it in a tub and soaked it overnight, and the next day we foamed it up by beating it with a gourd. We let it stand overnight again, and the next day rubbed tallow on our hands to kill the foam. Afterwards, we poured the water off, and the sediment left in the bottom we would pour into a pitcher or crock to let it get dry, and then we would put it into a poke made of cloth (i.e. sack made of coarse cloth) and then when we wanted any of it to dye [there]with, we would take the dry indigo. In Sa Pa, Vietnam, the tropical indigo (Indigofera tinctoria) leaves are harvested and, while still fresh, placed inside a tub of room-temperature to lukewarm water, where they are left to sit for 3 to 4 days and allowed to ferment, until the water turns green. Afterwards, crushed limestone (pickling lime) is added to the water, at which time the water with the leaves is vigorously agitated for 15 to 20 minutes, until the water turns blue. The blue pigment settles as sediment at the bottom of the tub. The sediment is scooped out and stored. When dyeing cloth, the pigment is then boiled in a vat of water; the cloth (usually made from yarns of hemp) is inserted into the vat to absorb the dye. After hanging out to dry, the boiling process is repeated as often as needed to produce a darker color.
Indigo (color wheel) In an RGB color space, "indigo (color wheel)" is composed of 25.1% red, 0% green and 100% blue, whereas in a CMYK color space it is composed of 74.9% cyan, 100% magenta, 0% yellow and 0% black. It has a hue angle of 255.1 degrees, a saturation of 100% and a lightness of 50%. Indigo (color wheel) can be obtained by blending violet with blue. Electric indigo "Electric indigo" is brighter than the pigment indigo reproduced below. When plotted on the CIE chromaticity diagram, this color is at 435 nanometers, in the middle of the portion of the spectrum traditionally considered indigo, i.e., between 450 and 420 nanometers. This color is only an approximation of spectral indigo, since actual spectral colors are outside the gamut of the sRGB color system. Deep indigo (web color blue-violet) The web color "blue-violet" is a color intermediate in brightness between electric indigo and pigment indigo. It is also known as "deep indigo". Web color indigo The web color indigo is the color indigo as it would be reproduced by artists' paints, as opposed to the brighter electric indigo above, which can be reproduced on a computer screen. Its hue is closer to violet than to the indigo dye for which the color is named. Pigment indigo can be obtained by mixing 55% pigment cyan with about 45% pigment magenta. Compare the subtractive colors to the additive colors in the two primary color charts in the article on primary colors to see the distinction between electric colors as reproducible from light on a computer screen (additive colors) and the pigment colors reproducible with pigments (subtractive colors); the additive colors are significantly brighter because they are produced from light instead of pigment. Web color indigo represents the way the color indigo was always reproduced in pigments, paints, or colored pencils in the 1950s. By the 1970s, because of the advent of psychedelic art, artists became accustomed to brighter pigments. Pigments called "bright indigo" or "bright blue-violet" (the pigment equivalent of the electric indigo reproduced in the section above) became available in artists' pigments and colored pencils. Tropical indigo 'Tropical indigo' is the color that is called añil in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm. Imperial blue In nature Birds Male indigobirds are a very dark, metallic blue. The indigo bunting, native to North America, is mostly bright cerulean blue with an indigo head. The related blue grosbeak is, ironically, more indigo than the indigo bunting. Fungi Lactarius indigo is one of the very few species of mushrooms colored in tones of blue. Snakes The eastern indigo snake, Drymarchon couperi, of the southeastern United States, is a dark blue/black. In culture Business IndiGo is an Indian budget airline that uses an indigo logo and operates only Airbus A320s. Indigo Books and Music uses an indigo logo and has sometimes referred to the color as "blue" in advertising. The GameCube was initially released in two color variants, one of which bore the title of 'Indigo', with the main console and controllers in that color. Computer graphics Electric indigo is sometimes used as a glow color for computer graphics lighting, possibly because it seems to change color from indigo to lavender when blended with white.
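The RGB, hue, saturation and lightness figures quoted above for "indigo (color wheel)" can be cross-checked with Python's standard colorsys module; this is only a verification sketch, not part of any color specification:

```python
# Verify the stated HSL values for "indigo (color wheel)":
# 25.1% red, 0% green, 100% blue.
import colorsys

r, g, b = 0.251, 0.0, 1.0                   # RGB fractions from the text
h, l, s = colorsys.rgb_to_hls(r, g, b)      # note the hue-lightness-saturation order
print(f"hue = {h * 360:.1f} degrees")       # -> 255.1 degrees
print(f"saturation = {s:.0%}, lightness = {l:.0%}")  # -> 100%, 50%
```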
Dyes Indigo dye was used to dye denim, giving the original 'blue jeans' their distinctive colour. The original postal worker uniform contained indigo dye, partly because the dye does not run when wet. Guatemala, as of 1778, was considered one of the world's foremost providers of indigo. In Mexico, indigo is known as añil. After silver and cochineal (used to produce red), añil was the most important product exported by historical Mexico. The use of añil survives in the Philippines, particularly in the Visayas and Mindanao. The powder dye is mixed with vinegar to be applied to the cheek of a person suffering from mumps. Food Scientists discovered in 2008 that when a banana becomes ripe, it glows bright indigo under a black light. Some insects, as well as birds, see into the ultraviolet, because they are tetrachromats, and can use this information to tell when a banana is ready to eat. The glow is the result of a chemical created as the green chlorophyll in the peel breaks down. Literature Marina Warner's novel Indigo (1992) is a retelling of Shakespeare's The Tempest and features the production of indigo dye by Sycorax. Military The French Army adopted dark blue indigo at the time of the French Revolution, as a replacement for the white uniforms previously worn by the Royal infantry regiments. In 1806, Napoleon decided to restore the white coats because of shortages of indigo dye imposed by the British continental blockade. However, the greater practicability of the blue color led to its retention, and indigo remained the dominant color of French military coats until 1914. Popular culture In the Better Call Saul episode "Hero", Howard Hamlin mentions that his law firm Hamlin Hamlin & McGill trademarked a colour called "Hamlindigo" whilst confronting Jimmy McGill over trademark infringement in a billboard advertisement he produced for his own legal services. Spirituality Spiritualist applications use electric indigo, because the color is positioned between blue and violet on the spectrum. The color electric indigo is used in New Age philosophy to symbolically represent the sixth chakra (called Ajna), which is said to include the third eye. This chakra is believed to be related to intuition and gnosis (spiritual knowledge). Alice A. Bailey used indigo as the "second ray", representing "Love-Wisdom", in her Seven Rays system classifying people into seven metaphysical psychological types. Psychics often associate indigo paranormal auras with an interest in religion or with intense spirituality and intuition. Indigo children are said to have predominantly indigo auras. People with indigo auras are said to favor occupations such as computer analyst, animal caretaker, and counselor. In Wicca, it represents emotion, fluidity, insight, and expressiveness, and is used in spiritual healing.
Physical sciences
Colors
Physics
15287
https://en.wikipedia.org/wiki/Series%20%28mathematics%29
Series (mathematics)
In mathematics, a series is, roughly speaking, an addition of infinitely many terms, one after the other. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures in combinatorics through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance. Among the Ancient Greeks, the idea that a potentially infinite summation could produce a finite result was considered paradoxical, most famously in Zeno's paradoxes. Nonetheless, infinite series were applied practically by Ancient Greek mathematicians including Archimedes, for instance in the quadrature of the parabola. The mathematical side of Zeno's paradoxes was resolved using the concept of a limit during the 17th century, especially through the early calculus of Isaac Newton. The resolution was made more rigorous and further improved in the 19th century through the work of Carl Friedrich Gauss and Augustin-Louis Cauchy, among others, answering questions about which of these sums exist via the completeness of the real numbers and whether series terms can be rearranged or not without changing their sums using absolute convergence and conditional convergence of series. In modern terminology, any ordered infinite sequence of terms, whether those terms are numbers, functions, matrices, or anything else that can be added, defines a series, which is the addition of the terms one after the other. To emphasize that there are an infinite number of terms, series are often also called infinite series. Series are represented by an expression like $a_1 + a_2 + a_3 + \cdots$ or, using capital-sigma summation notation, $\sum_{i=1}^{\infty} a_i$. The infinite sequence of additions expressed by a series cannot be explicitly performed in sequence in a finite amount of time. However, if the terms and their finite sums belong to a set that has limits, it may be possible to assign a value to a series, called the sum of the series. This value is the limit as $n$ tends to infinity of the finite sums of the first $n$ terms of the series if the limit exists. These finite sums are called the partial sums of the series. Using summation notation, $\sum_{i=1}^{\infty} a_i = \lim_{n \to \infty} \sum_{i=1}^{n} a_i$, if it exists. When the limit exists, the series is convergent or summable and also the sequence $(a_1, a_2, a_3, \ldots)$ is summable, and otherwise, when the limit does not exist, the series is divergent. The expression $\sum_{i=1}^{\infty} a_i$ denotes both the series—the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the explicit limit of the process. This is a generalization of the similar convention of denoting by $a + b$ both the addition—the process of adding—and its result—the sum of $a$ and $b$. Commonly, the terms of a series come from a ring, often the field $\mathbb{R}$ of the real numbers or the field $\mathbb{C}$ of the complex numbers. If so, the set of all series is also itself a ring, one in which the addition consists of adding series terms together term by term and the multiplication is the Cauchy product. Definition Series A series or, redundantly, an infinite series, is an infinite sum. It is often represented as $a_1 + a_2 + a_3 + \cdots$, where the terms $a_k$ are the members of a sequence of numbers, functions, or anything else that can be added.
A series may also be represented with capital-sigma notation: $\sum_{k=1}^{\infty} a_k$. It is also common to express series using a few first terms, an ellipsis, a general term, and then a final ellipsis, the general term being an expression of the $n$th term as a function of $n$: $a_1 + a_2 + \cdots + a_n + \cdots$. For example, Euler's number can be defined with the series $e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots$, where $n!$ denotes the product of the first $n$ positive integers, and $0!$ is conventionally equal to $1$. Partial sum of a series Given a series $\sum_{k=1}^{\infty} a_k$, its $n$th partial sum is $s_n = \sum_{k=1}^{n} a_k = a_1 + a_2 + \cdots + a_n$. Some authors directly identify a series with its sequence of partial sums. Either the sequence of partial sums or the sequence of terms completely characterizes the series, and the sequence of terms can be recovered from the sequence of partial sums by taking the differences between consecutive elements, $a_n = s_n - s_{n-1}$. Partial summation of a sequence is an example of a linear sequence transformation, and it is also known as the prefix sum in computer science. The inverse transformation for recovering a sequence from its partial sums is the finite difference, another linear sequence transformation. Partial sums of series sometimes have simpler closed form expressions, for instance an arithmetic series has partial sums $s_n = \sum_{k=1}^{n} \bigl(a + (k-1)d\bigr) = na + \frac{n(n-1)}{2}d$, and a geometric series has partial sums $s_n = \sum_{k=0}^{n-1} a r^k = a\,\frac{1 - r^n}{1 - r}$ if $r \neq 1$, or simply $s_n = na$ if $r = 1$. Sum of a series Strictly speaking, a series is said to converge, to be convergent, or to be summable when the sequence of its partial sums has a limit. When the limit of the sequence of partial sums does not exist, the series diverges or is divergent. When the limit of the partial sums exists, it is called the sum of the series or value of the series: $\sum_{k=1}^{\infty} a_k = \lim_{n \to \infty} s_n$. A series with only a finite number of nonzero terms is always convergent. Such series are useful for considering finite sums without taking care of the numbers of terms. When the sum exists, the difference between the sum of a series and its $n$th partial sum, $s - s_n = \sum_{k=n+1}^{\infty} a_k$, is known as the $n$th truncation error of the infinite series. An example of a convergent series is the geometric series $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{k=0}^{\infty} \frac{1}{2^k}$. It can be shown by algebraic computation that each partial sum is $s_n = 2 - \frac{1}{2^{n-1}}$. As one has $\lim_{n \to \infty} \left(2 - \frac{1}{2^{n-1}}\right) = 2$, the series is convergent and converges to $2$, with truncation errors $\frac{1}{2^{n-1}}$. By contrast, the geometric series $1 + 2 + 4 + 8 + \cdots = \sum_{k=0}^{\infty} 2^k$ is divergent in the real numbers. However, it is convergent in the extended real number line, with $+\infty$ as its limit and $+\infty$ as its truncation error at every step. When a series's sequence of partial sums is not easily calculated and evaluated for convergence directly, convergence tests can be used to prove that the series converges or diverges. Grouping and rearranging terms Grouping In ordinary finite summations, terms of the summation can be grouped and ungrouped freely without changing the result of the summation as a consequence of the associativity of addition. Similarly, in a series, any finite groupings of terms of the series will not change the limit of the partial sums of the series and thus will not change the sum of the series. However, if an infinite number of groupings is performed in an infinite series, then the partial sums of the grouped series may have a different limit than the original series and different groupings may have different limits from one another; the sum of $a_1 + a_2 + a_3 + \cdots$ may not equal the sum of $(a_1 + a_2) + (a_3 + a_4) + \cdots$. For example, Grandi's series $1 - 1 + 1 - 1 + \cdots$ has a sequence of partial sums that alternates back and forth between $1$ and $0$ and does not converge. Grouping its elements in pairs creates the series $(1 - 1) + (1 - 1) + \cdots = 0 + 0 + \cdots$, which has partial sums equal to zero at every term and thus sums to zero.
Grouping its elements in pairs starting after the first creates the series $1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots$, which has partial sums equal to one for every term and thus sums to one, a different result. In general, grouping the terms of a series creates a new series with a sequence of partial sums that is a subsequence of the partial sums of the original series. This means that if the original series converges, so does the new series after grouping: all infinite subsequences of a convergent sequence also converge to the same limit. However, if the original series diverges, then the grouped series do not necessarily diverge, as in this example of Grandi's series above. However, divergence of a grouped series does imply the original series must be divergent, since it proves there is a subsequence of the partial sums of the original series which is not convergent, which would be impossible if it were convergent. This reasoning was applied in Oresme's proof of the divergence of the harmonic series, and it is the basis for the general Cauchy condensation test. Rearrangement In ordinary finite summations, terms of the summation can be rearranged freely without changing the result of the summation as a consequence of the commutativity of addition. Similarly, in a series, any finite rearrangement of terms of a series does not change the limit of the partial sums of the series and thus does not change the sum of the series: for any finite rearrangement, there will be some term after which the rearrangement did not affect any further terms: any effects of rearrangement can be isolated to the finite summation up to that term, and finite summations do not change under rearrangement. However, as for grouping, an infinitary rearrangement of terms of a series can sometimes lead to a change in the limit of the partial sums of the series. Series with sequences of partial sums that converge to a value but whose terms could be rearranged to form a series with partial sums that converge to some other value are called conditionally convergent series. Those that converge to the same value regardless of rearrangement are called unconditionally convergent series. For series of real numbers and complex numbers, a series $\sum a_n$ is unconditionally convergent if and only if the series summing the absolute values of its terms, $\sum |a_n|$, is also convergent, a property called absolute convergence. Otherwise, any series of real numbers or complex numbers that converges but does not converge absolutely is conditionally convergent. Any conditionally convergent sum of real numbers can be rearranged to yield any other real number as a limit, or to diverge. These claims are the content of the Riemann series theorem. A historically important example of conditional convergence is the alternating harmonic series, $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$, which has a sum of the natural logarithm of 2, while the sum of the absolute values of the terms is the harmonic series, which diverges per the divergence of the harmonic series, so the alternating harmonic series is conditionally convergent. For instance, rearranging the terms of the alternating harmonic series so that each positive term of the original series is followed by two negative terms of the original series rather than just one yields $1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \cdots$, which is $\frac{1}{2}$ times the original series, so it would have a sum of half of the natural logarithm of 2. By the Riemann series theorem, rearrangements of the alternating harmonic series to yield any other real number are also possible.
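The effect of rearrangement on a conditionally convergent series can be observed numerically. This sketch (illustrative only; it truncates both orderings after finitely many terms) sums the alternating harmonic series in its usual order and in the rearranged order described above, one positive term followed by two negative terms:

```python
import math

N = 200_000  # number of terms (or groups) taken from each ordering

# Usual order: 1 - 1/2 + 1/3 - 1/4 + ...
usual = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# Rearranged: one positive term, then two negative terms:
# 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
rearranged = 0.0
for k in range(1, N + 1):
    rearranged += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)

print(f"usual order:      {usual:.6f}  (ln 2   = {math.log(2):.6f})")
print(f"rearranged order: {rearranged:.6f}  (ln 2/2 = {math.log(2) / 2:.6f})")
```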
Operations Series addition The addition of two series $\sum_{k=1}^{\infty} a_k$ and $\sum_{k=1}^{\infty} b_k$ is given by the termwise sum $\sum_{k=1}^{\infty} (a_k + b_k)$, or, in summation notation, $\sum_{k=1}^{\infty} a_k + \sum_{k=1}^{\infty} b_k = \sum_{k=1}^{\infty} (a_k + b_k)$. Using the symbols $s_{a,n}$ and $s_{b,n}$ for the partial sums of the added series and $s_{a+b,n}$ for the partial sums of the resulting series, this definition implies the partial sums of the resulting series follow $s_{a+b,n} = s_{a,n} + s_{b,n}$. Then the sum of the resulting series, i.e., the limit of the sequence of partial sums of the resulting series, satisfies $\lim_{n \to \infty} s_{a+b,n} = \lim_{n \to \infty} s_{a,n} + \lim_{n \to \infty} s_{b,n}$, when the limits exist. Therefore, first, the series resulting from addition is summable if the series added were summable, and, second, the sum of the resulting series is the addition of the sums of the added series. The addition of two divergent series may yield a convergent series: for instance, the addition of a divergent series with a series of its terms times $-1$ will yield a series of all zeros that converges to zero. However, for any two series where one converges and the other diverges, the result of their addition diverges. For series of real numbers or complex numbers, series addition is associative, commutative, and invertible. Therefore series addition gives the sets of convergent series of real numbers or complex numbers the structure of an abelian group and also gives the sets of all series of real numbers or complex numbers (regardless of convergence properties) the structure of an abelian group. Scalar multiplication The product of a series $\sum_{k=1}^{\infty} a_k$ with a constant number $c$, called a scalar in this context, is given by the termwise product $\sum_{k=1}^{\infty} c a_k$, or, in summation notation, $c \sum_{k=1}^{\infty} a_k = \sum_{k=1}^{\infty} c a_k$. Using the symbols $s_n$ for the partial sums of the original series and $t_n$ for the partial sums of the series after multiplication by $c$, this definition implies that $t_n = c s_n$ for all $n$, and therefore also $\lim_{n \to \infty} t_n = c \lim_{n \to \infty} s_n$, when the limits exist. Therefore if a series is summable, any nonzero scalar multiple of the series is also summable and vice versa: if a series is divergent, then any nonzero scalar multiple of it is also divergent. Scalar multiplication of real numbers and complex numbers is associative, commutative, invertible, and it distributes over series addition. In summary, series addition and scalar multiplication gives the set of convergent series and the set of series of real numbers the structure of a real vector space. Similarly, one gets complex vector spaces for series and convergent series of complex numbers. All these vector spaces are infinite dimensional. Series multiplication The multiplication of two series $\sum_{k=0}^{\infty} a_k$ and $\sum_{k=0}^{\infty} b_k$ to generate a third series $\sum_{k=0}^{\infty} c_k$, called the Cauchy product, can be written in summation notation $\left( \sum_{k=0}^{\infty} a_k \right) \cdot \left( \sum_{k=0}^{\infty} b_k \right) = \sum_{k=0}^{\infty} c_k$, with each $c_k = \sum_{j=0}^{k} a_j b_{k-j}$. Here, the convergence of the partial sums of the series $\sum c_k$ is not as simple to establish as for addition. However, if both series $\sum a_k$ and $\sum b_k$ are absolutely convergent series, then the series resulting from multiplying them also converges absolutely with a sum equal to the product of the two sums of the multiplied series: $\sum_{k=0}^{\infty} c_k = \left( \sum_{k=0}^{\infty} a_k \right) \cdot \left( \sum_{k=0}^{\infty} b_k \right)$. Series multiplication of absolutely convergent series of real numbers and complex numbers is associative, commutative, and distributes over series addition. Together with series addition, series multiplication gives the sets of absolutely convergent series of real numbers or complex numbers the structure of a commutative ring, and together with scalar multiplication as well, the structure of a commutative algebra; these operations also give the sets of all series of real numbers or complex numbers the structure of an associative algebra.
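For absolutely convergent series, the Cauchy product formula can be checked numerically; the sketch below (written for this passage) multiplies two geometric series, $\sum 1/2^k = 2$ and $\sum 1/3^k = 3/2$, and compares the truncated product series with the product of the two sums:

```python
# Cauchy product of two absolutely convergent geometric series.
N = 60  # truncation length; the tails beyond this are negligible
a = [0.5 ** k for k in range(N)]
b = [(1 / 3) ** k for k in range(N)]

# c_k = sum_{j=0}^{k} a_j * b_{k-j}
c = [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(N)]

print(sum(c))           # ~ 3.0: sum of the product series
print(sum(a) * sum(b))  # ~ 3.0: product of the two (truncated) sums
```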
Examples of numerical series A geometric series is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio in this context). For example: $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=0}^{\infty} \frac{1}{2^n} = 2$. In general, a geometric series $\sum_{n=0}^{\infty} a r^n$ with initial term $a$ and common ratio $r$ converges if and only if $|r| < 1$, in which case it converges to $\frac{a}{1 - r}$. The harmonic series is the series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n}$. The harmonic series is divergent. An alternating series is a series where terms alternate signs. Examples: the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \ln 2$, and the Leibniz formula for $\pi$: $1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} = \frac{\pi}{4}$. A telescoping series $\sum_{n=1}^{\infty} (b_n - b_{n+1})$ converges if the sequence $b_n$ converges to a limit $L$ as $n$ goes to infinity. The value of the series is then $b_1 - L$. An arithmetico-geometric series is a series that has terms which are each the product of an element of an arithmetic progression with the corresponding element of a geometric progression. Example: $3 + \frac{5}{2} + \frac{7}{4} + \frac{9}{8} + \frac{11}{16} + \cdots$. The Dirichlet series $\sum_{n=1}^{\infty} \frac{1}{n^s}$ converges for $s > 1$ and diverges for $s \leq 1$, which can be shown with the integral test for convergence described below in convergence tests. As a function of $s$, the sum of this series is Riemann's zeta function. Hypergeometric series ${}_pF_q(a_1, \ldots, a_p; b_1, \ldots, b_q; z) = \sum_{n=0}^{\infty} \frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n} \frac{z^n}{n!}$ and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics. There are some elementary series whose convergence is not yet known/proven. For example, it is unknown whether the Flint Hills series, $\sum_{n=1}^{\infty} \frac{1}{n^3 \sin^2 n}$, converges or not. The convergence depends on how well $\pi$ can be approximated with rational numbers (which is unknown as of yet). More specifically, the values of $n$ with large numerical contributions to the sum are the numerators of the continued fraction convergents of $\pi$, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... . These are integers $n$ that are close to $m\pi$ for some integer $m$, so that $\sin n$ is close to $0$ and its reciprocal is large. Pi Natural logarithm of 2 Natural logarithm base e Convergence testing One of the simplest tests for convergence of a series, applicable to all series, is the vanishing condition or $n$th-term test: If $\lim_{n \to \infty} a_n \neq 0$, then the series diverges; if $\lim_{n \to \infty} a_n = 0$, then the test is inconclusive. Absolute convergence tests When every term of a series is a non-negative real number, for instance when the terms are the absolute values of another series of real numbers or complex numbers, the sequence of partial sums is non-decreasing. Therefore a series with non-negative terms converges if and only if the sequence of partial sums is bounded, and so finding a bound for a series or for the absolute values of its terms is an effective way to prove convergence or absolute convergence of a series. For example, the series $\sum_{n=1}^{\infty} \frac{1}{n^2}$ is convergent and absolutely convergent because $\frac{1}{n^2} \leq \frac{2}{n(n+1)} = \frac{2}{n} - \frac{2}{n+1}$ for all $n \geq 1$ and a telescoping sum argument implies that the partial sums of the series of those non-negative bounding terms are themselves bounded above by 2. The exact value of this series is $\frac{\pi^2}{6}$; see Basel problem. This type of bounding strategy is the basis for general series comparison tests. First is the general direct comparison test: For any series $\sum a_n$, if $\sum b_n$ is an absolutely convergent series such that $|a_n| \leq C |b_n|$ for some positive real number $C$ and for sufficiently large $n$, then $\sum a_n$ converges absolutely as well. If $\sum |b_n|$ diverges, and $|a_n| \geq |b_n|$ for all sufficiently large $n$, then $\sum a_n$ also fails to converge absolutely, although it could still be conditionally convergent, for example, if the $a_n$ alternate in sign. Second is the general limit comparison test: If $\sum b_n$ is an absolutely convergent series such that $\left|\frac{a_n}{b_n}\right|$ is bounded for sufficiently large $n$, then $\sum a_n$ converges absolutely as well.
If $\sum |b_n|$ diverges, and $\left|\frac{b_n}{a_n}\right|$ is bounded for all sufficiently large $n$, then $\sum a_n$ also fails to converge absolutely, though it could still be conditionally convergent if the $a_n$ vary in sign. Using comparisons to geometric series specifically, those two general comparison tests imply two further common and generally useful tests for convergence of series with non-negative terms or for absolute convergence of series with general terms. First is the ratio test: if there exists a constant $C < 1$ such that $\left|\frac{a_{n+1}}{a_n}\right| \leq C$ for all sufficiently large $n$, then $\sum a_n$ converges absolutely. When the ratio is less than $1$, but not less than a constant less than $1$, convergence is possible but this test does not establish it. Second is the root test: if there exists a constant $C < 1$ such that $|a_n|^{1/n} \leq C$ for all sufficiently large $n$, then $\sum a_n$ converges absolutely. Alternatively, using comparisons to series representations of integrals specifically, one derives the integral test: if $f(x)$ is a positive monotone decreasing function defined on the interval $[1, \infty)$, then for a series with terms $a_n = f(n)$ for all $n$, the series $\sum a_n$ converges if and only if the integral $\int_1^{\infty} f(x)\,dx$ is finite. Using comparisons to flattened-out versions of a series leads to Cauchy's condensation test: if the sequence of terms $a_n$ is non-negative and non-increasing, then the two series $\sum a_n$ and $\sum 2^k a_{2^k}$ are either both convergent or both divergent. Conditional convergence tests A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. Conditional convergence is tested for differently than absolute convergence. One important example of a test for conditional convergence is the alternating series test or Leibniz test: A series of the form $\sum (-1)^n a_n$ with all $a_n > 0$ is called alternating. Such a series converges if the non-negative sequence $a_n$ is monotone decreasing and converges to $0$. The converse is in general not true. A famous example of an application of this test is the alternating harmonic series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$, which is convergent per the alternating series test (and its sum is equal to $\ln 2$), though the series formed by taking the absolute value of each term is the ordinary harmonic series, which is divergent. The alternating series test can be viewed as a special case of the more general Dirichlet's test: if $(a_n)$ is a sequence of decreasing nonnegative real numbers that converges to zero, and $(b_n)$ is a sequence of terms with bounded partial sums, then the series $\sum a_n b_n$ converges. Taking $b_n = (-1)^n$ recovers the alternating series test. Abel's test is another important technique for handling semi-convergent series: if a series has the form $\sum a_n = \sum \lambda_n b_n$ where the series $\sum b_n$ is convergent and the sequence $\lambda_n$ is bounded and monotone, then the series $\sum a_n$ is convergent. Other specialized convergence tests for specific types of series include the Dini test for Fourier series. Evaluation of truncation errors The evaluation of truncation errors of series is important in numerical analysis (especially validated numerics and computer-assisted proof). It can be used to prove convergence and to analyze rates of convergence. Alternating series When conditions of the alternating series test are satisfied by $s = \sum_{m=0}^{\infty} (-1)^m u_m$, there is an exact error evaluation. Set $s_n$ to be the $n$th partial sum $s_n = \sum_{m=0}^{n} (-1)^m u_m$ of the given alternating series $s$. Then the next inequality holds: $|s - s_n| \leq u_{n+1}$. Hypergeometric series By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated.
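The exact error bound for alternating series stated above, $|s - s_n| \leq u_{n+1}$, can be verified directly on the alternating harmonic series; the following sketch (an illustration written for this passage, using the known sum $\ln 2$ as reference) checks the bound at every step:

```python
import math

# Alternating harmonic series: 1 - 1/2 + 1/3 - ... = ln 2.
s_exact = math.log(2)
s_n = 0.0
for m in range(1, 1001):
    s_n += (-1) ** (m + 1) / m
    next_term = 1 / (m + 1)                 # magnitude of the first omitted term
    assert abs(s_exact - s_n) <= next_term  # the alternating series error bound
print("bound |s - s_n| <= u_(n+1) held for the first 1000 partial sums")
```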
Matrix exponential For the matrix exponential $\exp(X) = \sum_{k=0}^{\infty} \frac{1}{k!} X^k$, defined for square matrices $X$, error evaluations for the truncated series are available via the scaling and squaring method. Sums of divergent series Under many circumstances, it is desirable to assign generalized sums to series which fail to converge in the strict sense that their sequences of partial sums do not converge. A summation method is any method for assigning sums to divergent series in a way that systematically extends the classical notion of the sum of a series. Summation methods include Cesàro summation, generalized Cesàro summation, Abel summation, and Borel summation, in order of applicability to increasingly divergent series. These methods are all based on sequence transformations of the original series of terms or of its sequence of partial sums. An alternative family of summation methods is based on analytic continuation rather than sequence transformation. A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summation methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general methods for summing a divergent series are non-constructive and concern Banach limits. Series of functions A series of real- or complex-valued functions $\sum_{n=0}^{\infty} f_n(x)$ is pointwise convergent to a limit $f(x)$ on a set $E$ if the series converges for each $x$ in $E$ as a series of real or complex numbers. Equivalently, the partial sums $s_n(x) = \sum_{k=0}^{n} f_k(x)$ converge to $f(x)$ as $n$ goes to infinity for each $x$ in $E$. A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly in a set $E$ if it converges pointwise to the function $f(x)$ at every point of $E$ and the supremum of these pointwise errors in approximating the limit by the $n$th partial sum, $\sup_{x \in E} |f(x) - s_n(x)|$, converges to zero with increasing $n$, independently of $x$. Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the $f_n$ are integrable on a closed and bounded interval $I$ and converge uniformly, then the series is also integrable on $I$ and can be integrated term by term. Tests for uniform convergence include Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion. More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean to a limit function $f$ on a set $E$ if $\lim_{n \to \infty} \int_E \left|s_n(x) - f(x)\right|^2 \, dx = 0$. Power series A power series is a series of the form $\sum_{n=0}^{\infty} a_n (x - c)^n$. The Taylor series at a point $c$ of a function is a power series that, in many cases, converges to the function in a neighborhood of $c$. For example, the series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$ is the Taylor series of $e^x$ at the origin and converges to it for every $x$. Unless it converges only at $x = c$, such a series converges on a certain open disc of convergence centered at the point $c$ in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients $a_n$.
The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets. Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required. Formal power series While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras. Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term. Laurent series Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form $\sum_{n=-\infty}^{\infty} a_n x^n$. If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence. Dirichlet series A Dirichlet series is one of the form $\sum_{n=1}^{\infty} \frac{a_n}{n^s}$, where $s$ is a complex number. For example, if all $a_n$ are equal to $1$, then the Dirichlet series is the Riemann zeta function $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$. Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of $s$ is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when $\operatorname{Re}(s) > 1$, but the zeta function can be extended to a holomorphic function defined on $\mathbb{C} \setminus \{1\}$ with a simple pole at $1$. This series can be directly generalized to general Dirichlet series. Trigonometric series A series of functions in which the terms are trigonometric functions is called a trigonometric series: $\tfrac{1}{2} A_0 + \sum_{n=1}^{\infty} \left( A_n \cos nx + B_n \sin nx \right)$. The most important example of a trigonometric series is the Fourier series of a function. Asymptotic series Asymptotic series, typically called asymptotic expansions, are infinite series whose terms are functions of a sequence of different asymptotic orders and whose partial sums are approximations of some other function in an asymptotic limit.
In general they do not converge, but they are still useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. They are crucial tools in perturbation theory and in the analysis of algorithms. An asymptotic series cannot necessarily be made to produce an answer as exactly as desired away from the asymptotic limit, the way that an ordinary convergent series of functions can. In fact, a typical asymptotic series reaches its best practical approximation away from the asymptotic limit after a finite number of terms; if more terms are included, the series will produce less accurate approximations. History of the theory of infinite series Development of infinite series Infinite series play an important role in modern analysis of Ancient Greek philosophy of motion, particularly in Zeno's paradoxes. The paradox of Achilles and the tortoise demonstrates that continuous motion would require an actual infinity of temporal instants, which was arguably an absurdity: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno is said to have argued that therefore Achilles could never reach the tortoise, and thus that continuous movement must be an illusion. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the purely mathematical and imaginative side of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise. However, in modern philosophy of motion the physical side of the problem remains open, with both philosophers and physicists doubting, like Zeno, that spatial motions are infinitely divisible: hypothetical reconciliations of quantum mechanics and general relativity in theories of quantum gravity often introduce quantizations of spacetime at the Planck scale. Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π. Mathematicians from the Kerala school were studying infinite series around 1350 CE. In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. In the 18th century, Leonhard Euler developed the theory of hypergeometric series and q-series. Convergence criteria The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series, on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence. Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria.
The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form. Abel (1826), in his memoir on the binomial series $1 + \frac{m}{1!} x + \frac{m(m-1)}{2!} x^2 + \cdots$, corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of $m$ and $x$. He showed the necessity of considering the subject of continuity in questions of convergence. Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject; of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; and of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration), Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853). General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory. Uniform convergence The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions. Semi-convergence A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent. Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function $F(x) = 1^n + 2^n + \cdots + (x-1)^n$. Genocchi (1852) has further contributed to the theory. Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence. Fourier series Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701), and still earlier by Vieta. Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer. Fourier (1807) set for himself a different problem, to expand a given function of $x$ in terms of the sines or cosines of multiples of $x$, a problem which he embodied in his Théorie analytique de la chaleur (1822).
Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell. Summations over general index sets Definitions may be given for infinitary sums over an arbitrary index set $I$. This generalization introduces two main differences from the usual notion of series: first, there may be no specific order given on the set $I$; second, the set $I$ may be uncountable. The notions of convergence need to be reconsidered for these, then, because for instance the concept of conditional convergence depends on the ordering of the index set. If $a : I \to G$ is a function from an index set $I$ to a set $G$, then the "series" associated to $a$ is the formal sum of the elements $a(i)$ over the index elements $i \in I$, denoted by $\sum_{i \in I} a(i)$. When the index set is the natural numbers $I = \mathbb{N}$, the function $a : \mathbb{N} \to G$ is a sequence denoted by $a(n) = a_n$. A series indexed on the natural numbers is an ordered formal sum and so we rewrite $\sum_{n \in \mathbb{N}}$ as $\sum_{n=0}^{\infty}$ in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers, $\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots$. Families of non-negative numbers When summing a family $\left(a_i\right)_{i \in I}$ of non-negative real numbers over the index set $I$, define $\sum_{i \in I} a_i = \sup \left\{ \sum_{i \in A} a_i : A \subseteq I, A \text{ finite} \right\} \in [0, +\infty]$. When the supremum is finite then the set of $i \in I$ such that $a_i > 0$ is countable. Indeed, for every $n \geq 1$, the cardinality $\left|A_n\right|$ of the set $A_n = \left\{ i \in I : a_i > 1/n \right\}$ is finite because $\frac{1}{n} \left|A_n\right| \leq \sum_{i \in A_n} a_i \leq \sum_{i \in I} a_i < \infty$. If $I$ is countably infinite and enumerated as $I = \left\{ i_0, i_1, \ldots \right\}$ then the above defined sum satisfies $\sum_{i \in I} a_i = \sum_{k=0}^{\infty} a_{i_k}$, provided the value $\infty$ is allowed for the sum of the series. Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions. Abelian topological groups Let $a : I \to X$ be a map, also denoted by $\left(a_i\right)_{i \in I}$, from some non-empty set $I$ into a Hausdorff abelian topological group $X$. Let $\operatorname{Finite}(I)$ be the collection of all finite subsets of $I$, with $\operatorname{Finite}(I)$ viewed as a directed set, ordered under inclusion $\subseteq$ with union as join. The family $\left(a_i\right)_{i \in I}$ is said to be unconditionally summable if the following limit, which is denoted by $\sum_{i \in I} a_i$ and is called the sum of $\left(a_i\right)_{i \in I}$, exists in $X$: $\sum_{i \in I} a_i = \lim_{A \in \operatorname{Finite}(I)} \sum_{i \in A} a_i$. Saying that the sum $S = \sum_{i \in I} a_i$ is the limit of finite partial sums means that for every neighborhood $V$ of the origin in $X$, there exists a finite subset $A_0$ of $I$ such that $S - \sum_{i \in A} a_i \in V$ for every finite set $A$ containing $A_0$. Because $\operatorname{Finite}(I)$ is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net. For every neighborhood $W$ of the origin in $X$, there is a smaller neighborhood $V$ such that $V - V \subseteq W$. It follows that the finite partial sums of an unconditionally summable family $\left(a_i\right)_{i \in I}$ form a Cauchy net, that is, for every neighborhood $W$ of the origin in $X$, there exists a finite subset $A_0$ of $I$ such that $\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W$ for all finite supersets $A_1, A_2 \supseteq A_0$, which implies that $a_i \in W$ for every $i \in I \setminus A_0$ (by taking $A_1 = A_0 \cup \{i\}$ and $A_2 = A_0$). When $X$ is complete, a family $\left(a_i\right)_{i \in I}$ is unconditionally summable in $X$ if and only if the finite sums satisfy the latter Cauchy net condition.
When $X$ is complete and $(x_i)_{i \in I}$ is unconditionally summable in $X$, then for every subset $J \subseteq I$ the corresponding subfamily $(x_j)_{j \in J}$ is also unconditionally summable in $X$. When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group $X = \mathbb{R}$. If a family $(x_i)_{i \in I}$ in $X$ is unconditionally summable then for every neighborhood $W$ of the origin in $X$, there is a finite subset $A_0 \subseteq I$ such that $x_i \in W$ for every index $i$ not in $A_0$. If $X$ is a first-countable space then it follows that the set of $i \in I$ such that $x_i \neq 0$ is countable. This need not be true in a general abelian topological group (see examples below). Unconditionally convergent series Suppose that $I = \mathbb{N}$. If a family $(x_n)_{n \in \mathbb{N}}$ is unconditionally summable in a Hausdorff abelian topological group $X$, then the series in the usual sense converges and has the same sum, $\sum_{n=0}^{\infty} x_n = \sum_{n \in \mathbb{N}} x_n$. By nature, the definition of unconditional summability is insensitive to the order of the summation. When $\sum x_n$ is unconditionally summable, then the series remains convergent after any permutation $\sigma : \mathbb{N} \to \mathbb{N}$ of the set $\mathbb{N}$ of indices, with the same sum, $\sum_{n=0}^{\infty} x_{\sigma(n)} = \sum_{n=0}^{\infty} x_n$. Conversely, if every permutation of a series $\sum x_n$ converges, then the series is unconditionally convergent. When $X$ is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if $X$ is a Banach space, this is equivalent to saying that for every sequence of signs $\varepsilon_n = \pm 1$, the series $\sum_{n=0}^{\infty} \varepsilon_n x_n$ converges in $X$. Series in topological vector spaces If $X$ is a topological vector space (TVS) and $(x_i)_{i \in I}$ is a (possibly uncountable) family in $X$ then this family is summable if the limit $\lim_{A \in \operatorname{Finite}(I)} x_A$ of the net $(x_A)_{A \in \operatorname{Finite}(I)}$ exists in $X$, where $\operatorname{Finite}(I)$ is the directed set of all finite subsets of $I$ directed by inclusion and $x_A := \sum_{i \in A} x_i$. It is called absolutely summable if in addition, for every continuous seminorm $p$ on $X$, the family $(p(x_i))_{i \in I}$ is summable. If $X$ is a normable space and if $(x_i)_{i \in I}$ is an absolutely summable family in $X$, then necessarily all but a countable collection of the $x_i$ are zero. Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms. Summable families play an important role in the theory of nuclear spaces. Series in Banach and seminormed spaces The notion of series can be easily extended to the case of a seminormed space. If $(x_n)$ is a sequence of elements of a normed space $X$ and if $x \in X$, then the series $\sum x_n$ converges to $x$ in $X$ if the sequence of partial sums of the series converges to $x$ in $X$; to wit, $\| x - \sum_{n=0}^{N} x_n \| \to 0$ as $N \to \infty$. More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, $\sum x_n$ converges to $x$ if the sequence of partial sums converges to $x$. If $(X, |\cdot|)$ is a seminormed space, then the notion of absolute convergence becomes: a series $\sum_{i \in \mathbb{N}} x_i$ of vectors in $X$ converges absolutely if $\sum_{i \in \mathbb{N}} |x_i| < +\infty$, in which case all but at most countably many of the values $|x_i|$ are necessarily zero. If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (theorem of Dvoretzky and Rogers). Well-ordered sums Conditionally convergent series can be considered if $I$ is a well-ordered set, for example, an ordinal number $\alpha_0$. In this case, define by transfinite recursion: $\sum_{\beta < \alpha + 1} a_\beta = a_\alpha + \sum_{\beta < \alpha} a_\beta$ and, for a limit ordinal $\alpha$, $\sum_{\beta < \alpha} a_\beta = \lim_{\gamma \to \alpha} \sum_{\beta < \gamma} a_\beta$ if this limit exists. If all limits exist up to $\alpha_0$, then the series converges. Examples Given a function $f : X \to Y$, with $Y$ an abelian topological group, define for every $a \in X$ a function $f_a : X \to Y$ whose support is the singleton $\{a\}$, namely $f_a(x) = 0$ for $x \neq a$ and $f_a(a) = f(a)$. Then $f = \sum_{a \in X} f_a$ in the topology of pointwise convergence (that is, the sum is taken in the infinite product group $Y^X$).
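The limit over the net of finite partial sums described above can be imitated numerically. The following Python sketch is purely illustrative (the family $a_i = 2^{-i}$ and its truncation to 60 indices are arbitrary choices); it shows that the order in which a non-negative family is exhausted by finite subsets does not affect the limiting value:

```python
# For a summable family over an arbitrary index set, the sum is the limit of
# the net of finite partial sums, so any exhausting sequence of finite subsets
# -- in any order -- approaches the same value. Here a_i = 1/2**i sums to 2.
import random

index_pool = list(range(60))               # a finite window into the index set
family = {i: 0.5 ** i for i in index_pool}

def partial_sum(finite_subset):
    return sum(family[i] for i in finite_subset)

random.shuffle(index_pool)                 # enumerate the indices in arbitrary order
for size in (5, 20, 60):
    print(size, partial_sum(index_pool[:size]))
# As the finite subsets grow to exhaust the index set, every enumeration
# converges to the same limit (here 2), regardless of ordering.
```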
In the definition of partitions of unity, one constructs sums of functions over an arbitrary index set $I$, $\sum_{i \in I} \varphi_i(x) = 1$. While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given $x$, only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. Actually, one usually assumes more: the family of functions is locally finite, that is, for every $x$ there is a neighborhood of $x$ in which all but a finite number of functions vanish. Any regularity property of the $\varphi_i$, such as continuity or differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions. On the first uncountable ordinal $\omega_1$ viewed as a topological space in the order topology, the constant function $f : [0, \omega_1) \to [0, \omega_1]$ given by $f(\alpha) = 1$ satisfies $\sum_{\alpha \in [0, \omega_1)} f(\alpha) = \omega_1$ (in other words, $\omega_1$ copies of 1 is $\omega_1$) only if one takes a limit over all countable partial sums, rather than finite partial sums. This space is not separable.
Mathematics
Analysis
null
15292
https://en.wikipedia.org/wiki/Ink
Ink
Ink is a gel, sol, or solution that contains at least one colorant, such as a dye or pigment, and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, reed pen, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing. Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance. History Many ancient cultures around the world have independently discovered and formulated inks due to the need to write and draw. The recipes and techniques for the production of ink are derived from archaeological analyses or from written texts themselves. The earliest inks from all civilizations are believed to have been made with lampblack, a kind of soot, easily collected as a by-product of fire. Ink was used in Ancient Egypt for writing and drawing on papyrus from at least the 26th century BC. Egyptian red and black inks included iron and ocher as pigments, in addition to phosphate, sulfate, chloride, and carboxylate ions, with lead also used as a drier. The earliest Chinese inks may date to four millennia ago, to the Chinese Neolithic Period. These included plant, animal, and mineral inks, based on such materials as graphite; these were ground with water and applied with ink brushes. Direct evidence for the earliest Chinese inks, similar to modern inksticks, is found around 256 BC, at the end of the Warring States period; these were produced from soot and animal glue. The preferred inks for drawing or painting on paper or silk are produced from the resin of pine trees between 50 and 100 years old. The Chinese inkstick is produced with fish glue, whereas Japanese glue (膠 nikawa) is from cow or stag. India ink was invented in China, though materials were often traded from India, hence the name. The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry. To use the dry mixture, a wet brush would be applied until it reliquified. The manufacture of India ink was well-established by the Cao Wei dynasty (220–265 AD). Indian documents written in Kharosthi with ink have been unearthed in Xinjiang. The practice of writing with ink and a sharp pointed needle was common in early South India. Several Buddhist and Jain sutras in India were compiled in ink. Cephalopod ink, known as sepia, turns from dark blue-black to brown on drying, and was used as an ink in the Graeco-Roman period and subsequently. Black atramentum was also used in ancient Rome; in an article for The Christian Science Monitor, Sharon J. Huntington describes these other historical inks: About 1,600 years ago, a popular ink recipe was created. The recipe was used for centuries. Iron salts, such as ferrous sulfate (made by treating iron with sulfuric acid), were mixed with tannin from gallnuts (they grow on trees) and a thickener. When first put to paper, this ink is bluish-black. Over time it fades to a dull brown. Scribes in medieval Europe (about AD 800 to 1500) wrote principally on parchment or vellum. One 12th-century ink recipe called for hawthorn branches to be cut in the spring and left to dry.
Then the bark was pounded from the branches and soaked in water for eight days. The water was boiled until it thickened and turned black. Wine was added during boiling. The ink was poured into special bags and hung in the sun. Once dried, the mixture was mixed with wine and iron salt over a fire to make the final ink. The reservoir pen, which may have been the first fountain pen, dates back to 953, when Ma'ād al-Mu'izz, the caliph of Egypt, demanded a pen that would not stain his hands or clothes, and was provided with a pen that held ink in a reservoir. In the 15th century, a new type of ink had to be developed in Europe for the printing press by Johannes Gutenberg. According to Martyn Lyons in his book Books: A Living History, Gutenberg's dye was indelible, oil-based, and made from the soot of lamps (lamp-black) mixed with varnish and egg white. Two types of ink were prevalent at the time: the Greek and Roman writing ink (soot, glue, and water) and the 12th-century variety composed of ferrous sulfate, gall, gum, and water. Neither of these handwriting inks could adhere to printing surfaces without creating blurs. Eventually an oily, varnish-like ink made of soot, turpentine, and walnut oil was created specifically for the printing press. Types Ink formulas vary, but commonly involve two components: colorants and vehicles (binders). Inks generally fall into four classes: aqueous, liquid, paste, and powder. Colorants Pigments Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes. Pigments are solid, opaque particles suspended in ink to provide color. Pigment molecules typically link together in crystalline structures that are 0.1–2 μm in size and comprise 5–30 percent of the ink volume. Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment. Solvent-based inks are widely used for high-speed printing and applications that require quick drying times, and the inclusion of TiO2 powder provides superior coverage and vibrant colors. Dyes Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, potentially allowing the ink to bleed at the edges of an image. To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with quick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks.
An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes. Dye-based inks can be used for anti-counterfeit purposes and can be found in some gel inks, fountain pen inks, and inks used for paper currency. These inks react with cellulose to bring about a permanent color change. Dye-based inks are also used to color hair. Health and environmental aspects There is a misconception that ink is non-toxic even if swallowed. Once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen, can be harmful. Though ink does not easily cause death, repeated skin contact or ingestion can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as p-Anisidine, which helps create some inks' color and shine. The three main environmental issues with ink are heavy metals, non-renewable oils, and volatile organic compounds. Some regulatory bodies have set standards for the amount of heavy metals in ink. In recent years there has been a trend toward vegetable oils rather than petroleum oils, in response to a demand for better environmental sustainability performance. Ink uses up non-renewable oils and metals, which has a negative impact on the environment. Carbon Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps carbon particles in suspension and adhered to paper. Carbon particles do not fade over time even when bleached or when in sunlight. One benefit is that carbon ink does not harm paper. Over time, the ink is chemically stable and therefore does not threaten the paper's strength. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink tends to smudge in humid environments and can be washed off surfaces. The best method of preserving a document written in carbon ink is to store it in a dry environment (Barrow 1972). Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns. Iron gall (common ink) Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages paper over time (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). The rate at which the writing fades is based on several factors, such as proportions of ink ingredients, amount deposited on the paper, and paper composition (Barrow 1972:16). Corrosion is caused by acid-catalyzed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389). Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink. Deterioration can only be stopped or slowed.
Some think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium phytate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper. Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or formation of plaque on the surface of the ink (Reibland & de Groot 1999). Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate at which formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron(II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness. Indelible ink Indelible means "unremovable". Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, Malaysia and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. Election ink based on silver nitrate was first applied in the 1962 Indian general election, after being developed at the National Physical Laboratory of India. The election commission in India has used indelible ink for many elections. Indonesia used it in its election in 2014. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible, as it can be used to commit electoral fraud by marking opponent party members before they have a chance to cast their votes. There are also reports of "indelible" ink washing off voters' fingers in Afghanistan.
Technology
Artist's tools
null
15317
https://en.wikipedia.org/wiki/IPv4
IPv4
Internet Protocol version 4 (IPv4) is the first version of the Internet Protocol (IP) as a standalone specification. It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor. IPv4 uses a 32-bit address space which provides 4,294,967,296 (2^32) unique addresses, but large blocks are reserved for special networking purposes. History Earlier versions of TCP/IP were a combined specification through TCP/IPv3. With IPv4, the Internet Protocol became a separate specification. Internet Protocol version 4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition of January 1980 (RFC 760). In March 1982, the US Department of Defense decided on the Internet Protocol Suite (TCP/IP) as the standard for all military computer networking. Purpose The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. In essence it forms the Internet. It uses a logical addressing system and performs routing, which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network. IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP). Addressing IPv4 uses 32-bit addresses, which limits the address space to 4,294,967,296 (2^32) addresses. IPv4 reserves special address blocks for private networks (2^24 + 2^20 + 2^16 ≈ 18 million addresses) and multicast addresses (2^28 ≈ 268 million addresses). Address representations IPv4 addresses may be represented in any notation expressing a 32-bit integer value. They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods. For example, the quad-dotted IP address 172.16.254.1 represents the 32-bit decimal number 2886794753, which in hexadecimal format is 0xAC10FE01. CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (/) and the count of leading consecutive 1 bits in the routing prefix (subnet mask). Other address representations were in common use when classful networking was practiced. For example, the loopback address 127.0.0.1 was commonly written as 127.1, given that it belongs to a class-A network with eight bits for the network mask and 24 bits for the host number. When fewer than four numbers were specified in the address in dotted notation, the last value was treated as an integer of as many bytes as are required to fill out the address to four octets. Thus, the address 127.65530 is equivalent to 127.0.255.250.
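As a concrete illustration of the dot-decimal arithmetic described above, here is a short Python sketch using the standard library's ipaddress module (the manual loop shows the same byte-by-byte conversion):

```python
# Convert between IPv4 dot-decimal notation and its 32-bit integer value.
import ipaddress

addr = ipaddress.IPv4Address("172.16.254.1")
n = int(addr)
print(n)                                   # 2886794753
print(hex(n))                              # 0xac10fe01
print(ipaddress.IPv4Address(2886794753))   # 172.16.254.1

# The same conversion by hand: each octet is one byte of the 32-bit value.
octets = [172, 16, 254, 1]
value = 0
for octet in octets:
    value = (value << 8) | octet
assert value == n
```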
Allocation In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. The latter was also called the rest field. This structure permitted a maximum of 256 network identifiers, which was quickly found to be inadequate. To overcome this limit, the most-significant address octet was redefined in 1981 to create network classes, in a system which later became known as classful networking. The revised system defined five classes. Classes A, B, and C had different bit lengths for network identification. The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications. Dividing existing classful networks into subnets began in 1985 with the publication of RFC 950. This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in 1987. In 1993, based on this work, Classless Inter-Domain Routing (CIDR) was introduced, which expressed the number of bits (from the most significant) as, for instance, /24, and the class-based scheme was dubbed classful, by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments. Special-use addresses The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. Notably these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks. {|class="wikitable sortable" |+Special address blocks !Address block !Address range !Number of addresses !Scope !Description |- |0.0.0.0/8 |0.0.0.0–0.255.255.255 |align=right|16,777,216 |Software |Current (local, "this") network |- |10.0.0.0/8 |10.0.0.0–10.255.255.255 |align=right|16,777,216 |Private network |Used for local communications within a private network |- |100.64.0.0/10 |100.64.0.0–100.127.255.255 |align=right|4,194,304 |Private network |Shared address space for communications between a service provider and its subscribers when using a carrier-grade NAT |- |127.0.0.0/8 |127.0.0.0–127.255.255.255 |align=right|16,777,216 |Host |Used for loopback addresses to the local host |- |169.254.0.0/16 |169.254.0.0–169.254.255.255 |align=right|65,536 |Subnet |Used for link-local addresses between two hosts on a single link when no IP address is otherwise specified, such as would have normally been retrieved from a DHCP server |- |172.16.0.0/12 |172.16.0.0–172.31.255.255 |align=right|1,048,576 |Private network |Used for local communications within a private network |- |192.0.0.0/24 |192.0.0.0–192.0.0.255 |align=right|256 |Private network |IETF Protocol Assignments, DS-Lite (/29) |- |192.0.2.0/24 |192.0.2.0–192.0.2.255 |align=right|256 |Documentation |Assigned as TEST-NET-1, documentation and examples |- |192.88.99.0/24 |192.88.99.0–192.88.99.255 |align=right|256 |Internet |Reserved. Formerly used for IPv6 to IPv4 relay (included IPv6 address block 2002::/16).
|- |192.168.0.0/16 |192.168.0.0–192.168.255.255 |align=right|65,536 |Private network |Used for local communications within a private network |- |198.18.0.0/15 |198.18.0.0–198.19.255.255 |align=right|131,072 |Private network |Used for benchmark testing of inter-network communications between two separate subnets |- |198.51.100.0/24 |198.51.100.0–198.51.100.255 |align=right|256 |Documentation |Assigned as TEST-NET-2, documentation and examples |- |203.0.113.0/24 |203.0.113.0–203.0.113.255 |align=right|256 |Documentation |Assigned as TEST-NET-3, documentation and examples |- |224.0.0.0/4 |224.0.0.0–239.255.255.255 |align=right|268,435,456 |Internet |In use for multicast (former Class D network) |- |233.252.0.0/24 |233.252.0.0–233.252.0.255 |align=right|256 |Documentation |Assigned as MCAST-TEST-NET, documentation and examples (This is part of the above multicast space.) |- |240.0.0.0/4 |240.0.0.0–255.255.255.254 |align=right|268,435,455 |Internet |Reserved for future use (former Class E network) |- |255.255.255.255/32 |255.255.255.255 |align=right|1 |Subnet |Reserved for the "limited broadcast" destination address |} Private networks Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets with addresses in these ranges are not routable on the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose. {|class=wikitable |+Reserved private IPv4 network ranges |- !Name!!CIDR block!!Address range!!Number of addresses!!Classful description |- |24-bit block||10.0.0.0/8||10.0.0.0 – 10.255.255.255||align=right|16,777,216||Single Class A |- |20-bit block||172.16.0.0/12||172.16.0.0 – 172.31.255.255||align=right|1,048,576||Contiguous range of 16 Class B blocks |- |16-bit block||192.168.0.0/16||192.168.0.0 – 192.168.255.255||align=right|65,536||Contiguous range of 256 Class C blocks |} Since two private networks, e.g., two branch offices, cannot directly interoperate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or an IP tunnel, which encapsulates packets, including their headers containing the private addresses, in a protocol layer during transmission across the public network. Additionally, encapsulated packets may be encrypted for transmission across public networks to secure the data. Link-local addressing RFC 3927 defines the special address block 169.254.0.0/16 for link-local addressing. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. These addresses are not routable. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods. When the address block was reserved, no standards existed for address autoconfiguration. Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled Dynamic Configuration of IPv4 Link-Local Addresses.
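A brief Python sketch using the standard library's ipaddress module can check some of the properties tabulated above (the sample addresses are arbitrary):

```python
# Check special-purpose properties of a few addresses, and a block size.
import ipaddress

for text in ["10.1.2.3", "172.16.254.1", "192.168.0.10", "169.254.7.7", "8.8.8.8"]:
    a = ipaddress.ip_address(text)
    print(text, "private:", a.is_private, "link-local:", a.is_link_local)

# Block sizes follow directly from the prefix length: a /n block holds 2**(32-n) addresses.
net = ipaddress.ip_network("198.18.0.0/15")
print(net.num_addresses)   # 131072
```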
Loopback The class A network 127.0.0.0 (classless network 127.0.0.0/8) is reserved for loopback. IP packets whose source addresses belong to this network should never appear outside a host. Packets received on a non-loopback interface with a loopback source or destination address must be dropped. First and last subnet addresses The first address in a subnet is used to identify the subnet itself. In this address all host bits are 0. To avoid ambiguity in representation, this address is reserved. The last address has all host bits set to 1. It is used as a local broadcast address for sending messages to all devices on the subnet simultaneously. For networks of size /24 or larger, the broadcast address always ends in 255. For example, in the subnet 192.168.5.0/24 (subnet mask 255.255.255.0) the identifier 192.168.5.0 is used to refer to the entire subnet. The broadcast address of the network is 192.168.5.255. However, this does not mean that every address ending in 0 or 255 cannot be used as a host address. For example, in the subnet 192.168.4.0/22, which is equivalent to the address range 192.168.4.0–192.168.7.255, the broadcast address is 192.168.7.255. One can use the following addresses for hosts, even though they end with 255: 192.168.4.255, 192.168.5.255, etc. Also, 192.168.4.0 is the network identifier and must not be assigned to an interface. The addresses 192.168.5.0, 192.168.6.0, etc., may be assigned, despite ending with 0. In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones. In networks smaller than /24, broadcast addresses do not necessarily end with 255. For example, the CIDR subnet 203.0.113.16/28 has the broadcast address 203.0.113.31. As a special case, a /31 network has capacity for just two hosts. These networks are typically used for point-to-point connections. There is no network identifier or broadcast address for these networks.
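The first- and last-address rules above can be verified with Python's ipaddress module; a short sketch using the same example subnets:

```python
# Network identifier and broadcast address of the example subnets.
import ipaddress

net = ipaddress.ip_network("192.168.4.0/22")
print(net.network_address)     # 192.168.4.0   -- the subnet identifier (host bits all 0)
print(net.broadcast_address)   # 192.168.7.255 -- the local broadcast (host bits all 1)

# Inside this /22, addresses ending in .255 or .0 can still be ordinary hosts:
assert ipaddress.ip_address("192.168.4.255") in net
assert ipaddress.ip_address("192.168.5.0") in net

small = ipaddress.ip_network("203.0.113.16/28")
print(small.broadcast_address)  # 203.0.113.31 -- does not end in 255
```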
Address resolution Hosts on the Internet are usually known by names, e.g., www.example.com, not primarily by their IP address, which is used for routing and network interface identification. The use of domain names requires translating, called resolving, them to addresses and vice versa. This is analogous to looking up a phone number in a phone book using the recipient's name. The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers. Unnumbered interface An unnumbered point-to-point (PtP) link, also called a transit link, is a link that does not have an IP network or subnet number associated with it, but still has an IP address. The technique was first introduced in 1993; Phil Karn from Qualcomm is credited as the original designer. The purpose of a transit link is to route datagrams. Transit links are used to free IP addresses from a scarce IP address space or to reduce the management of assigning IP addresses and configuring interfaces. Previously, every link needed to dedicate a /31 or /30 subnet, using 2 or 4 IP addresses per point-to-point link. When a link is unnumbered, a router-id is used, a single IP address borrowed from a defined (normally a loopback) interface. The same router-id can be used on multiple interfaces. One of the disadvantages of unnumbered interfaces is that it is harder to do remote testing and management. Address space exhaustion In the 1980s, it became apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. In addition, high-speed Internet access was based on always-on devices. The threat of exhaustion motivated the introduction of a number of remedial technologies: Classless Inter-Domain Routing (CIDR) for smaller ISP allocations; unnumbered interfaces, which removed the need for addresses on transit links; and network address translation (NAT), which conserved addresses at the cost of the end-to-end principle. By the mid-1990s, NAT was used pervasively in network access provider systems, along with strict usage-based allocation policies at the regional and local Internet registries. The primary address pool of the Internet, maintained by IANA, was exhausted on 3 February 2011, when the last five blocks were allocated to the five RIRs. APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy. The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 2^64 host addresses to end users. However, IPv4 is not directly interoperable with IPv6, so that IPv4-only hosts cannot directly communicate with IPv6-only hosts. With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Completion of IPv6 deployment is expected to take considerable time, so that intermediate transition technologies are necessary to permit hosts to participate in the Internet using both versions of the protocol. Packet structure An IP packet consists of a header section and a data section. An IP packet has no data checksum or any other footer after the data section. Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors. Many transport-layer protocols carried by IP also have their own error checking. Header The IPv4 packet header consists of 14 fields, of which 13 are required. The 14th field is optional and aptly named: options. The fields in the header are packed with the most significant byte first (network byte order), and, in the following discussion, the most significant bits are considered to come first (MSB 0 bit numbering). The most significant bit is numbered 0, so the version field is actually found in the four most significant bits of the first byte, for example. Some of the common payload protocols include: {|class=wikitable |- !Protocol Number!!Protocol Name!!Abbreviation |- |1||Internet Control Message Protocol||ICMP |- |2||Internet Group Management Protocol||IGMP |- |6||Transmission Control Protocol||TCP |- |17||User Datagram Protocol||UDP |- |41||IPv6 encapsulation||ENCAP |- |89||Open Shortest Path First||OSPF |- |132||Stream Control Transmission Protocol||SCTP |}
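To make the header layout concrete, here is an illustrative Python sketch that packs and re-parses a minimal 20-byte header in network byte order; all field values are invented for the example, and the checksum is left at zero rather than computed:

```python
# Pack and re-parse a minimal 20-byte IPv4 header (no options).
import struct

version_ihl = (4 << 4) | 5          # version 4, IHL 5 (5 x 32-bit words = 20 bytes)
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,                    # Version + IHL
    0,                              # DSCP + ECN
    20,                             # Total Length (header only, no payload)
    0x1234,                         # Identification
    0,                              # Flags + Fragment Offset
    64,                             # Time To Live
    6,                              # Protocol (6 = TCP)
    0,                              # Header Checksum (not computed here)
    bytes([192, 0, 2, 1]),          # Source address (TEST-NET-1)
    bytes([198, 51, 100, 7]),       # Destination address (TEST-NET-2)
)

first_byte = header[0]
print("version:", first_byte >> 4, "ihl:", first_byte & 0x0F)  # version: 4 ihl: 5
print("protocol:", header[9])                                  # 6
```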
Fragmentation and reassembly The Internet Protocol enables traffic between networks. The design accommodates networks of diverse physical nature; it is independent of the underlying transmission technology used in the link layer. Networks with different hardware usually vary not only in transmission speed, but also in the maximum transmission unit (MTU). When one network wants to transmit datagrams to a network with a smaller MTU, it may fragment its datagrams. In IPv4, this function was placed at the Internet Layer and is performed in IPv4 routers, limiting exposure to these issues by hosts. In contrast, IPv6, the next generation of the Internet Protocol, does not allow routers to perform fragmentation; hosts must perform Path MTU Discovery before sending datagrams. Fragmentation When a router receives a packet, it examines the destination address and determines the outgoing interface to use and that interface's MTU. If the packet size is bigger than the MTU, and the Do not Fragment (DF) bit in the packet's header is set to 0, then the router may fragment the packet. The router divides the packet into fragments. The maximum size of each fragment is the outgoing MTU minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment into its own packet, each fragment packet having the following changes: The total length field is the fragment size. The more fragments (MF) flag is set for all fragments except the last one, in which it is set to 0. The fragment offset field is set, based on the offset of the fragment in the original data payload. This is measured in units of 8-byte blocks. The header checksum field is recomputed. For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment offsets would be multiples of 185 (0, 185, 370, 555, 740, etc.), since each fragment carries at most 1,480 bytes of data and 1,480/8 = 185. It is possible that a packet is fragmented at one router, and that the fragments are further fragmented at another router. For example, a packet of 4,520 bytes, including a 20-byte IP header, is fragmented into two packets on a link with an MTU of 2,500 bytes: The total data size is preserved: 2,480 bytes + 2,020 bytes = 4,500 bytes. The offsets are 0 and 310 (that is, 2,480/8 = 310 blocks). When forwarded to a link with an MTU of 1,500 bytes, each fragment is fragmented into two fragments: Again, the data size is preserved: 1,480 + 1,000 = 2,480, and 1,480 + 540 = 2,020. Also in this case, the More Fragments bit remains 1 for all fragments derived from a fragment that had it set; the MF bit is set to 0 only in the fragment that carries the end of the original datagram. And of course, the Identification field continues to have the same value in all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows they have all started from the same packet. The last offset and last data size are used to calculate the total data size: 495 × 8 + 540 = 3,960 + 540 = 4,500 bytes.
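The offset arithmetic in this example can be reproduced with a short Python sketch (an illustration of the calculation only, not of real router behavior):

```python
# Split a datagram's payload for a given MTU, with offsets in 8-byte blocks.
def fragment_offsets(payload_len, mtu, header_len=20):
    max_data = ((mtu - header_len) // 8) * 8   # fragment data must be a multiple of 8
    fragments, offset = [], 0
    while payload_len > 0:
        size = min(max_data, payload_len)
        more = payload_len > size              # MF flag: more fragments follow
        fragments.append((offset // 8, size, more))
        offset += size
        payload_len -= size
    return fragments

# The 4,520-byte packet (4,500-byte payload) from the example, first on MTU 2,500:
stage1 = fragment_offsets(4500, 2500)
print(stage1)   # [(0, 2480, True), (310, 2020, False)]

# Re-fragmenting each piece on a 1,500-byte MTU link, preserving offsets and MF:
def refragment(fragments, mtu, header_len=20):
    result = []
    for offset_blocks, size, more in fragments:
        for off, sz, m in fragment_offsets(size, mtu, header_len):
            result.append((offset_blocks + off, sz, m or more))
    return result

print(refragment(stage1, 1500))
# [(0, 1480, True), (185, 1000, True), (310, 1480, True), (495, 540, False)]
# Total data size from the last fragment: 495 * 8 + 540 = 4500 bytes.
```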
Reassembly A receiver knows that a packet is a fragment if at least one of the following conditions is true: The flag more fragments is set, which is true for all fragments except the last. The field fragment offset is nonzero, which is true for all fragments except the first. The receiver identifies matching fragments using the source and destination addresses, the protocol ID, and the identification field. The receiver reassembles the data from fragments with the same ID using both the fragment offset and the more fragments flag. When the receiver receives the last fragment, which has the more fragments flag set to 0, it can calculate the size of the original data payload by multiplying the last fragment's offset by eight and adding the last fragment's data size. In the given example, this calculation was 495 × 8 + 540 = 4,500 bytes. When the receiver has all fragments, they can be reassembled in the correct sequence according to the offsets to form the original datagram. Assistive protocols IP addresses are not tied in any permanent manner to networking hardware and, indeed, in modern operating systems, a network interface can have multiple IP addresses. In order to properly deliver an IP packet to the destination host on a link, hosts and routers need additional mechanisms to make an association between the hardware address of network interfaces and IP addresses. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. In addition, the reverse correlation is often necessary. For example, unless an address is preconfigured by an administrator, when an IP host is booted or connected to a network it needs to determine its IP address. Protocols for such reverse correlations include Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP.
Technology
Internet
null
15318
https://en.wikipedia.org/wiki/IPv6
IPv6
Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017. Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the IETF had formalized the successor protocol. IPv6 uses 128-bit addresses, theoretically allowing 2^128, or approximately 3.4×10^38, total addresses. The actual number is slightly smaller, as multiple ranges are reserved for special usage or completely excluded from general use. The two protocols are not designed to be interoperable, and thus direct communication between them is impossible, complicating the move to IPv6. However, several transition mechanisms have been devised to rectify this. IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol. IPv6 addresses are represented as eight groups of four hexadecimal digits each, separated by colons. The full representation may be shortened; for example, 2001:0db8:0000:0000:0000:8a2e:0370:7334 becomes 2001:db8::8a2e:370:7334. Main features IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4). In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address configuration, network renumbering, and router announcements when changing network connectivity providers. It simplifies packet processing in routers by placing the responsibility for packet fragmentation in the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits. The addressing architecture of IPv6 is defined in RFC 4291 and allows three different types of transmission: unicast, anycast and multicast. Motivation and origin IPv4 address exhaustion Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. IPv4 includes an addressing system that uses numerical identifiers consisting of 32 bits. These addresses are typically displayed in dot-decimal notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number.
Thus, IPv4 provides an addressing capability of 2^32, or approximately 4.3 billion, addresses. Address exhaustion was not initially a concern in IPv4 as this version was originally presumed to be a test of DARPA's networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed. The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1,024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2015, all of Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), Latin America and Caribbean Network Information Centre (LACNIC), and American Registry for Internet Numbers (ARIN) have reached this stage. This leaves African Network Information Center (AFRINIC) as the sole regional internet registry that is still using the normal protocol for distributing IPv4 addresses. As of November 2018, AFRINIC's minimum allocation is /22, or 1,024 IPv4 addresses. A LIR may receive additional allocation when about 80% of all the address space has been utilized. RIPE NCC announced that it had fully run out of IPv4 addresses on 25 November 2019, and called for greater progress on the adoption of IPv6. Comparison with IPv4 On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax. Larger address space The main advantage of IPv6 over IPv4 is its larger address space. The size of an IPv6 address is 128 bits, compared to 32 bits in IPv4. The address space therefore has 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (340 undecillion, approximately 3.4×10^38). Some blocks of this space and some specific addresses are reserved for special uses. While this address space is very large, it was not the intent of the designers of IPv6 to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, about four billion times the size of the entire IPv4 address space.
Thus, actual address space utilization will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation. Multicasting Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional (although commonly implemented) feature. IPv6 multicast addressing has features and protocols in common with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result is achieved by sending a packet to the link-local all nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions. In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block, the least significant bits of the address, or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications. Stateless address autoconfiguration (SLAAC) IPv6 hosts configure themselves automatically. Every interface has a self-generated link-local address and, when connected to a network, conflict resolution is performed and routers provide network prefixes via router advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol. When necessary, hosts may configure additional stateful addresses via Dynamic Host Configuration Protocol version 6 (DHCPv6) or static addresses manually. Like IPv4, IPv6 supports globally unique IP addresses. The design of IPv6 intended to re-emphasize the end-to-end principle of network design that was originally conceived during the establishment of the early Internet by rendering network address translation obsolete. Therefore, every device on the network is globally addressable directly from any other device. A stable, unique, globally addressable IP address would facilitate tracking a device across networks. Therefore, such addresses are a particular privacy concern for mobile devices, such as laptops and cell phones. To address these privacy concerns, the SLAAC protocol includes what are typically called "privacy addresses" or, more correctly, "temporary addresses". Temporary addresses are random and unstable. A typical consumer device generates a new temporary address daily and will ignore traffic addressed to an old address after one week. Temporary addresses are used by default by Windows since XP SP1, macOS since Mac OS X 10.7, Android since 4.0, and iOS since version 4.3. Use of temporary addresses by Linux distributions varies.
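Temporary addresses pair an advertised 64-bit prefix with a randomly generated interface identifier. The following Python sketch is illustrative only (it is not the standardized privacy-extensions procedure, and the documentation prefix 2001:db8:1:2::/64 is an assumed example):

```python
# Append a random 64-bit interface identifier to a 64-bit prefix.
import ipaddress
import secrets

prefix = ipaddress.ip_network("2001:db8:1:2::/64")   # documentation prefix
iid = secrets.randbits(64)                           # random interface identifier
addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(addr)   # e.g. 2001:db8:1:2:9f3a:...; a different identifier each run
```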
Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host. The SLAAC address generation method is implementation-dependent. The IETF recommends that addresses be deterministic but semantically opaque. IPsec Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was a mandatory part of all IPv6 protocol implementations, and Internet Key Exchange (IKE) was recommended, but with RFC 6434 the inclusion of IPsec in IPv6 implementations was downgraded to a recommendation because it was considered impractical to require full IPsec implementation for all types of devices that may use IPv6. However, as of RFC 4301 IPv6 protocol implementations that do implement IPsec need to implement IKEv2 and need to support a minimum set of cryptographic algorithms. This requirement will help to make IPsec implementations more interoperable between devices from different vendors. The IPsec Authentication Header (AH) and the Encapsulating Security Payload header (ESP) are implemented as IPv6 extension headers. Simplified processing by routers The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions. The IPv6 packet header has simplified the process of packet forwarding by routers. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, processing of packets that only contain the base IPv6 header by routers may, in some cases, be more efficient, because less processing is required in routers due to the headers being aligned to match common word sizes. However, many devices implement IPv6 support in software (as opposed to hardware), resulting in poor packet-processing performance. Additionally, for many implementations, the use of Extension Headers causes packets to be processed by a router's CPU, leading to poor performance or even security issues. Moreover, an IPv6 header does not include a checksum. The IPv4 header checksum is calculated for the IPv4 header, and has to be recalculated by routers every time the time to live (called hop limit in the IPv6 protocol) is reduced by one. The absence of a checksum in the IPv6 header furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes. Integrity protection for the data that is encapsulated in the IPv6 packet is assumed to be assured by the link layer or by error detection in higher-layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) on the transport layer. Thus, while IPv4 allowed UDP datagram headers to have no checksum (indicated by 0 in the header field), IPv6 requires a checksum in UDP headers. IPv6 routers do not perform IP fragmentation. IPv6 hosts are required to do one of the following: perform Path MTU Discovery, perform end-to-end fragmentation, or send packets no larger than the default maximum transmission unit (MTU), which is 1280 octets. Mobility Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6.
IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering. Extension headers The IPv6 packet header has a minimum size of 40 octets (320 bits). Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, RFC 7872 notes that some network operators drop IPv6 packets with extension headers when they traverse transit autonomous systems. Jumbograms IPv4 limits packets to 65,535 (2^16 − 1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2^32 − 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option extension header. IPv6 packets An IPv6 packet has two parts: a header and payload. The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features. The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic class, hop count, and the type of the optional extension or payload which follows the header. This Next Header field tells the receiver how to interpret the data which follows the header. If the packet contains options, this field contains the option type of the next option. The "Next Header" field of the last option points to the upper-layer protocol that is carried in the packet's payload. The current use of the IPv6 Traffic Class field divides this between a 6-bit Differentiated Services Code Point and a 2-bit Explicit Congestion Notification field. Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework. Without special options, a payload must be less than 64 kB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB. Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented. See IPv6 packet fragmentation. Addressing IPv6 addresses have 128 bits. The design of the IPv6 address space implements a different design philosophy than in IPv4, in which subnetting was used to improve the efficiency of utilization of the small address space. In IPv6, the address space is deemed large enough for the foreseeable future, and a local area subnet always uses 64 bits for the host portion of the address, designated as the interface identifier, while the most-significant 64 bits are used as the routing prefix. While the myth has existed regarding IPv6 subnets being impossible to scan, RFC 7707 notes that patterns resulting from some IPv6 address configuration techniques and algorithms allow address scanning in many real-world scenarios. Address representation The 128 bits of an IPv6 address are represented in 8 groups of 16 bits each. Each group is written as four hexadecimal digits (sometimes called hextets or more formally hexadectets and informally a quibble or quad-nibble) and the groups are separated by colons (:). An example of this representation is 2001:0db8:0000:0000:0000:8a2e:0370:7334.
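The full text form and the canonical compressed form produced by the shortening rules described in the next paragraph can be checked with Python's standard ipaddress module; a short sketch:

```python
# Compressing and expanding IPv6 text representations with the standard library.
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:8a2e:0370:7334")
print(addr.compressed)   # 2001:db8::8a2e:370:7334 (leading zeros and zero run removed)
print(addr.exploded)     # 2001:0db8:0000:0000:0000:8a2e:0370:7334

print(ipaddress.IPv6Address("::1").exploded)   # the loopback address, fully expanded
```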
For convenience and clarity, the representation of an IPv6 address may be shortened with the following rules: One or more leading zeros from any group of hexadecimal digits are removed, which is usually done to all of the leading zeros. For example, the group 0db8 is converted to db8. The group 0000 is converted to 0. Consecutive sections of zeros are replaced with two colons (::). This may only be used once in an address, as multiple use would render the address indeterminate. A double colon should not be used to denote an omitted single section of zeros. An example of application of these rules: Initial address: 2001:0db8:0000:0000:0000:8a2e:0370:7334. After removing all leading zeros in each group: 2001:db8:0:0:0:8a2e:370:7334. After omitting consecutive sections of zeros: 2001:db8::8a2e:370:7334. The loopback address is defined as 0000:0000:0000:0000:0000:0000:0000:0001 and is abbreviated to ::1 by using both rules. As an IPv6 address may have more than one representation, the IETF has issued a proposed standard for representing them in text. Because IPv6 addresses contain colons, and URLs use colons to separate the host from the port number, an IPv6 address used as the host-part of a URL should be enclosed in square brackets, e.g. http://[2001:db8:4006:812::200e] or http://[2001:db8:4006:812::200e]:8080/path/page.html. Link-local address All interfaces of IPv6 hosts require a link-local address, which has the prefix fe80::/10. This prefix is followed by 54 bits that can be used for subnetting, although they are typically set to zeros, and a 64-bit interface identifier. The host can compute and assign the interface identifier by itself without the presence or cooperation of an external network component like a DHCP server, in a process called link-local address autoconfiguration. The lower 64 bits of the link-local address (the suffix) were originally derived from the MAC address of the underlying network interface card. As this method of assigning addresses would cause undesirable address changes when faulty network cards were replaced, and as it also suffered from a number of security and privacy issues, RFC 8064 has replaced the original MAC-based method with the hash-based method specified in RFC 7217. Address uniqueness and router solicitation IPv6 uses a new mechanism for mapping IP addresses to link-layer addresses (e.g. MAC addresses), because it does not support the broadcast addressing method, on which the functionality of the Address Resolution Protocol (ARP) in IPv4 is based. IPv6 implements the Neighbor Discovery Protocol (NDP, ND) in the link layer, which relies on ICMPv6 and multicast transmission. IPv6 hosts verify the uniqueness of their IPv6 addresses in a local area network (LAN) by sending a neighbor solicitation message asking for the link-layer address of the IP address. If any other host in the LAN is using that address, it responds. A host bringing up a new IPv6 interface first generates a unique link-local address using one of several mechanisms designed to generate a unique address. Should a non-unique address be detected, the host can try again with a newly generated address. Once a unique link-local address is established, the IPv6 host determines whether the LAN is connected on this link to any router interface that supports IPv6. It does so by sending out an ICMPv6 router solicitation message to the all-routers multicast group with its link-local address as source. If there is no answer after a predetermined number of attempts, the host concludes that no routers are connected.
If it does get a response, known as a router advertisement, from a router, the response includes the network configuration information to allow establishment of a globally unique address with an appropriate unicast network prefix. There are also two flag bits that tell the host whether it should use DHCP to get further information and addresses:

The Manage bit, which indicates whether or not the host should use DHCP to obtain additional addresses rather than rely on an auto-configured address from the router advertisement.
The Other bit, which indicates whether or not the host should obtain other information through DHCP.

The other information consists of one or more prefix information options for the subnets that the host is attached to, a lifetime for the prefix, and two flags:

On-link: If this flag is set, the host will treat all addresses on the specific subnet as being on-link and send packets directly to them instead of sending them to a router for the duration of the given lifetime.
Address: This flag tells the host to actually create a global address.

Global addressing

The assignment procedure for global addresses is similar to local-address construction. The prefix is supplied from router advertisements on the network. Multiple prefix announcements cause multiple addresses to be configured. Stateless address autoconfiguration (SLAAC) requires a /64 address block. Local Internet registries are assigned at least /32 blocks, which they divide among subordinate networks. The initial recommendation of RFC 3177 stated assignment of a /48 subnet to end-consumer sites. In RFC 6177 this recommendation was refined: the IETF "recommends giving home sites significantly more than a single /64, but does not recommend that every home site be given a /48 either". Blocks of /56s are specifically considered. It remains to be seen whether ISPs will honor this recommendation. For example, during initial trials, Comcast customers were given a single /64 network.

IPv6 in the Domain Name System

In the Domain Name System (DNS), hostnames are mapped to IPv6 addresses by AAAA ("quad-A") resource records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the 1-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address. This scheme is defined in RFC 3596. When a dual-stack host queries a DNS server to resolve a fully qualified domain name (FQDN), the DNS client of the host sends two DNS requests, one querying A records and the other querying AAAA records. The host operating system may be configured with a preference for the address selection rules of RFC 6724. An alternative record type was used in early DNS implementations for IPv6, designed to facilitate network renumbering: the A6 record for the forward lookup, along with a number of other innovations such as bit-string labels and DNAME records. This scheme is defined in RFC 2874 and its references (with further discussion of the pros and cons of both schemes in RFC 3364), but has been deprecated to experimental status (RFC 3363).

Transition mechanisms

IPv6 is not foreseen to supplant IPv4 instantaneously. Both protocols will continue to operate simultaneously for some time. Therefore, IPv6 transition mechanisms are needed to enable IPv6 hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach each other over IPv4 infrastructure. According to Silvia Hagen, a dual-stack implementation of IPv4 and IPv6 on devices is the easiest way to migrate to IPv6.
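A short sketch of both the forward (AAAA) and reverse (ip6.arpa) mappings described above, using only the Python standard library; the queried hostname is an assumption for illustration and the forward lookup requires network access:

```python
import ipaddress
import socket

# Forward resolution: ask specifically for AAAA (IPv6) results.
# "example.com" is just an illustrative hostname.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 443, family=socket.AF_INET6):
    print("AAAA result:", sockaddr[0])

# Reverse-resolution name: one hexadecimal nibble per label under ip6.arpa.
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```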
Many other transition mechanisms use tunneling to encapsulate IPv6 traffic within IPv4 networks and vice versa. This is an imperfect solution, which reduces the maximum transmission unit (MTU) of a link, therefore complicates Path MTU Discovery, and may increase latency.

Dual-stack IP implementation

Dual-stack IP implementations provide complete IPv4 and IPv6 protocol stacks in the operating system of a computer or network device on top of the common physical layer implementation, such as Ethernet. This permits dual-stack hosts to participate in IPv6 and IPv4 networks simultaneously. A device with a dual-stack implementation in the operating system has an IPv4 and an IPv6 address, and can communicate with other nodes in the LAN or the Internet using either IPv4 or IPv6. The DNS protocol is used by both IP protocols to resolve fully qualified domain names and IP addresses, but dual stack requires that the resolving DNS server can resolve both types of addresses. Such a dual-stack DNS server holds IPv4 addresses in the A records and IPv6 addresses in the AAAA records. Depending on the destination that is to be resolved, a DNS name server may return an IPv4 or IPv6 IP address, or both. A default address selection mechanism, or preferred protocol, needs to be configured either on hosts or the DNS server. The IETF has published Happy Eyeballs to assist dual-stack applications, so that they can connect using both IPv4 and IPv6, but prefer an IPv6 connection if it is available. However, dual stack also needs to be implemented on all routers between the host and the service for which the DNS server has returned an IPv6 address. Dual-stack clients should be configured to prefer IPv6 only if the network is able to forward IPv6 packets using the IPv6 versions of routing protocols. When dual-stack network protocols are in place the application layer can be migrated to IPv6. While dual stack is supported by major operating system and network device vendors, some legacy networking hardware and servers do not support IPv6.

ISP customers with public-facing IPv6

Internet service providers (ISPs) are increasingly providing their business and private customers with public-facing IPv6 global unicast addresses. If IPv4 is still used in the local area network (LAN), however, and the ISP can only provide one public-facing IPv6 address, the IPv4 LAN addresses are translated into the public-facing IPv6 address using NAT64, a network address translation (NAT) mechanism. Some ISPs cannot provide their customers with public-facing IPv4 and IPv6 addresses, thus supporting dual-stack networking, because they have exhausted their globally routable IPv4 address pool. Meanwhile, ISP customers are still trying to reach IPv4 web servers and other destinations.

A significant percentage of ISPs in all regional Internet registry (RIR) zones have obtained IPv6 address space. This includes many of the world's major ISPs and mobile network operators, such as Verizon Wireless, StarHub Cable, Chubu Telecommunications, Kabel Deutschland, Swisscom, T-Mobile, Internode and Telefónica. While some ISPs still allocate customers only IPv4 addresses, many ISPs allocate their customers only an IPv6 address or dual-stack IPv4 and IPv6 addresses. ISPs report the share of IPv6 traffic from customers over their network to be anything between 20% and 40%, but by mid-2017 IPv6 traffic still only accounted for a fraction of total traffic at several large Internet exchange points (IXPs). AMS-IX reported it to be 2% and SeattleIX reported 7%.
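A much-simplified sketch of the preference idea behind Happy Eyeballs mentioned above (the real algorithm races connection attempts with staggered timers; this sequential version only illustrates "prefer IPv6, fall back to IPv4"):

```python
import socket

def connect_prefer_ipv6(host, port, timeout=2.0):
    """Try IPv6 destinations first, then IPv4 (simplified Happy Eyeballs)."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort so that AF_INET6 results come before AF_INET ones.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    for family, socktype, proto, _, sockaddr in infos:
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError:
            continue  # fall back to the next candidate address
    raise OSError(f"could not connect to {host}:{port}")
```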
A 2017 survey found that many DSL customers that were served by a dual-stack ISP did not request DNS servers to resolve fully qualified domain names into IPv6 addresses. The survey also found that the majority of traffic from IPv6-ready web-server resources was still requested and served over IPv4, mostly due to ISP customers that did not use the dual-stack facility provided by their ISP and to a lesser extent due to customers of IPv4-only ISPs.

Tunneling

The technical basis for tunneling, or encapsulating IPv6 packets in IPv4 packets, is outlined in RFC 4213. When the Internet backbone was IPv4-only, one of the frequently used tunneling protocols was 6to4. Teredo tunneling was also frequently used for integrating IPv6 LANs with the IPv4 Internet backbone. Teredo is outlined in RFC 4380 and allows IPv6 local area networks to tunnel over IPv4 networks, by encapsulating IPv6 packets within UDP. The Teredo relay is an IPv6 router that mediates between a Teredo server and the native IPv6 network. It was expected that 6to4 and Teredo would be widely deployed until ISP networks would switch to native IPv6, but by 2014 Google statistics showed that the use of both mechanisms had dropped to almost zero.

IPv4-mapped IPv6 addresses

Hybrid dual-stack IPv6/IPv4 implementations recognize a special class of addresses, the IPv4-mapped IPv6 addresses. These addresses are typically written with a 96-bit prefix in the standard IPv6 format, and the remaining 32 bits are written in the customary dot-decimal notation of IPv4. Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated. Because of the significant internal differences between IPv4 and IPv6 protocol stacks, some of the lower-level functionality available to programmers in the IPv6 stack does not work the same when used with IPv4-mapped addresses. Some common IPv6 stacks do not implement the IPv4-mapped address feature, either because the IPv6 and IPv4 stacks are separate implementations (e.g., Microsoft Windows 2000, XP, and Server 2003), or because of security concerns (OpenBSD). On these operating systems, a program must open a separate socket for each IP protocol it uses. On some systems, e.g., the Linux kernel, NetBSD, and FreeBSD, this feature is controlled by the socket option IPV6_V6ONLY. Some IPv6 stack implementors have therefore recommended disabling IPv4-mapped addresses and instead using a dual-stack network where supporting both IPv4 and IPv6 is necessary. The address prefix 64:ff9b::/96 is a class of IPv4-embedded IPv6 addresses for use in NAT64 transition methods. For example, 64:ff9b::192.0.2.128 represents the IPv4 address 192.0.2.128.

Security

A number of security implications may arise from the use of IPv6. Some of them may be related to the IPv6 protocols themselves, while others may be related to implementation flaws.

Shadow networks

The addition of nodes having IPv6 enabled by default by the software manufacturer may result in the inadvertent creation of shadow networks, causing IPv6 traffic to flow into networks having only IPv4 security management in place. This may also occur with operating system upgrades, when the newer operating system enables IPv6 by default, while the older one did not. Failing to update the security infrastructure to accommodate IPv6 can lead to IPv6 traffic bypassing it. Shadow networks have occurred on business networks in which enterprises were replacing Windows XP systems that did not have an IPv6 stack enabled by default with Windows 7 systems that do.
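Returning to the IPv4-mapped format and the IPV6_V6ONLY socket option described above, a small Python sketch (the address is the documentation example from the text):

```python
import ipaddress
import socket

# An IPv4-mapped IPv6 address exposes its embedded IPv4 address.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped.ipv4_mapped)  # 192.0.2.128

# On systems that support it, clearing IPV6_V6ONLY lets one IPv6 socket
# also accept IPv4 connections (peers then appear as ::ffff:a.b.c.d).
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 8080))
s.listen()
```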
IPv6 packet fragmentation

Research has shown that the use of fragmentation can be leveraged to evade network security controls, similar to IPv4. As a result, RFC 7112 requires that the first fragment of an IPv6 packet contains the entire IPv6 header chain, such that some very pathological fragmentation cases are forbidden. Additionally, as a result of research on the evasion of RA-Guard in RFC 7113, RFC 6980 has deprecated the use of fragmentation with Neighbor Discovery, and discouraged the use of fragmentation with Secure Neighbor Discovery (SEND).

Standardization through RFCs

Working-group proposals

Due to the anticipated global growth of the Internet, the Internet Engineering Task Force (IETF) in the early 1990s started an effort to develop a next generation IP protocol. By the beginning of 1992, several proposals appeared for an expanded Internet addressing system and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate with 15 engineers from diverse backgrounds for direction-setting and preliminary document review. The working-group members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).

The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.)

RFC standardization

The first RFC to standardize IPv6 was RFC 1883 in 1995, which was obsoleted by RFC 2460 in 1998. In July 2017 this RFC was superseded by RFC 8200, which elevated IPv6 to "Internet Standard" (the highest maturity level for IETF protocols).

Deployment

The 1993 introduction of Classless Inter-Domain Routing (CIDR) in the routing and IP address allocation for the Internet, and the extensive use of network address translation (NAT), delayed IPv4 address exhaustion to allow for IPv6 deployment, which began in the mid-2000s. Universities were among the early adopters of IPv6. Virginia Tech deployed IPv6 at a trial location in 2004 and later expanded IPv6 deployment across the campus network. By 2016, 82% of the traffic on their network used IPv6. Imperial College London began experimental IPv6 deployment in 2003 and by 2016 the IPv6 traffic on their networks averaged between 20% and 40%. A significant portion of this IPv6 traffic was generated through their high energy physics collaboration with CERN, which relies entirely on IPv6. The Domain Name System (DNS) has supported IPv6 since 2008. In the same year, IPv6 was first used in a major world event during the Beijing 2008 Summer Olympics. By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations.
Cellular telephone systems presented a large deployment field for Internet Protocol devices as mobile telephone service made the transition from 3G to 4G technologies, in which voice is provisioned as a voice over IP (VoIP) service that would leverage IPv6 enhancements. In 2009, the US cellular operator Verizon released technical specifications for devices to operate on its "next-generation" networks. The specification mandated IPv6 operation according to the 3GPP Release 8 Specifications (March 2009), and deprecated IPv4 as an optional capability.

The deployment of IPv6 in the Internet backbone continued. In 2018 only 25.3% of the about 54,000 autonomous systems advertised both IPv4 and IPv6 prefixes in the global Border Gateway Protocol (BGP) routing database. A further 243 networks advertised only an IPv6 prefix. Internet backbone transit networks offering IPv6 support existed in every country globally, except in parts of Africa, the Middle East and China. By mid-2018 some major European broadband ISPs had deployed IPv6 for the majority of their customers. Sky UK provided over 86% of its customers with IPv6, Deutsche Telekom had 56% deployment of IPv6, XS4ALL in the Netherlands had 73% deployment and in Belgium the broadband ISPs VOO and Telenet had 73% and 63% IPv6 deployment respectively. In the United States the broadband ISP Xfinity had an IPv6 deployment of about 66%. In 2018 Xfinity reported an estimated 36.1 million IPv6 users, while AT&T reported 22.3 million IPv6 users.

Peering issues

There is a peering dispute going on between Hurricane Electric and Cogent Communications on IPv6, with the two network providers refusing to peer.
Internet Protocol
The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet. IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information. IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, which was complemented by a connection-oriented service that became the basis for the Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP. The first major version of IP, Internet Protocol version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol version 6 (IPv6), which has been in increasing deployment on the public Internet since around 2006.

Function

The Internet Protocol is responsible for addressing host interfaces, encapsulating data into datagrams (including fragmentation and reassembly) and routing datagrams from a source host interface to a destination host interface across one or more IP networks. For these purposes, the Internet Protocol defines the format of packets and provides an addressing system. Each datagram has two components: a header and a payload. The IP header includes a source IP address, a destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation (see the sketch below). IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into subnets, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network.

Addressing methods

There are four principal addressing methods in the Internet Protocol: unicast, broadcast, multicast, and anycast.

Version history

In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication". The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among network nodes. A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known as the Department of Defense (DoD) Internet Model and Internet protocol suite, and informally as TCP/IP.
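A minimal illustrative sketch of the encapsulation described under Function above, assuming nothing beyond the Python standard library: it packs a bare-bones IPv4 header in front of a payload and computes the ones'-complement header checksum (field choices such as TTL 64 are arbitrary examples):

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum of all 16-bit words in the header."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def encapsulate_ipv4(src: str, dst: str, payload: bytes, proto: int = 17) -> bytes:
    ver_ihl = (4 << 4) | 5                     # version 4, 5 x 32-bit words
    total_len = 20 + len(payload)
    header = struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_len,
                         0, 0, 64, proto, 0,   # id, flags/frag, TTL, checksum=0
                         socket.inet_aton(src), socket.inet_aton(dst))
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:] + payload

packet = encapsulate_ipv4("192.0.2.1", "192.0.2.2", b"hello")
assert ipv4_checksum(packet[:20]) == 0  # a valid header re-checksums to zero
```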
The following Internet Experiment Note (IEN) documents describe the evolution of the Internet Protocol into the modern version of IPv4:

IEN 2 Comments on Internet Protocol and TCP (August 1977) describes the need to separate the TCP and Internet Protocol functionalities (which were previously combined). It proposes the first version of the IP header, using 0 for the version field.
IEN 26 A Proposed New Internet Header Format (February 1978) describes a version of the IP header that uses a 1-bit version field.
IEN 28 Draft Internetwork Protocol Description Version 2 (February 1978) describes IPv2.
IEN 41 Internetwork Protocol Specification Version 4 (June 1978) describes the first protocol to be called IPv4. The IP header is different from the modern IPv4 header.
IEN 44 Latest Header Formats (June 1978) describes another version of IPv4, also with a header different from the modern IPv4 header.
IEN 54 Internetwork Protocol Specification Version 4 (September 1978) is the first description of IPv4 using the header that would become standardized in 1980 as RFC 760.
IEN 80
IEN 111
IEN 123
IEN 128/RFC 760 (1980)

IP versions 1 to 3 were experimental versions, designed between 1973 and 1978. Versions 2 and 3 supported variable-length addresses ranging between 1 and 16 octets (between 8 and 128 bits). An early draft of version 4 supported variable-length addresses of up to 256 octets (up to 2048 bits) but this was later abandoned in favor of a fixed-size 32-bit address in the final version of IPv4. This remains the dominant internetworking protocol in use in the Internet Layer; the number 4 identifies the protocol version, carried in every IP datagram. IPv4 is defined in RFC 791 (1981).

Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol that was not adopted. The successor to IPv4 is IPv6. IPv6 was a result of several years of experimentation and dialog during which various protocol models were proposed, such as TP/IX (RFC 1475), PIP (RFC 1621) and TUBA (TCP and UDP with Bigger Addresses, RFC 1347). Its most prominent difference from version 4 is the size of the addresses. While IPv4 uses 32 bits for addressing, yielding c. 4.3 billion (4.3×10⁹) addresses, IPv6 uses 128-bit addresses providing c. 3.4×10³⁸ addresses. Although adoption of IPv6 has been slow, most countries in the world show significant adoption of IPv6, with over 41% of Google's traffic being carried over IPv6 connections. The assignment of the new protocol as IPv6 was uncertain until due diligence assured that IPv6 had not been used previously.

Other Internet Layer protocols have been assigned version numbers, such as 7 (IP/TX), 8 and 9 (historic). Notably, on April 1, 1994, the IETF published an April Fools' Day RFC about IPv9. IPv9 was also used in an alternate proposed address space expansion called TUBA. A 2004 Chinese proposal for an IPv9 protocol appears to be unrelated to all of these, and is not endorsed by the IETF.

Reliability

The design of the Internet protocol suite adheres to the end-to-end principle, a concept adapted from the CYCLADES project. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium and is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is located in the end nodes.
As a consequence of this design, the Internet Protocol only provides best-effort delivery and its service is characterized as unreliable. In network architectural parlance, it is a connectionless protocol, in contrast to connection-oriented communication. Various fault conditions may occur, such as data corruption, packet loss and duplication. Because routing is dynamic, meaning every packet is treated independently, and because the network maintains no state based on the path of prior packets, different packets may be routed to the same destination via different paths, resulting in out-of-order delivery to the receiver. All fault conditions in the network must be detected and compensated for by the participating end nodes. The upper layer protocols of the Internet protocol suite are responsible for resolving reliability issues. For example, a host may buffer network data to ensure correct ordering before the data is delivered to an application. IPv4 provides safeguards to ensure that the header of an IP packet is error-free. A routing node discards packets that fail a header checksum test. Although the Internet Control Message Protocol (ICMP) provides notification of errors, a routing node is not required to notify either end node of errors. IPv6, by contrast, operates without header checksums, since current link layer technology is assumed to provide sufficient error detection.

Link capacity and capability

The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested. One of the technical constraints is the size of data packets possible on a given link. Facilities exist to examine the maximum transmission unit (MTU) size of the local link and Path MTU Discovery can be used for the entire intended path to the destination. The IPv4 internetworking layer automatically fragments a datagram into smaller units for transmission when the link MTU is exceeded. IP provides re-ordering of fragments received out of order (see the sketch below). An IPv6 network does not perform fragmentation in network elements, but requires end hosts and higher-layer protocols to avoid exceeding the path MTU. The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and ICMP disregard MTU size, thereby forcing IP to fragment oversized datagrams.

Security

During the design phase of the ARPANET and the early Internet, the security aspects and needs of a public, international network could not be adequately anticipated. Consequently, many Internet protocols exhibited vulnerabilities highlighted by network attacks and later security assessments. In 2008, a thorough security assessment and proposed mitigation of problems was published. The IETF has been pursuing further studies.
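Relating to the fragmentation behaviour described under Link capacity and capability above, a toy sketch of how an IPv4 payload is cut into fragments whose offsets are expressed in 8-octet units (simplified: it ignores header options and real packet construction):

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a payload so each fragment fits the MTU (IPv4-style)."""
    max_data = mtu - header_len
    max_data -= max_data % 8          # non-final fragments: multiple of 8 octets
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)   # "more fragments" flag
        fragments.append((offset // 8, more, chunk))  # offset in 8-octet units
        offset += len(chunk)
    return fragments

# A 4000-byte datagram over a 1500-byte MTU link yields three fragments.
for off, mf, data in fragment(b"\x00" * 4000, 1500):
    print(f"offset={off} more_fragments={mf} bytes={len(data)}")
```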
Intron
An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons. Introns are found in the genes of most eukaryotes and many eukaryotic viruses and they can be located in both protein-coding genes and genes that function as RNA (noncoding genes). There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes).

Discovery and etymology

Introns were first discovered in protein-coding genes of adenovirus, and were subsequently identified in genes encoding transfer RNA and ribosomal RNA genes. Introns are now known to occur within a wide variety of genes throughout organisms, bacteria, and viruses within all of the biological kingdoms. The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts, for which they shared the Nobel Prize in Physiology or Medicine in 1993, though credit was not extended to the researchers and collaborators in their labs who did the experiments resulting in the discovery, Susan Berget and Louise Chow. The term intron was introduced by American biochemist Walter Gilbert: "The notion of the cistron [i.e., gene] ... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." (Gilbert 1978) The term intron also refers to intracistron, i.e., an additional piece of DNA that arises within a cistron. Although introns are sometimes called intervening sequences, the term "intervening sequence" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins, untranslated regions (UTR), and nucleotides removed by RNA editing, in addition to introns.

Distribution

The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, for example baker's/brewer's yeast (Saccharomyces cerevisiae). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns. A particularly extreme case is the Drosophila dhc7 gene containing a ≥3.6 megabase (Mb) intron, which takes roughly three days to transcribe. On the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp), belonging to the human MST1L gene. The shortest known introns belong to the heterotrich ciliates, such as Stentor coeruleus, in which most (> 95%) introns are 15 or 16 bp long.

Classification

Splicing of all intron-containing RNA molecules is superficially similar, as described above.
However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified:

Introns in nuclear protein-coding genes that are removed by spliceosomes (spliceosomal introns)
Introns in nuclear and archaeal transfer RNA genes that are removed by proteins (tRNA introns)
Self-splicing group I introns that are removed by RNA catalysis
Self-splicing group II introns that are removed by RNA catalysis

Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns.

Spliceosomal introns

Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons.

tRNA introns

Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. Note that self-splicing introns are also sometimes found within tRNA genes.

Group I and group II introns

Group I and group II introns are found in genes encoding proteins (messenger RNA), transfer RNA and ribosomal RNA in a very wide range of living organisms. Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture. These complex architectures allow some group I and group II introns to be self-splicing, that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron.

On the accuracy of splicing

The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart.
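As a rough illustration of the boundary sequences described above (a real spliceosome recognizes much richer context, so this toy scanner over-predicts heavily), a sketch that lists candidate spliceosomal introns as GT...AG spans in a DNA string; the sequence is made up:

```python
import re

def candidate_introns(dna, min_len=4):
    """Naively list GT...AG spans as candidate spliceosomal introns."""
    # GT donor site ... AG acceptor site; non-greedy to take the nearest AG.
    return [(m.start(), m.end(), m.group())
            for m in re.finditer(r"GT.{%d,}?AG" % (min_len - 4), dna)]

# Hypothetical pre-mRNA: exon1 + intron (GT...AG) + exon2
pre_mrna = "ATGGCC" + "GTAAGTCATTCAG" + "GGCTAA"
for start, end, seq in candidate_introns(pre_mrna):
    print(f"possible intron {seq} at {start}:{end}")
```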
All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites. Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (an error rate of 10⁻⁵) and the correct exons will be joined and the correct intron will be deleted. However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns, and those conditions are rarely met in large eukaryotic genes that may cover more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10⁻⁵ and may be as high as 2% or 3% errors (an error rate of 2–3 × 10⁻²) per gene. Additional studies suggest that the error rate is no less than 0.1% per intron. This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay.

The presence of sloppy binding sites within genes causes splicing errors and it may seem strange that these sites haven't been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA. As one study of splicing errors puts it: "Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms. In this study, we present direct evidence that this is indeed the case."

While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription, because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10⁻⁵–10⁻⁶ is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites, leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts, but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences.

In some cases, splice variants will be produced by mutations in the gene (DNA). These can be SNP polymorphisms that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or a cell line. When the mutant allele is in a heterozygous state this will result in production of two abundant splice variants; one functional and one non-functional. In the homozygous state the mutant alleles may cause a genetic disease, such as the hemophilia found in descendants of Queen Victoria, where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site resulting in aberrant splicing. A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing; mostly by creating cryptic splice sites.
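One way to arrive at the quoted one-in-25,000 figure, under assumptions of our own for illustration: two essentially invariant splice-site dinucleotides (four critical bases per exon) and the upper end of the quoted transcription error range (10⁻⁵ per nucleotide):

```python
# Probability that at least one of the ~4 critical splice-site bases
# (GT donor + AG acceptor) carries a transcription error.
error_rate_per_base = 1e-5   # upper end of the quoted 1e-5 to 1e-6 range
critical_bases = 4           # assumption: the two invariant dinucleotides
p_bad_exon = 1 - (1 - error_rate_per_base) ** critical_bases
print(f"~1 exon in {1 / p_bad_exon:,.0f}")   # ~1 exon in 25,000
```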
Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases. They are usually described as "alternatively spliced" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene.

Biological functions and evolution

While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay and mRNA export. After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). There is still considerable debate about which of these hypotheses is most correct, but the popular consensus at the moment is that, following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor, but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome.

Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns vary considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). Since eukaryotes arose from a common ancestor (common descent), there must have been extensive gain or loss of introns during evolutionary time. This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species. Biological factors also influence which genes in a genome lose or accumulate introns.
Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals. Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome. Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME).

Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage. In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination. Bonnet et al. (2017) speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes.

Starvation adaptation

The physical presence of introns promotes cellular resistance to starvation via intron-enhanced repression of ribosomal protein genes of nutrient-sensing pathways.

As mobile genetic elements

Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. Two definitive mechanisms of intron loss, reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions, have been identified, and are known to occur. The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. These findings thus raise the question of whether or not the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or if there are other, yet to be discovered, processes generating novel introns. In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus.
Transposon insertions have been shown to generate thousands of new introns across diverse eukaryotic species. Transposon insertions sometimes result in the duplication of this sequence on each side of the transposon. Such an insertion could intronize the transposon without disrupting the coding sequence when a transposon inserts into the sequence AGGT or encodes the splice sites within the transposon sequence. Where intron-generating transposons do not create target site duplications, elements include both splice sites GT (5') and AG (3') thereby splicing precisely without affecting the protein-coding sequence. It is not yet understood why these elements are spliced, whether by chance, or by some preferential action by the transposon. In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT will be spliced, resulting in the creation of an intron without alteration of the coding sequence of the gene. Double-stranded break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia. These numbers must be compared to the number of conserved introns flanked by repeats in other organisms, though, for statistical relevance. For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain. Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron. The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which when demonstrated in vivo, abolishes gene expression. Group II introns are therefore likely the presumed ancestors of spliceosomal introns, acting as site-specific retroelements, and are no longer responsible for intron gain. Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. This mechanism also has extensive indirect evidence lending support to the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The testing of other proposed mechanisms in vivo, particularly intron gain during DSBR, intron transfer, and intronization, is possible, although these mechanisms must be demonstrated in vivo to solidify them as actual mechanisms of intron gain. Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species.
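A toy illustration of the tandem-duplication mechanism described above, assuming made-up sequences: duplicating an exonic segment that harbours AGGT creates a GT...AG span whose removal by splicing restores the original coding sequence exactly:

```python
# Hypothetical exon: prefix + AGGT + internal segment + suffix
prefix, internal, suffix = "ATGGCC", "TCAGAT", "TAA"
exon = prefix + "AGGT" + internal + suffix

# Tandem duplication of the segment "AGGT" + internal
duplicated = prefix + ("AGGT" + internal) * 2 + suffix

# The spliceosome can use the GT of the first AGGT as donor and
# the AG of the second AGGT as acceptor; that span is the new intron.
intron = "GT" + internal + "AG"
spliced = duplicated.replace(intron, "", 1)

assert spliced == exon  # the coding sequence is unchanged by the new intron
```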
Indistinguishable particles
In quantum mechanics, indistinguishable particles (also called identical or indiscernible particles) are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles (such as electrons), composite subatomic particles (such as atomic nuclei), as well as atoms and molecules. Although all known indistinguishable particles only exist at the quantum scale, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics. They were first discussed by Werner Heisenberg and Paul Dirac in 1926. There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot (as described by the Pauli exclusion principle). Examples of bosons are photons, gluons, phonons, helium-4 nuclei and all mesons. Examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei. The fact that particles can be identical has important consequences in statistical mechanics, where calculations rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles. For example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox.

Distinguishing between particles

There are two methods for distinguishing between particles. The first method relies on differences in the intrinsic physical properties of the particles, such as mass, electric charge, and spin. If differences exist, it is possible to distinguish between the particles by measuring the relevant properties. However, as far as can be determined, microscopic particles of the same species have completely equivalent physical properties. For instance, every electron has the same electric charge. Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. As long as the position of each particle can be measured with infinite precision (even when the particles collide), there would be no ambiguity about which particle is which. The problem with the second approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable.

Quantum mechanical description

Symmetrical and antisymmetrical states

What follows is an example to make the above discussion concrete, using the formalism developed in the article on the mathematical formulation of quantum mechanics. Let n denote a complete set of (discrete) quantum numbers for specifying single-particle states (for example, for the particle in a box problem, take n to be the quantized wave vector of the wavefunction). For simplicity, consider a system composed of two particles that are not interacting with each other.
Suppose that one particle is in the state n1, and the other is in the state n2. The quantum state of the system is denoted by the expression

$|n_1\rangle |n_2\rangle$

where the order of the tensor product matters (in $|n_2\rangle |n_1\rangle$, the particle 1 occupies the state n2 while the particle 2 occupies the state n1). This is the canonical way of constructing a basis for a tensor product space of the combined system from the individual spaces. This expression is valid for distinguishable particles; however, it is not appropriate for indistinguishable particles, since $|n_1\rangle |n_2\rangle$ and $|n_2\rangle |n_1\rangle$, which result from exchanging the particles, are generally different states. "The particle 1 occupies the n1 state and the particle 2 occupies the n2 state" ≠ "the particle 1 occupies the n2 state and the particle 2 occupies the n1 state".

Two states are physically equivalent only if they differ at most by a complex phase factor. For two indistinguishable particles, a state before the particle exchange must be physically equivalent to the state after the exchange, so these two states differ at most by a complex phase factor. This fact suggests that a state for two indistinguishable (and non-interacting) particles is given by the following two possibilities:

$|n_1\rangle |n_2\rangle \pm |n_2\rangle |n_1\rangle$

States where it is a sum are known as symmetric, while states involving the difference are called antisymmetric. More completely, symmetric states have the form

$|n_1, n_2; S\rangle \propto |n_1\rangle |n_2\rangle + |n_2\rangle |n_1\rangle$

while antisymmetric states have the form

$|n_1, n_2; A\rangle \propto |n_1\rangle |n_2\rangle - |n_2\rangle |n_1\rangle$

Note that if n1 and n2 are the same, the antisymmetric expression gives zero, which cannot be a state vector since it cannot be normalized. In other words, more than one identical particle cannot occupy an antisymmetric state (one antisymmetric state can be occupied only by one particle). This is known as the Pauli exclusion principle, and it is the fundamental reason behind the chemical properties of atoms and the stability of matter.

Exchange symmetry

The importance of symmetric and antisymmetric states is ultimately based on empirical evidence. It appears to be a fact of nature that identical particles do not occupy states of a mixed symmetry, such as

$|\psi\rangle = \alpha \, |n_1\rangle |n_2\rangle + \beta \, |n_2\rangle |n_1\rangle \qquad \text{with } |\alpha| \neq |\beta|$

There is actually an exception to this rule, which will be discussed later. On the other hand, it can be shown that the symmetric and antisymmetric states are in a sense special, by examining a particular symmetry of the multiple-particle states known as exchange symmetry.

Define a linear operator P, called the exchange operator. When it acts on a tensor product of two state vectors, it exchanges the values of the state vectors:

$P \left( |n_1\rangle |n_2\rangle \right) = |n_2\rangle |n_1\rangle$

P is both Hermitian and unitary. Because it is unitary, it can be regarded as a symmetry operator. This symmetry may be described as the symmetry under the exchange of labels attached to the particles (i.e., to the single-particle Hilbert spaces). Clearly, $P^2 = 1$ (the identity operator), so the eigenvalues of P are +1 and −1. The corresponding eigenvectors are the symmetric and antisymmetric states:

$P |n_1, n_2; S\rangle = + |n_1, n_2; S\rangle$
$P |n_1, n_2; A\rangle = - |n_1, n_2; A\rangle$

In other words, symmetric and antisymmetric states are essentially unchanged under the exchange of particle labels: they are only multiplied by a factor of +1 or −1, rather than being "rotated" somewhere else in the Hilbert space. This indicates that the particle labels have no physical meaning, in agreement with the earlier discussion on indistinguishability. It will be recalled that P is Hermitian. As a result, it can be regarded as an observable of the system, which means that, in principle, a measurement can be performed to find out if a state is symmetric or antisymmetric.
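A small numerical sketch of the two-particle construction above, using NumPy Kronecker products for the tensor product (the basis labels are arbitrary):

```python
import numpy as np

def symmetrize(a, b, sign=+1):
    """Return the (anti)symmetrized, normalized two-particle state, or None."""
    state = np.kron(a, b) + sign * np.kron(b, a)
    norm = np.linalg.norm(state)
    return state / norm if norm > 0 else None  # zero vector: not a valid state

n1 = np.array([1.0, 0.0])   # single-particle state |n1>
n2 = np.array([0.0, 1.0])   # single-particle state |n2>

sym  = symmetrize(n1, n2, +1)   # bosonic combination
anti = symmetrize(n1, n2, -1)   # fermionic combination

# Exchange operator: swap the two tensor factors.
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
assert np.allclose(P @ sym, sym)         # eigenvalue +1
assert np.allclose(P @ anti, -anti)      # eigenvalue -1
assert symmetrize(n1, n1, -1) is None    # Pauli exclusion: no such state
```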
Furthermore, the equivalence of the particles indicates that the Hamiltonian can be written in a symmetrical form, such as

$H = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + U(|x_1 - x_2|) + V(x_1) + V(x_2)$

It is possible to show that such Hamiltonians satisfy the commutation relation

$[P, H] = 0$

According to the Heisenberg equation, this means that the value of P is a constant of motion. If the quantum state is initially symmetric (antisymmetric), it will remain symmetric (antisymmetric) as the system evolves. Mathematically, this says that the state vector is confined to one of the two eigenspaces of P, and is not allowed to range over the entire Hilbert space. Thus, that eigenspace might as well be treated as the actual Hilbert space of the system. This is the idea behind the definition of Fock space.

Fermions and bosons

The choice of symmetry or antisymmetry is determined by the species of particle. For example, symmetric states must always be used when describing photons or helium-4 atoms, and antisymmetric states when describing electrons or protons. Particles which exhibit symmetric states are called bosons. The nature of symmetric states has important consequences for the statistical properties of systems composed of many identical bosons. These statistical properties are described as Bose–Einstein statistics. Particles which exhibit antisymmetric states are called fermions. Antisymmetry gives rise to the Pauli exclusion principle, which forbids identical fermions from sharing the same quantum state. Systems of many identical fermions are described by Fermi–Dirac statistics. Parastatistics are mathematically possible, but no examples exist in nature. In certain two-dimensional systems, mixed symmetry can occur. These exotic particles are known as anyons, and they obey fractional statistics. Experimental evidence for the existence of anyons exists in the fractional quantum Hall effect, a phenomenon observed in the two-dimensional electron gases that form the inversion layer of MOSFETs. There is another type of statistic, known as braid statistics, which are associated with particles known as plektons. The spin-statistics theorem relates the exchange symmetry of identical particles to their spin. It states that bosons have integer spin, and fermions have half-integer spin. Anyons possess fractional spin.

N particles

The above discussion generalizes readily to the case of N particles. Suppose there are N particles with quantum numbers n1, n2, ..., nN. If the particles are bosons, they occupy a totally symmetric state, which is symmetric under the exchange of any two particle labels:

$|n_1 n_2 \cdots n_N; S\rangle = \sqrt{\frac{\prod_n m_n!}{N!}} \sum_p |n_{p(1)}\rangle |n_{p(2)}\rangle \cdots |n_{p(N)}\rangle$

Here, the sum is taken over all different states under permutations p acting on N elements. The square root left to the sum is a normalizing constant. The quantity mn stands for the number of times each of the single-particle states n appears in the N-particle state. Note that $\sum_n m_n = N$.

In the same vein, fermions occupy totally antisymmetric states:

$|n_1 n_2 \cdots n_N; A\rangle = \frac{1}{\sqrt{N!}} \sum_p \mathrm{sgn}(p) \, |n_{p(1)}\rangle |n_{p(2)}\rangle \cdots |n_{p(N)}\rangle$

Here, $\mathrm{sgn}(p)$ is the sign of each permutation (i.e. $+1$ if $p$ is composed of an even number of transpositions, and $-1$ if odd). Note that there is no $\prod_n m_n!$ term, because each single-particle state can appear only once in a fermionic state. Otherwise the sum would again be zero due to the antisymmetry, thus representing a physically impossible state. This is the Pauli exclusion principle for many particles. These states have been normalized so that

$\langle n_1 n_2 \cdots n_N; S | n_1 n_2 \cdots n_N; S \rangle = 1, \qquad \langle n_1 n_2 \cdots n_N; A | n_1 n_2 \cdots n_N; A \rangle = 1$

Measurement

Suppose there is a system of N bosons (fermions) in the symmetric (antisymmetric) state

$|n_1 n_2 \cdots n_N; S/A\rangle$

and a measurement is performed on some other set of discrete observables, m.
Measurement
Suppose there is a system of N bosons (fermions) in the symmetric (antisymmetric) state $|n_1 n_2 \cdots n_N; S\rangle$ ($|n_1 n_2 \cdots n_N; A\rangle$) and a measurement is performed on some other set of discrete observables, m. In general, this yields some result m1 for one particle, m2 for another particle, and so forth. If the particles are bosons (fermions), the state after the measurement must remain symmetric (antisymmetric), i.e. of the form $|m_1 m_2 \cdots m_N; S\rangle$ ($|m_1 m_2 \cdots m_N; A\rangle$). The probability of obtaining a particular result for the m measurement is

$$P_{S/A}(m_1, \ldots, m_N) \equiv \big| \langle m_1 m_2 \cdots m_N; S/A \,|\, n_1 n_2 \cdots n_N; S/A \rangle \big|^2$$

It can be shown that

$$\sum_{m_1 \le m_2 \le \cdots \le m_N} P_{S/A}(m_1, \ldots, m_N) = 1$$

which verifies that the total probability is 1. The sum has to be restricted to ordered values of m1, ..., mN to ensure that each multi-particle state is not counted more than once.

Wavefunction representation
So far, the discussion has included only discrete observables. It can be extended to continuous observables, such as the position x. Recall that an eigenstate of a continuous observable represents an infinitesimal range of values of the observable, not a single value as with discrete observables. For instance, if a particle is in a state |ψ⟩, the probability of finding it in a region of volume d3x surrounding some position x is

$$|\langle x | \psi \rangle|^2 \, d^3x$$

As a result, the continuous eigenstates |x⟩ are normalized to the delta function instead of unity:

$$\langle x | x' \rangle = \delta^3(x - x')$$

Symmetric and antisymmetric multi-particle states can be constructed from continuous eigenstates in the same way as before. However, it is customary to use a different normalizing constant. A many-body wavefunction can be written as

$$\Psi^{(S)}_{n_1 \cdots n_N}(x_1, \ldots, x_N) = \frac{1}{\sqrt{N! \prod_n m_n!}} \sum_p \psi_{n_{p(1)}}(x_1) \, \psi_{n_{p(2)}}(x_2) \cdots \psi_{n_{p(N)}}(x_N)$$

for bosons and

$$\Psi^{(A)}_{n_1 \cdots n_N}(x_1, \ldots, x_N) = \frac{1}{\sqrt{N!}} \sum_p \operatorname{sgn}(p) \, \psi_{n_{p(1)}}(x_1) \, \psi_{n_{p(2)}}(x_2) \cdots \psi_{n_{p(N)}}(x_N)$$

for fermions, where the single-particle wavefunctions are defined, as usual, by

$$\psi_n(x) = \langle x | n \rangle$$

The most important property of these wavefunctions is that exchanging any two of the coordinate variables changes the wavefunction by only a plus or minus sign. This is the manifestation of symmetry and antisymmetry in the wavefunction representation:

$$\Psi^{(S)}(\ldots, x_i, \ldots, x_j, \ldots) = + \Psi^{(S)}(\ldots, x_j, \ldots, x_i, \ldots), \qquad \Psi^{(A)}(\ldots, x_i, \ldots, x_j, \ldots) = - \Psi^{(A)}(\ldots, x_j, \ldots, x_i, \ldots)$$

The many-body wavefunction has the following significance: if the system is initially in a state with quantum numbers n1, ..., nN, and a position measurement is performed, the probability of finding particles in infinitesimal volumes near x1, x2, ..., xN is

$$N! \, \big| \Psi^{(S/A)}_{n_1 \cdots n_N}(x_1, \ldots, x_N) \big|^2 \, d^3 x_1 \, d^3 x_2 \cdots d^3 x_N$$

The factor of N! comes from our normalizing constant, which has been chosen so that, by analogy with single-particle wavefunctions,

$$\int \! d^3 x_1 \int \! d^3 x_2 \cdots \int \! d^3 x_N \; \big| \Psi^{(S/A)}_{n_1 \cdots n_N}(x_1, \ldots, x_N) \big|^2 = 1$$

Because each integral runs over all possible values of x, each multi-particle state appears N! times in the integral. In other words, the probability associated with each event is evenly distributed across N! equivalent points in the integral space. Because it is usually more convenient to work with unrestricted integrals than restricted ones, the normalizing constant has been chosen to reflect this. Finally, the antisymmetric wavefunction can be written as the determinant of a matrix, known as a Slater determinant:

$$\Psi^{(A)}_{n_1 \cdots n_N}(x_1, \ldots, x_N) = \frac{1}{\sqrt{N!}} \det \big[ \psi_{n_i}(x_j) \big]_{i,j=1}^{N}$$
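The Slater determinant is straightforward to evaluate numerically. In the sketch below (illustrative; the 1D particle-in-a-box orbitals and all names are my choices, not from the article), exchanging two coordinates flips the sign of the wavefunction, and a repeated orbital makes the determinant vanish:

```python
import numpy as np
from math import factorial

L = 1.0  # box length for the illustrative particle-in-a-box orbitals

def phi(n, x):
    """1D particle-in-a-box orbital, n = 1, 2, 3, ..."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def slater(ns, xs):
    """Antisymmetric N-body wavefunction: det[phi_{n_i}(x_j)] / sqrt(N!)."""
    N = len(ns)
    M = np.array([[phi(n, x) for x in xs] for n in ns])
    return np.linalg.det(M) / np.sqrt(factorial(N))

ns = [1, 2, 3]
xs = [0.2, 0.5, 0.7]
psi = slater(ns, xs)

# Exchanging two coordinates flips the sign of the wavefunction
assert np.isclose(slater(ns, [xs[1], xs[0], xs[2]]), -psi)

# Two fermions in the same orbital give two equal rows, so the determinant vanishes
assert np.isclose(slater([1, 1, 2], xs), 0.0)
```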
Operator approach and parastatistics
The Hilbert space for N particles is given by the tensor product $\bigotimes_{i=1}^{N} H$, where H is the single-particle Hilbert space. The permutation group $S_N$ acts on this space by permuting the entries. By definition, the expectation values for an observable A of indistinguishable particles should be invariant under these permutations. This means that for all $p \in S_N$ and every state $|\psi\rangle$,

$$\langle \psi | A | \psi \rangle = \langle P_p \psi | A | P_p \psi \rangle$$

or equivalently $P_p^{\dagger} A P_p = A$ for each permutation operator $P_p$. Two states are equivalent whenever their expectation values coincide for all observables. If we restrict to observables of identical particles, and hence observables satisfying the equation above, we find that the states $|\psi\rangle$ and $P_p |\psi\rangle$ (after normalization) are equivalent for every permutation p. The equivalence classes are in bijective relation with irreducible subspaces of $\bigotimes_{i=1}^{N} H$ under $S_N$. Two obvious irreducible subspaces are the one-dimensional symmetric/bosonic subspace and the anti-symmetric/fermionic subspace. There are however more types of irreducible subspaces. States associated with these other irreducible subspaces are called parastatistic states. Young tableaux provide a way to classify all of these irreducible subspaces.

Statistical properties
Statistical effects of indistinguishability
The indistinguishability of particles has a profound effect on their statistical properties. To illustrate this, consider a system of N distinguishable, non-interacting particles. Once again, let nj denote the state (i.e. quantum numbers) of particle j. If the particles have the same physical properties, the njs run over the same range of values. Let ε(n) denote the energy of a particle in state n. As the particles do not interact, the total energy of the system is the sum of the single-particle energies. The partition function of the system is

$$Z = \sum_{n_1, n_2, \ldots, n_N} \exp\!\left[ - \frac{\varepsilon(n_1) + \varepsilon(n_2) + \cdots + \varepsilon(n_N)}{kT} \right]$$

where k is the Boltzmann constant and T is the temperature. This expression can be factored to obtain

$$Z = \xi^N$$

where

$$\xi = \sum_n \exp\!\left[ - \frac{\varepsilon(n)}{kT} \right]$$

If the particles are identical, this equation is incorrect. Consider a state of the system, described by the single particle states [n1, ..., nN]. In the equation for Z, every possible permutation of the ns occurs once in the sum, even though each of these permutations is describing the same multi-particle state. Thus, the number of states has been over-counted. If the possibility of overlapping states is neglected, which is valid if the temperature is high, then the number of times each state is counted is approximately N!. The correct partition function is

$$Z = \frac{\xi^N}{N!}$$

Note that this "high temperature" approximation does not distinguish between fermions and bosons. The discrepancy in the partition functions of distinguishable and indistinguishable particles was known as far back as the 19th century, before the advent of quantum mechanics. It leads to a difficulty known as the Gibbs paradox. Gibbs showed that in the equation $Z = \xi^N$, the entropy of a classical ideal gas is

$$S = N k \ln V + N f(T)$$

where V is the volume of the gas and f is some function of T alone. The problem with this result is that S is not extensive – if N and V are doubled, S does not double accordingly. Such a system does not obey the postulates of thermodynamics. Gibbs also showed that using $Z = \xi^N / N!$ alters the result to

$$S = N k \ln \frac{V}{N} + N f(T)$$

which is perfectly extensive.
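The extensivity argument can be checked numerically. The sketch below is illustrative (it takes the single-particle partition function ξ proportional to V and drops all prefactors, which do not affect extensivity); it compares ln(Z)/N with and without the N! correction when N and V are doubled together:

```python
from math import lgamma, log

def lnZ_distinguishable(N, V):
    """ln Z = N ln(xi), with xi taken proportional to V (prefactors dropped)."""
    return N * log(V)

def lnZ_identical(N, V):
    """Gibbs-corrected ln Z = N ln(xi) - ln(N!), using lgamma for ln(N!)."""
    return N * log(V) - lgamma(N + 1)

N, V = 1e23, 1.0
for lnZ, label in ((lnZ_distinguishable, "distinguishable"),
                   (lnZ_identical,       "identical      ")):
    before = lnZ(N, V) / N                 # ln(Z)/N for (N, V)
    after  = lnZ(2 * N, 2 * V) / (2 * N)   # ln(Z)/N for (2N, 2V)
    print(label, round(before, 4), "->", round(after, 4))

# Only the Gibbs-corrected ln(Z)/N is unchanged when N and V are doubled,
# so only then is the entropy (which is built from ln Z) extensive.
```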
Statistical properties of bosons and fermions
There are important differences between the statistical behavior of bosons and fermions, which are described by Bose–Einstein statistics and Fermi–Dirac statistics respectively. Roughly speaking, bosons have a tendency to clump into the same quantum state, which underlies phenomena such as the laser, Bose–Einstein condensation, and superfluidity. Fermions, on the other hand, are forbidden from sharing quantum states, giving rise to systems such as the Fermi gas. This is known as the Pauli exclusion principle, and is responsible for much of chemistry, since the electrons in an atom (fermions) successively fill the many states within shells rather than all lying in the same lowest energy state. The differences between the statistical behavior of fermions, bosons, and distinguishable particles can be illustrated using a system of two particles. The particles are designated A and B. Each particle can exist in two possible states, labelled $|0\rangle$ and $|1\rangle$, which have the same energy. The composite system can evolve in time, interacting with a noisy environment. Because the $|0\rangle$ and $|1\rangle$ states are energetically equivalent, neither state is favored, so this process has the effect of randomizing the states. (This is discussed in the article on quantum entanglement.) After some time, the composite system will have an equal probability of occupying each of the states available to it. The particle states are then measured. If A and B are distinguishable particles, then the composite system has four distinct states: $|0\rangle|0\rangle$, $|0\rangle|1\rangle$, $|1\rangle|0\rangle$, and $|1\rangle|1\rangle$. The probability of obtaining two particles in the $|0\rangle$ state is 0.25; the probability of obtaining two particles in the $|1\rangle$ state is 0.25; and the probability of obtaining one particle in the $|0\rangle$ state and the other in the $|1\rangle$ state is 0.5. If A and B are identical bosons, then the composite system has only three distinct states: $|0\rangle|0\rangle$, $|1\rangle|1\rangle$, and $\frac{1}{\sqrt{2}}\big(|0\rangle|1\rangle + |1\rangle|0\rangle\big)$. When the experiment is performed, the probability of obtaining two particles in the $|0\rangle$ state is now 0.33; the probability of obtaining two particles in the $|1\rangle$ state is 0.33; and the probability of obtaining one particle in the $|0\rangle$ state and the other in the $|1\rangle$ state is 0.33. Note that the probability of finding particles in the same state is relatively larger than in the distinguishable case. This demonstrates the tendency of bosons to "clump". If A and B are identical fermions, there is only one state available to the composite system: the totally antisymmetric state $\frac{1}{\sqrt{2}}\big(|0\rangle|1\rangle - |1\rangle|0\rangle\big)$. When the experiment is performed, one particle is always in the $|0\rangle$ state and the other is in the $|1\rangle$ state. The results are summarized in Table 1:

Table 1: Two-particle statistics
  Outcome                   Distinguishable   Bosons   Fermions
  Both particles in |0⟩          0.25          0.33       0
  One in |0⟩, one in |1⟩         0.50          0.33       1
  Both particles in |1⟩          0.25          0.33       0

As can be seen, even a system of two particles exhibits different statistical behaviors between distinguishable particles, bosons, and fermions. In the articles on Fermi–Dirac statistics and Bose–Einstein statistics, these principles are extended to large numbers of particles, with qualitatively similar results.
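The probabilities in Table 1 follow from simple counting, since every available composite state is equally likely. A minimal enumeration sketch (illustrative; the helper names are mine):

```python
from itertools import product
from collections import Counter

def occupation_probs(states):
    """Each allowed composite state is equally likely; tally unordered occupations."""
    tally = Counter()
    for s in states:
        tally[tuple(sorted(s))] += 1.0 / len(states)
    return dict(tally)

# Distinguishable particles: all ordered pairs are distinct states
distinguishable = list(product([0, 1], repeat=2))   # 4 states
# Bosons: one symmetric state per unordered pair
bosons = [(0, 0), (0, 1), (1, 1)]                   # 3 states
# Fermions: only the antisymmetric state with different labels survives
fermions = [(0, 1)]                                 # 1 state

print(occupation_probs(distinguishable))  # {(0,0): 0.25, (0,1): 0.5, (1,1): 0.25}
print(occupation_probs(bosons))           # {(0,0): 1/3, (0,1): 1/3, (1,1): 1/3}
print(occupation_probs(fermions))         # {(0,1): 1.0}
```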
Homotopy class
To understand why particle statistics work the way that they do, note first that particles are point-localized excitations and that particles that are spacelike separated do not interact. In a flat d-dimensional space M, at any given time, the configuration of two identical particles can be specified as an element of M × M. If there is no overlap between the particles, so that they do not interact directly, then their locations must belong to the subspace of M × M with coincident points removed. The element (x, y) describes the configuration with particle I at x and particle II at y, while (y, x) describes the interchanged configuration. With identical particles, the state described by (x, y) ought to be indistinguishable from the state described by (y, x). Now consider the homotopy class of continuous paths from (x, y) to (y, x), within that space. If M is $\mathbb{R}^d$ where $d \ge 3$, then this homotopy class only has one element. If M is $\mathbb{R}^2$, then this homotopy class has countably many elements (i.e. a counterclockwise interchange by half a turn, a counterclockwise interchange by one and a half turns, two and a half turns, etc., a clockwise interchange by half a turn, etc.). In particular, a counterclockwise interchange by half a turn is not homotopic to a clockwise interchange by half a turn. Lastly, if M is $\mathbb{R}$, then this homotopy class is empty. Suppose first that $d \ge 3$. The universal covering space of the configuration space, which is none other than the configuration space itself, only has two points which are physically indistinguishable from (x, y), namely (x, y) itself and (y, x). So, the only permissible interchange is to swap both particles. This interchange is an involution, so its only effect is to multiply the phase by a square root of 1. If the root is +1, then the points have Bose statistics, and if the root is –1, the points have Fermi statistics. In the case $M = \mathbb{R}^2$, the universal covering space of the configuration space has infinitely many points that are physically indistinguishable from (x, y). This is described by the infinite cyclic group generated by making a counterclockwise half-turn interchange. Unlike the previous case, performing this interchange twice in a row does not recover the original state; so such an interchange can generically result in a multiplication by $e^{i\theta}$ for any real θ (by unitarity, the absolute value of the multiplication must be 1). This is called anyonic statistics. In fact, even with two distinguishable particles, even though (x, y) is now physically distinguishable from (y, x), the universal covering space still contains infinitely many points which are physically indistinguishable from the original point, now generated by a counterclockwise rotation by one full turn. This generator, then, results in a multiplication by $e^{i\phi}$. This phase factor is called the mutual statistics. Finally, in the case $M = \mathbb{R}$, the space M × M with coincident points removed is not connected, so even if particle I and particle II are identical, they can still be distinguished via labels such as "the particle on the left" and "the particle on the right". There is no interchange symmetry here.
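As a closing illustration (not from the article), exchange statistics can be summarized as a phase factor applied per counterclockwise half-turn exchange: θ = 0 reproduces bosons, θ = π fermions, and any other θ an anyon. A tiny sketch, with names of my choosing:

```python
import cmath

def exchange_phase(theta, times=1):
    """Phase acquired after `times` successive counterclockwise half-turn exchanges."""
    return cmath.exp(1j * theta * times)

print(exchange_phase(0))            # bosons: +1
print(exchange_phase(cmath.pi))     # fermions: -1
print(exchange_phase(cmath.pi, 2))  # two fermion exchanges return the phase to +1
print(exchange_phase(0.3, 2))       # anyons: two exchanges need not return to +1
```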
Physical sciences
Statistical mechanics
Physics
15361
https://en.wikipedia.org/wiki/Ice%20age
Ice age
An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods (glacials, glaciations, glacial stages, stadials, stades, or colloquially, ice ages), and intermittent warm periods within an ice age are called interglacials or interstadials. In glaciology, the term ice age is defined by the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, the current Holocene period is an interglacial period of an ice age. The accumulation of anthropogenic greenhouse gases is projected to delay the next glacial period.

History of research
In 1742, Pierre Martel (1706–1767), an engineer and geographer living in Geneva, visited the valley of Chamonix in the Alps of Savoy. Two years later he published an account of his journey. He reported that the inhabitants of that valley attributed the dispersal of erratic boulders to the glaciers, saying that they had once extended much farther. Later similar explanations were reported from other regions of the Alps. In 1815 the carpenter and chamois hunter Jean-Pierre Perraudin (1767–1858) explained erratic boulders in the Val de Bagnes in the Swiss canton of Valais as being due to glaciers previously extending further. An unknown woodcutter from Meiringen in the Bernese Oberland advocated a similar idea in a discussion with the Swiss-German geologist Jean de Charpentier (1786–1855) in 1834. Comparable explanations are also known from the Val de Ferret in the Valais and the Seeland in western Switzerland and in Goethe's scientific work. Such explanations could also be found in other parts of the world. When the Bavarian naturalist Ernst von Bibra (1806–1878) visited the Chilean Andes in 1849–1850, the natives attributed fossil moraines to the former action of glaciers. Meanwhile, European scholars had begun to wonder what had caused the dispersal of erratic material. From the middle of the 18th century, some discussed ice as a means of transport. The Swedish mining expert Daniel Tilas (1712–1772) was, in 1742, the first person to suggest drifting sea ice was a cause of the presence of erratic boulders in the Scandinavian and Baltic regions. In 1795, the Scottish philosopher and gentleman naturalist, James Hutton (1726–1797), explained erratic boulders in the Alps by the action of glaciers. Two decades later, in 1818, the Swedish botanist Göran Wahlenberg (1780–1851) published his theory of a glaciation of the Scandinavian peninsula. He regarded glaciation as a regional phenomenon. Only a few years later, the Danish-Norwegian geologist Jens Esmark (1762–1839) argued for a sequence of worldwide ice ages. In a paper published in 1824, Esmark proposed changes in climate as the cause of those glaciations. He attempted to show that they originated from changes in Earth's orbit. Esmark discovered the similarity between moraines near Haukalivatnet lake near sea level in Rogaland and moraines at branches of Jostedalsbreen. Esmark's discovery was later attributed to or appropriated by Theodor Kjerulf and Louis Agassiz. During the following years, Esmark's ideas were discussed and adopted in part by Swedish, Scottish and German scientists.
At the University of Edinburgh Robert Jameson (1774–1854) seemed to be relatively open to Esmark's ideas, as reviewed by Norwegian professor of glaciology Bjørn G. Andersen (1992). Jameson's remarks about ancient glaciers in Scotland were most probably prompted by Esmark. In Germany, Albrecht Reinhard Bernhardi (1797–1849), a geologist and professor of forestry at an academy in Dreissigacker (since incorporated in the southern Thuringian city of Meiningen), adopted Esmark's theory. In a paper published in 1832, Bernhardi speculated about the polar ice caps once reaching as far as the temperate zones of the globe. In Val de Bagnes, a valley in the Swiss Alps, there was a long-held local belief that the valley had once been covered deep in ice, and in 1815 a local chamois hunter called Jean-Pierre Perraudin attempted to convert the geologist Jean de Charpentier to the idea, pointing to deep striations in the rocks and giant erratic boulders as evidence. Charpentier held the general view that these signs were caused by vast floods, and he rejected Perraudin's theory as absurd. In 1818 the engineer Ignatz Venetz joined Perraudin and Charpentier to examine a proglacial lake above the valley created by an ice dam as a result of the 1815 eruption of Mount Tambora, which threatened to cause a catastrophic flood when the dam broke. Perraudin attempted unsuccessfully to convert his companions to his theory, but when the dam finally broke, there were only minor erratics and no striations, and Venetz concluded that Perraudin was right and that only ice could have caused such major results. In 1821 he read a prize-winning paper on the theory to the Swiss Society, but it was not published until Charpentier, who had also become converted, published it with his own more widely read paper in 1834. In the meantime, the German botanist Karl Friedrich Schimper (1803–1867) was studying mosses which were growing on erratic boulders in the alpine upland of Bavaria. He began to wonder where such masses of stone had come from. During the summer of 1835 he made some excursions to the Bavarian Alps. Schimper came to the conclusion that ice must have been the means of transport for the boulders in the alpine upland. In the winter of 1835–36 he held some lectures in Munich. Schimper then assumed that there must have been global times of obliteration ("Verödungszeiten") with a cold climate and frozen water. Schimper spent the summer months of 1836 at Devens, near Bex, in the Swiss Alps with his former university friend Louis Agassiz (1801–1873) and Jean de Charpentier. Schimper, Charpentier and possibly Venetz convinced Agassiz that there had been a time of glaciation. During the winter of 1836–37, Agassiz and Schimper developed the theory of a sequence of glaciations. They mainly drew upon the preceding works of Venetz, Charpentier and on their own fieldwork. Agassiz appears to have been already familiar with Bernhardi's paper at that time. At the beginning of 1837, Schimper coined the term "ice age" ("Eiszeit") for the period of the glaciers. In July 1837 Agassiz presented their synthesis before the annual meeting of the Swiss Society for Natural Research at Neuchâtel. The audience was very critical, and some were opposed to the new theory because it contradicted the established opinions on climatic history. Most contemporary scientists thought that Earth had been gradually cooling down since its birth as a molten globe. In order to persuade the skeptics, Agassiz embarked on geological fieldwork. 
He published his book Study on Glaciers ("Études sur les glaciers") in 1840. Charpentier was put out by this, as he had also been preparing a book about the glaciation of the Alps. Charpentier felt that Agassiz should have given him precedence as it was he who had introduced Agassiz to in-depth glacial research. As a result of personal quarrels, Agassiz had also omitted any mention of Schimper in his book. It took several decades before the ice age theory was fully accepted by scientists. This happened on an international scale in the second half of the 1870s, following the work of James Croll, including the publication of Climate and Time, in Their Geological Relations in 1875, which provided a credible explanation for the causes of ice ages. Evidence There are three main types of evidence for ice ages: geological, chemical, and paleontological. Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till or tillites and glacial erratics. Successive glaciations tend to distort and erase the geological evidence for earlier glaciations, making it difficult to interpret. Furthermore, this evidence was difficult to date exactly; early theories assumed that the glacials were short compared to the long interglacials. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short. It took some time for the current theory to be worked out. The chemical evidence mainly consists of variations in the ratios of isotopes in fossils present in sediments and sedimentary rocks and ocean sediment cores. For the most recent glacial periods, ice cores provide climate proxies, both from the ice itself and from atmospheric samples provided by included bubbles of air. Because water containing lighter isotopes has a lower heat of evaporation, its proportion decreases with warmer conditions. This allows a temperature record to be constructed. This evidence can be confounded, however, by other factors recorded by isotope ratios. The paleontological evidence consists of changes in the geographical distribution of fossils. During a glacial period, cold-adapted organisms spread into lower latitudes, and organisms that prefer warmer conditions become extinct or retreat into lower latitudes. This evidence is also difficult to interpret because it requires: sequences of sediments covering a long period of time, over a wide range of latitudes and which are easily correlated; ancient organisms which survive for several million years without change and whose temperature preferences are easily diagnosed; and the finding of the relevant fossils. Despite the difficulties, analysis of ice core and ocean sediment cores has provided a credible record of glacials and interglacials over the past few million years. These also confirm the linkage between ice ages and continental crust phenomena such as glacial moraines, drumlins, and glacial erratics. Hence the continental crust phenomena are accepted as good evidence of earlier ice ages when they are found in layers created much earlier than the time range for which ice cores and ocean sediment cores are available. Major ice ages There have been at least five major ice ages in Earth's history (the Huronian, Cryogenian, Andean-Saharan, late Paleozoic, and the latest Quaternary Ice Age). Outside these ages, Earth was previously thought to have been ice-free even in high latitudes; such periods are known as greenhouse periods. 
However, other studies dispute this, finding evidence of occasional glaciations at high latitudes even during apparent greenhouse periods. Rocks from the earliest well-established ice age, called the Huronian, have been dated to around 2.4 to 2.1 billion years ago during the early Proterozoic Eon. Several hundred kilometers of the Huronian Supergroup are exposed north of the north shore of Lake Huron, extending from near Sault Ste. Marie to Sudbury, northeast of Lake Huron, with giant layers of now-lithified till beds, dropstones, varves, outwash, and scoured basement rocks. Correlative Huronian deposits have been found near Marquette, Michigan, and correlation has been made with Paleoproterozoic glacial deposits from Western Australia. The Huronian ice age was caused by the elimination of atmospheric methane, a greenhouse gas, during the Great Oxygenation Event. The next well-documented ice age, and probably the most severe of the last billion years, occurred from 720 to 630 million years ago (the Cryogenian period) and may have produced a Snowball Earth in which glacial ice sheets reached the equator, possibly being ended by the accumulation of greenhouse gases such as CO2 produced by volcanoes. "The presence of ice on the continents and pack ice on the oceans would inhibit both silicate weathering and photosynthesis, which are the two major sinks for CO2 at present." It has been suggested that the end of this ice age was responsible for the subsequent Ediacaran and Cambrian explosion, though this model is recent and controversial. The Andean-Saharan glaciation occurred from 460 to 420 million years ago, during the Late Ordovician and the Silurian period. The evolution of land plants at the onset of the Devonian period caused a long-term increase in planetary oxygen levels and reduction of CO2 levels, which resulted in the late Paleozoic icehouse. Its former name, the Karoo glaciation, derives from the glacial tills found in the Karoo region of South Africa. There were extensive polar ice caps at intervals from 360 to 260 million years ago in South Africa during the Carboniferous and early Permian periods. Correlatives are known from Argentina, also in the center of the ancient supercontinent Gondwanaland. Although the Mesozoic Era retained a greenhouse climate over its timespan and was previously assumed to have been entirely glaciation-free, more recent studies suggest that brief periods of glaciation occurred in both hemispheres during the Early Cretaceous. Geologic and palaeoclimatological records suggest the existence of glacial periods during the Valanginian, Hauterivian, and Aptian stages of the Early Cretaceous. Ice-rafted glacial dropstones indicate that in the Northern Hemisphere, ice sheets may have extended as far south as the Iberian Peninsula during the Hauterivian and Aptian. Although ice sheets largely disappeared from Earth for the rest of the period (potential reports from the Turonian, otherwise the warmest period of the Phanerozoic, are disputed), ice sheets and associated sea ice appear to have briefly returned to Antarctica near the very end of the Maastrichtian just prior to the Cretaceous-Paleogene extinction event. The Quaternary Glaciation / Quaternary Ice Age started about 2.58 million years ago at the beginning of the Quaternary Period when the spread of ice sheets in the Northern Hemisphere began.
Since then, the world has seen cycles of glaciation with ice sheets advancing and retreating on 40,000- and 100,000-year time scales called glacial periods, glacials or glacial advances, and interglacial periods, interglacials or glacial retreats. Earth is currently in an interglacial, and the last glacial period ended about 11,700 years ago. All that remains of the continental ice sheets are the Greenland and Antarctic ice sheets and smaller glaciers such as on Baffin Island. The definition of the Quaternary as beginning 2.58 Ma is based on the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Ma, in the mid-Cenozoic (Eocene-Oligocene Boundary). The term Late Cenozoic Ice Age is used to include this early phase. Ice ages can be further divided by location and time; for example, the names Riss (180,000–130,000 years bp) and Würm (70,000–10,000 years bp) refer specifically to glaciation in the Alpine region. The maximum extent of the ice is not maintained for the full interval. The scouring action of each glaciation tends to remove most of the evidence of prior ice sheets almost completely, except in regions where the later sheet does not achieve full coverage. Glacials and interglacials Within the current glaciation, more temperate and more severe periods have occurred. The colder periods are called glacial periods, the warmer periods interglacials, such as the Eemian Stage. There is evidence that similar glacial cycles occurred in previous glaciations, including the Andean-Saharan and the late Paleozoic ice house. The glacial cycles of the late Paleozoic ice house are likely responsible for the deposition of cyclothems. Glacials are characterized by cooler and drier climates over most of Earth and large land and sea ice masses extending outward from the poles. Mountain glaciers in otherwise unglaciated areas extend to lower elevations due to a lower snow line. Sea levels drop due to the removal of large volumes of water above sea level in the icecaps. There is evidence that ocean circulation patterns are disrupted by glaciations. The glacials and interglacials coincide with changes in orbital forcing of climate due to Milankovitch cycles, which are periodic changes in Earth's orbit and the tilt of Earth's rotational axis. Earth has been in an interglacial period known as the Holocene for around 11,700 years, and an article in Nature in 2004 argues that it might be most analogous to a previous interglacial that lasted 28,000 years. Predicted changes in orbital forcing suggest that the next glacial period would begin at least 50,000 years from now. Moreover, anthropogenic forcing from increased greenhouse gases is estimated to potentially outweigh the orbital forcing of the Milankovitch cycles for hundreds of thousands of years. Feedback processes Each glacial period is subject to positive feedback which makes it more severe, and negative feedback which mitigates and (in all cases so far) eventually ends it. Positive An important form of feedback is provided by Earth's albedo, which is how much of the sun's energy is reflected rather than absorbed by Earth. Ice and snow increase Earth's albedo, while forests reduce its albedo. When the air temperature decreases, ice and snow fields grow, and they reduce forest cover. This continues until competition with a negative feedback mechanism forces the system to an equilibrium. One theory is that when glaciers form, two things happen: the ice grinds rocks into dust, and the land becomes dry and arid. 
This allows winds to transport iron-rich dust into the open ocean, where it acts as a fertilizer that causes massive algal blooms that pull large amounts of CO2 out of the atmosphere. This in turn makes it even colder and causes the glaciers to grow more. In 1956, Ewing and Donn hypothesized that an ice-free Arctic Ocean leads to increased snowfall at high latitudes. When low-temperature ice covers the Arctic Ocean there is little evaporation or sublimation and the polar regions are quite dry in terms of precipitation, comparable to the amount found in mid-latitude deserts. This low precipitation allows high-latitude snowfalls to melt during the summer. An ice-free Arctic Ocean absorbs solar radiation during the long summer days, and evaporates more water into the Arctic atmosphere. With higher precipitation, portions of this snow may not melt during the summer and so glacial ice can form at lower altitudes and more southerly latitudes, reducing the temperatures over land by increased albedo as noted above. Furthermore, under this hypothesis the lack of oceanic pack ice allows increased exchange of waters between the Arctic and the North Atlantic Oceans, warming the Arctic and cooling the North Atlantic. (Current projected consequences of global warming include a brief ice-free Arctic Ocean period by 2050.) Additional fresh water flowing into the North Atlantic during a warming cycle may also reduce the global ocean water circulation. Such a reduction (by reducing the effects of the Gulf Stream) would have a cooling effect on northern Europe, which in turn would lead to increased low-latitude snow retention during the summer. It has also been suggested that during an extensive glacial, glaciers may move through the Gulf of Saint Lawrence, extending into the North Atlantic Ocean far enough to block the Gulf Stream.

Negative
Ice sheets that form during glaciations erode the land beneath them. This can reduce the land area above sea level and thus diminish the amount of space on which ice sheets can form. This mitigates the albedo feedback, as does the rise in sea level that accompanies the reduced area of ice sheets, since open ocean has a lower albedo than land. Another negative feedback mechanism is the increased aridity occurring with glacial maxima, which reduces the precipitation available to maintain glaciation. The glacial retreat induced by this or any other process can be amplified by similar inverse positive feedbacks as for glacial advances. According to research published in Nature Geoscience, human emissions of carbon dioxide (CO2) will defer the next glacial period. Researchers used data on Earth's orbit to find the historical warm interglacial period that looks most like the current one and from this have predicted that the next glacial period would usually begin within 1,500 years. They go on to predict that emissions have been so high that it will not.

Causes
The causes of ice ages are not fully understood for either the large-scale ice age periods or the smaller ebb and flow of glacial–interglacial periods within an ice age.
The consensus is that several factors are important: atmospheric composition, such as the concentrations of carbon dioxide and methane (the specific levels of these gases can now be measured from the ice core samples taken by the European Project for Ice Coring in Antarctica (EPICA) at Dome C in Antarctica, covering the past 800,000 years); changes in Earth's orbit around the Sun known as Milankovitch cycles; the motion of tectonic plates resulting in changes in the relative location and amount of continental and oceanic crust on Earth's surface, which affect wind and ocean currents; variations in solar output; the orbital dynamics of the Earth–Moon system; the impact of relatively large meteorites and volcanism including eruptions of supervolcanoes. Some of these factors influence each other. For example, changes in Earth's atmospheric composition (especially the concentrations of greenhouse gases) may alter the climate, while climate change itself can change the atmospheric composition (for example by changing the rate at which weathering removes CO2). Maureen Raymo, William Ruddiman and others propose that the Tibetan and Colorado Plateaus are immense "scrubbers" with a capacity to remove enough CO2 from the global atmosphere to be a significant causal factor of the 40-million-year Cenozoic cooling trend. They further claim that approximately half of their uplift (and "scrubbing" capacity) occurred in the past 10 million years.

Changes in Earth's atmosphere
There is evidence that greenhouse gas levels fell at the start of ice ages and rose during the retreat of the ice sheets, but it is difficult to establish cause and effect (see the notes above on the role of weathering). Greenhouse gas levels may also have been affected by other factors which have been proposed as causes of ice ages, such as the movement of continents and volcanism. The Snowball Earth hypothesis maintains that the severe freezing in the late Proterozoic was ended by an increase in CO2 levels in the atmosphere, mainly from volcanoes, and some supporters of Snowball Earth argue that it was caused in the first place by a reduction in atmospheric CO2. The hypothesis also warns of future Snowball Earths. In 2009, further evidence was provided that changes in solar insolation provide the initial trigger for Earth to warm after an Ice Age, with secondary factors like increases in greenhouse gases accounting for the magnitude of the change.

Position of the continents
The geological record appears to show that ice ages start when the continents are in positions which block or reduce the flow of warm water from the equator to the poles and thus allow ice sheets to form. The ice sheets increase Earth's reflectivity and thus reduce the absorption of solar radiation. With less radiation absorbed the atmosphere cools; the cooling allows the ice sheets to grow, which further increases reflectivity in a positive feedback loop. The ice age continues until the reduction in weathering causes an increase in the greenhouse effect. There are three main contributors from the layout of the continents that obstruct the movement of warm water to the poles: a continent sits on top of a pole, as Antarctica does today; a polar sea is almost land-locked, as the Arctic Ocean is today; and a supercontinent covers most of the equator, as Rodinia did during the Cryogenian period.
Since today's Earth has a continent over the South Pole and an almost land-locked ocean over the North Pole, geologists believe that Earth will continue to experience glacial periods in the geologically near future. Some scientists believe that the Himalayas are a major factor in the current ice age, because these mountains have increased Earth's total rainfall and therefore the rate at which carbon dioxide is washed out of the atmosphere, decreasing the greenhouse effect. The Himalayas' formation started about 70 million years ago when the Indo-Australian Plate collided with the Eurasian Plate, and the Himalayas are still rising by about 5 mm per year because the Indo-Australian plate is still moving at 67 mm/year. The history of the Himalayas broadly fits the long-term decrease in Earth's average temperature since the mid-Eocene, 40 million years ago. Fluctuations in ocean currents Another important contribution to ancient climate regimes is the variation of ocean currents, which are modified by continent position, sea levels and salinity, as well as other factors. They have the ability to cool (e.g. aiding the creation of Antarctic ice) and the ability to warm (e.g. giving the British Isles a temperate as opposed to a boreal climate). The closing of the Isthmus of Panama about 3 million years ago may have ushered in the present period of strong glaciation over North America by ending the exchange of water between the tropical Atlantic and Pacific Oceans. Analyses suggest that ocean current fluctuations can adequately account for recent glacial oscillations. During the last glacial period the sea-level fluctuated 20–30 m as water was sequestered, primarily in the Northern Hemisphere ice sheets. When ice collected and the sea level dropped sufficiently, flow through the Bering Strait (the narrow strait between Siberia and Alaska is about 50 m deep today) was reduced, resulting in increased flow from the North Atlantic. This realigned the thermohaline circulation in the Atlantic, increasing heat transport into the Arctic, which melted the polar ice accumulation and reduced other continental ice sheets. The release of water raised sea levels again, restoring the ingress of colder water from the Pacific with an accompanying shift to northern hemisphere ice accumulation. According to a study published in Nature in 2021, all glacial periods of ice ages over the last 1.5 million years were associated with northward shifts of melting Antarctic icebergs which changed ocean circulation patterns, leading to more CO2 being pulled out of the atmosphere. The authors suggest that this process may be disrupted in the future as the Southern Ocean will become too warm for the icebergs to travel far enough to trigger these changes. Uplift of the Tibetan plateau Matthias Kuhle's geological theory of Ice Age development was suggested by the existence of an ice sheet covering the Tibetan Plateau during the Ice Ages (Last Glacial Maximum?). According to Kuhle, the plate-tectonic uplift of Tibet past the snow-line has led to a surface of c. 2,400,000 square kilometres (930,000 sq mi) changing from bare land to ice with a 70% greater albedo. The reflection of energy into space resulted in a global cooling, triggering the Pleistocene Ice Age. Because this highland is at a subtropical latitude, with four to five times the insolation of high-latitude areas, what would be Earth's strongest heating surface has turned into a cooling surface. 
Kuhle explains the interglacial periods by the 100,000-year cycle of radiation changes due to variations in Earth's orbit. This comparatively insignificant warming, when combined with the lowering of the Nordic inland ice areas and Tibet due to the weight of the superimposed ice-load, has led to the repeated complete thawing of the inland ice areas.

Variations in Earth's orbit
The Milankovitch cycles are a set of cyclic variations in characteristics of Earth's orbit around the Sun. Each cycle has a different length, so at some times their effects reinforce each other and at other times they (partially) cancel each other. There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milankovitch orbital forcing periods is so close that orbital forcing is generally accepted. The combined effects of the changing distance to the Sun, the precession of Earth's axis, and the changing tilt of Earth's axis redistribute the sunlight received by Earth. Of particular importance are changes in the tilt of Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m2 to 550 W/m2). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms like the CO2 feedback may explain this mismatch. While Milankovitch forcing predicts that cyclic changes in Earth's orbital elements can be expressed in the glaciation record, additional explanations are necessary to explain which cycles are observed to be most important in the timing of glacial–interglacial periods. In particular, during the last 800,000 years, the dominant period of glacial–interglacial oscillation has been 100,000 years, which corresponds to changes in Earth's orbital eccentricity and orbital inclination. Yet this is by far the weakest of the three frequencies predicted by Milankovitch. During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). The reasons for dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in Earth's climate system. Recent work suggests that the 100,000-year cycle dominates due to increased southern-pole sea ice increasing total solar reflectivity. The "traditional" Milankovitch explanation struggles to explain the dominance of the 100,000-year cycle over the last 8 cycles. Richard A. Muller, Gordon J. F. MacDonald, and others have pointed out that those calculations are for a two-dimensional orbit of Earth but the three-dimensional orbit also has a 100,000-year cycle of orbital inclination. They proposed that these variations in orbital inclination lead to variations in insolation, as Earth moves in and out of known dust bands in the solar system.
Although this is a different mechanism to the traditional view, the "predicted" periods over the last 400,000 years are nearly the same. The Muller and MacDonald theory, in turn, has been challenged by Jose Antonio Rial. William Ruddiman has suggested a model that explains the 100,000-year cycle by the modulating effect of eccentricity (weak 100,000-year cycle) on precession (26,000-year cycle) combined with greenhouse gas feedbacks in the 41,000- and 26,000-year cycles. Yet another theory has been advanced by Peter Huybers who argued that the 41,000-year cycle has always been dominant, but that Earth has entered a mode of climate behavior where only the second or third cycle triggers an ice age. This would imply that the 100,000-year periodicity is really an illusion created by averaging together cycles lasting 80,000 and 120,000 years. This theory is consistent with a simple empirical multi-state model proposed by Didier Paillard. Paillard suggests that the late Pleistocene glacial cycles can be seen as jumps between three quasi-stable climate states. The jumps are induced by the orbital forcing, while in the early Pleistocene the 41,000-year glacial cycles resulted from jumps between only two climate states. A dynamical model explaining this behavior was proposed by Peter Ditlevsen. This is in support of the suggestion that the late Pleistocene glacial cycles are not due to the weak 100,000-year eccentricity cycle, but a non-linear response to mainly the 41,000-year obliquity cycle. Variations in the Sun's energy output There are at least two types of variation in the Sun's energy output: In the very long term, astrophysicists believe that the Sun's output increases by about 7% every one billion years. Shorter-term variations such as sunspot cycles, and longer episodes such as the Maunder Minimum, which occurred during the coldest part of the Little Ice Age. The long-term increase in the Sun's output cannot be a cause of ice ages. Volcanism Volcanic eruptions may have contributed to the inception and/or the end of ice age periods. At times during the paleoclimate, carbon dioxide levels were two or three times greater than today. Volcanoes and movements in continental plates contributed to high amounts of CO2 in the atmosphere. Carbon dioxide from volcanoes probably contributed to periods with highest overall temperatures. One suggested explanation of the Paleocene–Eocene Thermal Maximum is that undersea volcanoes released methane from clathrates and thus caused a large and rapid increase in the greenhouse effect. There appears to be no geological evidence for such eruptions at the right time, but this does not prove they did not happen. Recent glacial and interglacial phases The current geological period, the Quaternary, which began about 2.6 million years ago and extends into the present, is marked by warm and cold episodes, cold phases called glacials (Quaternary ice age) lasting about 100,000 years, and warm phases called interglacials lasting 10,000–15,000 years. The last cold episode of the Last Glacial Period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene. Glacial stages in North America The major glacial stages of the current ice age in North America are the Illinoian, Eemian, and Wisconsin glaciation. The use of the Nebraskan, Afton, Kansan, and Yarmouthian stages to subdivide the ice age in North America has been discontinued by Quaternary geologists and geomorphologists. 
These stages were all merged into the Pre-Illinoian in the 1980s. During the most recent North American glaciation, during the latter part of the Last Glacial Maximum (26,000 to 13,300 years ago), ice sheets extended to about the 45th parallel north. These sheets were several kilometres thick. This Wisconsin glaciation left widespread impacts on the North American landscape. The Great Lakes and the Finger Lakes were carved by ice deepening old valleys. Most of the lakes in Minnesota and Wisconsin were gouged out by glaciers and later filled with glacial meltwaters. The old Teays River drainage system was radically altered and largely reshaped into the Ohio River drainage system. Other rivers were dammed and diverted to new channels, such as the Niagara River, which formed a dramatic waterfall and gorge when the waterflow encountered a limestone escarpment. Another similar waterfall, at the present Clark Reservation State Park near Syracuse, New York, is now dry. The area from Long Island to Nantucket, Massachusetts was formed from glacial till, and the plethora of lakes on the Canadian Shield in northern Canada can be almost entirely attributed to the action of the ice. As the ice retreated and the rock dust dried, winds carried the material hundreds of miles, forming beds of loess many dozens of feet thick in the Missouri Valley. Post-glacial rebound continues to reshape the Great Lakes and other areas formerly under the weight of the ice sheets. The Driftless Area, a portion of western and southwestern Wisconsin along with parts of adjacent Minnesota, Iowa, and Illinois, was not covered by glaciers.

Last Glacial Period in the semiarid Andes around Aconcagua and Tupungato
An especially interesting climatic change during glacial times took place in the semi-arid Andes. Besides the expected cooling compared with the current climate, a significant change in precipitation occurred here. Research in the presently semiarid subtropical Aconcagua massif (6,962 m) has shown an unexpectedly extensive glaciation of the "ice stream network" type. Connected valley glaciers exceeding 100 km in length flowed down the east side of this section of the Andes at 32–34°S and 69–71°W to an elevation of 2,060 m, and on the western windward side to clearly lower elevations. Today's glaciers there scarcely reach 10 km in length, and the snowline (equilibrium line altitude, ELA) runs at a height of 4,600 m; at that time it was lowered to 3,200 m asl, a depression of about 1,400 m. It follows that, besides an annual temperature depression of about 8.4 °C, there was an increase in precipitation. Accordingly, in glacial times the humid climatic belt that today lies several degrees of latitude further south was shifted much further north.

Effects of glaciation
Although the last glacial period ended more than 8,000 years ago, its effects can still be felt today. For example, the moving ice carved out the landscape in Canada (see Canadian Arctic Archipelago), Greenland, northern Eurasia and Antarctica. The erratic boulders, till, drumlins, eskers, fjords, kettle lakes, moraines, cirques, horns, etc., are typical features left behind by the glaciers. The weight of the ice sheets was so great that they deformed Earth's crust and mantle. After the ice sheets melted, the ice-covered land rebounded. Due to the high viscosity of Earth's mantle, the flow of mantle rocks which controls the rebound process is very slow, at a rate of about 1 cm/year near the center of the rebound area today.
During glaciation, water was taken from the oceans to form the ice at high latitudes, thus global sea level dropped by about 110 meters, exposing the continental shelves and forming land-bridges between land-masses for animals to migrate. During deglaciation, the melted ice-water returned to the oceans, causing sea level to rise. This process can cause sudden shifts in coastlines and hydration systems resulting in newly submerged lands, emerging lands, collapsed ice dams resulting in salination of lakes, new ice dams creating vast areas of freshwater, and a general alteration in regional weather patterns on a large but temporary scale. It can even cause temporary reglaciation. This type of chaotic pattern of rapidly changing land, ice, saltwater and freshwater has been proposed as the likely model for the Baltic and Scandinavian regions, as well as much of central North America at the end of the last glacial maximum, with the present-day coastlines only being achieved in the last few millennia of prehistory. Also, the effect of elevation on Scandinavia submerged a vast continental plain that had existed under much of what is now the North Sea, connecting the British Isles to Continental Europe. The redistribution of ice-water on the surface of Earth and the flow of mantle rocks causes changes in the gravitational field as well as changes to the distribution of the moment of inertia of Earth. These changes to the moment of inertia result in a change in the angular velocity, axis, and wobble of Earth's rotation. The weight of the redistributed surface mass loaded the lithosphere, caused it to flex and also induced stress within Earth. The presence of the glaciers generally suppressed the movement of faults below. During deglaciation, the faults experience accelerated slip triggering earthquakes. Earthquakes triggered near the ice margin may in turn accelerate ice calving and may account for the Heinrich events. As more ice is removed near the ice margin, more intraplate earthquakes are induced and this positive feedback may explain the fast collapse of ice sheets.  In Europe, glacial erosion and isostatic sinking from the weight of ice made the Baltic Sea, which before the Ice Age was all land drained by the Eridanos River.

Future ice ages
A 2015 report by the Past Global Changes Project says simulations show that a new glaciation is unlikely to happen within the next approximately 50,000 years, before the next strong drop in Northern Hemisphere summer insolation occurs, "if either atmospheric CO2 concentration remains above 300 ppm or cumulative carbon emissions exceed 1000 Pg C" (i.e. 1,000 gigatonnes carbon). "Only for an atmospheric CO2 content below the preindustrial level may a glaciation occur within the next 10 ka. ... Given the continued anthropogenic CO2 emissions, glacial inception is very unlikely to occur in the next 50 ka, because the timescale for CO2 and temperature reduction toward unperturbed values in the absence of active removal is very long [IPCC, 2013], and only weak precessional forcing occurs in the next two precessional cycles." (A precessional cycle is around 21,000 years, the time it takes for the perihelion to move all the way around the tropical year.) Ice ages go through cycles of about 100,000 years, but the next one may well be avoided due to human carbon dioxide emissions.
Physical sciences
Geological history
null
15412
https://en.wikipedia.org/wiki/Infrared%20spectroscopy
Infrared spectroscopy
Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm−1. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below. The infrared portion of the electromagnetic spectrum is usually divided into three regions: the near-, mid- and far-infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm−1 (0.7–2.5 μm wavelength), can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm−1 (2.5–25 μm), is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1,000 μm), has low energy and may be used for rotational spectroscopy and low-frequency vibrations. The region from 2–130 cm−1, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.

Uses and applications
Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers. It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver. IR spectroscopy has been used in identification of pigments in paintings and other art objects such as illuminated manuscripts. Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Instruments can routinely record many spectra per second in situ, providing insights into reaction mechanism (e.g., detection of intermediates) and reaction progress. Infrared spectroscopy is utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc.
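As a quick aside on the units described earlier, wavenumber in cm−1 and wavelength in μm are related reciprocally by ṽ[cm−1] = 10,000 / λ[μm]. A small conversion sketch (the helper names are illustrative choices, not from the text):

```python
def um_to_wavenumber(wavelength_um):
    """Convert wavelength in micrometers to wavenumber in cm^-1."""
    return 1e4 / wavelength_um

def wavenumber_to_um(wavenumber_cm1):
    """Convert wavenumber in cm^-1 to wavelength in micrometers."""
    return 1e4 / wavenumber_cm1

# Region boundaries quoted in the text:
print(um_to_wavenumber(2.5))    # 4000 cm^-1: near-IR / mid-IR boundary
print(um_to_wavenumber(25))     # 400 cm^-1:  mid-IR / far-IR boundary
print(wavenumber_to_um(14000))  # ~0.71 um:   high-energy end of the near-IR
```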
Another important application of infrared spectroscopy is in the food industry to measure the concentration of various compounds in different food products. Infrared spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil. Infrared spectroscopy is an important analysis method in the recycling process of household waste plastics, and a convenient stand-off method to sort plastic of different polymers (PET, HDPE, ...). Other developments include a miniature IR-spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets. In catalysis research it is a very useful tool to characterize the catalyst, as well as to detect intermediates. Infrared spectroscopy coupled with machine learning and artificial intelligence also has potential for rapid, accurate and non-invasive sensing of bacteria. The complex chemical composition of bacteria, including nucleic acids, proteins, carbohydrates and fatty acids, results in high-dimensional datasets where the essential features are effectively hidden under the total spectrum. Extraction of the essential features therefore requires advanced statistical methods such as machine learning and deep neural networks. The potential of this technique for bacteria classification has been demonstrated for differentiation at the genus, species and serotype taxonomic levels, and it has also shown promise for antimicrobial susceptibility testing, which is important for many clinical settings where faster susceptibility testing would decrease unnecessary blind treatment with broad-spectrum antibiotics. The main limitation of this technique for clinical applications is the high sensitivity to technical equipment and sample preparation techniques, which makes it difficult to construct large-scale databases. Attempts in this direction have however been made by Bruker with the IR Biotyper for food microbiology.

Theory
Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling. In particular, in the Born–Oppenheimer and harmonic approximations (i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighbourhood of the equilibrium molecular geometry), the resonant frequencies are associated with the normal modes of vibration corresponding to the molecular electronic ground state potential energy surface. Thus, it depends on both the nature of the bonds and the mass of the atoms that are involved. Using the Schrödinger equation leads to the selection rule for the vibrational quantum number v in a system undergoing vibrational changes:

$$\Delta v = \pm 1$$

The compression and extension of a bond may be likened to the behaviour of a spring, but real molecules are hardly perfectly elastic in nature. If a bond between atoms is stretched, for instance, there comes a point at which the bond breaks and the molecule dissociates into atoms. Thus real molecules deviate from perfect harmonic motion and their molecular vibrational motion is anharmonic.
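Before turning to anharmonicity, the harmonic approximation already predicts fundamental band positions: for a diatomic, ṽ = (1/2πc)·sqrt(k/μ), with k the bond force constant and μ the reduced mass. A sketch (illustrative; the CO force constant below is an approximate literature value, assumed here only for the example):

```python
import math

c = 2.99792458e10        # speed of light in cm/s (cm so the result is in cm^-1)
amu = 1.66053906660e-27  # atomic mass unit in kg

def fundamental_wavenumber(k, m1, m2):
    """Harmonic-oscillator fundamental nu~ = sqrt(k/mu) / (2*pi*c), in cm^-1.

    k      : force constant in N/m
    m1, m2 : atomic masses in amu
    """
    mu = (m1 * m2) / (m1 + m2) * amu   # reduced mass in kg
    return math.sqrt(k / mu) / (2 * math.pi * c)

# Carbon monoxide with an assumed force constant of ~1860 N/m:
print(fundamental_wavenumber(1860, 12.000, 15.995))  # ~2146 cm^-1, near the observed ~2143
```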
An empirical expression that fits the energy curve of a diatomic molecule undergoing anharmonic extension and compression to a good approximation was derived by P. M. Morse, and is called the Morse function. Using the Schrödinger equation with an anharmonic potential leads to the selection rule for the system undergoing vibrational changes: Δv = ±1, ±2, ±3, ..., so that weak overtone transitions become allowed. Number of vibrational modes In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment. A molecule can vibrate in many ways, and each way is called a vibrational mode. For a molecule with N atoms, geometrically linear molecules have 3N − 5 vibrational modes, whereas nonlinear molecules have 3N − 6 vibrational modes (also called vibrational degrees of freedom). As examples, linear carbon dioxide (CO2) has 3 × 3 − 5 = 4, while non-linear water (H2O) has only 3 × 3 − 6 = 3. Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra. The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: two stretching modes (ν), symmetric (νs) and antisymmetric (νas); and four bending modes, scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ). Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. In more complex molecules, out-of-plane (γ) vibrational modes can also be present. Diagrams of these modes usually do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms. The simplest and most important or fundamental IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number v = 0 to the first excited state with vibrational quantum number v = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state (v = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. Some excitations, so-called combination modes, involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in the energy and intensity of the bands. Practical IR spectroscopy The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR matches the vibrational frequency of a bond or collection of bonds, absorption occurs.
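Before turning to the practical measurement, the 3N − 5 / 3N − 6 mode-counting rule above can be sketched in a few lines (a toy helper, not a library function):

```python
# Sketch: counting vibrational modes with the 3N-5 (linear) / 3N-6 (nonlinear)
# rule described above.

def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of vibrational degrees of freedom for an N-atom molecule."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=True))    # CO2: 4 modes
print(vibrational_modes(3, linear=False))   # H2O: 3 modes
print(vibrational_modes(5, linear=False))   # CH2X2-type molecule: 9 modes
```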
Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is extracted. This technique is commonly used for analyzing samples with covalent bonds. The number of bands roughly correlates with symmetry and molecular complexity. A variety of devices are used to hold the sample in the path of the IR beam. These devices are selected on the basis of their transparency in the region of interest and their resilience toward the sample. Sample preparation Gas samples Gaseous samples require a sample cell with a long pathlength to compensate for their low concentration. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with a length of 5 to 10 cm, equipped with infrared-transparent windows at both ends, can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell, in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlengths starting from 0.5 m up to one hundred meters. Liquid samples Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used). The plates are transparent to the infrared light and do not introduce any lines onto the spectra. With advances in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment). Solid samples Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually the mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care is important to ensure that the film is not too thick, otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved. In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup, which is inserted into the photoacoustic cell, which is then sealed for the measurement. The sample may be one solid piece, powder or basically in any form for the measurement.
For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it. A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials. Comparing to a reference It is typical to record a spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. the infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the instrument influence. The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately). A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup, can correct for these types of effects to give very accurate results. The standard addition method can be used to statistically cancel these errors. Nevertheless, among the different absorption-based techniques used for gaseous species detection, cavity ring-down spectroscopy (CRDS) can be used as a calibration-free method. The fact that CRDS is based on the measurement of photon lifetimes (and not the laser intensity) means that it needs no calibration or comparison with a reference. Some instruments also automatically identify the substance being measured from thousands of reference spectra held in storage. FTIR Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called the Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference. An alternate method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method.
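The FTIR processing chain described above (interferogram, Fourier transform, then ratio against a reference) can be sketched schematically with numpy. Everything here is illustrative: the synthetic cosine interferogram, the sampling step, and the band positions are toy choices, not an instrument algorithm:

```python
import numpy as np

# Schematic sketch of FTIR data processing:
# interferogram -> Fourier transform -> single-beam spectrum,
# then sample/reference ratio -> transmittance -> absorbance.

n_points = 4096
dx = 633e-7 / 2                       # sampling step in cm (illustrative, HeNe half-wave)
x = np.arange(n_points) * dx          # optical path difference axis

def single_beam(bands):
    """Build a toy interferogram from (wavenumber, intensity) pairs and FFT it."""
    interferogram = sum(a * np.cos(2 * np.pi * nu * x) for nu, a in bands)
    spectrum = np.abs(np.fft.rfft(interferogram))
    wavenumbers = np.fft.rfftfreq(n_points, d=dx)   # axis in cm^-1
    return wavenumbers, spectrum

# Reference beam: source intensity at two wavenumbers; sample: one band absorbs.
nu_axis, ref = single_beam([(1000.0, 1.0), (2000.0, 1.0)])
_, sam = single_beam([(1000.0, 1.0), (2000.0, 0.5)])   # 50% transmittance at 2000

transmittance = np.where(ref > 1e-6, sam / np.maximum(ref, 1e-6), 1.0)
absorbance = -np.log10(np.clip(transmittance, 1e-6, None))
```

The last two lines show the usual conversion of the sample/reference ratio into transmittance and then absorbance, A = −log10(T).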
One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": the information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's throughput advantage": a dispersive measurement requires detecting much lower light levels than an FTIR measurement. There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments. Infrared microscopy Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR). Other methods in molecular vibrational spectroscopy Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy. The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries. Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. This method is useful for studying vibrations of molecules adsorbed on a solid surface. Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals. Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELS is very high, the bands are extremely broad compared to other techniques. Computational infrared microscopy By using computer simulations and normal mode analysis it is possible to calculate theoretical frequencies of molecules. Absorption bands IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables. Regions An IR spectrum is often interpreted as having two regions: the functional group region, in which there are one to a few troughs per functional group, and the fingerprint region, in which there are many troughs that form an intricate pattern which can be used like a fingerprint to determine the compound. Badger's rule For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called Badger's rule.
Originally published by Richard McLean Badger in 1934, this rule states that the strength of a bond (in terms of force constant) correlates with the bond length. That is, an increase in bond strength leads to a corresponding bond shortening, and vice versa. Isotope effects The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(16O–16O) and ν(18O–18O), respectively. By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)]: $\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$, where k is the spring constant for the bond, c is the speed of light, and μ is the reduced mass of the A–B system: $\mu = \frac{m_A m_B}{m_A + m_B}$ ($m_i$ is the mass of atom $i$). The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9, respectively. Thus $\frac{\tilde{\nu}(^{16}\mathrm{O})}{\tilde{\nu}(^{18}\mathrm{O})} = \sqrt{\frac{9}{8}} \approx 1.06 \approx \frac{832}{788}$. The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to 29Si, the lifetime increases to 19 ps. In a similar manner, when the silicon atom is changed to 30Si, the lifetime becomes 27 ps. Two-dimensional IR Two-dimensional infrared correlation spectroscopy analysis combines multiple infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers. Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research. As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes.
In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins.
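Returning to the isotope effects discussed in a previous section, the 16O/18O shift follows directly from the reduced-mass ratio, since the wavenumber scales as 1/√μ. A minimal sketch using the oxyhemocyanin values quoted there:

```python
import math

# Sketch: isotope shift from the reduced-mass ratio; nu~ scales as 1/sqrt(mu),
# so nu(18O-18O) = nu(16O-16O) * sqrt(mu16 / mu18).

def reduced_mass(m1: float, m2: float) -> float:
    return m1 * m2 / (m1 + m2)

mu16 = reduced_mass(16, 16)   # 8.0
mu18 = reduced_mass(18, 18)   # 9.0

nu16 = 832.0                            # observed nu(16O-16O), cm^-1
nu18_predicted = nu16 * math.sqrt(mu16 / mu18)
print(round(nu18_predicted))            # ~784, close to the observed 788 cm^-1
```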
Physical sciences
Spectroscopy
Chemistry
15417
https://en.wikipedia.org/wiki/Intermolecular%20force
Intermolecular force
An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics. The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling. Attractive intermolecular forces are categorized into the following types: hydrogen bonding; ion–dipole forces and ion–induced dipole forces; cation–π, σ–π and π–π bonding; van der Waals forces (Keesom force, Debye force, and London dispersion force); cation–cation bonding; and salt bridges (protein and supramolecular). Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity and pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential. In the broadest sense, intermolecular interaction can be understood as such interactions between any particles (molecules, atoms, ions and molecular ions) in which the formation of chemical (that is, ionic, covalent or metallic) bonds does not occur. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme, or between a molecule and a catalyst, but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to significant restructuring that changes the energy state of the molecules or substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme, so the importance of these interactions is especially great in biochemistry and molecular biology, and is the basis of enzymology.) Hydrogen bonding A hydrogen bond is an extreme form of dipole–dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine, and the lone pair of another electronegative atom. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs.
The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is the number of hydrogens on the donor that can be matched with lone pairs on the acceptor, i.e. the smaller of the two counts. Water molecules, for example, have four such active bonds: the oxygen atom's two lone pairs each interact with a hydrogen of a neighbouring molecule, forming two hydrogen bonds, and each of the molecule's own two hydrogen atoms interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural. Salt bridge The attraction between cationic and anionic sites is a noncovalent, or intermolecular, interaction which is usually referred to as ion pairing or a salt bridge. It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contact determined only by the van der Waals radii of the ions. Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of e.g. a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol. Dipole–dipole and similar interactions Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion–ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3). Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
Ion–dipole and ion–induced dipole forces Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding. An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy. Polar water molecules surround the ions in solution, and the energy released during this process is known as the hydration enthalpy. The interaction is immensely important in explaining the stability of various ions (like Cu2+) in water. An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule. Van der Waals forces The van der Waals forces arise from the interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies. Keesom force (permanent dipole – permanent dipole) The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent. They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also, Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle-averaged interaction is given by the following equation: $V = -\frac{d_1^2 d_2^2}{24 \pi^2 \varepsilon_0^2 \varepsilon_r^2 k_\mathrm{B} T \, r^6}$, where d = electric dipole moment, $\varepsilon_0$ = permittivity of free space, $\varepsilon_r$ = dielectric constant of surrounding material, T = temperature, $k_\mathrm{B}$ = Boltzmann constant, and r = distance between molecules. Debye force (permanent dipoles–induced dipoles) The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with a permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms.
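Before continuing with induced-dipole forces, the angle-averaged Keesom expression above can be evaluated numerically. A minimal sketch; the dipole moment and separation are illustrative water-like values, so the printed number is only an order-of-magnitude check:

```python
import math

# Sketch: evaluating the angle-averaged Keesom energy given above,
# V = -d1^2 d2^2 / (24 pi^2 eps0^2 epsr^2 kB T r^6).
# Dipole moment and separation below are illustrative water-like values.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
KB = 1.381e-23        # Boltzmann constant, J/K
N_A = 6.022e23        # Avogadro constant, 1/mol

def keesom_energy(d1, d2, r, temperature=298.0, eps_r=1.0):
    """Angle-averaged Keesom interaction energy in joules."""
    return -(d1**2 * d2**2) / (
        24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * temperature * r**6)

d_water = 6.2e-30     # ~1.85 debye, in C*m
e = keesom_energy(d_water, d_water, r=4e-10)   # two dipoles 0.4 nm apart
print(f"{e * N_A / 1000:.1f} kJ/mol")          # about -2.8 kJ/mol, comparable to RT
```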
The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions. The induced dipole forces arise from induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole induced by it on another. This interaction is called the Debye force, named after Peter J. W. Debye. One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle-averaged interaction is given by the following equation: $V = -\frac{d_1^2 \alpha_2}{16 \pi^2 \varepsilon_0^2 \varepsilon_r^2 \, r^6}$, where $\alpha_2$ = polarizability of the second molecule. This kind of interaction can be expected between any polar molecule and a non-polar/symmetrical molecule. The induction-interaction force is far weaker than the dipole–dipole interaction, but stronger than the London dispersion force. London dispersion force (fluctuating dipole–induced dipole interaction) The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range. Relative strength of forces Any comparison of the relative strengths of these forces is approximate; the actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that, for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But it is not so for big moving systems like enzyme molecules interacting with substrate molecules. Here numerous weak intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which these bonds cause some covalent bonds to be broken while others are formed; in this way proceed the thousands of enzymatic reactions so important for living organisms. Effect on the behavior of gases Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume.
This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor). In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature. When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces. Quantum mechanical theories Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, the van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum-chemical methods for visualizing these intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a large role in it. Concerning electron density topology, methods based on electron density gradients have recently emerged, notably IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
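The pair potentials mentioned earlier (Mie, Buckingham, Lennard-Jones) make this balance of short-range repulsion and long-range attraction quantitative. A minimal sketch of the Lennard-Jones 12-6 form, with approximate argon parameters used purely for illustration:

```python
# Sketch: the Lennard-Jones 12-6 pair potential,
# V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]; the r^-12 term models
# short-range repulsion and the r^-6 term the attractive dispersion tail.
# Parameters below are approximate argon values, for illustration only.

def lennard_jones(r_nm: float, eps_kj_mol: float = 0.997, sigma_nm: float = 0.3405) -> float:
    """LJ potential energy in kJ/mol at separation r (nm)."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * eps_kj_mol * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6) * 0.3405          # minimum at r = 2^(1/6) * sigma ~ 0.382 nm
print(round(lennard_jones(r_min), 3))  # ~ -0.997 kJ/mol (the well depth, -eps)
print(lennard_jones(0.30) > 0)         # True: repulsive below sigma
```

The minimum sits at r = 2^(1/6) σ with depth −ε, which the printed values confirm.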
Physical sciences
Chemical bonds
null
15445
https://en.wikipedia.org/wiki/Entropy%20%28information%20theory%29
Entropy (information theory)
In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable $X$, which takes values in the set $\mathcal{X}$ and is distributed according to $p\colon \mathcal{X} \to [0, 1]$, the entropy is $\mathrm{H}(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x)$, where $\Sigma$ denotes the sum over the variable's possible values. The choice of base for $\log$, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition generalizes the above. Introduction The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event. The information content, also called the surprisal or self-information, of an event $E$ is a function that increases as the probability $p(E)$ of the event decreases. When $p(E)$ is close to 1, the surprisal of the event is low, but if $p(E)$ is close to 0, the surprisal of the event is high. This relationship is described by the function $\log\left(\frac{1}{p(E)}\right)$, where $\log$ is the logarithm, which gives 0 surprise when the probability of the event is 1.
In fact, $\log$ is the only function that satisfies a specific set of conditions defined in the section Characterization below. Hence, we can define the information, or surprisal, of an event $E$ by $I(E) = -\log_2(p(E))$, or equivalently, $I(E) = \log_2\left(\frac{1}{p(E)}\right)$. Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability ($p = 1/6$) than each outcome of a coin toss ($p = 1/2$). Consider a coin with probability $p$ of landing on heads and probability $1 - p$ of landing on tails. The maximum surprise is when $p = 1/2$, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit. (Similarly, one trit with equiprobable values contains $\log_2 3$ (about 1.58496) bits of information because it can have one of three values.) The minimum surprise is when $p = 0$ or $p = 1$, when the event outcome is known ahead of time, and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits. Example Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message. Definition Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable $X$, which takes values in the set $\mathcal{X}$ and is distributed according to $p\colon \mathcal{X} \to [0, 1]$ such that $p(x) := \mathbb{P}[X = x]$: $\mathrm{H}(X) = \mathbb{E}[\mathrm{I}(X)] = \mathbb{E}[-\log p(X)].$ Here $\mathbb{E}$ is the expected value operator, and $\mathrm{I}$ is the information content of $X$. $\mathrm{I}(X)$ is itself a random variable. The entropy can explicitly be written as: $\mathrm{H}(X) = -\sum_{x \in \mathcal{X}} p(x) \log_b p(x),$ where $b$ is the base of the logarithm used. Common values of $b$ are 2, Euler's number $e$, and 10, and the corresponding units of entropy are the bits for $b = 2$, nats for $b = e$, and bans for $b = 10$. In the case of $p(x) = 0$ for some $x \in \mathcal{X}$, the value of the corresponding summand $0 \log_b(0)$ is taken to be 0, which is consistent with the limit: $\lim_{p \to 0^+} p \log(p) = 0.$ One may also define the conditional entropy of two variables $X$ and $Y$ taking values from sets $\mathcal{X}$ and $\mathcal{Y}$ respectively, as: $\mathrm{H}(X|Y) = -\sum_{x, y \in \mathcal{X} \times \mathcal{Y}} p_{X,Y}(x, y) \log \frac{p_{X,Y}(x, y)}{p_Y(y)},$ where $p_{X,Y}(x, y) := \mathbb{P}[X = x, Y = y]$ and $p_Y(y) = \mathbb{P}[Y = y]$.
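The definitions above are straightforward to compute. A minimal sketch that checks the four-letter coding example (probabilities 0.7, 0.26, 0.02, 0.02) against the Shannon formula:

```python
import math

# Sketch: Shannon entropy H(X) = -sum p(x) log2 p(x), applied to the
# four-letter example above ('A' 70%, 'B' 26%, 'C' and 'D' 2% each).

def entropy_bits(probs):
    """Shannon entropy in bits; summands with p = 0 contribute 0 by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.25, 0.25, 0.25, 0.25]))            # 2.0 bits: 2 bits/letter
print(round(entropy_bits([0.7, 0.26, 0.02, 0.02]), 3))   # ~1.091 bits < 2

# Average length of the variable-length code 'A'->0, 'B'->10, 'C'->110, 'D'->111:
avg_len = 0.7 * 1 + 0.26 * 2 + 0.02 * 3 + 0.02 * 3
print(round(avg_len, 2))   # 1.34 bits/letter, above the ~1.09-bit entropy floor
```

The entropy of about 1.09 bits is the lossless compression floor; the variable-length code averages 1.34 bits per letter, between that floor and the 2 bits of the fixed-length code.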
This quantity, the conditional entropy $\mathrm{H}(X|Y)$, should be understood as the remaining randomness in the random variable $X$ given the random variable $Y$. Measure theory Entropy can be formally defined in the language of measure theory as follows: Let $(X, \Sigma, \mu)$ be a probability space. Let $A \in \Sigma$ be an event. The surprisal of $A$ is $\sigma_\mu(A) = -\ln \mu(A).$ The expected surprisal of $A$ is $h_\mu(A) = \mu(A)\, \sigma_\mu(A).$ A $\mu$-almost partition is a set family $P \subseteq \mathcal{P}(X)$ such that $\mu(\mathop{\cup} P) = 1$ and $\mu(A \cap B) = 0$ for all distinct $A, B \in P$. (This is a relaxation of the usual conditions for a partition.) The entropy of $P$ is $\mathrm{H}_\mu(P) = \sum_{A \in P} h_\mu(A).$ Let $M$ be a sigma-algebra on $X$. The entropy of $M$ is $\mathrm{H}_\mu(M) = \sup_{P \subseteq M} \mathrm{H}_\mu(P).$ Finally, the entropy of the probability space is $\mathrm{H}_\mu(\Sigma)$, that is, the entropy with respect to $\mu$ of the sigma-algebra $\Sigma$ of all measurable subsets of $X$. Example Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because $\mathrm{H}(X) = -\tfrac{1}{2} \log_2 \tfrac{1}{2} - \tfrac{1}{2} \log_2 \tfrac{1}{2} = 1$ bit. However, if we know the coin is not fair, but comes up heads or tails with probabilities $p$ and $q$, where $p \neq q$, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if $p = 0.7$, then $\mathrm{H}(X) = -0.7 \log_2 0.7 - 0.3 \log_2 0.3 \approx 0.88$ bits. Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain. Characterization To understand the meaning of $-\sum p_i \log p_i$, first define an information function $\mathrm{I}$ in terms of an event $i$ with probability $p_i$. The amount of information acquired due to the observation of event $i$ follows from Shannon's solution of the fundamental properties of information: $\mathrm{I}(p)$ is monotonically decreasing in $p$: an increase in the probability of an event decreases the information from an observed event, and vice versa. $\mathrm{I}(1) = 0$: events that always occur do not communicate information. $\mathrm{I}(p_1 \cdot p_2) = \mathrm{I}(p_1) + \mathrm{I}(p_2)$: the information learned from independent events is the sum of the information learned from each event. Given two independent events, if the first event can yield one of $n$ equiprobable outcomes and another has one of $m$ equiprobable outcomes then there are $mn$ equiprobable outcomes of the joint event. This means that if $\log_2 n$ bits are needed to encode the first value and $\log_2 m$ to encode the second, one needs $\log_2 mn = \log_2 m + \log_2 n$ to encode both. Shannon discovered that a suitable choice of $\mathrm{I}$ is given by: $\mathrm{I}(p) = \log\left(\tfrac{1}{p}\right) = -\log(p).$ In fact, the only possible values of $\mathrm{I}$ are $\mathrm{I}(u) = k \log u$ for $k < 0$. Additionally, choosing a value for $k$ is equivalent to choosing a value $x > 1$ such that $k = -1/\log x$, so that $x$ corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties. {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Proof |- |Let $\mathrm{I}(p)$ be the information function which one assumes to be twice continuously differentiable. Differentiating $\mathrm{I}(p_1 p_2) = \mathrm{I}(p_1) + \mathrm{I}(p_2)$ with respect to $p_2$ and then setting $p_2 = 1$, one has the differential equation $p\, \mathrm{I}'(p) = \mathrm{I}'(1)$. This differential equation leads to the solution $\mathrm{I}(p) = k \log p + c$ for some constants $k$ and $c$. Property 2 gives $c = 0$. Properties 1 and 2 give that $\mathrm{I}(p) \geq 0$ for all $p \in (0, 1]$, so that $k < 0$.
|} The different units of information (bits for the binary logarithm $\log_2$, nats for the natural logarithm $\ln$, bans for the decimal logarithm $\log_{10}$ and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides $\log_2 2 = 1$ bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, $n$ tosses provide $n$ bits of information, which is approximately $0.693 n$ nats or $0.301 n$ decimal digits. The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Alternative characterization Another characterization of entropy uses the following properties. We denote $p_i = \Pr(X = x_i)$ and $\mathrm{H}_n(p_1, \ldots, p_n) = \mathrm{H}(X)$. Continuity: $\mathrm{H}$ should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount. Symmetry: $\mathrm{H}$ should be unchanged if the outcomes $x_i$ are re-ordered. That is, $\mathrm{H}_n(p_1, p_2, \ldots, p_n) = \mathrm{H}_n(p_{\sigma(1)}, p_{\sigma(2)}, \ldots, p_{\sigma(n)})$ for any permutation $\sigma$ of $\{1, \ldots, n\}$. Maximum: $\mathrm{H}_n$ should be maximal if all the outcomes are equally likely, i.e. $\mathrm{H}_n(p_1, \ldots, p_n) \leq \mathrm{H}_n\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)$. Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes, i.e. $\mathrm{H}_n\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) < \mathrm{H}_{n+1}\left(\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}\right)$. Additivity: given an ensemble of $n$ uniformly distributed elements that are partitioned into $k$ boxes (sub-systems) with $b_1, \ldots, b_k$ elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box. Discussion The rule of additivity has the following consequences: for positive integers $b_i$ where $b_1 + \cdots + b_k = n$, $\mathrm{H}_n\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) = \mathrm{H}_k\left(\tfrac{b_1}{n}, \ldots, \tfrac{b_k}{n}\right) + \sum_{i=1}^{k} \tfrac{b_i}{n}\, \mathrm{H}_{b_i}\left(\tfrac{1}{b_i}, \ldots, \tfrac{1}{b_i}\right).$ Choosing $k = n$, $b_1 = \cdots = b_n = 1$, this implies that the entropy of a certain outcome is zero: $\mathrm{H}_1(1) = 0$. This implies that the efficiency of a source set with $n$ symbols can be defined simply as being equal to its $n$-ary entropy.
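The constant-multiple relationship between units stated above, and the biased-coin example from the earlier section, can both be checked numerically; a minimal sketch:

```python
import math

# Sketch: the different entropy units are constant multiples of one another.
# One fair-coin toss carries 1 bit = ln(2) nats = log10(2) bans.

bits = 1.0
nats = bits * math.log(2)      # ~0.693 nats
bans = bits * math.log10(2)    # ~0.301 decimal digits (bans/hartleys)
print(round(nats, 3), round(bans, 3))

# Binary entropy of a biased coin, H(p) = -p log2 p - (1-p) log2 (1-p):
def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0               # no uncertainty at all
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(round(binary_entropy(0.5), 3))   # 1.0 bit: maximum uncertainty
print(round(binary_entropy(0.7), 3))   # ~0.881 bits: less than one full bit
```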
Mathematics
Information theory
null
15459
https://en.wikipedia.org/wiki/International%20Classification%20of%20Diseases
International Classification of Diseases
The International Classification of Diseases (ICD) is a globally used medical classification used in epidemiology, health management and for clinical purposes. The ICD is maintained by the World Health Organization (WHO), which is the directing and coordinating authority for health within the United Nations System. The ICD was originally designed as a health care classification system, providing a system of diagnostic codes for classifying diseases, including nuanced classifications of a wide variety of signs, symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or disease. This system is designed to map health conditions to corresponding generic categories together with specific variations, assigning each a designated code, up to six characters long. Thus, major categories are designed to include a set of similar diseases. The ICD is published by the WHO and used worldwide for morbidity and mortality statistics, reimbursement systems, and automated decision support in health care. This system is designed to promote international comparability in the collection, processing, classification, and presentation of these statistics. The ICD is a major project to statistically classify all health disorders and provide diagnostic assistance. The ICD is a core statistically based classificatory diagnostic system for health care related issues of the WHO Family of International Classifications (WHO-FIC). The ICD is revised periodically and is currently in its 11th revision. The ICD-11, as it is therefore known, was accepted by WHO's World Health Assembly (WHA) on 25 May 2019 and officially came into effect on 1 January 2022. On 11 February 2022, the WHO stated that 35 countries were using the ICD-11. The ICD is part of a "family" of international classifications (WHOFIC) that complement each other, also including the International Classification of Functioning, Disability and Health (ICF), which focuses on the domains of functioning (disability) associated with health conditions, from both medical and social perspectives, and the International Classification of Health Interventions (ICHI), which classifies the whole range of medical, nursing, functioning and public health interventions. The title of the ICD is formally the International Statistical Classification of Diseases and Related Health Problems, although the original title, International Classification of Diseases, is still informally the name by which it is usually known. In the United States and some other countries, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred for the classification of mental disorders for some purposes. Historical synopsis In 1860, during the International Statistical Congress held in London, Florence Nightingale made a proposal that was to result in the development of the first model of systematic collection of hospital data. In 1893, a French physician, Jacques Bertillon, introduced the Bertillon Classification of Causes of Death at a congress of the International Statistical Institute in Chicago. A number of countries adopted Bertillon's system, which was based on the principle of distinguishing between general diseases and those localized to a particular organ or anatomical site, as used by the City of Paris for classifying deaths. Subsequent revisions represented a synthesis of English, German, and Swiss classifications, expanding from the original 44 titles to 161 titles.
In 1898, the American Public Health Association (APHA) recommended that the registrars of Canada, Mexico, and the United States also adopt it. The APHA also recommended revising the system every 10 years to ensure the system remained current with medical practice advances. As a result, the first international conference to revise the International Classification of Causes of Death took place in 1900, with revisions occurring every ten years thereafter. At that time, the classification system was contained in one book, which included an Alphabetic Index as well as a Tabular List. The book was small compared with current coding texts. The revisions that followed contained minor changes, until the sixth revision of the classification system. With the sixth revision, the classification system expanded to two volumes. The sixth revision included morbidity and mortality conditions, and its title was modified to reflect the changes: International Statistical Classification of Diseases, Injuries and Causes of Death (ICD). Prior to the sixth revision, responsibility for ICD revisions fell to the Mixed Commission, a group composed of representatives from the International Statistical Institute and the Health Organization of the League of Nations. In 1948, the WHO assumed responsibility for preparing and publishing the revisions to the ICD every ten years. WHO sponsored the seventh and eighth revisions in 1957 and 1968, respectively. It later became clear that the established ten-year interval between revisions was too short. The ICD is currently the most widely used statistical classification system for diseases in the world. In addition, some countries—including Australia, Canada, and the United States—have developed their own adaptations of ICD, with more procedure codes for classification of operative or diagnostic procedures. Versions of ICD ICD-6 The ICD-6, published in 1949, was the first revision shaped to be suitable for morbidity reporting. Accordingly, the name changed from International List of Causes of Death to International Statistical Classification of Diseases. The combined code section for injuries and their associated accidents was split into two: a chapter for injuries, and a chapter for their external causes. With use for morbidity there was a need for coding mental conditions, and for the first time a section on mental disorders was added. ICD-7 The International Conference for the Seventh Revision of the International Classification of Diseases was held in Paris under the auspices of WHO in February 1955. In accordance with a recommendation of the WHO Expert Committee on Health Statistics, this revision was limited to essential changes and amendments of errors and inconsistencies. ICD-8 The 8th Revision Conference convened by WHO met in Geneva, from 6 to 12 July 1965. This revision was more radical than the Seventh but left unchanged the basic structure of the Classification and the general philosophy of classifying diseases, whenever possible, according to their etiology rather than a particular manifestation. During the years that the Seventh and Eighth Revisions of the ICD were in force, the use of the ICD for indexing hospital medical records increased rapidly and some countries prepared national adaptations which provided the additional detail needed for this application of the ICD. ICDA-8 (United States) In the US, a group of consultants was asked to study the ICD-8 for its applicability to various users there.
This group recommended that further detail be provided for coding hospital and morbidity data. The American Hospital Association's "Advisory Committee to the Central Office on ICDA" developed the needed adaptation proposals, resulting in the publication of the International Classification of Diseases, Adapted (ICDA). In 1968, the United States Public Health Service published the International Classification of Diseases, Adapted, 8th Revision for use in the United States (ICDA-8). Beginning in 1968, ICDA-8 served as the basis for coding diagnostic data for both official morbidity and mortality statistics in the United States. ICD-9 The International Conference for the Ninth Revision of the International Statistical Classification of Diseases, Injuries, and Causes of Death, convened by WHO, met in Geneva from 30 September to 6 October 1975. In the discussions leading up to the conference, it had originally been intended that there should be little change other than updating of the classification. This was mainly because of the expense of adapting data processing systems each time the classification was revised. There had been an enormous growth of interest in the ICD and ways had to be found of responding to this, partly by modifying the classification itself and partly by introducing special coding provisions. A number of representations were made by specialist bodies which had become interested in using the ICD for their own statistics. Some subject areas in the classification were regarded as inappropriately arranged, and there was considerable pressure for more detail and for adaptation of the classification to make it more relevant for the evaluation of medical care, by classifying conditions to the chapters concerned with the part of the body affected rather than to those dealing with the underlying generalized disease. At the other end of the scale, there were representations from countries and areas where a detailed and sophisticated classification was irrelevant, but which nevertheless needed a classification based on the ICD in order to assess their progress in health care and in the control of disease. A field test with a bi-axial classification approach—one axis (criterion) for anatomy, with another for etiology—showed the impracticability of such an approach for routine use. The final proposals presented to and accepted by the Conference in 1978 retained the basic structure of the ICD, although with much additional detail at the level of the four-digit subcategories, and some optional five-digit subdivisions. For the benefit of users not requiring such detail, care was taken to ensure that the categories at the three-digit level were appropriate. As the World Health Organization explains: "For the benefit of users wishing to produce statistics and indexes oriented towards medical care, the 9th Revision included an optional alternative method of classifying diagnostic statements, including information about both an underlying general disease and a manifestation in a particular organ or site. This system became known as the 'dagger and asterisk system' and is retained in the Tenth Revision. A number of other technical innovations were included in the Ninth Revision, aimed at increasing its flexibility for use in a variety of situations." It was eventually replaced by ICD-10, the version currently in use by the WHO and most countries.
Given the widespread expansion in the tenth revision, it is not possible to convert ICD-9 data sets directly into ICD-10 data sets, although some tools are available to help guide users. Publication of ICD-9 without IP restrictions in a world with evolving electronic data systems led to a range of products based on ICD-9, such as MedDRA or the Read directory. International Classification of Procedures in Medicine (ICPM) When ICD-9 was published by the World Health Organization (WHO), the International Classification of Procedures in Medicine (ICPM) was also developed (1975) and published (1978). The ICPM surgical procedures fascicle was originally created by the United States, based on its adaptations of ICD (called ICDA), which had contained a procedure classification since 1962. ICPM is published separately from the ICD disease classification as a series of supplementary documents called fascicles (bundles or groups of items). Each fascicle contains a classification of modes of laboratory, radiology, surgery, therapy, and other diagnostic procedures. Many countries have adapted and translated the ICPM, in parts or as a whole, and have been using it with amendments since then. ICD-9-CM (United States) The International Classification of Diseases, Clinical Modification (ICD-9-CM) was an adaptation created by the US National Center for Health Statistics (NCHS) and used in assigning diagnostic and procedure codes associated with inpatient, outpatient, and physician office utilization in the United States. The ICD-9-CM is based on the ICD-9 but provides for additional morbidity detail. It was updated annually on October 1. It consists of three volumes: Volumes 1 and 2 contain diagnosis codes (Volume 1 is a tabular listing, and Volume 2 is an index); both were extended for ICD-9-CM. Volume 3, which exists only in ICD-9-CM, contains procedure codes for surgical, diagnostic, and therapeutic procedures. The NCHS and the Centers for Medicare and Medicaid Services are the US governmental agencies responsible for overseeing all changes and modifications to the ICD-9-CM. ICD-10 Work on ICD-10 began in 1983, and the new revision was endorsed by the Forty-third World Health Assembly in May 1990. It came into effect in WHO Member States starting on 1 January 1993. The classification system allows more than 55,000 different codes and permits tracking of many new diagnoses and procedures, a significant expansion on the 17,000 codes available in ICD-9. Adoption was relatively swift in most of the world. Several materials are made available online by WHO to facilitate its use, including a manual, training guidelines, a browser, and files for download. Some countries have adapted the international standard, such as the "ICD-10-AM" published in Australia in 1998 (also used in New Zealand), and the "ICD-10-CA" introduced in Canada in 2000. ICD-10-CM (United States) Adoption of ICD-10-CM was slow in the United States. Since 1979, the US had required ICD-9-CM codes for Medicare and Medicaid claims, and most of the rest of the American medical industry followed suit. On 1 January 1999 the ICD-10 (without clinical extensions) was adopted for reporting mortality, but ICD-9-CM was still used for morbidity. Meanwhile, NCHS received permission from the WHO to create a clinical modification of the ICD-10, and produced the following systems: ICD-10-CM, for diagnosis codes, which replaces volumes 1 and 2; and ICD-10-PCS, for procedure codes, which replaces volume 3. Annual updates are provided for both. 
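To make the shape of these code sets concrete, here is a minimal sketch that checks whether a string has the general form of an ICD-10-CM diagnosis code: three to seven alphanumeric characters beginning with a letter, with a decimal point after the third character when more detail (including the laterality and seventh-character classification discussed below) follows. The regular expression is a simplification for illustration, not the official grammar, and the sample codes are commonly cited examples rather than authoritative entries.

```python
import re

# Simplified shape of an ICD-10-CM diagnosis code (illustrative, not the
# official grammar): a letter, a digit, an alphanumeric character, then an
# optional decimal point followed by one to four alphanumeric characters.
ICD10CM_SHAPE = re.compile(r"^[A-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?$")

def looks_like_icd10cm(code: str) -> bool:
    """Return True if the string matches the simplified ICD-10-CM shape."""
    return bool(ICD10CM_SHAPE.match(code.upper()))

for sample in ["E11.9", "S52.501A", "J45", "12345", "E11.99999"]:
    print(f"{sample:10} -> {looks_like_icd10cm(sample)}")
```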
On 21 August 2008, the US Department of Health and Human Services (HHS) proposed new code sets to be used for reporting diagnoses and procedures on health care transactions. Under the proposal, the ICD-9-CM code sets would be replaced with the ICD-10-CM code sets, effective 1 October 2013. On 17 April 2012 the Department of Health and Human Services (HHS) published a proposed rule that would delay, from 1 October 2013 to 1 October 2014, the compliance date for the ICD-10-CM and PCS. Congress then delayed the implementation date once again, to 1 October 2015, after the provision was inserted into the "Doc Fix" bill without debate, over the objections of many. Revisions in ICD-10-CM include: relevant information for ambulatory and managed care encounters; expanded injury codes; new combination codes for diagnosis/symptoms, to reduce the number of codes needed to describe a problem fully; the addition of sixth- and seventh-digit classification; classification specific to laterality; and classification refinement for increased data granularity. ICD-10-CA (Canada) ICD-10-CA is a clinical modification of ICD-10 developed by the Canadian Institute for Health Information for morbidity classification in Canada. ICD-10-CA applies beyond acute hospital care, and includes conditions and situations that are not diseases but represent risk factors to health, such as occupational and environmental factors, lifestyle and psycho-social circumstances. ICD-11 The eleventh revision of the International Classification of Diseases, or the ICD-11, is almost five times as large as the ICD-10. It was created following a decade of development involving over 300 specialists from 55 countries. Following an alpha version in May 2011 and a beta draft in May 2012, a stable version of the ICD-11 was released on 18 June 2018, and officially endorsed by all WHO members during the 72nd World Health Assembly on 25 May 2019. For the ICD-11, the WHO decided to differentiate between the core of the system and its derived specialty versions, such as the ICD-O for oncology. As such, the collection of all ICD entities is called the Foundation Component. From this common core, subsets can be derived. The primary derivative of the Foundation is called the ICD-11 MMS, and it is this system that is commonly referred to and recognized as "the ICD-11". MMS stands for Mortality and Morbidity Statistics. ICD-11 comes with an implementation package that includes transition tables from and to ICD-10, a translation tool, a coding tool, web services, a manual, training material, and more. All tools are accessible after self-registration from the Maintenance Platform. The ICD-11 officially came into effect on 1 January 2022, although the WHO admitted that "not many countries are likely to adapt that quickly". In the United States, the advisory body of the Secretary of Health and Human Services has given an expected release year of 2025, but if a clinical modification is determined to be needed (similar to the ICD-10-CM), this could become 2027. Usage in the United States In the United States, the US Public Health Service published The International Classification of Diseases, Adapted for Indexing of Hospital Records and Operation Classification (ICDA), which was completed in 1962 and expanded the ICD-7 in a number of areas to meet the indexing needs of hospitals more completely. 
The US Public Health Service later published the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, commonly referred to as ICDA-8, for official national morbidity and mortality statistics. This was followed by the ICD, 9th Revision, Clinical Modification, known as ICD-9-CM, published by the US Department of Health and Human Services and used by hospitals and other healthcare facilities to better describe the clinical picture of the patient. The diagnosis component of ICD-9-CM is completely consistent with ICD-9 codes, and it remained the data standard for reporting morbidity. National adaptations of the ICD-10 progressed to incorporate both clinical code (ICD-10-CM) and procedure code (ICD-10-PCS) with the revisions completed in 2003. In 2009, the US Centers for Medicare and Medicaid Services announced that it would begin using ICD-10 on April 1, 2010, with full compliance by all involved parties by 2013. However, the US extended the deadline twice and did not formally require transitioning to ICD-10-CM (for most clinical encounters) until October 1, 2015. The years in which causes of death in the United States were first classified by each revision are as follows: ICD-1: 1900 ICD-2: 1910 ICD-3: 1921 ICD-4: 1930 ICD-5: 1939 ICD-6: 1949 ICD-7: 1958 ICDA-8: 1968 ICD-9: 1979 ICD-10: 1999 Causes of death on United States death certificates, statistically compiled by the Centers for Disease Control and Prevention (CDC), are coded in the ICD, which does not include codes for human and system factors commonly called medical errors. Mental health conditions The various ICD editions include sections that classify mental and behavioural disorders. The ICD-10 Classification of Mental and Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines – also known as the "blue book" – is derived from Chapter V of ICD-10 and gives the diagnostic criteria for the conditions listed at each category therein. The blue book was developed separately from, but coexists with, the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association—though both seek to use the same diagnostic classifications. A survey of psychiatrists in 66 countries comparing use of the ICD-10 and DSM-IV found that the former was more often used for clinical diagnosis while the latter was more valued for research. As part of the development of the ICD-11, WHO established an "International Advisory Group" to guide what would become the chapter on "Mental, behavioural or neurodevelopmental disorders". The working group proposed that ICD-11 should declassify the categories within ICD-10 at "F66 Psychological and behavioural disorders that are associated with sexual development and orientation". The group reported to WHO that there was "no evidence" these classifications were clinically useful, as they do not "contribute to health service delivery or treatment selection nor provide essential information for public health surveillance." The group added that, despite ICD-10 explicitly stating that "sexual orientation by itself is not to be considered a disorder", the inclusion of such categories "suggest[s] that mental disorders exist that are uniquely linked to sexual orientation and gender expression", a position already recognised by the DSM as well as other classification systems. The ICD is in fact the official system for the US, although many mental health professionals do not realize this due to the dominance of the DSM. 
A psychologist has stated: "Serious problems with the clinical utility of both the ICD and the DSM are widely acknowledged."
Biology and health sciences
Disease: general classification
Health
15476
https://en.wikipedia.org/wiki/Internet%20protocol%20suite
Internet protocol suite
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication, specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. History Early research Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Network Working Group" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974 by Cerf, Yogen Dalal and Carl Sunshine. Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. 
Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. Advocates included Bob Metcalfe and Yogen Dalal at Xerox PARC; Danny Cohen, who needed it for his packet voice work; and Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 4, written in 1978, Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service. The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This end-to-end principle was pioneered by Louis Pouzin in the CYCLADES network, based on the ideas of Donald Davies. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." As a joke, the IP over Avian Carriers formal protocol specification was created in 1999 and successfully tested two years later; ten years later still, it was adapted for IPv6. DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the protocol that is still in use in the Internet, alongside its current successor, Internet Protocol version 6 (IPv6). Early implementation In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983. A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways. Adoption In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol. 
The migration of the ARPANET from NCP to TCP/IP was officially completed on "flag day", January 1, 1983, when the new protocols were permanently activated. In 1985, the Internet Advisory Board (later the Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on promoting network interoperability through broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting. IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin. Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP). The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. For Windows 3.1, the dominant PC operating system among consumers in the first half of the 1990s, Peter Tattam's release of the Trumpet Winsock TCP/IP stack was key to bringing the Internet to home users. Trumpet Winsock allowed TCP/IP operations over a serial connection (SLIP or PPP). The typical home PC of the time had an external Hayes-compatible modem connected via an RS-232 port with an 8250 or 16550 UART, which required this type of stack. Later, Microsoft would release its own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS). Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. Formal specification and standards The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF). The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. 
The defining specifications of the suite are RFC 1122 and 1123, which broadly outline four abstraction layers (as well as related protocols): the link layer, IP layer, transport layer, and application layer, along with support protocols. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. Key architectural principles The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle. The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features." Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level. An early pair of architectural documents, RFC 1122 and RFC 1123, titled Requirements for Internet Hosts, emphasize architectural principles over layering. RFC 1122/23 are structured in sections referring to layers, but the documents refer to many other architectural principles, and do not emphasize layering. They loosely define a four-layer model, with the layers having names, not numbers, as follows: The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services. The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow control, connection establishment, and reliable transmission of data. The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. 
This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, which has the connectivity to a network closer to the final data destination. The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet layer datagrams to next-neighbor hosts. Link layer The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels. The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model. The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model. Internet layer Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. 
The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6), which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006. Transport layer The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers). For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services. Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability. TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream: data arrives in order, data has minimal error (i.e., correctness), duplicate data is discarded, lost or discarded packets are resent, and traffic congestion control is included. The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP). Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC). The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media. The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications. The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer. 
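To make the contrast concrete, the following minimal sketch uses Python's standard socket API to exchange one UDP datagram on the loopback interface: no connection is established, and nothing guarantees delivery or ordering, matching the best-effort behaviour described above. The port number is an arbitrary illustrative value.

```python
import socket

HOST, PORT = "127.0.0.1", 50007  # arbitrary illustrative endpoint

# Receiver: bind to a port; each recvfrom() yields one whole datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

# Sender: no connect() handshake is required for UDP; just send.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, transport layer", (HOST, PORT))

data, peer = receiver.recvfrom(4096)
print(f"received {data!r} from {peer}")

sender.close()
receiver.close()
```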
QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC. Application layer The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer. The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model. Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application. At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol. Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload. Layering evolution and representations in the literature The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. 
In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools. Various such networking models have been published, with the number of layers varying between three and seven. Some of these models come from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources. Comparison of TCP/IP and OSI layering The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model, which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport. Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2. The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled "Layering Considered Harmful". For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange. Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer. 
Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers. IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer. Implementations The Internet protocol suite does not presume any specific hardware or software environment. It requires only that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD), and is often accompanied by an integrated IPsec security layer.
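As a small illustration of encapsulation at the internet layer, the sketch below wraps a payload in a minimal IPv4 header and computes the RFC 1071 ones'-complement checksum that IPv4 carries over its header. The field values (identification, TTL, example addresses from the documentation range) are arbitrary, and a real implementation would leave this work to the operating system's stack; this is a sketch of the wire format only.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def ipv4_packet(src: str, dst: str, payload: bytes, proto: int = 17) -> bytes:
    """Encapsulate payload behind a minimal 20-byte IPv4 header (17 = UDP)."""
    ver_ihl = (4 << 4) | 5                 # version 4, header length 5 words
    total_len = 20 + len(payload)
    src_b = bytes(map(int, src.split(".")))
    dst_b = bytes(map(int, dst.split(".")))
    # Checksum field is zero while the checksum is being computed.
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_len,
                      0x1234, 0, 64, proto, 0, src_b, dst_b)
    csum = internet_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:] + payload

packet = ipv4_packet("192.0.2.1", "192.0.2.2", b"encapsulated datagram")
print(packet.hex())
```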
Technology
Internet
null
15492
https://en.wikipedia.org/wiki/Imperial%20units
Imperial units
The imperial system of units, imperial system or imperial units (also known as British Imperial or Exchequer Standards of 1826) is the system of units first defined in the British Weights and Measures Act 1824, which continued to be developed through a series of Weights and Measures Acts and amendments. The imperial system developed from earlier English units, as did the related but differing system of customary units of the United States. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825. The system came into official use across the British Empire in 1826. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but imperial units are still used alongside metric units in the United Kingdom and in some other parts of the former empire, notably Canada. The modern UK legislation defining the imperial system of units is given in the Weights and Measures Act 1985 (as amended). Implementation The Weights and Measures Act 1824 was initially scheduled to go into effect on 1 May 1825. The Weights and Measures Act 1825 pushed back the date to 1 January 1826. The 1824 act allowed the continued use of pre-imperial units provided that they were customary, widely known, and clearly marked with imperial equivalents. Apothecaries' units Apothecaries' units are not mentioned in the acts of 1824 and 1825. At the time, apothecaries' weights and measures were regulated "in England, Wales, and Berwick-upon-Tweed" by the London College of Physicians, and in Ireland by the Dublin College of Physicians. In Scotland, apothecaries' units were unofficially regulated by the Edinburgh College of Physicians. The three colleges published, at infrequent intervals, pharmacopoeias, the London and Dublin editions having the force of law. Imperial apothecaries' measures, based on the imperial pint of 20 fluid ounces, were introduced by the publication of the London Pharmacopoeia of 1836, the Edinburgh Pharmacopoeia of 1839, and the Dublin Pharmacopoeia of 1850. The Medical Act 1858 transferred to the Crown the right to publish the official pharmacopoeia and to regulate apothecaries' weights and measures. Units Length Metric equivalents in this article usually assume the latest official definitions; earlier, the most precise measurements of the imperial Standard Yard differed slightly from the current value. Area Volume The Weights and Measures Act 1824 invalidated the various different gallons in use in the British Empire, declaring them to be replaced by the statute gallon (which became known as the imperial gallon), a unit close in volume to the ale gallon. The 1824 act defined the volume of a gallon to be that of 10 pounds of distilled water weighed in air with brass weights, with the barometer standing at 30 inches of mercury and at a temperature of 62 °F. The 1824 act went on to give this volume as 277.274 cubic inches. The Weights and Measures Act 1963 refined this definition to be the volume of 10 pounds of distilled water of specified density weighed in air of specified density against weights of specified density, which works out to approximately 4.546 litres. The Weights and Measures Act 1985 defined a gallon to be exactly 4.54609 litres (approximately 277.42 cubic inches). British apothecaries' volume measures These measurements were in use from 1826, when the new imperial gallon was defined. For pharmaceutical purposes, they were replaced by the metric system in the United Kingdom on 1 January 1971. In the US, though no longer recommended, the apothecaries' system is still used occasionally in medicine, especially in prescriptions for older medications. 
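Because the modern definitions are exact, metric equivalents of the imperial volume units can be computed directly. The sketch below derives the pint and fluid ounce from the 1985 gallon and compares the imperial gallon with the US gallon, a frequent source of confusion; the constants are the standard exact definitions.

```python
# Exact definitions: the imperial gallon has been 4.54609 litres since the
# Weights and Measures Act 1985; the US gallon is 231 cubic inches.
LITRES_PER_IMPERIAL_GALLON = 4.54609
LITRES_PER_US_GALLON = 3.785411784

# Imperial subdivisions: 1 gallon = 8 pints, 1 pint = 20 fluid ounces.
pint = LITRES_PER_IMPERIAL_GALLON / 8
fluid_ounce = pint / 20

print(f"imperial pint        = {pint:.6f} L")
print(f"imperial fluid ounce = {fluid_ounce:.6f} L")
print(f"imperial/US gallon   = {LITRES_PER_IMPERIAL_GALLON / LITRES_PER_US_GALLON:.4f}")
```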
Mass and weight In the 19th and 20th centuries, the UK used three different systems for mass and weight: troy weight, used for precious metals; avoirdupois weight, used for most other purposes; and apothecaries' weight, now virtually unused since the metric system is used for all scientific purposes. The distinction between mass and weight is not always clearly drawn. Strictly a pound is a unit of mass, but it is commonly referred to as a weight. When a distinction is necessary, the term pound-force may refer to a unit of force rather than mass. The troy pound was made the primary unit of mass by the Weights and Measures Act 1824, and its use was abolished in the UK on 1 January 1879, with only the troy ounce and its decimal subdivisions retained. The Weights and Measures Act 1855 made the avoirdupois pound the primary unit of mass. In all the systems, the fundamental unit is the pound, and all other units are defined as fractions or multiples of it. Natural equivalents The 1824 Act of Parliament defined the yard and pound by reference to the prototype standards, and it also defined the values of certain physical constants, to make provision for re-creation of the standards if they were to be damaged. For the yard, the length of a pendulum beating seconds at the latitude of Greenwich at Mean Sea Level in vacuo was defined as 39.1393 inches (a rough numerical check of this definition appears below). For the pound, the mass of a cubic inch of distilled water at an atmospheric pressure of 30 inches of mercury and a temperature of 62° Fahrenheit was defined as 252.458 grains, with there being 7,000 grains per pound. Following the destruction of the original prototypes in the 1834 Houses of Parliament fire, it proved impossible to recreate the standards from these definitions, and a new Weights and Measures Act 1855 was passed which permitted the recreation of the prototypes from recognized secondary standards. Current use United Kingdom Since the Weights and Measures Act 1985, British law defines base imperial units in terms of their metric equivalents. The metric system is routinely used in business and technology within the United Kingdom, with imperial units remaining in widespread use amongst the public. All UK roads use the imperial system except for weight limits, and newer height or width restriction signs give metric alongside imperial. Traders in the UK may accept requests from customers specified in imperial units, and scales which display in both unit systems are commonplace in the retail trade. Metric price signs may be accompanied by imperial price signs provided that the imperial signs are no larger and no more prominent than the metric ones. The United Kingdom completed its official partial transition to the metric system in 1995, with imperial units still legally mandated for certain applications such as draught beer and cider, and road signs. Therefore, the speedometers on vehicles sold in the UK must be capable of displaying miles per hour. Even though the troy pound was outlawed in the UK in the Weights and Measures Act 1878, the troy ounce may still be used for the weights of precious stones and metals. The original railways (many built in the Victorian era) remain big users of imperial units, with distances officially measured in miles and yards or miles and chains, and also feet and inches, and speeds in miles per hour. 
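The pendulum-based natural equivalent mentioned above can be sanity-checked numerically: for small swings, a pendulum's period is T = 2π√(L/g), so a pendulum beating seconds (a two-second full period) has length L = g(T/2π)². The sketch below uses standard gravity; the 1824 definition used the local value at Greenwich, so the agreement is only approximate.

```python
import math

G = 9.80665  # standard gravity, m/s^2 (Greenwich's local value differs slightly)
T = 2.0      # a pendulum "beating seconds" completes a full period in 2 s

# Small-angle pendulum: T = 2*pi*sqrt(L/g)  =>  L = g * (T / (2*pi))**2
length_m = G * (T / (2 * math.pi)) ** 2
length_in = length_m / 0.0254  # exact: 1 inch = 25.4 mm

print(f"{length_m:.4f} m = {length_in:.2f} in")  # close to the act's 39.1393 in
```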
Some British people still use one or more imperial units in everyday life for distance (miles, yards, feet, and inches) and some types of volume measurement (especially milk and beer in pints; rarely for canned or bottled soft drinks, or petrol). Many British people also still use imperial units in everyday life for body weight (stones and pounds for adults, pounds and ounces for babies). Government documents aimed at the public may give body weight and height in imperial units as well as in metric. A survey in 2015 found that many people did not know their body weight or height in both systems. As of 2017, people under the age of 40 preferred the metric system but people aged 40 and over preferred the imperial system. As in other English-speaking countries, including Australia, Canada and the United States, the height of horses is usually measured in hands, standardised to 4 inches. Fuel consumption for vehicles is commonly stated in miles per gallon (mpg), though official figures always include litres per 100 kilometres equivalents and fuel is sold in litres. When sold draught in licensed premises, beer and cider must be sold in pints, half-pints or third-pints. Cow's milk is available in both litre- and pint-based containers in supermarkets and shops. Areas of land associated with farming, forestry and real estate are commonly advertised in acres and square feet but, for contracts and land registration purposes, the units are always hectares and square metres. Office space and industrial units are usually advertised in square feet. Steel pipe sizes are sold in increments of inches, while copper pipe is sold in increments of millimetres. Road bicycles have their frames measured in centimetres, while off-road bicycles have their frames measured in inches. Display sizes for screens on television sets and computer monitors are always diagonally measured in inches. Food sold by length or width, e.g. pizzas or sandwiches, is generally sold in inches. Clothing is usually sized in inches, with the metric equivalent often shown as a small supplementary indicator. Gas is usually measured by the cubic foot or cubic metre, but is billed like electricity by the kilowatt hour. Pre-packaged products can show both metric and imperial measures, and it is also common to see imperial pack sizes with metric-only labels; a tin of Lyle's Golden Syrup, for example, is always labelled in metric only, with no imperial indicator. Similarly, most jars of jam and packs of sausages are labelled in metric only. India India began converting to the metric system from the imperial system between 1955 and 1962. The metric system in weights and measures was adopted by the Indian Parliament in December 1956 with the Standards of Weights and Measures Act, which took effect beginning 1 October 1958. By 1962, metric units became "mandatory and exclusive." Today all official measurements are made in the metric system. In common usage some older Indians may still refer to imperial units. Some measurements, such as the heights of mountains, are still recorded in feet. Tyre rim diameters are still measured in inches, as used worldwide. Industries like construction and real estate still use both the metric and the imperial systems, though it is more common for the sizes of homes to be given in square feet and land in acres. 
In Standard Indian English, as in Australian, Canadian, New Zealand, Singaporean, and British English, metric units such as the litre, metre, and tonne utilise the traditional spellings brought over from French, which differ from those used in the United States and the Philippines. The imperial long ton is invariably spelt with one 'n'. Hong Kong Hong Kong has three main systems of units of measurement in current use: the Chinese units of measurement of the Qing Empire (no longer in widespread use in China); British imperial units; and the metric system. In 1976 the Hong Kong Government started the conversion to the metric system, and as of 2012 measurements for government purposes, such as road signs, are almost always in metric units. All three systems are officially permitted for trade, and in the wider society a mixture of all three systems prevails. The Chinese system's most commonly used units for length are 里 (lei5), 丈 (zoeng6), 尺 (cek3), 寸 (cyun3) and 分 (fan1), in descending scale order. These units are now rarely used in daily life, the imperial and metric systems being preferred. The imperial equivalents are written with the same basic Chinese characters as the Chinese system. In order to distinguish between the units of the two systems, the units can be prefixed with "Ying" (英, jing1) for the imperial system and "Wa" (華, waa4) for the Chinese system. In writing, derived characters are often used, with an additional 口 (mouth) radical to the left of the original Chinese character, for writing imperial units. The most commonly used units are the mile or "li" (哩, li1), the yard or "ma" (碼, maa5), the foot or "chek" (呎, cek3), and the inch or "tsun" (吋, cyun3). The traditional measure of flat area is the square foot (方尺, fong1 cek3; 平方尺, ping4 fong1 cek3) of the imperial system, which is still in common use for real estate purposes. The measurement of agricultural plots and fields is traditionally conducted in 畝 (mau5) of the Chinese system. For the measurement of volume, Hong Kong officially uses the metric system, though the gallon (加侖, gaa1 leon4-2) is also occasionally used. Canada During the 1970s, the metric system and SI units were introduced in Canada to replace the imperial system. Within the government, efforts to implement the metric system were extensive; almost any agency, institution, or function provided by the government uses SI units exclusively. Imperial units were eliminated from all public road signs, although both systems of measurement are still found on privately owned signs, such as the height warnings at the entrance of a parkade. In the 1980s, momentum to fully convert to the metric system stalled when the government of Brian Mulroney was elected. There was heavy opposition to metrication, and as a compromise the government maintains legal definitions for, and allows use of, imperial units as long as metric units are shown as well. The law requires that measured products (such as fuel and meat) be priced in metric units, and an imperial price can be shown if a metric price is present. There tends to be leniency in regard to fruits and vegetables being priced in imperial units only. Environment Canada still offers an imperial unit option beside metric units, even though weather is typically measured and reported in metric units in the Canadian media. Some radio stations near the United States border (such as CIMX and CIDR) primarily use imperial units to report the weather. Railways in Canada also continue to use imperial units. Imperial units are still used in ordinary conversation. 
Today, Canadians typically use a mix of metric and imperial measurements in their daily lives. The use of the metric and imperial systems varies by age. The older generation mostly uses the imperial system, while the younger generation more often uses the metric system. Quebec has implemented metrication more fully. Newborns are measured in SI at hospitals, but the birth weight and length are also announced to family and friends in imperial units. Drivers' licences use SI units, though many English-speaking Canadians give their height and weight in imperial. In livestock auction markets, cattle are sold in dollars per hundredweight (short), whereas hogs are sold in dollars per hundred kilograms. Imperial units still dominate in recipes, construction, house renovation and gardening. Land is now surveyed and registered in metric units, whilst initial surveys used imperial units. For example, partitioning of farmland on the prairies in the late 19th and early 20th centuries was done in imperial units; this accounts for imperial units of distance and area retaining wide use in the Prairie Provinces. In English-speaking Canada commercial and residential spaces are mostly (but not exclusively) constructed using square feet, while in French-speaking Quebec commercial and residential spaces are constructed in metres and advertised using both square metres and square feet as equivalents. Carpet or flooring tile is purchased by the square foot, but less frequently also in square metres. Motor-vehicle fuel consumption is reported in both litres per 100 kilometres and statute miles per imperial gallon, leading to the erroneous impression that Canadian vehicles are 20% more fuel-efficient than their apparently identical American counterparts, for which fuel economy is reported in statute miles per US gallon (neither country specifies which gallon is used). Canadian railways maintain exclusive use of imperial measurements to describe train length (feet), train height (feet), capacity (tons), speed (mph), and trackage (miles). Imperial units also retain common use in firearms and ammunition. Imperial measures are still used in the description of cartridge types, even when the cartridge is of relatively recent invention (e.g., .204 Ruger, .17 HMR, where the calibre is expressed in decimal fractions of an inch). Ammunition that is already classified in metric is still kept metric (e.g., 9×19mm). In the manufacture of ammunition, bullet and powder weights are expressed in terms of grains for both metric and imperial cartridges. In keeping with the international standard, air navigation is based on nautical units, e.g., the nautical mile, which is neither imperial nor metric, and altitude is measured in imperial feet. Australia While metrication in Australia has largely ended the official use of imperial units, for particular measurements, international use of imperial units is still followed. In licensed venues, draught beer and cider are sold in glasses and jugs with sizes based on the imperial fluid ounce, though rounded to the nearest 5 mL. Newborns are measured in metric at hospitals, but the birth weight and length are sometimes also announced to family and friends in imperial units. Screen sizes are frequently described in inches instead of, or as well as, centimetres. Property size is infrequently described in acres, but mostly in square metres or hectares. Marine navigation is done in nautical miles, and water-based speed limits are in nautical miles per hour. 
Historical writing and presentations may include pre-metric units to reflect the context of the era represented. The illicit drug trade in Australia still often uses imperial measurements, particularly when dealing with smaller amounts closer to end-user levels, e.g. the "8-ball", an eighth of an ounce (approximately 3.5 grams); cannabis is often traded in ounces ("oz") and pounds ("p"). Firearm barrel lengths are almost always given in inches, and ammunition is also still measured in grains and ounces as well as grams. A person's height is frequently and informally described in feet and inches, but on official records is described in metres. The influence of British and American culture in Australia has been noted as a cause of the residual use of imperial units of measure. New Zealand New Zealand introduced the metric system on 15 December 1976. Aviation was exempt, with altitude and airport elevation continuing to be measured in feet whilst navigation is done in nautical miles; all other aspects (fuel quantity, aircraft weight, runway length, etc.) use metric units. Screen sizes for devices such as televisions, monitors and phones, and wheel rim sizes for vehicles, are stated in inches, as is the convention in the rest of the world, and a 1992 study found continued use of imperial units for birth weight and human height alongside metric units. Ireland Ireland has officially changed over to the metric system since entering the European Union, with distances on new road signs being metric since 1997 and speed limits being metric since 2005. The imperial system remains in limited use, for example in sales of beer in pubs (traditionally sold by the pint). All other goods are required by law to be sold in metric units, with traditional quantities retained for packaged goods like butter and sausages. The majority of cars sold pre-2005 feature speedometers with miles per hour as the primary unit, supplemented by a kilometres-per-hour display. Signs such as those for bridge heights often display both metric and imperial units. Imperial measurements continue to be used colloquially by the general population, especially for height and distance measurements such as feet, inches, and acres, as well as for weight, with pounds and stones still in common use among people of all ages. Measurements such as yards have fallen out of favour with younger generations. Ireland's railways still use imperial measurements for distance and speed signage. Property is usually listed in square feet as well as square metres. Horse racing in Ireland continues to use stones, pounds, miles and furlongs as measurements. Bahamas Imperial measurements remain in general use in the Bahamas. Legally, both the imperial and metric systems are recognised by the Weights and Measures Act 2006. Belize Both imperial units and metric units are used in Belize. Both systems are legally recognized by the National Metrology Act. Myanmar According to the CIA, in June 2009, Myanmar was one of three countries that had not adopted the SI metric system as their official system of weights and measures. Metrication efforts began in 2011. With the help of the German National Metrology Institute, the Burmese government set a goal of metricating by 2019, though this goal was not met. Other countries Some imperial measurements remain in limited use in Malaysia, the Philippines, Sri Lanka and South Africa. Measurements in feet and inches, especially for a person's height, are frequently encountered in conversation and non-governmental publications.
Prior to metrication, it was a common practice in Malaysia for people to refer to unnamed locations and small settlements along major roads by how many miles the said locations were from the nearest major town. In some cases, these eventually became the official names of the locations; in other cases, such names have been largely or completely superseded by new names. An example of the former is Batu 32 (literally "Mile 32" in Malay), which refers to the area surrounding the intersection between Federal Route 22 (the Tamparuli-Sandakan highway) and Federal Route 13 (the Sandakan-Tawau highway). The area is so named because it is 32 miles west of Sandakan, the nearest major town. Petrol is still sold by the imperial gallon in Anguilla, Antigua and Barbuda, Belize, Myanmar, the Cayman Islands, Dominica, Grenada, Montserrat, St Kitts and Nevis and St. Vincent and the Grenadines. In 2009 the United Arab Emirates Cabinet issued Decree No. (270/3), specifying that from 1 January 2010 the unit sale price for petrol would be the litre and not the gallon. This was in line with UAE Cabinet Decision No. 31 of 2006 on the national system of measurement, which mandates the International System of Units as the basis for the legal units of measurement in the country; the repricing itself is a simple unit conversion (see the sketch after this passage). Sierra Leone switched to selling fuel by the litre in May 2011. In October 2011, the Antigua and Barbuda government announced the re-launch of the Metrication Programme in accordance with the Metrology Act 2007, which established the International System of Units as the legal system of units. The Antigua and Barbuda government committed to a full conversion from the imperial system by the first quarter of 2015.
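Price conversions of the kind mandated by the UAE decree above are a single scale factor. A minimal Python sketch, assuming the imperial gallon and an illustrative price (the decree itself fixes no figures, and the gallon type here is an assumption):

    # Convert a fuel price quoted per gallon to a price per litre.
    LITRES_PER_IMPERIAL_GALLON = 4.54609  # exact definition of the imperial gallon

    def price_per_litre(price_per_gallon: float) -> float:
        """Equivalent per-litre price for a given per-gallon price."""
        return price_per_gallon / LITRES_PER_IMPERIAL_GALLON

    # Hypothetical example: 9.00 (in local currency) per imperial gallon.
    print(round(price_per_litre(9.00), 3))  # about 1.98 per litre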
Physical sciences
Measurement systems
null
15532
https://en.wikipedia.org/wiki/Integral
Integral
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter. A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function; in this case, they are also called indefinite integrals. The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations. Although methods of calculating areas and volumes date from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century; they thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions is Lebesgue-integrable. Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. History Pre-calculus integration The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, the area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral. A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere. In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965–1040 AD), derived a formula for the sum of fourth powers.
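Alhazen's fourth-power result can be made concrete. In modern notation it is presumably the standard summation identity

    Σ_{i=1}^{n} i⁴ = (6n⁵ + 15n⁴ + 10n³ − n) / 30,

which is exactly the ingredient needed to evaluate the volume of a paraboloid by summing thin slabs (a quick check: n = 2 gives (192 + 240 + 80 − 2)/30 = 17 = 1⁴ + 2⁴).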
Alhazen determined the equations to calculate the area enclosed by the curve represented by y = x^k (which translates to the integral ∫ x^k dx in contemporary notation), for any given non-negative integer value of k. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of x^n up to degree n = 9 in Cavalieri's quadrature formula. The case n = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers. Leibniz and Newton The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz. Formalization While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system. Historical notation The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total").
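In modern notation, the quadrature results just described condense into a single formula; the following is the standard statement rather than a quotation of the period sources:

    ∫₀^a xⁿ dx = a^{n+1} / (n + 1),  for n ≠ −1,

with the excluded case n = −1 giving the (hyperbolic) logarithm obtained from the quadrature of the hyperbola.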
The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–1820, reprinted in his book of 1822. Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted. First use of the term The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur". Terminology and notation In general, the integral of a real-valued function f(x) with respect to a real variable x on an interval [a, b] is written as ∫_a^b f(x) dx. The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration. A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral. When the limits are omitted, as in ∫ f(x) dx, the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article). In advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write ∫_a^b (αf + βg) = α ∫_a^b f + β ∫_a^b g to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof. Interpretations Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation. As another example, to find the area of the region bounded by the graph of the function f(x) = √x between x = 0 and x = 1, one can divide the interval into five pieces (0, 1/5, 2/5, 3/5, 4/5, 1), then construct rectangles using the right end height of each piece (thus √(1/5), √(2/5), √(3/5), √(4/5), √1) and sum their areas to get the approximation (1/5)(√(1/5) + √(2/5) + √(3/5) + √(4/5) + √1) ≈ 0.7497, which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes ∫_0^1 √x dx = 2/3, which means 2/3 is the result of a weighted sum of function values, √x, multiplied by the infinitesimal step widths, denoted by dx, on the interval [0, 1] (a numerical sketch of this refinement appears after the formal definitions below). Formal definitions There are many ways of formally defining an integral, not all of which are equivalent.
The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals. Riemann integral The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval [a, b] on the real line is a finite sequence a = x_0 ≤ t_1 ≤ x_1 ≤ t_2 ≤ x_2 ≤ … ≤ x_{n−1} ≤ t_n ≤ x_n = b. This partitions the interval [a, b] into n sub-intervals [x_{i−1}, x_i] indexed by i, each of which is "tagged" with a specific point t_i in [x_{i−1}, x_i]. A Riemann sum of a function f with respect to such a tagged partition is defined as Σ_{i=1}^n f(t_i) Δ_i; thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of the sub-interval, Δ_i = x_i − x_{i−1}. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, max_{i} Δ_i. The Riemann integral of a function f over the interval [a, b] is equal to S if: for all ε > 0 there exists δ > 0 such that, for any tagged partition of [a, b] with mesh less than δ, |S − Σ_{i=1}^n f(t_i) Δ_i| < ε. When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral. Lebesgue integral It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated. Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining it in a letter to Paul Montel. As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure of an interval [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals. Using the "partitioning the range of f" philosophy, the integral of a non-negative function f should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{x : f(x) > t} dt. Let f*(t) = μ{x : f(x) > t}. The Lebesgue integral of f is then defined by ∫ f dμ = ∫_0^∞ f*(t) dt, where the integral on the right is an ordinary improper Riemann integral (f* is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
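The ε–δ condition above matches what a numerical experiment shows: refining the partition drives the Riemann sums toward a single value. A minimal Python sketch, assuming the integrand √x on [0, 1] from the informal example earlier (exact value 2/3); the helper riemann_sum is ours, not a library routine:

    import math

    def riemann_sum(f, a, b, n, tag):
        """Riemann sum of f on [a, b] with n equal sub-intervals.
        tag='left' or 'right' picks the tag point in each sub-interval."""
        dx = (b - a) / n
        if tag == "left":
            points = (a + i * dx for i in range(n))
        else:  # right end of each sub-interval
            points = (a + (i + 1) * dx for i in range(n))
        return sum(f(t) * dx for t in points)

    f = math.sqrt
    for n in (5, 12, 100, 10_000):
        print(n, riemann_sum(f, 0, 1, n, "left"), riemann_sum(f, 0, 1, n, "right"))
    # Both columns approach 2/3 as the mesh 1/n shrinks; n=5 right gives
    # about 0.7497 and n=12 left about 0.6203, the values quoted earlier.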
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite: ∫ |f| dμ < ∞. In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis: ∫ f dμ = ∫ f⁺ dμ − ∫ f⁻ dμ, where f⁺(x) = max(f(x), 0) and f⁻(x) = max(−f(x), 0). Other integrals Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including: The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral. A function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals. The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable. The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals. The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures. The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933. The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock. The Khinchin integral, named after Aleksandr Khinchin. The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion. The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation. The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion. The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. The Bochner integral, a generalization of the Lebesgue integral to functions that take values in a Banach space. Properties Linearity The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals: ∫_a^b (αf + βg)(x) dx = α ∫_a^b f(x) dx + β ∫_a^b g(x) dx. Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral is a linear functional on this vector space, so that: ∫_E (αf + βg) dμ = α ∫_E f dμ + β ∫_E g dμ. More generally, consider the vector space of all measurable functions on a measure space (E, μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞, that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite").
The most important special cases arise when K is ℝ or ℂ, or a finite extension of the field of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = ℂ and V is a complex Hilbert space. Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See for an axiomatic characterization of the integral. Inequalities A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell). Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f(x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that m(b − a) ≤ ∫_a^b f(x) dx ≤ M(b − a). Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus ∫_a^b f(x) dx ≤ ∫_a^b g(x) dx. This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then ∫_a^b f(x) dx < ∫_a^b g(x) dx. Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then ∫_c^d f(x) dx ≤ ∫_a^b f(x) dx. Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values: (fg)(x) = f(x)g(x), f²(x) = (f(x))², |f|(x) = |f(x)|. If f is Riemann-integrable on [a, b] then the same is true for |f|, and |∫_a^b f(x) dx| ≤ ∫_a^b |f(x)| dx. Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and (∫_a^b (fg)(x) dx)² ≤ (∫_a^b f(x)² dx)(∫_a^b g(x)² dx). This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b]. Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|^p and |g|^q are also integrable and the following Hölder's inequality holds: |∫ f(x)g(x) dx| ≤ (∫ |f(x)|^p dx)^{1/p} (∫ |g(x)|^q dx)^{1/q}. For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality. Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|^p, |g|^p and |f + g|^p are also Riemann-integrable and the following Minkowski inequality holds: (∫ |f(x) + g(x)|^p dx)^{1/p} ≤ (∫ |f(x)|^p dx)^{1/p} + (∫ |g(x)|^p dx)^{1/p}. An analogue of this inequality for the Lebesgue integral is used in the construction of Lp spaces. Conventions In this section, f is a real-valued Riemann-integrable function. The integral ∫_a^b f(x) dx over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x_0 ≤ x_1 ≤ … ≤ x_n = b whose values x_i are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [x_{i−1}, x_i] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b: ∫_a^b f(x) dx = −∫_b^a f(x) dx. With a = b, this implies: ∫_a^a f(x) dx = 0. The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero.
One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that if c is any element of [a, b], then: ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx. With the first convention, the resulting relation is then well-defined for any cyclic permutation of a, b, and c. Fundamental theorem of calculus The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated. First theorem Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by F(x) = ∫_a^x f(t) dt. Then, F is continuous on [a, b], differentiable on the open interval (a, b), and F′(x) = f(x) for all x in (a, b). Second theorem Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b], f(x) = F′(x). If f is integrable on [a, b] then ∫_a^b f(x) dx = F(b) − F(a). Extensions Improper integrals A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals. If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity: ∫_a^∞ f(x) dx = lim_{b→∞} ∫_a^b f(x) dx. If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result: ∫_a^b f(x) dx = lim_{ε→0⁺} ∫_{a+ε}^b f(x) dx. That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points. Multiple integration Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals R = [a, b] × [c, d] can be written ∫_R f(x, y) dA, where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x, y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral ∫_a^b (∫_c^d f(x, y) dy) dx. This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign: ∬_R f(x, y) dA. Integration over more general domains is possible. The integral of a function f, with respect to volume, over an n-dimensional region D of ℝⁿ is denoted by symbols such as ∫_D f(x) dⁿx or ∫_D f dV. Line integrals and surface integrals The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces.
Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields. A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral. The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as: W = F · s. For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral W = ∫_C F · ds. A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums. For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that a fluid flows through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, one needs to take the dot product of v with the unit surface normal n to S at each point, which will give a scalar field, which is integrated over the surface: ∬_S v · n dS. The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism. Contour integrals In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve γ in the complex plane, the integral is denoted ∫_γ f(z) dz. This is known as a contour integral. Integrals of differential forms A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as: E dx + F dy + G dz, where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes. A differential two-form is a sum of the form E dy∧dz + F dz∧dx + G dx∧dy. Here the basic two-forms measure oriented areas parallel to the coordinate two-planes.
The symbol ∧ denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of the corresponding vector field. Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin-Stokes theorem. Summations The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus. Functional integrals An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral. Applications Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral under an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not. Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder, πr²h, where r is the radius. In the case of a simple disc created by rotating a curve about the x-axis, the radius is given by f(x), and its height is the differential dx. Using an integral with bounds a and b, the volume of the disc is equal to: π ∫_a^b f(x)² dx. Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval [t_1, t_2] is given by ∫_{t_1}^{t_2} v(t) dt, where v(t) is the velocity expressed as a function of time. The work done by a force F(x) (given as a function of position) from an initial position A to a final position B is: W = ∫_A^B F(x) dx. Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states. Computation Analytical The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, ∫_a^b f(x) dx = F(b) − F(a). Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable.
Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions. Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral. Computations of volumes of solids of revolution can usually be done with disk integration or shell integration. Specific results which have been worked out by various techniques are collected in the list of integrals. Symbolic Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple. A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have integrals that can be expressed in closed form involving only elementary functions, which include rational and exponential functions, the logarithm, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if the antiderivative is elementary. However, functions with closed expressions of antiderivatives are the exception, and consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions. Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject. More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients.
Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a D-function as the sum of a series given by the first coefficients and provides an algorithm to compute any coefficient. Rule-based integration systems facilitate integration. Rubi, a computer algebra system rule-based integrator, pattern matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules are applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform. Numerical Definite integrals may be approximated using several methods of numerical integration. The rectangle method relies on dividing the region under the function into a series of rectangles corresponding to function values and multiplies by the step width to find the sum. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function. Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree-n Newton–Cotes quadrature rule approximates the integrand on each subinterval by a degree-n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials. Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by T(h_0), T(h_1), and so on, where h_{k+1} is half of h_k. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to T(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1. The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration. Mechanical The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged. Geometrical Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
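The quadrature rules described above are short to state in code. A minimal Python sketch of the composite trapezoidal and Simpson rules, with sin on [0, π] (exact value 2) as an assumed test integrand:

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n equal sub-intervals:
        end values weighted by one half, interior values by one."""
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return s * h

    def simpson(f, a, b, n):
        """Composite Simpson's rule (n must be even): piecewise-quadratic fit."""
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3

    # Both rules approximate the integral of sin on [0, pi], which is exactly 2;
    # Simpson's rule converges much faster for smooth integrands.
    print(trapezoid(math.sin, 0, math.pi, 64), simpson(math.sin, 0, math.pi, 64))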
Integration by differentiation Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and partial derivative operators. This can also be applied to functional integrals, allowing them to be computed by functional differentiation. Examples Using the fundamental theorem of calculus The fundamental theorem of calculus allows straightforward calculations of basic functions:
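A representative instance, assuming nothing beyond the power rule for antiderivatives: with F(x) = x³/3 one has F′(x) = x², so by the second fundamental theorem

    ∫_0^1 x² dx = F(1) − F(0) = 1/3 − 0 = 1/3,

and the same pattern handles any basic integrand whose antiderivative is known.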
Mathematics
Calculus and analysis
null
15642
https://en.wikipedia.org/wiki/January
January
January is the first month of the year in the Julian and Gregorian calendars. Its length is 31 days. The first day of the month is known as New Year's Day. It is, on average, the coldest month of the year within most of the Northern Hemisphere (where it is the second month of winter) and the warmest month of the year within most of the Southern Hemisphere (where it is the second month of summer). In the Southern hemisphere, January is the seasonal equivalent of July in the Northern hemisphere and vice versa. Ancient Roman observances during this month include Cervula and Juvenalia, celebrated January 1, as well as one of three Agonalia, celebrated January 9, and Carmentalia, celebrated January 11. These dates do not correspond to the modern Gregorian calendar. History January (in Latin, Ianuarius) is named after Janus, the god of beginnings and transitions in Roman mythology. Traditionally, the original Roman calendar consisted of 10 months totaling 304 days, winter being considered a month-less period. Around 713 BC, the semi-mythical successor of Romulus, King Numa Pompilius, is supposed to have added the months of January and February, so that the calendar covered a standard lunar year (354 days). Although March was originally the first month in the old Roman calendar, January became the first month of the calendar year either under Numa or under the Decemvirs about 450 BC (Roman writers differ). In contrast, each specific calendar year was identified by the names of the two consuls, who entered office on March 15 until 153 BC, at which point they started entering office on January 1. Various Christian feast dates were used for the New Year in Europe during the Middle Ages, including March 25 (Feast of the Annunciation) and December 25. However, medieval calendars were still displayed in the Roman fashion with twelve columns from January to December. Beginning in the 16th century, European countries began officially making January 1 the start of the New Year once again—sometimes called Circumcision Style because this was the date of the Feast of the Circumcision, being the seventh day after December 25. Historical names for January include its original Roman designation, Ianuarius, the Saxon term Wulf-monath (meaning "wolf month") and Charlemagne's designation Wintarmanoth ("winter / cold month"). In Slovene, it is traditionally called prosinec; the name, associated with millet bread and the act of asking for something, was first written in 1466 in the Škofja Loka manuscript. According to Theodor Mommsen, 1 January became the first day of the year in 600 AUC of the Roman calendar (153 BC), due to disasters in the Lusitanian War. A Lusitanian chief called Punicus invaded the Roman territory, defeated two Roman governors, and killed their troops. The Romans resolved to send a consul to Hispania, and in order to accelerate the dispatch of aid, "they even made the new consuls enter into office two months and a half before the legal time" (March 15). Symbols January's birthstone is the garnet, which represents constancy. Its birth flowers are the cottage pink, the galanthus (snowdrop), and the traditional carnation (Dianthus caryophyllus). The zodiac signs are Capricorn (until January 19) and Aquarius (January 20 onward). Observances This list does not necessarily imply either official status or general observance.
Month-long Alzheimer's Awareness Month (Canada) Dry January (United Kingdom) National Codependency Awareness Month (United States) National Mentoring Month (United States) National Healthy Weight Awareness Month (United States) Slavery and Human Trafficking Prevention Month (United States) Stalking Awareness Month (United States) Veganuary Food months in the United States This list does not necessarily imply either official status or general observance. Be Kind to Food Servers Month (by proclamation, State of Tennessee) California Dried Plum Digestive Health Month Hot Tea Month National Soup Month Oatmeal Month Non-Gregorian All Baha'i, Islamic, and Jewish observances begin at sundown prior to the date listed, and end at sundown on the date in question. List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Moveable This list does not necessarily imply either official status or general observance. See: List of movable Western Christian observances See: List of movable Eastern Christian observances January 2 unless that day is a Sunday, in which case January 3 New Year Holiday (Scotland) First Friday Children's Day (Bahamas) Second Saturday Children's Day (Thailand) Second Monday Birthday of Eugenio María de Hostos (Puerto Rico, United States) Coming of Age Day (Japan) Friday before third Monday Lee–Jackson Day (Virginia, United States, defunct) Third Friday International Fetish Day Sunday closest to January 22 National Sanctity of Human Life Day (United States) Third full week of January Hunt for Happiness Week (International observance) National Non-Smoking Week (Canada) Last full week of January National School Choice Week (United States) Third Monday Martin Luther King, Jr. 
Day (United States) Idaho Human Rights Day (Idaho, United States) Wednesday of the third full week of January Weedless Wednesday (Canada) Friday between January 19–25 Husband's Day (Iceland) Last Saturday National Seed Swap Day (United States) Last Sunday Liberation of Auschwitz Memorial Day (Netherlands) January 30 or the nearest Sunday World Leprosy Day Last Monday in January Bubble Wrap Appreciation Day Fourth Monday Community Manager Appreciation Day (International observance) National Heroes' Day (Cayman Islands) Monday Closest to January 29 Auckland Anniversary Day Fixed December 25 – January 5: Twelve Days of Christmas (Western Christianity) December 26 – January 1: Kwanzaa (African Americans) December 31 – January 1, in some cases until January 2: Hogmanay (Scotland) January 1 Feast of the Circumcision of Christ Feast of the Holy Name of Jesus (Anglican Communion, Lutheran Church) Feast of Fools (Medieval Europe) Constitution Day (Italy) Dissolution of Czechoslovakia-related observances: Day of the Establishment of the Slovak Republic (Slovakia) Restoration Day of the Independent Czech State (Czech Republic) Euro Day (European Union) Flag Day (Lithuania) Founding Day (Taiwan) Global Family Day Independence Day (Brunei, Cameroon, Haiti, Sudan) International Nepali Dhoti and Nepali Topi Day Jump-up Day (Montserrat, British Overseas Territories) Kalpataru Day (Ramakrishna Movement) National Bloody Mary Day (United States) National Tree Planting Day (Tanzania) New Year's Day Japanese New Year Novy God Day (Russia) Sjoogwachi (Okinawa Islands) Polar Bear Swim Day (Canada and United States) Public Domain Day (multiple countries) Solemnity of Mary, Mother of God (Catholic Church) World Day of Peace Triumph of the Revolution (Cuba) January 2 Ancestry Day (Haiti) Berchtold's Day (Liechtenstein, Switzerland, and the Alsace) Carnival Day (Saint Kitts and Nevis) Kakizome (Japan) National Creampuff Day (United States) National Science Fiction Day (United States) The second day of New Year (a holiday in Armenia, Kazakhstan, North Macedonia, Mauritius, Montenegro, New Zealand, Romania, Russia, Switzerland, and Ukraine) Nyinlong (Bhutan) Victory of Armed Forces Day (Cuba) January 3 Anniversary of the 1966 Coup d’état (Burkina Faso) Nakhatsenendyan toner (Armenia): January 3–5 Ministry of Religious Affairs Day (Indonesia) National Chocolate Covered Cherry Day (United States) Tamaseseri Festival (Hakozaki Shrine, Fukuoka, Japan) January 4 Day of the Fallen against the Colonial Repression (Angola) Day of the Martyrs (Democratic Republic of the Congo) Hwinukan mukee (Okinawa Islands, Japan) Independence Day (Myanmar) Ogoni Day (Movement for the Survival of the Ogoni People) World Braille Day January 5 National Bird Day (United States) National Whipped Cream Day (United States) Sausage Day (United Kingdom) Strawberry day (Japan) Take Our Daughters and Sons to Work Day (Sydney, Melbourne, and Brisbane, Australia) Tucindan (Serbia, Montenegro) January 6 Armed Forces Day (Iraq) Epiphany or Three Kings' Day (Western Christianity) or Theophany (Eastern Christianity), and its related observances: Befana Day (Italy) Christmas (Armenian Apostolic Church) Christmas Eve (Russia) Christmas Eve (Ukraine) Christmas Eve (Bosnia and Herzegovina) Christmas Eve (North Macedonia) Little Christmas (Ireland) Þrettándinn (Iceland) Three Wise Men Day Pathet Lao Day (Laos) January 7 Christmas (Eastern Orthodox Churches and Oriental Orthodox Churches using the Julian Calendar, Rastafari) Christmas in Russia Christmas in 
Ukraine Christmas (Bosnia and Herzegovina) Remembrance Day of the Dead (Armenia) Distaff Day (Medieval Europe) Nanakusa no sekku (Japan) Pioneer's Day (Liberia) Tricolour day (Italy) Victory from Genocide Day (Cambodia) January 8 The Eighth (United States) (defunct observance) Typing Day (international observance) January 9 Start of Hōonkō (Nishi Honganji) January 9–16 (Jōdo Shinshū Buddhism) Martyrs' Day (Panama) National Cassoulet Day (United States) Non-Resident Indian Day (India) Republic Day (Republika Srpska) (defunct, declared unconstitutional by the Constitutional Court of Bosnia and Herzegovina) St. Stephen's Day (Eastern Orthodox) January 10 Fête du Vodoun (Benin) Majority Rule Day (Bahamas) January 11 Children's Day (Tunisia) Eugenio María de Hostos Day (Puerto Rico) German Apples Day (Germany) Independence Manifesto Day (Morocco) Kagami biraki (Japan) National Human Trafficking Awareness Day (United States) Republic Day (Albania) January 12 Memorial Day (Turkmenistan) Prosecutor General's Day (Russia) National Youth Day (India) Zanzibar Revolution Day (Tanzania) January 13 Constitution Day (Mongolia) Democracy Day (Cape Verde) Liberation Day (Togo) Old New Year's Eve (Russia, Belarus, Ukraine, Serbia, Montenegro, Republic of Srpska, North Macedonia), and its related observances: Malanka (Ukraine, Russia, Belarus) St. Knut's Day (Sweden and Finland) Stephen Foster Memorial Day (United States) January 14 Azhyrnykhua (Abkhazia) Day of Defenders of the Motherland (Uzbekistan) Feast of Divina Pastora (Barquisimeto) Feast of the Ass (Medieval Christianity) Flag Day (Georgia) National Forest Conservation Day (Thailand) Ratification Day (United States) Revolution and Youth Day (Tunisia) Yennayer (Berbers) January 15 Arbor Day (Egypt) Armed Forces Day (Nigeria) Indian Army Day (India) John Chilembwe Day (Malawi) Korean Alphabet Day (North Korea) Sagichō at Tsurugaoka Hachimangū (Kamakura, Japan) Teacher's Day (Venezuela) Wikipedia Day (international observance) January 16 National Nothing Day National Religious Freedom Day (United States) Solemnity of Mary, Mother of God (Coptic Church) Teacher's Day (Myanmar) Teachers' Day (Thailand) Zuuruku Nichi (Okinawa Islands, Japan) Thiruvalluvar Day (Tamil Nadu, India) January 17 Hardware Freedom Day (international observance) National Day (Menorca) The opening ceremony of Patras Carnival, celebrated until Clean Monday (Patras) January 18 Revolution and Youth Day (Tunisia) Royal Thai Armed Forces Day (Thailand) Week of Prayer for Christian Unity (January 18–25) (Christianity) January 19 Confederate Heroes Day (Texas), and its related observance: Robert E. 
Lee Day (Alabama, Arkansas, Florida, Georgia and Mississippi) Lee–Jackson–King Day (Virginia, United States, defunct) Husband's Day (Iceland) Kokborok Day (Tripura, India) National Popcorn Day (United States) Theophany / Epiphany (Eastern and Oriental Orthodoxy), and its related observances: Timkat, (on 20th during Leap Year) (Ethiopian Orthodox) Vodici or Baptism of Jesus (North Macedonia) January 20 Armed Forces Day (Mali) Cheese Day (United States) Heroes' Day (Cape Verde) Inauguration Day, held every four years in odd-numbered years, except when January 20 falls on a Sunday (United States) Martyrs' Day (Azerbaijan) January 21 Babinden (Bulgaria, Serbia) Birthday of Princess Ingrid Alexandra (Norway) Errol Barrow Day (Barbados) Flag Day (Quebec) Grandmother's Day (Poland) Lady of Altagracia Day (Dominican Republic) Lincoln Alexander Day (Canada) National Hug Day (United States) January 22 Day of Unity of Ukraine (Ukraine) Grandfather's Day (Poland) National Hot Sauce Day (United States) January 23 Bounty Day (Pitcairn Island) Espousals of the Blessed Virgin Mary (Roman Catholic Church) National Pie Day (United States) Netaji Subhas Chandra Bose's Jayanti (Orissa, Tripura, and West Bengal, India) World Freedom Day (Taiwan and South Korea) January 24 Feast of Our Lady of Peace (Roman Catholic Church), and its related observances: Feria de Alasitas (La Paz) Moebius Syndrome Awareness Day (international observance) National Peanut Butter Day (United States) Unification Day (Romania) January 25 2011 Revolution Day (Egypt) Burns night (Scotland, Scottish community) Dydd Santes Dwynwen (Wales) Feast of the Conversion of Saint Paul (Eastern Orthodox, Oriental Orthodox, Roman Catholic, Anglican and Lutheran churches, which concludes the Week of Prayer for Christian Unity) National Police Day (Egypt) National Voters' Day (India) Tatiana Day (Russia, Eastern Orthodox) January 26 Australia Day (Australia) Duarte Day (Dominican Republic) Engineer's Day (Panama) International Customs Day Liberation Day (Uganda) Republic Day (India) January 27 Day of the lifting of the siege of Leningrad (Russia) Liberation of the remaining inmates of Auschwitz-related observances: Holocaust Memorial Day (UK) Holocaust Remembrance Day (Sweden) International Holocaust Remembrance Day Memorial Day (Italy) Memorial Day for the Victims of the Holocaust and Prevention of Crimes against Humanity (Czech Republic) Memorial Day for the Victims of National Socialism (Germany) National Holocaust Memorial Day (Greece) Family Literacy Day (Canada) Feast of Saint Slava (Serbia) National Chocolate Cake Day (United States) Saint Devota's Day (Monaco) January 28 Army Day (Armenia) Data Privacy Day (international observance) January 29 Kansas Day (Kansas, United States) January 30 Day of Azerbaijani customs (Azerbaijan) Day of Saudade (Brazil) Fred Korematsu Day (California, United States) Martyrdom of Mahatma Gandhi-related observances: Martyrs' Day (India) School Day of Non-violence and Peace (Spain) Start of the Season for Nonviolence January 30 – April 4 Teacher's Day (Greece) January 31 Amartithi (Meherabad, India, followers of Meher Baba) Independence Day (Nauru) Me-Dam-Me-Phi (Ahom people) Street Children's Day (Austria)
https://en.wikipedia.org/wiki/Julian%20calendar
Julian calendar
The Julian calendar is a solar calendar of 365 days in every year with an additional leap day every fourth year (without exception). The Julian calendar is still used as a religious calendar in parts of the Eastern Orthodox Church and in parts of Oriental Orthodoxy as well as by the Amazigh people (also known as the Berbers). The Julian calendar was proposed in 46 BC by (and takes its name from) Julius Caesar, as a reform of the earlier Roman calendar, which was largely a lunisolar one. It took effect on 1 January 45 BC, by his edict. Caesar's calendar became the predominant calendar in the Roman Empire and subsequently most of the Western world for more than 1,600 years, until 1582 when Pope Gregory XIII promulgated a revised calendar. The Julian calendar has two types of years: a normal year of 365 days and a leap year of 366 days. They follow a simple cycle of three normal years and one leap year, giving an average year that is 365.25 days long. That is more than the actual solar year value of approximately 365.2422 days (the current value, which varies), which means the Julian calendar gains one day every 129 years. In other words, the Julian calendar gains 3.1 days every 400 years. Gregory's calendar reform modified the Julian rule, to reduce the average length of the calendar year from 365.25 days to 365.2425 days and thus corrected the Julian calendar's drift against the solar year: the Gregorian calendar gains just 0.1 day over 400 years. For any given event during the years from 1901 through 2099, its date according to the Julian calendar is 13 days behind its corresponding Gregorian date (for instance Julian 1 January falls on Gregorian 14 January). Most Catholic countries adopted the new calendar immediately; Protestant countries did so slowly in the course of the following two centuries or so; most Orthodox countries retain the Julian calendar for religious purposes but adopted the Gregorian as their civil calendar in the early part of the twentieth century. History Motivation The ordinary year in the previous Roman calendar consisted of 12 months, for a total of 355 days. In addition, a 27- or 28-day intercalary month, the Mensis Intercalaris, was sometimes inserted between February and March. This intercalary month was formed by inserting 22 or 23 days after the first 23 days of February; the last five days of February, which counted down toward the start of March, became the last five days of Intercalaris. The net effect was to add 22 or 23 days to the year, forming an intercalary year of 377 or 378 days. Some say the mensis intercalaris always had 27 days and began on either the first or the second day after the Terminalia (23 February). If managed correctly this system could have allowed the Roman year to stay roughly aligned to a tropical year. However, since the pontifices were often politicians, and because a Roman magistrate's term of office corresponded with a calendar year, this power was prone to abuse: a pontifex could lengthen a year in which he or one of his political allies was in office, or refuse to lengthen one in which his opponents were in power. Caesar's reform was intended to solve this problem permanently, by creating a calendar that remained aligned to the sun without any human intervention. This proved useful very soon after the new calendar came into effect. Varro used it in 37 BC to fix calendar dates for the start of the four seasons, which would have been impossible only 8 years earlier.
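The leap rules and drift figures quoted above can be checked directly. The following short Python sketch is illustrative only (the function names are ours, not part of any standard); it encodes the Julian rule, the Gregorian refinement, and the resulting mean year lengths:

    def is_julian_leap(year: int) -> bool:
        # Julian rule: a leap day every fourth year, without exception.
        return year % 4 == 0

    def is_gregorian_leap(year: int) -> bool:
        # Gregorian refinement: century years are common unless divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    TROPICAL_YEAR = 365.2422          # approximate current value, as quoted above

    julian_mean = 365 + 1 / 4         # 365.25 days
    gregorian_mean = 365 + 97 / 400   # 365.2425 days (97 leap years per 400)

    print(400 * (julian_mean - TROPICAL_YEAR))     # ~3.1 days gained per 400 years
    print(400 * (gregorian_mean - TROPICAL_YEAR))  # ~0.1 day gained per 400 years

Dividing 400 years by the 3.1-day figure reproduces the rate of roughly one day gained every 129 years.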
A century later, when Pliny dated the winter solstice to 25 December because the sun entered the 8th degree of Capricorn on that date, this stability had become an ordinary fact of life. Context of the reform Although the approximation of 365¼ days for the tropical year had been known for a long time, ancient solar calendars had used less precise periods, resulting in gradual misalignment of the calendar with the seasons. The octaeteris, a cycle of eight lunar years popularised by Cleostratus (and also commonly attributed to Eudoxus) which was used in some early Greek calendars, notably in Athens, is 1.53 days longer than eight mean Julian years. The length of nineteen years in the cycle of Meton was 6,940 days, six hours longer than nineteen mean Julian years. The mean Julian year was the basis of the 76-year cycle devised by Callippus (a student under Eudoxus) to improve the Metonic cycle. In Persia (Iran), after the introduction of the Persian Zoroastrian (i.e. Young Avestan) calendar in 503 BC, the first day of the year (1 Farvardin = Nowruz) slipped against the vernal equinox at a rate of approximately one day every four years. Likewise in the Egyptian calendar, a fixed year of 365 days was in use, drifting by one day against the sun in four years. An unsuccessful attempt to add an extra day every fourth year was made in 238 BC (Decree of Canopus). Caesar probably experienced this "wandering" or "vague" calendar in that country. He landed in the Nile delta in October 48 BC and soon became embroiled in the Ptolemaic dynastic war, especially after Cleopatra managed to be "introduced" to him in Alexandria. Caesar imposed a peace, and a banquet was held to celebrate the event. Lucan depicted Caesar talking to a wise man called Acoreus during the feast, stating his intention to create a calendar more perfect than that of Eudoxus (Eudoxus was popularly credited with having determined the length of the year to be 365¼ days). But the war soon resumed and Caesar was attacked by the Egyptian army for several months until he achieved victory. He then enjoyed a long cruise on the Nile with Cleopatra before leaving the country in June 47 BC. Caesar returned to Rome in 46 BC and, according to Plutarch, called in the best philosophers and mathematicians of his time to solve the problem of the calendar. Pliny says that Caesar was aided in his reform by the astronomer Sosigenes of Alexandria, who is generally considered the principal designer of the reform. Sosigenes may also have been the author of the astronomical almanac published by Caesar to facilitate the reform. Eventually, it was decided to establish a calendar that would be a combination of the old Roman months, the fixed length of the Egyptian calendar, and the 365¼ days of Greek astronomy. According to Macrobius, Caesar was assisted in this by a certain Marcus Flavius. Adoption of the Julian calendar Caesar's reform only applied to the Roman calendar. However, in the following decades many of the local civic and provincial calendars of the empire and neighbouring client kingdoms were aligned to the Julian calendar by transforming them into calendars with years of 365 days with an extra day intercalated every four years. The reformed calendars typically retained many features of the unreformed calendars.
In many cases, the New Year was not on 1 January, the leap day was not on the traditional bissextile day, the old month names were retained, the lengths of the reformed months did not match the lengths of Julian months, and, even if they did, their first days did not match the first day of the corresponding Julian month. Nevertheless, since the reformed calendars had fixed relationships to each other and to the Julian calendar, the process of converting dates between them became quite straightforward, through the use of conversion tables known as "hemerologia". The three most important of these calendars are the Alexandrian calendar and the Ancient Macedonian calendar, which had two forms: the Syro-Macedonian and the 'Asian' calendars. Other reformed calendars are known from Cappadocia, Cyprus and the cities of (Roman) Syria and Palestine. Unreformed calendars continued to be used in Gaul (the Coligny calendar), Greece, Macedon, the Balkans and parts of Palestine, most notably in Judea. The Asian calendar was an adaptation of the Ancient Macedonian calendar used in the Roman province of Asia and, with minor variations, in nearby cities and provinces. It is known in detail through the survival of decrees promulgating it issued in 8 BC by the proconsul Paullus Fabius Maximus. It renamed the first month, Dios, and arranged the months such that each month started on the ninth day before the kalends of the corresponding Roman month; thus the year began on 23 September, Augustus's birthday. Julian reform Realignment of the year The first step of the reform was to realign the start of the calendar year (1 January) to the tropical year by making 46 BC 445 days long, compensating for the intercalations which had been missed during Caesar's pontificate. This year had already been extended from 355 to 378 days by the insertion of a regular intercalary month in February. When Caesar decreed the reform, probably shortly after his return from the African campaign in late Quintilis (July), he added 67 more days by inserting two extraordinary intercalary months between November and December. These months are called Intercalaris Prior and Intercalaris Posterior in letters of Cicero written at the time; there is no basis for the statement sometimes seen that they were called "Undecimber" and "Duodecimber", terms that arose in the 18th century, over a millennium after the Roman Empire's collapse. Their individual lengths are unknown, as is the position of the Nones and Ides within them. Because 46 BC was the last of a series of irregular years, this extra-long year was, and is, referred to as the "last year of confusion". The new calendar began operation after the realignment had been completed, in 45 BC. Months The Julian months were formed by adding ten days to a regular pre-Julian Roman year of 355 days, creating a regular Julian year of 365 days. Two extra days were added to January, Sextilis (August) and December, and one extra day was added to April, June, September, and November. February was not changed in ordinary years, and so continued to be the traditional 28 days. Thus, the ordinary (i.e., non-leap year) lengths of all of the months were set by the Julian calendar to the same values they still hold today. The Julian reform did not change the method used to reckon days of the month in the pre-Julian calendar, based on the Kalends, Nones and Ides, nor did it change the positions of these three dates within the months.
Macrobius states that the extra days were added immediately before the last day of each month to avoid disturbing the position of the established religious ceremonies relative to the Nones and Ides of the month. The inserted days were all initially characterised as dies fasti (F – see Roman calendar). The character of a few festival days was changed. In the early Julio-Claudian period a large number of festivals were decreed to celebrate events of dynastic importance, which caused the character of the associated dates to be changed to NP. However, this practice was discontinued around the reign of Claudius, and the practice of characterising days fell into disuse around the end of the first century AD: the Antonine jurist Gaius speaks of dies nefasti as a thing of the past. Intercalation The old intercalary month was abolished. The new leap day was dated as ante diem bis sextum Kalendas Martias ('the sixth doubled day before the Kalends of March'), usually abbreviated as a.d. bis VI Kal. Mart.; hence it is called in English the bissextile day. The year in which it occurred was termed annus bissextus, in English the bissextile year. There is debate about the exact position of the bissextile day in the early Julian calendar. The earliest direct evidence is a statement of the 2nd-century jurist Celsus, who states that there were two halves of a 48-hour day, and that the intercalated day was the "posterior" half. An inscription from AD 168 states that a.d. V Kal. Mart. was the day after the bissextile day. The 19th-century chronologist Ideler argued that Celsus used the term "posterior" in a technical fashion to refer to the earlier of the two days, which requires the inscription to refer to the whole 48-hour day as the bissextile. Some later historians share this view. Others, following Mommsen, take the view that Celsus was using the ordinary Latin (and English) meaning of "posterior". A third view is that neither half of the 48-hour "bis sextum" was originally formally designated as intercalated, but that the need to do so arose as the concept of a 48-hour day became obsolete. There is no doubt that the bissextile day eventually became the earlier of the two days for most purposes. In 238, Censorinus stated that it was inserted after the Terminalia (23 February) and was followed by the last five days of February, i.e., a.d. VI, V, IV, III and prid. Kal. Mart. (which would be 24 to 28 February in a common year and the 25th to 29th in a leap year). Hence he regarded the bissextum as the first half of the doubled day. All later writers, including Macrobius about 430, Bede in 725, and other medieval computists (calculators of Easter) followed this rule, as does the liturgical calendar of the Roman Catholic Church. However, Celsus' definition continued to be used for legal purposes. It was incorporated into Justinian's Digest, and in the English Statute De Anno et Die Bissextili of 1236, which was not formally repealed until 1879. The effect of the bissextile day on the nundinal cycle is not discussed in the sources. According to Dio Cassius, a leap day was inserted in 41 BC to ensure that the first market day of 40 BC did not fall on 1 January, which implies that the old 8-day cycle was not immediately affected by the Julian reform. However, he also reports that in AD 44, and on some previous occasions, the market day was changed to avoid a conflict with a religious festival.
This may indicate that a single nundinal letter was assigned to both halves of the 48-hour bissextile day by this time, so that the Regifugium and the market day might fall on the same date but on different days. In any case, the 8-day nundinal cycle began to be displaced by the 7-day week in the first century AD, and dominical letters began to appear alongside nundinal letters in the fasti. Year length; leap years The Julian calendar has two types of year: "normal" years of 365 days and "leap" years of 366 days. There is a simple cycle of three "normal" years followed by a leap year, and this pattern repeats forever without exception. The Julian year is, therefore, on average 365.25 days long. Consequently, the Julian year drifts over time with respect to the tropical (solar) year (365.24217 days). Although Greek astronomers had known, at least since Hipparchus, a century before the Julian reform, that the tropical year was slightly shorter than 365.25 days, the calendar did not compensate for this difference. As a result, the calendar year gains about three days every four centuries compared to observed equinox times and the seasons. This discrepancy was largely corrected by the Gregorian reform of 1582. The Gregorian calendar has the same months and month lengths as the Julian calendar, but, in the Gregorian calendar, year numbers evenly divisible by 100 are not leap years, except that those evenly divisible by 400 remain leap years (even then, the Gregorian calendar diverges from astronomical observations by one day in 3,030 years). Leap year error Although the new calendar was much simpler than the pre-Julian calendar, the pontifices initially added a leap day every three years, instead of every four. There are accounts of this in Solinus, Pliny, Ammianus, Suetonius, and Censorinus. According to Macrobius's account of the introduction of the Julian calendar: the year was considered to begin after the Terminalia (23 February); the calendar was operated correctly from its introduction on 1 January 45 BC until the beginning of the fourth year (February 42 BC), at which point the priests inserted the first intercalation; Caesar's intention had been to make the first intercalation at the beginning of the fifth year (February 41 BC); the priests made a further eleven intercalations after 42 BC at three-year intervals, so that the twelfth intercalation fell in 9 BC; had Caesar's intention been followed, there would have been intercalations every four years after 41 BC, so that the ninth intercalation would have been in 9 BC; after 9 BC there were twelve years without leap years, so that the leap days Caesar would have had in 5 BC, 1 BC and AD 4 were omitted; and after AD 4 the calendar was operated as Caesar intended, so that the next leap year was AD 8 and leap years followed every fourth year thereafter. Scholars have proposed differing reconstructions of the sequence of leap years. The scheme above is that of Scaliger (1583), who established that the Augustan reform was instituted in 8 BC. Each reconstruction implies a particular proleptic Julian date for the first day of Caesar's reformed calendar, and a particular first Julian date on which the Roman calendar date matches the Julian calendar after the completion of Augustus' reform. By the systems of Scaliger, Ideler and Bünting, the leap years prior to the suspension happen to be BC years that are divisible by 3, just as, after leap year resumption, they are the AD years divisible by 4.
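As a rough illustration of the Scaliger reconstruction described above, the sequence of intercalated years can be enumerated in a few lines of Python using astronomical year numbering (1 BC = 0, 2 BC = -1, and so on); this sketch is our own restatement of that scheme, not an established algorithm:

    def bc(n: int) -> int:
        # Convert an 'n BC' year to astronomical numbering (1 BC = 0).
        return 1 - n

    # Twelve erroneous triennial intercalations, 42 BC to 9 BC inclusive.
    triennial = list(range(bc(42), bc(9) + 1, 3))
    assert len(triennial) == 12

    # Suspension until AD 8, then a leap year every fourth year, as Caesar intended.
    resumed = list(range(8, 25, 4))  # AD 8, 12, 16, 20, 24, ...

    # In conventional numbering, the pre-suspension leap years are the BC years
    # divisible by 3, mirroring the AD years divisible by 4 after resumption.
    assert all((1 - y) % 3 == 0 for y in triennial)
    assert all(y % 4 == 0 for y in resumed)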
Pierre Brind'Amour argued that "only one day was intercalated between 1/1/45 and 1/1/40 (disregarding a momentary 'fiddling' in December of 41) to avoid the nundinum falling on Kal. Ian." Alexander Jones says that the correct Julian calendar was in use in Egypt in 24 BC, implying that the first day of the reform in both Egypt and Rome, 1 January 45 BC, was the Julian date 1 January if 45 BC was a leap year and 2 January if it was not. This necessitates fourteen leap days up to and including AD 8 if 45 BC was a leap year and thirteen if it was not. In 1999, a papyrus was discovered which gives the dates of astronomical phenomena in 24 BC in both the Egyptian and Roman calendars. From the introduction of the Alexandrian reform, Egypt had two calendars: the old Egyptian, in which every year had 365 days, and the new Alexandrian, in which every fourth year had 366 days. At first, the date in both calendars was the same. The dates in the Alexandrian and Julian calendars are in one-to-one correspondence except for the period from 29 August in the year preceding a Julian leap year to the following 24 February. From a comparison of the astronomical data with the Egyptian and Roman dates, Alexander Jones concluded that the Egyptian astronomers (as opposed to travellers from Rome) used the correct Julian calendar. Due to the confusion about this period, we cannot be sure exactly what day (e.g. Julian day number) any particular Roman date refers to before March of 8 BC, except for those used in Egypt in 24 BC, which are secured by astronomy. An inscription has been discovered which orders a new calendar to be used in the Province of Asia to replace the previous Greek lunar calendar. According to one translation of the inscription, which is historically consistent, it was decreed by the proconsul that the first day of the year in the new calendar shall be Augustus' birthday, a.d. IX Kal. Oct. Every month begins on the ninth day before the kalends. The date of introduction, the day after 14 Peritius, was 1 Dystrus, the next month. The month after that was Xanthicus. Thus Xanthicus began on a.d. IX Kal. Mart., and normally contained 31 days. In leap year, however, it contained an extra "Sebaste day", the Roman leap day, and thus had 32 days. From the lunar nature of the old calendar we can fix the starting date of the new one as 24 January 5 BC in the Julian calendar, which was a leap year. Thus from inception the dates of the reformed Asian calendar are in one-to-one correspondence with the Julian. Another translation of this inscription would move the starting date back three years to 8 BC, and from the lunar synchronism back to 26 January (Julian). But since the corresponding Roman date in the inscription is 24 January, this must be according to the incorrect calendar which in 8 BC Augustus had ordered to be corrected by the omission of leap days. As the authors of the previous paper point out, with the correct four-year cycle being used in Egypt and the three-year cycle abolished in Rome, it is unlikely that Augustus would have ordered the three-year cycle to be introduced in Asia. Month names The Julian reform did not immediately cause the names of any months to be changed. The old intercalary month was abolished and replaced with a single intercalary day at the same point (i.e., five days before the end of February). Roman The Romans later renamed months after Julius Caesar and Augustus, renaming Quintilis as "Iulius" (July) in 44 BC and Sextilis as "Augustus" (August) in 8 BC. Quintilis was renamed to honour Caesar because it was the month of his birth.
According to a senatus consultum quoted by Macrobius, Sextilis was renamed to honour Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, occurred in that month. Other months were renamed by other emperors, but apparently none of the later changes survived their deaths. In AD 37, Caligula renamed September as "Germanicus" after his father; in AD 65, Nero renamed April as "Neroneus", May as "Claudius" and June as "Germanicus"; and in AD 84 Domitian renamed September as "Germanicus" and October as "Domitianus". Commodus was unique in renaming all twelve months after his own adopted names (January to December): "Amazonius", "Invictus", "Felix", "Pius", "Lucius", "Aelius", "Aurelius", "Commodus", "Augustus", "Herculeus", "Romanus", and "Exsuperatorius". The emperor Tacitus is said to have ordered that September, the month of his birth and accession, be renamed after him, but the story is doubtful since he did not become emperor before November 275. Similar honorific month names were implemented in many of the provincial calendars that were aligned to the Julian calendar. Other name changes were proposed but were never implemented. Tiberius rejected a senatorial proposal to rename September as "Tiberius" and October as "Livius", after his mother Livia. Antoninus Pius rejected a senatorial decree renaming September as "Antoninus" and November as "Faustina", after his empress. Charlemagne Much more lasting than the ephemeral month names of the post-Augustan Roman emperors were the Old High German names introduced by Charlemagne. According to his biographer, Einhard, Charlemagne gave all of the months agricultural names in German. These names were used until the 15th century, over 700 years after his rule, and continued, with some modifications, to be used as "traditional" month names until the late 18th century. The names (January to December) were: Wintarmanoth ("winter month"), Hornung, Lentzinmanoth ("spring month", "Lent month"), Ostarmanoth ("Easter month"), Wonnemanoth ("joy-month", a corruption of Winnimanoth "pasture-month"), Brachmanoth ("fallow-month"), Heuuimanoth ("hay month"), Aranmanoth ("reaping month"), Witumanoth ("wood month"), Windumemanoth ("vintage month"), Herbistmanoth ("harvest month"), and Heilagmanoth ("holy month"). Eastern Europe The calendar month names used in western and northern Europe, in Byzantium, and by the Amazigh (Berbers), were derived from the Latin names. However, in eastern Europe older seasonal month names continued to be used into the 19th century, and in some cases are still in use, in many languages, including: Belarusian, Bulgarian, Croatian, Czech, Finnish, Georgian, Lithuanian, Macedonian, Polish, Romanian, Slovene, Ukrainian. When the Ottoman Empire adopted the Julian calendar, in the form of the Rumi calendar, the month names reflected Ottoman tradition. Year numbering The principal method used by the Romans to identify a year for dating purposes was to name it after the two consuls who took office in it, the eponymous period in question being the consular year. Beginning in 153 BC, consuls began to take office on 1 January, thus synchronizing the commencement of the consular and calendar years. The calendar year has begun in January and ended in December since about 450 BC according to Ovid or since about 713 BC according to Macrobius and Plutarch (see Roman calendar). Julius Caesar did not change the beginning of either the consular year or the calendar year.
In addition to consular years, the Romans sometimes used the regnal year of the emperor, and by the late 4th century documents were also being dated according to the 15-year cycle of the indiction. In 537, Justinian required that henceforth the date must include the name of the emperor and his regnal year, in addition to the indiction and the consul, while also allowing the use of local eras. In 309 and 310, and from time to time thereafter, no consuls were appointed. When this happened, the consular date was given a count of years since the last consul (called "post-consular" dating). After 541, only the reigning emperor held the consulate, typically for only one year in his reign, and so post-consular dating became the norm. Similar post-consular dates were also known in the west in the early 6th century. The system of consular dating, long obsolete, was formally abolished in the law code of Leo VI, issued in 888. Only rarely did the Romans number the year from the founding of the city (of Rome), ab urbe condita (AUC). This method was used by Roman historians to determine the number of years from one event to another, not to date a year. Different historians had several different dates for the founding. The Fasti Capitolini, an inscription containing an official list of the consuls which was published by Augustus, used an epoch of 752 BC. The epoch used by Varro, 753 BC, has been adopted by modern historians. Indeed, Renaissance editors often added it to the manuscripts that they published, giving the false impression that the Romans numbered their years. Most modern historians tacitly assume that it began on the day the consuls took office, and ancient documents that use other AUC systems do so in the same way. However, Censorinus, writing in the 3rd century AD, states that, in his time, the AUC year began with the Parilia, celebrated on 21 April, which was regarded as the actual anniversary of the foundation of Rome. Many local eras, such as the Era of Actium and the Spanish Era, were adopted for the Julian calendar or its local equivalent in the provinces and cities of the Roman Empire. Some of these were used for a considerable time. Perhaps the best known is the Era of Martyrs, sometimes also called anno Diocletiani (after Diocletian), which was associated with the Alexandrian calendar and often used by the Alexandrian Christians to number their Easters during the 4th and 5th centuries, and continues to be used by the Coptic and Ethiopian churches. In the eastern Mediterranean, the efforts of Christian chronographers such as Annianus of Alexandria to date the Biblical creation of the world led to the introduction of Anno Mundi eras based on this event. The most important of these was the Etos Kosmou, used throughout the Byzantine world from the 10th century and in Russia until 1700. In the west, the kingdoms succeeding the empire initially used indictions and regnal years, alone or in combination. The chronicler Prosper of Aquitaine, in the fifth century, used an era dated from the Passion of Christ, but this era was not widely adopted. Dionysius Exiguus proposed the system of Anno Domini in 525. This era gradually spread through the western Christian world, once the system was adopted by Bede in the eighth century. The Julian calendar was also used in some Muslim countries. The Rumi calendar, the Julian calendar used in the later years of the Ottoman Empire, adopted an era derived from the lunar AH year equivalent to AD 1840, i.e., the effective Rumi epoch was AD 585.
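As an aside, converting a year numbered from the Varronian epoch (AUC 1 = 753 BC) to BC/AD reckoning is simple arithmetic, provided one remembers that there is no year zero. The helper below is a hypothetical illustration of that arithmetic, not a method drawn from the sources discussed above:

    def auc_to_bc_ad(auc_year: int) -> str:
        # Varronian epoch: AUC 1 = 753 BC, hence AUC 754 = AD 1 (no year zero).
        if auc_year <= 753:
            return f"{754 - auc_year} BC"
        return f"AD {auc_year - 753}"

    assert auc_to_bc_ad(1) == "753 BC"
    assert auc_to_bc_ad(754) == "AD 1"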
In recent years, some users of the Berber calendar have adopted an era starting in 950 BC, the approximate date that the Libyan pharaoh Sheshonq I came to power in Egypt. New Year's Day The Roman calendar began the year on 1 January, and this remained the start of the year after the Julian reform. However, even after local calendars were aligned to the Julian calendar, they started the new year on different dates. The Alexandrian calendar in Egypt started on 29 August (30 August after an Alexandrian leap year). Several local provincial calendars were aligned to start on the birthday of Augustus, 23 September. The indiction caused the Byzantine year, which used the Julian calendar, to begin on 1 September; this date is still used in the Eastern Orthodox Church for the beginning of the liturgical year. When the Julian calendar was adopted in AD 988 by Vladimir I of Kiev, the year was numbered Anno Mundi 6496, beginning on 1 March, six months after the start of the Byzantine Anno Mundi year with the same number. In 1492 (AM 7000), Ivan III, according to church tradition, realigned the start of the year to 1 September, so that AM 7000 only lasted for six months in Russia, from 1 March to 31 August 1492. In Anglo-Saxon England, the year most commonly began on 25 December, which, as (approximately) the winter solstice, had marked the start of the year in pagan times, though 25 March (the equinox) is occasionally documented in the 11th century. Sometimes the start of the year was reckoned as 24 September, the start of the so-called "western indiction" introduced by Bede. These practices changed after the Norman conquest. From 1087 to 1155 the English year began on 1 January, and from 1155 to 1751 it began on 25 March. In 1752 it was moved back to 1 January. (See Calendar [New Style] Act 1750). Even before 1752, 1 January was sometimes treated as the start of the new year – for example by Pepys – while the "year starting 25th March was called the Civil or Legal Year". To reduce misunderstandings about the date, it was not uncommon for a date between 1 January and 24 March to be written as "1661/62". This was to explain to the reader that the year was 1661 counting from March and 1662 counting from January as the start of the year. (For more detail, see Dual dating). Replacement by the Gregorian calendar The Gregorian calendar has replaced the Julian as the civil calendar in all countries which had been using it, with Greece being the last to do so, in 1923. The liturgical calendars used by Christian denominations in the West are almost all based on the Gregorian calendar, but most Eastern Orthodox churches continue to base theirs on the Julian. A calendar similar to the Julian one, the Alexandrian calendar, is the basis for the Ethiopian calendar, which is still the civil calendar of Ethiopia. Egypt converted from the Alexandrian calendar to Gregorian on 1 Thout 1592/11 September 1875. During the changeover between calendars and for some time afterwards, dual dating was used in documents and gave the date according to both systems. In contemporary as well as modern texts that describe events during the period of change, it is customary to clarify to which calendar a given date refers by using an O.S. or N.S. suffix (denoting Old Style, Julian or New Style, Gregorian). Transition history In 1582, Pope Gregory XIII promulgated the Gregorian calendar. Reform was required as the Julian calendar year, with an average length of 365.25 days, was longer than the natural tropical year.
On average, the astronomical solstices and the equinoxes advance by 10.8 minutes per year against the Julian calendar year. As a result, 21 March (which is the base date for calculating the date of Easter) gradually moved out of alignment with the March equinox. Hipparchus, and presumably Sosigenes, were aware of the discrepancy, although not of its correct value; it was evidently felt to be of little importance at the time of the Julian reform (46 BC). However, it accumulated significantly over time: the Julian calendar gained a day every 128 years. By 1582, 21 March was ten days out of alignment with the March equinox, the date at which the equinox was reckoned to have occurred in 325, the year of the Council of Nicaea. Since the Julian and Gregorian calendars were long used simultaneously, although in different places, calendar dates in the transition period are often ambiguous, unless it is specified which calendar was being used. In some circumstances, double dates might be used, one in each calendar. The notation "Old Style" (O.S.) is sometimes used to indicate a date in the Julian calendar, as opposed to "New Style" (N.S.), which either represents the Gregorian date or the Julian date with the start of the year as 1 January. This notation is used to clarify dates from countries that continued to use the Julian calendar after the Gregorian reform, such as Great Britain, which did not adopt the reformed calendar until 1752, or Russia, which did not do so until 1918 (see Soviet calendar). This is why the Russian Revolution of 7 November 1917 N.S. is known as the October Revolution: it began on 25 October O.S. Modern usage Eastern Orthodox Although most Eastern Orthodox countries (most of them in eastern or southeastern Europe) had adopted the Gregorian calendar by 1924, their national churches had not. The "Revised Julian calendar" was endorsed by a synod in Constantinople in May 1923, consisting of a solar part which was and will be identical to the Gregorian calendar until the year 2800, and a lunar part which calculated Easter astronomically at Jerusalem. The Eastern Orthodox churches refused to accept the lunar part, so almost all Orthodox churches continue to celebrate Easter according to the Julian calendar, with the exception of the Finnish Orthodox Church (the Estonian Orthodox Church was also an exception from 1923 to 1945). The Orthodox Churches of Jerusalem, Russia, Serbia, Montenegro, Poland (from 15 June 2014), North Macedonia, Georgia, and the Greek Old Calendarists and other groups continue to use the Julian calendar, thus they celebrate the Nativity on 25 December Julian (which is 7 January Gregorian until 2100). The Russian Orthodox Church has some parishes in the West that celebrate the Nativity on 25 December Gregorian until 2799. The Orthodox Church of Ukraine announced in late May 2023 that it would use the Gregorian calendar to celebrate Christmas on December 25, 2023, partly in response to Russia's invasion of the country in early 2022. Date of Easter Most branches of the Eastern Orthodox Church use the Julian calendar for calculating the date of Easter, upon which the timing of all the other moveable feasts depends. Some such churches have adopted the Revised Julian calendar for the observance of fixed feasts, while other Orthodox churches retain the Julian calendar for all purposes.
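Old Style/New Style conversions like the October Revolution example above can be carried out with standard Julian Day Number arithmetic. The sketch below uses the well-known integer formulas often attributed to Fliegel and Van Flandern; it is offered as an illustration under that assumption, not as code from any source cited here:

    def julian_to_jdn(year: int, month: int, day: int) -> int:
        # Julian Day Number for a date in the Julian (Old Style) calendar.
        a = (14 - month) // 12
        y = year + 4800 - a
        m = month + 12 * a - 3
        return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

    def jdn_to_gregorian(jdn: int) -> tuple[int, int, int]:
        # Invert a Julian Day Number to a Gregorian (New Style) date.
        a = jdn + 32044
        b = (4 * a + 3) // 146097
        c = a - 146097 * b // 4
        d = (4 * c + 3) // 1461
        e = c - 1461 * d // 4
        m = (5 * e + 2) // 153
        day = e - (153 * m + 2) // 5 + 1
        month = m + 3 - 12 * (m // 10)
        year = 100 * b + d - 4800 + m // 10
        return year, month, day

    # 25 October 1917 O.S. -> 7 November 1917 N.S. (the October Revolution)
    assert jdn_to_gregorian(julian_to_jdn(1917, 10, 25)) == (1917, 11, 7)
    # The 13-day offset for 1901-2099: Julian 1 January 2000 -> Gregorian 14 January 2000
    assert jdn_to_gregorian(julian_to_jdn(2000, 1, 1)) == (2000, 1, 14)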
Syriac Christianity The Ancient Assyrian Church of the East, an East Syriac rite that is commonly miscategorised under "eastern Orthodox", uses the Julian calendar; its adherents celebrate Christmas on 7 January Gregorian (which is 25 December Julian). The Assyrian Church of the East, the church from which it split in 1968 (the replacement of the traditional Julian calendar with the Gregorian calendar being among the reasons), has used the Gregorian calendar since the year of the schism. Oriental Orthodox The Armenian Patriarchate of Jerusalem of the Armenian Apostolic Orthodox Church uses the Julian calendar, while the rest of the Armenian Church uses the Gregorian calendar. Both celebrate the Nativity as part of the Feast of Theophany according to their respective calendar. Berbers The Julian calendar is still used by the Berbers of the Maghreb in the form of the Berber calendar. Foula Foula in Shetland, Scotland, a small settlement on a remote island of the archipelago, still celebrates festivities according to the Julian calendar.
https://en.wikipedia.org/wiki/Jurassic
Jurassic
The Jurassic is a geologic period and stratigraphic system that spanned from the end of the Triassic Period, approximately 201.4 million years ago (Mya), to the beginning of the Cretaceous Period, approximately 143.1 Mya. The Jurassic constitutes the middle period of the Mesozoic Era as well as the eighth period of the Phanerozoic Eon and is named after the Jura Mountains, where limestone strata from the period were first identified. The start of the Jurassic was marked by the major Triassic–Jurassic extinction event, associated with the eruption of the Central Atlantic Magmatic Province (CAMP). The Toarcian Age began around 183 million years ago and is marked by the Toarcian Oceanic Anoxic Event, a global episode of oceanic anoxia, ocean acidification, and elevated global temperatures associated with extinctions, likely caused by the eruption of the Karoo-Ferrar large igneous provinces. The end of the Jurassic, however, has no clear, definitive boundary with the Cretaceous and is the only boundary between geological periods to remain formally undefined. By the beginning of the Jurassic, the supercontinent Pangaea had begun rifting into two landmasses: Laurasia to the north and Gondwana to the south. The climate of the Jurassic was warmer than the present, and there were no ice caps. Forests grew close to the poles, with large arid expanses in the lower latitudes. On land, the fauna transitioned from the Triassic fauna, dominated jointly by dinosauromorph and pseudosuchian archosaurs, to one dominated by dinosaurs alone. The first stem-group birds appeared during the Jurassic, evolving from a branch of theropod dinosaurs. Other major events include the appearance of the earliest crabs and modern frogs, salamanders and lizards. Mammaliaformes, one of the few cynodont lineages to survive the end of the Triassic, continued to diversify throughout the period, with the Jurassic seeing the emergence of the first crown group mammals. Crocodylomorphs made the transition from a terrestrial to an aquatic life. The oceans were inhabited by marine reptiles such as ichthyosaurs and plesiosaurs, while pterosaurs were the dominant flying vertebrates. Modern sharks and rays first appeared and diversified during the period, while the first known crown-group teleost fish (the dominant group of modern fish) appeared near the end of the period. The flora was dominated by ferns and gymnosperms, including conifers, of which many modern groups made their first appearance during the period, as well as other groups like the extinct Bennettitales. Etymology and history The chronostratigraphic term "Jurassic" is linked to the Jura Mountains, a forested mountain range that mainly follows the France–Switzerland border. The name "Jura" is derived from a Celtic root, via Gaulish *iuris "wooded mountain", which was borrowed into Latin as a place name and evolved into Juria and finally Jura. During a tour of the region in 1795, German naturalist Alexander von Humboldt recognized carbonate deposits within the Jura Mountains as geologically distinct from the Triassic-aged Muschelkalk of southern Germany, but he erroneously concluded that they were older. He then named them ('Jura limestone') in 1799. In 1829, the French naturalist Alexandre Brongniart published a book entitled Description of the Terrains that Constitute the Crust of the Earth or Essay on the Structure of the Known Lands of the Earth.
In this book, Brongniart used the phrase terrains jurassiques when correlating the "Jura-Kalkstein" of Humboldt with similarly aged oolitic limestones in Britain, thus coining and publishing the term "Jurassic". The German geologist Leopold von Buch in 1839 established the three-fold division of the Jurassic, named from oldest to youngest: the Black Jurassic, Brown Jurassic, and White Jurassic. The term "Lias" had previously been used for strata of equivalent age to the Black Jurassic in England by William Conybeare and William Phillips in 1822. The French palaeontologist Alcide d'Orbigny in papers between 1842 and 1852 divided the Jurassic into ten stages based on ammonite and other fossil assemblages in England and France, of which seven are still used, but none has retained its original definition. The German geologist and palaeontologist Friedrich August von Quenstedt in 1858 divided the three series of von Buch in the Swabian Jura into six subdivisions defined by ammonites and other fossils. The German palaeontologist Albert Oppel in his studies between 1856 and 1858 altered d'Orbigny's original scheme and further subdivided the stages into biostratigraphic zones, based primarily on ammonites. Most of the modern stages of the Jurassic were formalized at the Colloque du Jurassique à Luxembourg in 1962. Geology The Jurassic Period is divided into three epochs: Early, Middle, and Late. Similarly, in stratigraphy, the Jurassic is divided into the Lower Jurassic, Middle Jurassic, and Upper Jurassic series. Geologists divide the rocks of the Jurassic into a stratigraphic set of units called stages, each formed during corresponding time intervals called ages. Stages can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. The ages of the Jurassic from youngest to oldest are as follows: Tithonian, Kimmeridgian and Oxfordian (Late Jurassic); Callovian, Bathonian, Bajocian and Aalenian (Middle Jurassic); and Toarcian, Pliensbachian, Sinemurian and Hettangian (Early Jurassic). Stratigraphy Jurassic stratigraphy is primarily based on the use of ammonites as index fossils. The first appearance datum of specific ammonite taxa is used to mark the beginnings of stages, as well as smaller timespans within stages, referred to as "ammonite zones"; these, in turn, are also sometimes subdivided further into subzones. Global stratigraphy is based on standard European ammonite zones, with other regions being calibrated to the European successions. Early Jurassic The oldest part of the Jurassic Period has historically been referred to as the Lias or Liassic, roughly equivalent in extent to the Early Jurassic, but also including part of the preceding Rhaetian. The Hettangian Stage was named by Swiss palaeontologist Eugène Renevier in 1864 after Hettange-Grande in north-eastern France. The GSSP for the base of the Hettangian is located at the Kuhjoch Pass, Karwendel Mountains, Northern Calcareous Alps, Austria; it was ratified in 2010. The beginning of the Hettangian, and thus the Jurassic as a whole, is marked by the first appearance of the ammonite Psiloceras spelae tirolicum in the Kendlbach Formation exposed at Kuhjoch.
The base of the Jurassic was previously defined as the first appearance of Psiloceras planorbis by Albert Oppel in 1856–58, but this was changed as the appearance was seen as too localised an event for an international boundary. The Sinemurian Stage was first defined and introduced into scientific literature by Alcide d'Orbigny in 1842. It takes its name from the French town of Semur-en-Auxois, near Dijon. The original definition of Sinemurian included what is now the Hettangian. The GSSP of the Sinemurian is located at a cliff face north of the hamlet of East Quantoxhead, 6 kilometres east of Watchet, Somerset, England, within the Blue Lias, and was ratified in 2000. The beginning of the Sinemurian is defined by the first appearance of the ammonite Vermiceras quantoxense. Albert Oppel in 1858 named the Pliensbachian Stage after the hamlet of Pliensbach in the community of Zell unter Aichelberg in the Swabian Alb, near Stuttgart, Germany. The GSSP for the base of the Pliensbachian is found at the Wine Haven locality in Robin Hood's Bay, Yorkshire, England, in the Redcar Mudstone Formation, and was ratified in 2005. The beginning of the Pliensbachian is defined by the first appearance of the ammonite Bifericeras donovani. The village of Thouars (Latin: Toarcium), just south of Saumur in the Loire Valley of France, lends its name to the Toarcian Stage. The Toarcian was named by Alcide d'Orbigny in 1842, with the original locality being Vrines quarry around 2 km northwest of Thouars. The GSSP for the base of the Toarcian is located at Peniche, Portugal, and was ratified in 2014. The boundary is defined by the first appearance of ammonites belonging to the subgenus Dactylioceras (Eodactylites). Middle Jurassic The Aalenian is named after the city of Aalen in Germany. The Aalenian was defined by Swiss geologist Karl Mayer-Eymar in 1864. The lower boundary was originally between the dark clays of the Black Jurassic and the overlying clayey sandstone and ferruginous oolite of the Brown Jurassic sequences of southwestern Germany. The GSSP for the base of the Aalenian is located at Fuentelsaz in the Iberian range near Guadalajara, Spain, and was ratified in 2000. The base of the Aalenian is defined by the first appearance of the ammonite Leioceras opalinum. Alcide d'Orbigny in 1842 named the Bajocian Stage after the town of Bayeux (Latin: Bajoce) in Normandy, France. The GSSP for the base of the Bajocian is located in the Murtinheira section at Cabo Mondego, Portugal; it was ratified in 1997. The base of the Bajocian is defined by the first appearance of the ammonite Hyperlioceras mundum. The Bathonian, named after the city of Bath, England, was introduced by the Belgian geologist d'Omalius d'Halloy in 1843, after an incomplete section of oolitic limestones in several quarries in the region. The GSSP for the base of the Bathonian is Ravin du Bès, Bas-Auran area, Alpes de Haute Provence, France; it was ratified in 2009. The base of the Bathonian is defined by the first appearance of the ammonite Gonolkites convergens, at the base of the Zigzagiceras zigzag ammonite zone. The Callovian is derived from the Latinized name of the village of Kellaways in Wiltshire, England, and was named by Alcide d'Orbigny in 1852, with the base originally placed at the contact between the Forest Marble Formation and the Cornbrash Formation. However, this boundary was later found to be within the upper part of the Bathonian. The base of the Callovian does not yet have a certified GSSP.
The working definition for the base of the Callovian is the first appearance of ammonites belonging to the genus Kepplerites. Late Jurassic The Oxfordian is named after the city of Oxford in England and was named by Alcide d'Orbigny in 1844 in reference to the Oxford Clay. The base of the Oxfordian lacks a defined GSSP. W. J. Arkell in studies in 1939 and 1946 placed the lower boundary of the Oxfordian as the first appearance of the ammonite Quenstedtoceras mariae (then placed in the genus Vertumniceras). Subsequent proposals have suggested the first appearance of Cardioceras redcliffense as the lower boundary. The village of Kimmeridge on the coast of Dorset, England, is the origin of the name of the Kimmeridgian. The stage was named by Alcide d'Orbigny in 1842 in reference to the Kimmeridge Clay. The GSSP for the base of the Kimmeridgian is the Flodigarry section at Staffin Bay on the Isle of Skye, Scotland, which was ratified in 2021. The boundary is defined by the first appearance of ammonites marking the boreal Bauhini Zone and the subboreal Baylei Zone. The Tithonian was introduced in scientific literature by Albert Oppel in 1865. The name Tithonian is unusual in geological stage names because it is derived from Greek mythology rather than a place name. Tithonus was the son of Laomedon of Troy and fell in love with Eos, the Greek goddess of dawn. His name was chosen by Albert Oppel for this stratigraphical stage because the Tithonian finds itself hand in hand with the dawn of the Cretaceous. The base of the Tithonian currently lacks a GSSP. The working definition for the base of the Tithonian is the first appearance of the ammonite genus Gravesia. The upper boundary of the Jurassic is currently undefined, and the Jurassic–Cretaceous boundary is currently the only system boundary to lack a defined GSSP. Placing a GSSP for this boundary has been difficult because of the strong regionality of most biostratigraphic markers, and lack of any chemostratigraphic events, such as isotope excursions (large sudden changes in ratios of isotopes), that could be used to define or correlate a boundary. Calpionellids, an enigmatic group of planktonic protists with urn-shaped calcitic tests, briefly abundant during the latest Jurassic to earliest Cretaceous, have been suggested to represent the most promising candidates for fixing the Jurassic–Cretaceous boundary. In particular, the first appearance of Calpionella alpina, coinciding with the base of the eponymous Alpina subzone, has been proposed as the definition of the base of the Cretaceous. The working definition for the boundary has often been placed as the first appearance of the ammonite Strambergella jacobi, formerly placed in the genus Berriasella, but its use as a stratigraphic indicator has been questioned, as its first appearance does not correlate with that of C. alpina. Mineral and hydrocarbon deposits The Kimmeridge Clay and equivalents are the major source rock for the North Sea oil. The Arabian Intrashelf Basin, deposited during the Middle and Late Jurassic, is the setting of the world's largest oil reserves, including the Ghawar Field, the world's largest oil field. The Jurassic-aged Sargelu and Naokelekan formations are major source rocks for oil in Iraq. Over 1500 gigatons of Jurassic coal reserves are found in north-west China, primarily in the Turpan-Hami Basin and the Ordos Basin.
Impact structures Major impact structures include the Morokweng impact structure, a 70 km diameter impact structure buried beneath the Kalahari desert in northern South Africa. The impact is dated to the Tithonian, approximately 146.06 ± 0.16 Mya. Another major structure is the Puchezh-Katunki crater, 40 kilometres in diameter, buried beneath Nizhny Novgorod Oblast in western Russia. The impact has been dated to the Sinemurian, 195.9 ± 1.0 Ma. Paleogeography and tectonics At the beginning of the Jurassic, all of the world's major landmasses were coalesced into the supercontinent Pangaea, which during the Early Jurassic began to break up into the northern supercontinent Laurasia and the southern supercontinent Gondwana. The rifting between North America and Africa was the first to initiate, beginning in the early Jurassic, associated with the emplacement of the Central Atlantic Magmatic Province. During the Jurassic, the North Atlantic Ocean remained relatively narrow, while the South Atlantic did not open until the Cretaceous. The continents were surrounded by Panthalassa, with the Tethys Ocean between Gondwana and Asia. At the end of the Triassic, there was a marine transgression in Europe, flooding most parts of central and western Europe and transforming the region into an archipelago of islands surrounded by shallow seas. During the Jurassic, both the North and South Poles were covered by oceans. Beginning in the Early Jurassic, the Boreal Ocean was connected to the proto-Atlantic by the "Viking corridor" or Transcontinental Laurasian Seaway, a passage between the Baltic Shield and Greenland several hundred kilometers wide. During the Callovian, the Turgai Epicontinental Sea formed, creating a marine barrier between Europe and Asia. Madagascar and Antarctica began to rift away from Africa during the late Early Jurassic in association with the eruption of the Karoo-Ferrar large igneous provinces, opening the western Indian Ocean and beginning the fragmentation of Gondwana. At the beginning of the Jurassic, North and South America remained connected, but by the beginning of the Late Jurassic they had rifted apart to form the Caribbean Seaway, also known as the Hispanic Corridor, which connected the North Atlantic Ocean with eastern Panthalassa. Palaeontological data suggest that the seaway had been open since the Early Jurassic. As part of the Nevadan orogeny, which began during the Triassic, the Cache Creek Ocean closed, and various terranes including the large Wrangellia Terrane accreted onto the western margin of North America. By the Middle Jurassic the Siberian plate and the North China-Amuria block had collided, resulting in the closure of the Mongol-Okhotsk Ocean. During the Early Jurassic, around 190 million years ago, the Pacific Plate originated at the triple junction of the Farallon, Phoenix, and Izanagi tectonic plates, the three main oceanic plates of Panthalassa. The previously stable triple junction had converted to an unstable arrangement surrounded on all sides by transform faults because of a kink in one of the plate boundaries, resulting in the formation of the Pacific Plate at the centre of the junction. During the Middle to early Late Jurassic, the Sundance Seaway, a shallow epicontinental sea, covered much of northwest North America. The eustatic sea level is estimated to have been close to present levels during the Hettangian and Sinemurian, rising several tens of metres during the late Sinemurian–Pliensbachian before regressing to near present levels by the late Pliensbachian.
There seems to have been a gradual rise to a peak of ~75 m above present sea level during the Toarcian. During the latest part of the Toarcian, the sea level again dropped by several tens of metres. It progressively rose from the Aalenian onwards, aside from dips of a few tens of metres in the Bajocian and around the Callovian–Oxfordian boundary, peaking possibly as high as 140 metres above present sea level at the Kimmeridgian–Tithonian boundary. Sea level fell in the late Tithonian, perhaps to around 100 metres, before rebounding to around 110 metres at the Tithonian–Berriasian boundary. The sea level within the long-term trends across the Jurassic was cyclical, with 64 fluctuations, 15 of which were over 75 metres. The most noted cyclicity in Jurassic rocks is fourth order, with a periodicity of approximately 410,000 years. During the Early Jurassic the world's oceans transitioned from an aragonite sea to a calcite sea chemistry, favouring the dissolution of aragonite and precipitation of calcite. The rise of calcareous plankton during the Middle Jurassic profoundly altered ocean chemistry, with the deposition of biomineralized plankton on the ocean floor acting as a buffer against large CO2 emissions. Climate The climate of the Jurassic was generally warmer than that of the present, with atmospheric carbon dioxide likely about four times higher. However, intermittent "cold snap" intervals are known to have interrupted the otherwise warm greenhouse climate. Forests likely grew near the poles, where they experienced warm summers and cold, sometimes snowy winters; there were unlikely to have been ice sheets given the high summer temperatures that prevented the accumulation of snow, though there may have been mountain glaciers. Dropstones and glendonites in northeastern Siberia during the Early to Middle Jurassic indicate cold winters. The ocean depths were likely warmer than present, and coral reefs grew 10° of latitude further north and south. The Intertropical Convergence Zone likely existed over the oceans, resulting in large areas of desert and scrubland in the lower latitudes between 40° N and S of the equator. Tropical rainforest and tundra biomes are likely to have been rare or absent. The Jurassic also witnessed the decline of the Pangaean megamonsoon that had characterised the preceding Permian and Triassic periods. Variation in the frequency of wildfire activity in the Jurassic was governed by the 405 kyr eccentricity cycle. Thanks to the breakup of Pangaea, the hydrological cycle during the Jurassic was significantly enhanced. The beginning of the Jurassic was likely marked by a thermal spike corresponding to the Triassic–Jurassic extinction and eruption of the Central Atlantic magmatic province. The first part of the Jurassic was marked by the Early Jurassic Cool Interval between 199 and 183 million years ago. It has been proposed that glaciation was present in the Northern Hemisphere during both the early Pliensbachian and the latest Pliensbachian. There was a spike in global temperatures during the early part of the Toarcian corresponding to the Toarcian Oceanic Anoxic Event and the eruption of the Karoo-Ferrar large igneous provinces in southern Gondwana, with the warm interval extending to the end of the Toarcian around 174 million years ago.
During the Toarcian Warm Interval, ocean surface temperatures were likely exceptionally high, and equatorial and subtropical (30°N–30°S) regions are likely to have been extremely arid, with extreme heat in the interior of Pangaea. The Toarcian Warm Interval was followed by the Middle Jurassic Cool Interval (MJCI) between 174 and 164 million years ago, which may have been punctuated by brief, ephemeral icehouse intervals. During the Aalenian, precessionally forced climatic changes dictated peatland wildfire magnitude and frequency. The European climate appears to have become noticeably more humid at the Aalenian-Bajocian boundary but then became more arid during the middle Bajocian. A transient ice age possibly occurred in the late Bajocian. The Callovian-Oxfordian boundary at the end of the MJCI witnessed particularly notable global cooling, potentially even an ice age. This was followed by the Kimmeridgian Warm Interval (KWI) between 164 and 150 million years ago. Based on fossil wood distribution, this was one of the wettest intervals of the Jurassic. The Pangaean interior had less severe seasonal swings than in previous warm periods as the expansion of the Central Atlantic and Western Indian Ocean provided new sources of moisture. A prominent drop in temperatures occurred during the Tithonian, known as the Early Tithonian Cooling Event (ETCE). The end of the Jurassic was marked by the Tithonian–early Barremian Cool Interval (TBCI), beginning 150 million years ago and continuing into the Early Cretaceous. Climatic events Toarcian Oceanic Anoxic Event The Toarcian Oceanic Anoxic Event (TOAE), also known as the Jenkyns Event, was an episode of widespread oceanic anoxia during the early part of the Toarcian Age, c. 183 Mya. It is marked by a globally documented high amplitude negative carbon isotope excursion, as well as the deposition of black shales and the extinction and collapse of carbonate-producing marine organisms, associated with a major rise in global temperatures. The TOAE is often attributed to the eruption of the Karoo-Ferrar large igneous provinces and the associated increase of carbon dioxide concentration in the atmosphere, as well as the possible associated release of methane clathrates. This likely accelerated the hydrological cycle and increased silicate weathering, as evidenced by an increased amount of organic matter of terrestrial origin found in marine deposits during the TOAE. Groups affected include ammonites, ostracods, foraminifera, bivalves, cnidarians, and especially brachiopods, for which the TOAE represented one of the most severe extinctions in their evolutionary history. While the event had significant impact on marine invertebrates, it had little effect on marine reptiles. During the TOAE, the Sichuan Basin was transformed into a giant lake, probably three times the size of modern-day Lake Superior, represented by the Da'anzhai Member of the Ziliujing Formation. The lake likely sequestered ~460 gigatons (Gt) of organic carbon and ~1,200 Gt of inorganic carbon during the event. Seawater pH, which had already substantially decreased prior to the event, increased slightly during the early stages of the TOAE, before dropping to its lowest point around the middle of the event. This ocean acidification is the probable cause of the collapse of carbonate production.
Additionally, anoxic conditions were exacerbated by enhanced recycling of phosphorus back into ocean water, as high ocean acidity and temperature inhibited its mineralisation into apatite; the abundance of phosphorus in marine environments caused further eutrophication and consequent anoxia in a positive feedback loop. End-Jurassic transition The end-Jurassic transition was originally considered one of eight mass extinctions, but is now considered to be a complex interval of faunal turnover, with some groups increasing in diversity and others declining, though the evidence for this is primarily European and probably controlled by changes in eustatic sea level. Flora End-Triassic extinction There is no evidence of a mass extinction of plants at the Triassic–Jurassic boundary. At the Triassic–Jurassic boundary in Greenland, the sporomorph (pollen and spores) record suggests a complete floral turnover. An analysis of macrofossil floral communities in Europe suggests that changes were mainly due to local ecological succession. At the end of the Triassic, the Peltaspermaceae became extinct in most parts of the world, with Lepidopteris persisting into the Early Jurassic in Patagonia. Dicroidium, a corystosperm seed fern that was a dominant part of Gondwanan floral communities during the Triassic, also declined at the Triassic–Jurassic boundary, surviving as a relict in Antarctica into the Early Jurassic. Floral composition Conifers Conifers formed a dominant component of Jurassic floras. The Late Triassic and Jurassic were a major time of diversification for conifers, with most modern conifer groups appearing in the fossil record by the end of the Jurassic, having evolved from voltzialean ancestors. Araucarian conifers have their first unambiguous records during the Early Jurassic, and members of the modern genus Araucaria were widespread across both hemispheres by the Middle Jurassic. Also abundant during the Jurassic was the extinct family Cheirolepidiaceae, often recognised through their highly distinctive Classopolis pollen. Jurassic representatives include the pollen cone Classostrobus and the seed cone Pararaucaria. Araucarian and Cheirolepidiaceae conifers often occur in association. The oldest definitive record of the cypress family (Cupressaceae) is Austrohamia minuta from the Early Jurassic (Pliensbachian) of Patagonia, known from many parts of the plant. The reproductive structures of Austrohamia have strong similarities to those of the primitive living cypress genera Taiwania and Cunninghamia. By the Middle to Late Jurassic, Cupressaceae were abundant in warm temperate–tropical regions of the Northern Hemisphere, most abundantly represented by the genus Elatides. The Jurassic also saw the first appearances of some modern genera of cypresses, such as Sequoia. Members of the extinct genus Schizolepidopsis, which likely represent a stem group of the pine family (Pinaceae), were widely distributed across Eurasia during the Jurassic. The oldest unambiguous record of Pinaceae is the pine cone Eathiestrobus, known from the Late Jurassic (Kimmeridgian) of Scotland, which remains the only known unequivocal fossil of the group before the Cretaceous. Despite being the earliest known member of the Pinaceae, Eathiestrobus appears to be a member of the pinoid clade of the family, suggesting that the initial diversification of Pinaceae occurred earlier than has been found in the fossil record. 
The earliest record of the yew family (Taxaceae) is Palaeotaxus rediviva, from the Hettangian of Sweden, suggested to be closely related to the living Austrotaxus, while Marskea jurassica from the Middle Jurassic of Yorkshire, England, and material from the Callovian–Oxfordian Daohugou Bed in China are thought to be closely related to Amentotaxus, with the latter material assigned to the modern genus, indicating that Taxaceae had substantially diversified by the end of the Jurassic. The oldest unambiguous members of Podocarpaceae are known from the Jurassic, found across both hemispheres, including Scarburgia and Harrisiocarpus from the Middle Jurassic of England, as well as unnamed species from the Middle-Late Jurassic of Patagonia. During the Early Jurassic, the flora of the mid-latitudes of Eastern Asia was dominated by the extinct deciduous broad-leafed conifer Podozamites, which appears to not be closely related to any living family of conifer. Its range extended northwards into polar latitudes of Siberia and then contracted northward in the Middle to Late Jurassic, corresponding to the increasing aridity of the region. Ginkgoales Ginkgoales, of which the sole living species is Ginkgo biloba, were more diverse during the Jurassic: they were among the most important components of Eurasian Jurassic floras and were adapted to a wide variety of climatic conditions. The earliest representatives of the genus Ginkgo, documented by ovulate and pollen organs similar to those of the modern species, are known from the Middle Jurassic in the Northern Hemisphere. Several other lineages of ginkgoaleans are known from Jurassic rocks, including Yimaia, Grenana, Nagrenia and Karkenia. These lineages are associated with Ginkgo-like leaves, but are distinguished from living and fossil representatives of Ginkgo by having differently arranged reproductive structures. Umaltolepis from the Jurassic of Asia, which has strap-shaped ginkgo-like leaves and highly distinctive reproductive structures with similarities to those of peltasperm and corystosperm seed ferns, has been suggested to be a member of Ginkgoales sensu lato. Bennettitales Bennettitales, having first become widespread during the preceding Triassic, were diverse and abundant members of Jurassic floras across both hemispheres. The foliage of Bennettitales bears strong similarities to that of cycads, to such a degree that the two cannot be reliably distinguished on the basis of morphology alone. Leaves of Bennettitales can be distinguished from those of cycads by their different arrangement of stomata, and the two groups are not thought to be closely related. Jurassic Bennettitales predominantly belong to the group Williamsoniaceae, which grew as shrubs and small trees. The Williamsoniaceae are thought to have had a divaricate branching habit, similar to that of living Banksia, and to have been adapted to growing in open habitats with poor soil nutrient conditions. Bennettitales exhibit complex, flower-like reproductive structures, some of which are thought to have been pollinated by insects. Several groups of insects that bear long proboscises, including extinct families such as kalligrammatid lacewings and extant ones such as acrocerid flies, are suggested to have been pollinators of Bennettitales, feeding on nectar produced by bennettitalean cones. Cycads Cycads reached their apex of diversity during the Jurassic and Cretaceous Periods. 
Despite the Mesozoic sometimes being called the "Age of Cycads", cycads are thought to have been a relatively minor component of mid-Mesozoic floras, with the Bennettitales and Nilssoniales, which have cycad-like foliage, being dominant. The Nilssoniales have often been considered cycads or cycad relatives, but have been found to be distinct on chemical grounds, and perhaps more closely allied with Bennettitales. The relationships of most Mesozoic cycads to living groups are ambiguous, with no Jurassic cycads belonging to either of the two modern groups of cycads, though some Jurassic cycads possibly represent stem-group relatives of modern Cycadaceae, like the leaf genus Paracycas known from Europe, and Zamiaceae, like some European species of the leaf genus Pseudoctenis. Also widespread during the Jurassic was the extinct Ctenis lineage, which appears to be distantly related to modern cycads. Modern cycads are pollinated by beetles, and such an association is thought to have formed by the Early Jurassic. Other seed plants Although there have been several claimed records, there are no widely accepted Jurassic fossil records of flowering plants, which make up 90% of living plant species, and fossil evidence suggests that the group diversified during the following Cretaceous. The earliest known gnetophytes, one of the four main living groups of gymnosperms, appeared by the end of the Jurassic, with the oldest unequivocal gnetophyte being the seed Dayvaultia from the Late Jurassic of North America. "Seed ferns" (Pteridospermatophyta) is a collective term for disparate lineages of fern-like plants that produced seeds but have uncertain affinities to living seed plant groups. A prominent group of Jurassic seed ferns is the Caytoniales, which reached their zenith during the Jurassic, with widespread records in the Northern Hemisphere, though records in the Southern Hemisphere remain rare. Due to their berry-like seed-bearing capsules, they have often been suggested to have been closely related or perhaps ancestral to flowering plants, but the evidence for this is inconclusive. Corystosperm-aligned seed ferns, such as Pachypteris and Komlopteris, were widespread across both hemispheres during the Jurassic. Czekanowskiales, also known as Leptostrobales, are a group of seed plants of uncertain affinities, with persistent, heavily dissected leaves borne on deciduous short shoots, subtended by scale-like leaves, known from the Late Triassic (possibly Late Permian) to Cretaceous. They are thought to have had a tree- or shrub-like habit and formed a conspicuous component of Northern Hemisphere Mesozoic temperate and warm-temperate floras. The genus Phoenicopsis was widespread in Early-Middle Jurassic floras of Eastern Asia and Siberia. The Pentoxylales, a small but clearly distinct group of liana-like seed plants of obscure affinities, first appeared during the Jurassic. Their distribution appears to have been confined to Eastern Gondwana. Ferns and allies Living families of ferns widespread during the Jurassic include Dipteridaceae, Matoniaceae, Gleicheniaceae, Osmundaceae and Marattiaceae. Polypodiales, which make up 80% of living fern diversity, have no record from the Jurassic and are thought to have diversified in the Cretaceous, though the widespread Jurassic herbaceous fern genus Coniopteris, historically interpreted as a close relative of tree ferns of the family Dicksoniaceae, has recently been reinterpreted as an early relative of the group. 
The Cyatheales, the group containing most modern tree ferns, appeared during the Late Jurassic, represented by members of the genus Cyathocaulis, which are suggested to be early members of Cyatheaceae on the basis of cladistic analysis. Only a handful of possible records of the Hymenophyllaceae exist from the Jurassic, including Hymenophyllites macrosporangiatus from the Russian Jurassic. The oldest remains of modern horsetails of the genus Equisetum first appear in the Early Jurassic, represented by Equisetum dimorphum from the Early Jurassic of Patagonia and Equisetum laterale from the Early to Middle Jurassic of Australia. Silicified remains of Equisetum thermale from the Late Jurassic of Argentina exhibit all the morphological characters of modern members of the genus. The split between Equisetum bogotense and all other living Equisetum is estimated to have occurred no later than the Early Jurassic. Lower plants Quillworts virtually identical to modern species are known from the Jurassic onwards. Isoetites rolandii from the Middle Jurassic of Oregon is the earliest known species to represent all major morphological features of modern Isoetes. More primitive forms such as Nathorstiana, which retain an elongated stem, persisted into the Early Cretaceous. The moss Kulindobryum from the Middle Jurassic of Russia, which was found associated with dinosaur bones, is thought to be related to the Splachnaceae, which grow on animal carcasses. Bryokhutuliinia from the same region is thought to be related to Dicranales. Heinrichsiella from the Jurassic of Patagonia is thought to belong to either Polytrichaceae or Timmiellaceae. The liverwort Pellites hamiensis from the Middle Jurassic Xishanyao Formation of China is the oldest record of the family Pelliaceae. Pallaviciniites sandaolingensis from the same deposit is thought to belong to the subclass Pallaviciniineae within the Pallaviciniales. Ricciopsis sandaolingensis, also from the same deposit, is the only Jurassic record of Ricciaceae. Fauna Reptiles Crocodylomorphs The Triassic–Jurassic extinction decimated pseudosuchian diversity, with crocodylomorphs, which originated during the early Late Triassic, being the only group of pseudosuchians to survive. All other pseudosuchians, including the herbivorous aetosaurs and carnivorous "rauisuchians", became extinct. The morphological diversity of crocodylomorphs during the Early Jurassic was around the same as that of Late Triassic pseudosuchians, but they occupied different areas of morphospace, suggesting that they filled different ecological niches from their Triassic counterparts and that there was an extensive and rapid radiation of crocodylomorphs during this interval. While living crocodilians are mostly confined to an aquatic ambush predator lifestyle, Jurassic crocodylomorphs exhibited a wide variety of life habits. An unnamed protosuchid known from teeth from the Early Jurassic of Arizona represents the earliest known herbivorous crocodylomorph, an adaptation that appeared several times during the Mesozoic. The Thalattosuchia, a clade of predominantly marine crocodylomorphs, first appeared during the Early Jurassic and became a prominent part of marine ecosystems. Within Thalattosuchia, the Metriorhynchidae became highly adapted for life in the open ocean, including the transformation of limbs into flippers, the development of a tail fluke, and smooth, scaleless skin. 
The morphological diversity of crocodylomorphs during the Early and Middle Jurassic was relatively low compared to that in later time periods and was dominated by small-bodied, long-legged terrestrial sphenosuchians, early crocodyliforms and thalattosuchians. The Neosuchia, a major group of crocodylomorphs, first appeared during the Early to Middle Jurassic. The Neosuchia represents the transition from an ancestrally terrestrial lifestyle to a freshwater aquatic ecology similar to that occupied by modern crocodilians. The timing of the origin of Neosuchia is disputed. The oldest record of neosuchians has been suggested to be Calsoyasuchus, from the Early Jurassic of Arizona, which in many analyses has been recovered as the earliest branching member of the neosuchian family Goniopholididae, a placement that would radically alter the inferred timing of crocodylomorph diversification. However, this placement has been disputed, with some analyses finding it outside Neosuchia, which would place the oldest records of Neosuchia in the Middle Jurassic. Razanandrongobe from the Middle Jurassic of Madagascar has been suggested to represent the oldest record of Notosuchia, a primarily Gondwanan clade of mostly terrestrial crocodylomorphs, otherwise known from the Cretaceous and Cenozoic. Turtles Stem-group turtles (Testudinata) diversified during the Jurassic. Jurassic stem-turtles belong to two progressively more advanced clades, the Mesochelydia and Perichelydia. It is thought that the ancestral condition for mesochelydians is aquatic, as opposed to terrestrial for testudinates. The two modern groups of turtles (Testudines), Pleurodira and Cryptodira, diverged by the beginning of the Late Jurassic. The oldest known pleurodires, the Platychelyidae, are known from the Late Jurassic of Europe and the Americas, while the oldest unambiguous cryptodire, Sinaspideretes, an early relative of softshell turtles, is known from the Late Jurassic of China. The Thalassochelydia, a diverse lineage of marine turtles unrelated to modern sea turtles, are known from the Late Jurassic of Europe and South America. Lepidosaurs Rhynchocephalians (the sole living representative being the tuatara) had achieved a global distribution by the beginning of the Jurassic, and were the dominant group of small reptiles during this period. Rhynchocephalians reached their highest morphological diversity in their evolutionary history during the Jurassic, occupying a wide range of lifestyles, including the aquatic pleurosaurs with long snake-like bodies and reduced limbs, the specialized herbivorous eilenodontines, as well as the sapheosaurs, which had broad tooth plates indicative of durophagy. Rhynchocephalians disappeared from Asia after the Early Jurassic. The last common ancestor of living squamates (which include lizards and snakes) is estimated to have lived around 190 million years ago during the Early Jurassic, with the major divergences between modern squamate lineages estimated to have occurred during the Early to Middle Jurassic. Squamates first appear in the fossil record during the Middle Jurassic, including members of modern clades such as Scincomorpha, though many Jurassic squamates have unclear relationships to living groups. Eichstaettisaurus from the Late Jurassic of Germany has been suggested to be an early relative of geckos and displays adaptations for climbing. Dorsetisaurus from the Late Jurassic of North America and Europe represents the oldest widely accepted record of Anguimorpha. 
Marmoretta from the Middle Jurassic of Britain has been suggested to represent a late-surviving lepidosauromorph outside both Rhynchocephalia and Squamata, though some studies have recovered it as a stem-squamate. Choristoderes The earliest known remains of Choristodera, a group of freshwater aquatic reptiles with uncertain affinities to other reptile groups, are found in the Middle Jurassic. Only two genera of choristodere are known from the Jurassic. One is the small lizard-like Cteniogenys, thought to be the most basal known choristodere; it is known from the Middle to Late Jurassic of Europe and Late Jurassic of North America, with similar remains also known from the upper Middle Jurassic of Kyrgyzstan and western Siberia. The other is Coeruleodraco from the Late Jurassic of China, which is a more advanced choristodere, though still small and lizard-like in morphology. Ichthyosaurs Ichthyosaurs suffered an evolutionary bottleneck during the end-Triassic extinction, with all non-neoichthyosaurians becoming extinct. Ichthyosaurs reached their apex of species diversity during the Early Jurassic, with an array of morphologies including the huge apex predator Temnodontosaurus and the swordfish-like Eurhinosaurus, though Early Jurassic ichthyosaurs were significantly less morphologically diverse than their Triassic counterparts. At the Early–Middle Jurassic boundary, between the end of the Toarcian and the beginning of the Bajocian, most lineages of ichthyosaur appear to have become extinct, with the first appearance of the Ophthalmosauridae, the clade that would encompass almost all ichthyosaurs from then on, during the early Bajocian. Ophthalmosaurids were diverse by the Late Jurassic, but failed to fill many of the niches that had been occupied by ichthyosaurs during the Early Jurassic. Plesiosaurs Plesiosaurs originated at the end of the Triassic (Rhaetian). By the end of the Triassic, all other sauropterygians, including placodonts and nothosaurs, had become extinct. At least six lineages of plesiosaur crossed the Triassic–Jurassic boundary. Plesiosaurs were already diverse in the earliest Jurassic, with the majority of plesiosaurs in the Hettangian-aged Blue Lias belonging to the Rhomaleosauridae. Early plesiosaurs were generally small-bodied, with body size increasing into the Toarcian. There appears to have been a strong turnover around the Early–Middle Jurassic boundary, with microcleidids and rhomaleosaurids becoming extinct and nearly extinct, respectively, after the end of the Toarcian, and with the first appearance during the Bajocian of the Cryptoclididae, the dominant clade of plesiosaurs of the latter half of the Jurassic. The Middle Jurassic saw the evolution of short-necked and large-headed thalassophonean pliosaurs from ancestrally small-headed, long-necked forms. Some thalassophonean pliosaurs, such as some species of Pliosaurus, had skulls up to two metres in length and body lengths estimated at around 10–12 metres (32–39 ft), making them the apex predators of Late Jurassic oceans. Plesiosaurs invaded freshwater environments during the Jurassic, with indeterminate remains of small-bodied plesiosaurs known from freshwater sediments from the Jurassic of China and Australia. Pterosaurs Pterosaurs first appeared in the Late Triassic. A major radiation of Jurassic pterosaurs is the Rhamphorhynchidae, which first appeared in the late Early Jurassic (Toarcian); they are thought to have been piscivorous. 
Anurognathids, which first appeared in the Middle Jurassic, possessed short heads and densely furred bodies, and are thought to have been insectivores. Derived monofenestratan pterosaurs such as wukongopterids appeared in the late Middle Jurassic. Advanced short-tailed pterodactyloids first appeared at the Middle–Late Jurassic boundary. Jurassic pterodactyloids include the ctenochasmatids, like Ctenochasma, which have closely spaced needle-like teeth that were presumably used for filter feeding. The bizarre Late Jurassic ctenochasmatoid Cycnorhamphus had teeth only at the tips of its jaws, which were bent like those of living openbill storks and may have been used to hold and crush hard invertebrates. Dinosaurs Dinosaurs, which had morphologically diversified in the Late Triassic, experienced a major increase in diversity and abundance during the Early Jurassic in the aftermath of the end-Triassic extinction and the extinction of other reptile groups, becoming the dominant vertebrates in terrestrial ecosystems. Chilesaurus, a morphologically aberrant herbivorous dinosaur from the Late Jurassic of South America, has uncertain relationships to the three main groups of dinosaurs, having been recovered as a member of all three in different analyses. Theropods Advanced theropods belonging to Neotheropoda first appeared in the Late Triassic. Basal neotheropods, such as coelophysoids and dilophosaurs, persisted into the Early Jurassic, but became extinct by the Middle Jurassic. The earliest averostrans appear during the Early Jurassic, with the earliest known member of Ceratosauria being Saltriovenator from the early Sinemurian (199.3–197.5 million years ago) of Italy. The unusual ceratosaur Limusaurus from the Late Jurassic of China had a herbivorous diet, with adults having edentulous beaked jaws, making it the earliest known theropod to have shifted from an ancestrally carnivorous diet. The earliest members of the Tetanurae appeared during the late Early Jurassic or early Middle Jurassic. The Megalosauridae represent the oldest radiation of the Tetanurae, first appearing in Europe during the Bajocian. The oldest member of Allosauroidea has been suggested to be Asfaltovenator from the Middle Jurassic of South America. Coelurosaurs first appeared during the Middle Jurassic, including early tyrannosaurs such as Proceratosaurus from the Bathonian of Britain. Some coelurosaurs from the Late Jurassic of China, including Shishugounykus and Haplocheirus, have been suggested to represent early alvarezsaurs, though this has been questioned. Scansoriopterygids, a group of small feathered coelurosaurs with membranous, bat-like wings for gliding, are known from the Middle to Late Jurassic of China. The oldest record of troodontids is suggested to be Hesperornithoides from the Late Jurassic of North America. Tooth remains suggested to represent those of dromaeosaurs are known from the Jurassic, but no body remains are known until the Cretaceous. Birds The earliest avialans, which include birds and their ancestors, appear during the Middle to Late Jurassic, definitively represented by Archaeopteryx from the Late Jurassic of Germany. Avialans belong to the clade Paraves within Coelurosauria, which also includes dromaeosaurs and troodontids. The Anchiornithidae from the Middle-Late Jurassic of Eurasia have frequently been suggested to be avialans, but have alternatively been recovered as a separate lineage of paravians. 
Ornithischians The earliest definitive ornithischians appear during the Early Jurassic, represented by basal ornithischians like Lesothosaurus, heterodontosaurids, and early members of Thyreophora. The earliest members of Ankylosauria and Stegosauria appear during the Middle Jurassic. The basal neornithischian Kulindadromeus from the Middle Jurassic of Russia indicates that at least some ornithischians were covered in protofeathers. The earliest members of Ankylopollexia, which became prominent in the Cretaceous, appeared during the Late Jurassic, represented by bipedal forms such as Camptosaurus. Ceratopsians first appeared in the Late Jurassic of China, represented by members of Chaoyangsauridae. Sauropodomorphs Sauropods became the dominant large herbivores in terrestrial ecosystems during the Jurassic. Some Jurassic sauropods reached gigantic sizes, becoming the largest organisms to have ever lived on land. Basal bipedal sauropodomorphs, such as massospondylids, continued to exist into the Early Jurassic, but became extinct by the beginning of the Middle Jurassic. Quadrupedal sauropodomorphs appeared during the Late Triassic. The quadrupedal Ledumahadi from the earliest Jurassic of South Africa reached an estimated weight of 12 tons, far in excess of other known basal sauropodomorphs. Gravisaurian sauropods first appeared during the Early Jurassic, with the oldest definitive record being Vulcanodon from Zimbabwe, likely of Sinemurian age. Eusauropods first appeared during the late Early Jurassic (Toarcian) and diversified during the Middle Jurassic; these included cetiosaurids, turiasaurs, and mamenchisaurs. Neosauropods such as macronarians and diplodocoids first appeared during the Middle Jurassic, before becoming abundant and globally distributed during the Late Jurassic. Amphibians The diversity of temnospondyls had progressively declined through the Late Triassic, with only brachyopoids surviving into the Jurassic and beyond. Members of the family Brachyopidae are known from Jurassic deposits in Asia, while the chigutisaurid Siderops is known from the Early Jurassic of Australia. Modern lissamphibians began to diversify during the Jurassic. The Early Jurassic Prosalirus is thought to represent the first frog relative with a morphology capable of hopping like living frogs. Morphologically recognisable stem-frogs like the South American Notobatrachus are known from the Middle Jurassic, with modern crown-group frogs like Enneabatrachus and Rhadinosteus appearing by the Late Jurassic. While the earliest salamander-line amphibians are known from the Triassic, crown-group salamanders first appear during the Middle to Late Jurassic in Eurasia, alongside stem-group relatives. Many Jurassic stem-group salamanders, such as Marmorerpeton and Kokartus, are thought to have been neotenic. Early representatives of crown-group salamanders include Chunerpeton, Pangerpeton and Linglongtriton from the Middle to Late Jurassic Yanliao Biota of China. Some of these are suggested to belong to Cryptobranchoidea, which contains living Asiatic and giant salamanders. Beiyanerpeton and Qinglongtriton from the same biota are thought to be early members of Salamandroidea, the group which contains all other living salamanders. Salamanders dispersed into North America by the end of the Jurassic, as evidenced by Iridotriton, found in the Late Jurassic Morrison Formation. The stem-caecilian Eocaecilia is known from the Early Jurassic of Arizona. 
The fourth group of lissamphibians, the extinct salamander-like albanerpetontids, first appeared in the Middle Jurassic, represented by Anoualerpeton priscus from the Bathonian of Britain, as well as indeterminate remains from equivalently aged sediments in France and the Anoual Formation of Morocco. Mammaliaformes Mammaliaformes, including mammals, originated from cynodonts at the end of the Triassic and diversified extensively during the Jurassic. While most Jurassic mammaliaforms are solely known from isolated teeth and jaw fragments, exceptionally preserved remains have revealed a variety of lifestyles. The docodontan Castorocauda was adapted to aquatic life, similarly to the platypus and otters. Some members of Haramiyida and the eutriconodontan tribe Volaticotherini had a patagium akin to those of flying squirrels, allowing them to glide through the air. The aardvark-like mammal Fruitafossor, of uncertain taxonomy, was likely a specialist on colonial insects, similarly to living anteaters. Australosphenida, a group of mammals possibly related to living monotremes, first appeared in the Middle Jurassic of Gondwana. The earliest records of multituberculates, one of the longest-lasting and most successful orders of mammals, are known from the Middle Jurassic. Therian mammals, represented today by living placentals and marsupials, are thought to have begun diversifying during the Middle Jurassic, though their earliest fossil records date to the early Late Jurassic, represented by Juramaia, a eutherian mammal closer to the ancestry of placentals than marsupials. Juramaia is much more advanced than expected for its age, as other therian mammals are not known until the Early Cretaceous, and it has been suggested that Juramaia may also originate from the Early Cretaceous instead. Two groups of non-mammaliaform cynodonts persisted beyond the end of the Triassic. The insectivorous Tritheledontidae has a few records from the Early Jurassic. The Tritylodontidae, a herbivorous group of cynodonts that first appeared during the Rhaetian, has abundant records from the Jurassic, overwhelmingly from the Northern Hemisphere. Fish Jawless fish The last known species of conodont, a class of jawless fish whose hard, tooth-like elements are key index fossils, finally became extinct during the earliest Jurassic after over 300 million years of evolutionary history, with an asynchronous extinction occurring first in the Tethys and eastern Panthalassa and survivors persisting into the earliest Hettangian of Hungary and central Panthalassa. End-Triassic conodonts were represented by only a handful of species and had been progressively declining through the Middle and Late Triassic. Yanliaomyzon from the Middle Jurassic of China represents the oldest post-Paleozoic lamprey, and the oldest lamprey to have the toothed feeding apparatus and likely the three-stage life cycle typical of modern members of the group. Sarcopterygii Lungfish (Dipnoi) were present in freshwater environments of both hemispheres during the Jurassic. Some studies have proposed that the last common ancestor of all living lungfish lived during the Jurassic. Mawsoniids, a marine and freshwater/brackish group of coelacanths, which first appeared in North America during the Triassic, expanded into Europe and South America by the end of the Jurassic. 
The marine Latimeriidae, which contains the living coelacanths of the genus Latimeria, were also present in the Jurassic, having originated in the Triassic, with a number of records from the Jurassic of Europe, including Swenzia, thought to be the closest known relative of living coelacanths. Actinopterygii Ray-finned fish (Actinopterygii) were major components of Jurassic freshwater and marine ecosystems. Archaic "palaeoniscoid" fish, which were common in both marine and freshwater habitats during the preceding Triassic, declined during the Jurassic, being largely replaced by more derived actinopterygian lineages. The oldest known Acipenseriformes, the group that contains living sturgeon and paddlefish, are from the Early Jurassic. Amiiform fish (today represented only by the bowfin) first appeared during the Early Jurassic, represented by Caturus from the Pliensbachian of Britain; after their appearance in the western Tethys, they expanded to Africa, North America and Southeast and East Asia by the end of the Jurassic, with the modern family Amiidae appearing during the Late Jurassic. Pycnodontiformes, which first appeared in the western Tethys during the Late Triassic, expanded to South America and Southeast Asia by the end of the Jurassic, having a high diversity in Europe during the Late Jurassic. During the Jurassic, the Ginglymodi, whose only living representatives are the gars (Lepisosteidae), were diverse in both freshwater and marine environments. The oldest known representatives of anatomically modern gars appeared during the Late Jurassic. Stem-group teleosts, which make up over 99% of living Actinopterygii, had first appeared during the Triassic in the western Tethys; they underwent a major diversification beginning in the Late Jurassic, with early representatives of modern teleost clades such as Elopomorpha and Osteoglossoidei appearing during this time. The Pachycormiformes, a group of marine stem-teleosts, first appeared in the Early Jurassic and included both tuna-like predatory and filter-feeding forms, the latter including the largest bony fish known to have existed: Leedsichthys, with an estimated maximum length of over 15 metres, known from the late Middle to Late Jurassic. Chondrichthyes During the Early Jurassic, the shark-like hybodonts, which represented the dominant group of chondrichthyans during the preceding Triassic, were common in both marine and freshwater settings; however, by the Late Jurassic, hybodonts had become minor components of most marine communities, having been largely replaced by modern neoselachians, but remained common in freshwater and restricted marine environments. The Neoselachii, which contains all living sharks and rays, radiated beginning in the Early Jurassic. The oldest known ray (Batoidea) is Antiquaobatis from the Pliensbachian of Germany. Jurassic batoids known from complete remains retain a conservative, guitarfish-like morphology. The oldest known Hexanchiformes and carpet sharks (Orectolobiformes) are from the Early Jurassic (Pliensbachian and Toarcian, respectively) of Europe. The oldest known members of the Heterodontiformes, the only living representatives of which are the bullhead sharks (Heterodontus), first appeared in the Early Jurassic, with representatives of the living genus appearing during the Late Jurassic. The oldest record of angelsharks (Squatiniformes) is Pseudorhina from the Late Jurassic (Oxfordian–Tithonian) of Europe, which already has a body form similar to members of the only living genus of the order, Squatina. 
The oldest known remains of Carcharhiniformes, the largest order of living sharks, first appear in the late Middle Jurassic (Bathonian) of the western Tethys (England and Morocco). Known dental and exceptionally preserved body remains of Jurassic Carcharhiniformes are similar to those of living catsharks. Synechodontiformes, an extinct group of sharks closely related to Neoselachii, were also widespread during the Jurassic. The oldest remains of modern chimaeras are from the Early Jurassic of Europe, with members of the living family Callorhinchidae appearing during the Middle Jurassic. Unlike most living chimaeras, Jurassic chimaeras are often found in shallow-water environments. The closely related Squaloraja and myriacanthoids are also known from the Jurassic of Europe. Insects and arachnids There appears to have been no major extinction of insects at the Triassic–Jurassic boundary. Many important insect fossil localities are known from the Jurassic of Eurasia, the most important being the Karabastau Formation of Kazakhstan and the various Yanliao Biota deposits in Inner Mongolia, China, such as the Daohugou Bed, dating to the Callovian–Oxfordian. The diversity of insects stagnated throughout the Early and Middle Jurassic, but during the latter third of the Jurassic origination rates increased substantially while extinction rates remained flat. The increasing diversity of insects in the Middle–Late Jurassic corresponds with a substantial increase in the diversity of insect mouthparts. The Middle to Late Jurassic was a time of major diversification for beetles, particularly for the suborder Polyphaga, which represents 90% of living beetle species but which was rare during the preceding Triassic. Weevils first appear in the fossil record during the Middle to Late Jurassic, but are suspected to have originated during the Late Triassic to Early Jurassic. Orthopteran diversity had declined during the Late Triassic, but recovered during the Early Jurassic, with the Hagloidea, a superfamily of ensiferan orthopterans today confined to a few living species, being particularly diverse during the Jurassic. The oldest known lepidopterans (the group containing butterflies and moths) are known from the Triassic–Jurassic boundary, with wing scales belonging to the suborder Glossata and Micropterigidae-grade moths from deposits of this age in Germany. Modern representatives of both dragonflies and damselflies also first appeared during the Jurassic. Although modern representatives are not known until the Cenozoic, ectoparasitic insects thought to represent primitive fleas, belonging to the family Pseudopulicidae, are known from the Middle Jurassic of Asia. These insects are substantially different from modern fleas, lacking the specialised morphology of the latter and being larger. Parasitoid wasps (Apocrita) first appeared during the Early Jurassic and subsequently became widespread, reshaping terrestrial food webs. The Jurassic also saw the first appearances of several other groups of insects, including Phasmatodea (stick insects), Mantophasmatidae (gladiators), Embioptera (webspinners), and Raphidioptera (snakeflies). The earliest scale insect (Coccomorpha) is known from amber dating to the Late Jurassic, though the group probably originated earlier, during the Triassic. 
Only a handful of records of mites are known from the Jurassic, including Jureremus, an oribatid mite belonging to the family Cymbaeremaeidae, known from the Late Jurassic of Britain and Russia, and a member of the still-living oribatid genus Hydrozetes from the Early Jurassic of Sweden. Spiders diversified through the Jurassic. The Early Jurassic Seppo koponeni may represent a stem group to Palpimanoidea. Eoplectreurys from the Middle Jurassic of China is considered a stem lineage of Synspermiata. The oldest member of the family Archaeidae, Patarchaea, is known from the Middle Jurassic of China. Mongolarachne from the Middle Jurassic of China is among the largest known fossil spiders, with legs over 5 centimetres long. The only scorpion known from the Jurassic is Liassoscorpionides from the Early Jurassic of Germany, of uncertain placement. Eupnoi harvestmen (Opiliones) are known from the Middle Jurassic of China, including members of the family Sclerosomatidae. Marine invertebrates End-Triassic extinction During the end-Triassic extinction, 46%–72% of all marine genera became extinct. The effects of the end-Triassic extinction were greatest at tropical latitudes and were more severe in Panthalassa than the Tethys or Boreal oceans. Tropical reef ecosystems collapsed during the event, and would not fully recover until much later in the Jurassic. Sessile filter feeders and photosymbiotic organisms were among those most severely affected. Marine ecosystems Having declined at the Triassic–Jurassic boundary, reefs substantially expanded during the Late Jurassic, including both sponge reefs and scleractinian coral reefs. Late Jurassic reefs were similar in form to modern reefs but had more microbial carbonates and hypercalcified sponges, and had weak biogenic binding. Reefs sharply declined at the close of the Jurassic, causing an associated drop in the diversity of decapod crustaceans. The earliest planktonic foraminifera, which constitute the suborder Globigerinina, are known from the late Early Jurassic (mid-Toarcian) of the western Tethys, expanding across the whole Tethys by the Middle Jurassic and becoming globally distributed in tropical latitudes by the Late Jurassic. Coccolithophores and dinoflagellates, which had first appeared during the Triassic, radiated during the Early to Middle Jurassic, becoming prominent members of the phytoplankton. Microconchid tube worms, the last remaining order of Tentaculita, a group of animals of uncertain affinities that were convergent on Spirorbis tube worms, were rare after the Triassic and had become reduced to the single genus Punctaconchus, which became extinct in the late Bathonian. The oldest known diatom is from Late Jurassic–aged amber from Thailand, assigned to the living genus Hemiaulus. Echinoderms Crinoids diversified throughout the Jurassic, reaching their peak Mesozoic diversity during the Late Jurassic, primarily due to the radiation of sessile forms belonging to the orders Cyrtocrinida and Millericrinida. Echinoids (sea urchins) underwent substantial diversification beginning in the Early Jurassic, primarily driven by the radiation of irregular (asymmetrical) forms, which adapted to deposit feeding. Rates of diversification sharply dropped during the Late Jurassic. Crustaceans The Jurassic was a significant time for the evolution of decapods. 
The first true crabs (Brachyura) are known from the Early Jurassic, with the earliest being Eocarcinus praecursor from the early Pliensbachian of England, which lacked the crab-like morphology (carcinisation) of modern crabs, and Eoprosopon klugi from the late Pliensbachian of Germany, which may belong to the living family Homolodromiidae. Most Jurassic crabs are known only from carapace pieces, which makes it difficult to determine their relationships. While rare in the Early and Middle Jurassic, crabs became abundant during the Late Jurassic as they expanded from their ancestral silty sea floor habitat into hard substrate habitats like reefs, with crevices in reefs providing refuge from predators. Hermit crabs also first appeared during the Jurassic, with the earliest known being Schobertella hoelderi from the late Hettangian of Germany. Early hermit crabs are associated with ammonite shells rather than those of gastropods. Glypheids, which today are only known from two species, reached their peak diversity during the Jurassic, with around 150 species out of a total fossil record of 250 known from the period. Jurassic barnacles were of low diversity compared to the present, but several important evolutionary innovations are known, including the first appearances of calcite-shelled forms and species with an epiplanktonic mode of life. Brachiopods Brachiopod diversity declined during the Triassic–Jurassic extinction. Spire-bearing brachiopods (Spiriferinida and Athyridida) did not recover their biodiversity, becoming extinct in the TOAE. Rhynchonellida and Terebratulida also declined during the Triassic–Jurassic extinction but rebounded during the Early Jurassic; neither clade underwent much morphological variation. Brachiopods substantially declined in the Late Jurassic; the causes are poorly understood. Proposed reasons include increased predation, competition with bivalves, enhanced bioturbation or increased grazing pressure. Bryozoans As in the preceding Triassic, bryozoan diversity was relatively low compared to the Paleozoic. The vast majority of Jurassic bryozoans are members of Cyclostomatida, which experienced a radiation during the Middle Jurassic, with all Jurassic representatives belonging to the suborders Tubuliporina and Cerioporina. Cheilostomata, the dominant group of modern bryozoans, first appeared during the Late Jurassic. Molluscs Gastropods Marine gastropods were significantly affected by the Triassic–Jurassic extinction, with around 56% of genera going extinct; Neritimorpha were particularly strongly affected, while Heterobranchia suffered much lower losses than other groups. While present, the diversity of freshwater and land snails was much lower during the Jurassic than in contemporary ecosystems, with the diversity of these groups not reaching levels comparable to modern times until the following Cretaceous. Bivalves The end-Triassic extinction had a severe impact on bivalve taxonomic diversity, though it had little impact on their ecological diversity. The extinction was selective, having less of an impact on deep burrowers, but there is no evidence of a differential impact between surface-living (epifaunal) and burrowing (infaunal) bivalves. Bivalve family-level diversity after the Early Jurassic was static, though genus diversity experienced a gradual increase throughout the period. 
Rudists, the dominant reef-building organisms of the Cretaceous, first appeared in the Late Jurassic (mid-Oxfordian) on the northern margin of the western Tethys, expanding to the eastern Tethys by the end of the Jurassic. Cephalopods Ammonites were devastated by the end-Triassic extinction, with only a handful of genera belonging to the family Psiloceratidae of the suborder Phylloceratina surviving and becoming ancestral to all later Jurassic and Cretaceous ammonites. Ammonites explosively diversified during the Early Jurassic, with the orders Psiloceratina, Ammonitina, Lytoceratina, Haploceratina, Perisphinctina and Ancyloceratina all appearing during the Jurassic. Ammonite faunas during the Jurassic were regional, being divided into around 20 distinguishable provinces and subprovinces in two realms, the northern high-latitude Pan-Boreal realm, consisting of the Arctic, northern Panthalassa and northern Atlantic regions, and the equatorial–southern Pan-Tethyan realm, which included the Tethys and most of Panthalassa. Ammonite diversifications occurred coevally with marine transgressions, while their diversity nadirs occurred during marine regressions. The oldest definitive records of the squid-like belemnites are from the earliest Jurassic (Hettangian–Sinemurian) of Europe and Japan; they expanded worldwide during the Jurassic. Belemnites were shallow-water dwellers, inhabiting the upper 200 metres of the water column on the continental shelves and in the littoral zone. They were key components of Jurassic ecosystems, both as predators and prey, as evidenced by the abundance of belemnite guards in Jurassic rocks. The earliest vampyromorphs, of which the only living member is the vampire squid, first appeared during the Early Jurassic. The earliest octopuses appeared during the Middle Jurassic, having split from their closest living relatives, the vampyromorphs, during the Triassic to Early Jurassic. All Jurassic octopuses are solely known from the hard gladius. Octopuses likely originated from bottom-dwelling (benthic) ancestors which lived in shallow environments. Proteroctopus from the late Middle Jurassic La Voulte-sur-Rhône lagerstätte, previously interpreted as an early octopus, is now thought to be a basal taxon outside the clade containing vampyromorphs and octopuses.
June
June—abbreviated Jun—is the sixth month of the year in the Julian and Gregorian calendars—the latter the most widely used calendar in the world. Its length is 30 days. June succeeds May and precedes July. This month marks the start of summer in the Northern Hemisphere and contains the summer solstice, which is the day with the most daylight hours. In the Southern Hemisphere, June is the start of winter and contains the winter solstice, the day with the fewest hours of daylight out of the year. In places north of the Arctic Circle, the June solstice is when the midnight sun occurs, during which the Sun remains visible even at midnight. The Atlantic hurricane season—when tropical or subtropical cyclones are most likely to form in the north Atlantic Ocean—begins on 1 June and lasts until 30 November. Several monsoons and subsequent wet seasons also commence in the Northern Hemisphere during this month. Multiple meteor showers occur annually in June, including the Arietids, which are among the most intense daylight meteor showers of the year; they last between 22 May and 2 July, peaking in intensity on 8 June. Numerous observances take place in June. Midsummer, the celebration of the summer solstice in the Northern Hemisphere, is celebrated in several countries. In Catholicism, this month is dedicated to the devotion of the Sacred Heart of Jesus, and known as the Month of the Sacred Heart. In the United States, June is Pride Month, a month-long observance of LGBT individuals. Father's Day, which honours fathers and fatherhood, occurs on the third Sunday in June in most countries. Overview June is the sixth month of the year in the Julian and Gregorian calendars—the latter the most widely used calendar in the world. Containing 30 days, June succeeds May and precedes July. It is one of four months that have 30 days—alongside April, September and November—and is the second 30-day month of the year, after April and before September. June is in the second quarter (Q2) of a calendar year, alongside April and May, and the sixth and final month in the first half of the year (January–June). Under the ISO week date system, June begins in either the 22nd or 23rd week of the year. This month is abbreviated as Jun, and may be written with or without a concluding period (full stop). Etymologically, June is ultimately derived from the Latin month of Iunius, named after the ancient Roman goddess Juno. The present English spelling was influenced by the Anglo-Norman join, junye and junie. It was also written in Middle English as Iun and Juin, while the spelling variant Iune was in use until the 17th century. It displaced the Old English name for June, ærra liþa. History June originates from the month of Iunius in the original Roman calendar used during the Roman Republic. The origin of this calendar is obscure. Iunius was originally the fourth month of the year and had 29 days, alongside other 29-day months such as Aprilis ("April") and Sextilis (later renamed "August"). It is not known when the Romans reset the course of the year so that Ianuarius ("January") and Februarius ("February"), originally the 11th and 12th months respectively, came first—thus moving Iunius to the sixth month of the year—but later Roman scholars generally dated this to 153 BC. In ancient Rome, the period from mid-May through mid-June may have been considered inauspicious for marriages. 
The Roman poet Ovid claimed to have consulted the flaminica Dialis, the high priestess of the god Jupiter, about setting a date for his daughter's wedding, but was advised to wait until after 15 June. The Greek philosopher and writer Plutarch, however, implied that the entire month of June was more favorable for weddings than May. In 46 BC, Julius Caesar reformed the calendar, which thus became known as the Julian calendar after himself. This reform fixed the calendar to 365 days with a leap year every fourth year, and made June 30 days long; however, it resulted in the average year of the Julian calendar being 365.25 days long, slightly more than the actual solar year of 365.2422 days (the current value, which varies). In 1582, Pope Gregory XIII promulgated a revised calendar—the Gregorian calendar—that reduced the average length of the calendar year from 365.25 days to 365.2425, correcting the Julian calendar's drift against the solar year; the arithmetic behind the two leap-year rules is sketched in code at the end of this section. Climate, daylight and astronomy In the Northern Hemisphere, June marks the commencement of summer, while in the Southern Hemisphere, it is the start of winter. In the Northern Hemisphere, the beginning of the traditional astronomical summer is 21 June, while meteorological summer commences on 1 June. In the Southern Hemisphere, astronomical winter starts on 21 June while meteorological winter begins on 1 June. The June solstice—known as the summer solstice in the Northern Hemisphere and winter solstice in the Southern Hemisphere—occurs on one day between 20 and 22 June (most often 21 June), marking the longest day of the year in terms of daylight hours in the Northern Hemisphere and the shortest day in the Southern Hemisphere. In places north of the Arctic Circle, this is when the midnight sun occurs for the longest period, during which the Sun remains visible even at midnight. Conversely, it is polar night in places within the Antarctic Circle, during which the Sun remains below the horizon for more than 24 hours. In astronomy, certain meteor showers occur annually during this month. The Arietids—among the most intense daylight meteor showers of the year—last from 22 May until 2 July, peaking in intensity on 8 June; the Beta Taurids take place between 5 June and 18 July, peaking on 28 June; and the June Bootids run from 22 June to 2 July, peaking on 27 June. The full moon that occurs in June is most commonly known as the strawberry moon because it coincides with the strawberry-picking season; other names for it include the rose moon, honey moon and the poetic midsummer moon. Climate June is one of the hottest months in the Northern Hemisphere, alongside July and August, with July being the hottest; in the Southern Hemisphere, it is conversely one of the coldest. For instance, the lowest temperature ever recorded in South America occurred on 1 June 1907 in the town of Sarmiento in the Chubut Province of Argentina, measuring -32.8°C (-27°F). The Atlantic hurricane season—when tropical or subtropical cyclones are most likely to form in the north Atlantic Ocean—begins on 1 June and lasts until 30 November. In the Indian Ocean north of the equator, around the Indian subcontinent, tropical cyclones, which can occur year-round, appear frequently between May and June. In contrast, Mediterranean tropical-like cyclones are least likely to form in June because the Mediterranean dry season brings stable air. The East Asian, North American, South Asian (Indian) and West African monsoons generally begin in June, while the European monsoon season intensifies that month. 
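The calendar arithmetic described in the history section above can be stated in a few lines of code. The following Python sketch (an illustrative aid, not drawn from this article's sources; the function names are invented here) contrasts the Julian and Gregorian leap-year rules, recovers the 365.25- and 365.2425-day average year lengths quoted above, and checks the overview's claim that 1 June always falls in ISO week 22 or 23. Note that Python's date type uses the proleptic Gregorian calendar, so the week check reflects the Gregorian rule only.
```python
from datetime import date

def is_julian_leap(year: int) -> bool:
    # Julian rule (46 BC reform): every fourth year is a leap year,
    # giving an average year of 365 + 1/4 = 365.25 days.
    return year % 4 == 0

def is_gregorian_leap(year: int) -> bool:
    # Gregorian rule (1582 reform): century years are common years
    # unless divisible by 400, dropping 3 leap days every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average Gregorian year over a full 400-year cycle: 97 leap days,
# hence 365 + 97/400 = 365.2425 days.
leap_days = sum(is_gregorian_leap(y) for y in range(2000, 2400))
print(365 + leap_days / 400)   # 365.2425

# ISO week date check: 1 June falls in week 22 or 23 of every year.
weeks = {date(y, 6, 1).isocalendar()[1] for y in range(1, 10000)}
print(sorted(weeks))           # [22, 23]
```
The 97 leap days per 400-year cycle are where the 365.2425-day average comes from; the residual discrepancy of about 0.0003 days against the solar year amounts to roughly one day in 3,000 years.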
The East Asian monsoon commences the East Asian rainy season. The highest volume of rainfall ever recorded in a one-hour period occurred on 22 June 1947 in the small city of Holt, Missouri in the United States, when 305 mm (12 inches) fell. The greatest rainfall within a 48-hour period occurred on 15–16 June 1995 in the town of Cherrapunji in Meghalaya, India, with 2.493 metres (98.15 inches) of rainfall recorded. Agriculture The crops which are harvested this month include several varieties of corn: wheat, barley, maize, rapeseed, rice, rye and sorghum in most of the Northern Hemisphere, and maize, cotton, pearl millet, sorghum and soybeans in most of the Southern Hemisphere. In much of the Northern Hemisphere, apricots, blackberries, blueberries, cherries, mangoes, raspberries, strawberries and watermelons are fruits which are considered to be in season or at their peak in June. Vegetables that are in season in this hemisphere during June include asparagus, beetroot, cucumbers, lettuce, peas, radishes, spinach, tomatoes and zucchini (courgettes). In much of the Southern Hemisphere, the fruits which are in season are avocados, bananas, citrus (such as grapefruit, lemons, mandarins and oranges), kiwifruit and pears. Observances In Catholicism, June is dedicated to the devotion of the Sacred Heart of Jesus. This observance is called the Month of the Sacred Heart. In Canada, June is ALS Awareness Month, a campaign to spread awareness and raise funds for a cure for amyotrophic lateral sclerosis, and Filipino Heritage Month. In the United States, June is Pride Month, which is the celebration of LGBTQ individuals. Caribbean-American Heritage Month also occurs annually in June. In Brazil, the Festas Juninas (June Festivals) occur throughout the entire month to celebrate the harvest. It is also National Safety Month in the United States, a month-long observance aimed at increasing awareness of, and ultimately decreasing, the number of unintentional injuries and deaths in the country. National Smile Month, the largest oral health campaign in the United Kingdom, organised by the Oral Health Foundation, runs on varying dates from mid-May to mid-June. In Barbados, June is part of the Season of Emancipation, which takes place between 14 April and 23 August to commemorate the emancipation of slaves of African descent. Global single-day observances The first day of June commences with International Children's Day and World Milk Day. International Whores' Day, an observance to honour sex workers (prostitutes) and draw attention to their often exploitative and poor working conditions, occurs on 2 June. Several memorials and other commemorations are held around the world on 4 June to honour the 1989 Tiananmen Square protests and massacre that occurred in China. Similar annual memorials are held for the Normandy landings (D-Day), the largest seaborne invasion in history, which occurred on 6 June 1944 as part of the Second World War. Global Wind Day is on 15 June, and on 16 June is the International Day of the African Child, which raises awareness of the need for improved education for children in Africa. Autistic Pride Day occurs on 18 June. 19 June is World Sauntering Day, which encourages people to slow down ("saunter") and enjoy life. Go Skateboarding Day and World Hydrography Day both occur on 21 June. Midsummer, the various celebrations of the commencement of summer, happens on 21 June; it is also associated with the Fête de la Musique (World Music Day). 
25 June is the observance of World Vitiligo Day, which aims to decrease negative sentiments regarding vitiligo—a chronic autoimmune disorder that causes patches of skin to lose pigment or colour. 26 June is World Refrigeration Day. Global Running Day occurs on the first Wednesday in June. Father's Day, which honours fathers and fatherhood, most often occurs on the third Sunday in June. The King's Official Birthday, which celebrates the birthday of the monarch of the Commonwealth realms (presently Charles III), occurs in either May or June. It includes the British Trooping the Colour commemoration. The Dragon Boat Festival, observed in China and by the Chinese communities of Southeast Asia, may commence between late May and mid-June.

United Nations

The following are global holidays which are formally observed by the United Nations:
1 June: Global Day of Parents
3 June: World Bicycle Day
4 June: International Day of Innocent Children Victims of Aggression
5 June: World Environment Day and International Day for the Fight Against Illegal, Unreported and Unregulated Fishing
6 June: UN Russian Language Day
7 June: World Food Safety Day
8 June: World Oceans Day
10 June: International Day for Dialogue Among Civilizations
12 June: World Day Against Child Labour
13 June: International Albinism Awareness Day
14 June: World Blood Donor Day
15 June: World Elder Abuse Awareness Day
16 June: International Day of Family Remittances
17 June: World Day to Combat Desertification and Drought
18 June: International Day for Countering Hate Speech and Sustainable Gastronomy Day
19 June: International Day for the Elimination of Sexual Violence in Conflict
20 June: World Refugee Day
21 June: International Day of Yoga
23 June: United Nations Public Service Day and International Widows' Day
24 June: International Day of Women in Diplomacy
25 June: Day of the Seafarer
26 June: International Day Against Drug Abuse and Illicit Trafficking and International Day in Support of Victims of Torture
27 June: Micro-, Small and Medium-sized Enterprises Day
29 June: International Day of the Tropics
30 June: International Asteroid Day and International Day of Parliamentarism

Religious single-day observances

As Easter is celebrated on the first Sunday after the Paschal full moon, which is the first full moon on or after 21 March (a fixed approximation of the March equinox), Ascension Day, observed 39 days after Easter, can occur in June. Pentecost is the fiftieth day after Easter Sunday, while Trinity Sunday is the first Sunday after Pentecost. The Catholic Church also observes the Feast of the Sacred Heart, which happens on the Friday following the second Sunday after Pentecost. The Feast of Corpus Christi, observed by the Latin Church and certain Western Orthodox, Lutheran, and Anglican churches, takes place on the Thursday after Trinity Sunday. The feast of Saints Peter and Paul, a liturgical feast observed by numerous denominations, always occurs on 29 June. In Buddhism, Vesak (Buddha Day), the most significant Buddhist festival, occurs on 2 June in Singapore and on 3 June in Thailand as of 2024. Shavuot, one of the biblically-ordained Three Pilgrimage Festivals observed in Judaism, takes place during the month of Sivan in the Hebrew calendar, which corresponds to a period between May and June in the Gregorian calendar. Islamic holidays are determined by the Hijri calendar (colloquially the Islamic calendar), a lunar calendar of 354 or 355 days; thus, Islamic observances do not align with those of the Gregorian calendar.
This is the same for Hindu holidays, which are based on the Hindu calendar.

Other events

The quadrennial FIFA World Cup, an international association football tournament and the most-watched sporting event on television, usually commences in June. The annual Wimbledon Championships, the oldest tennis tournament in the world and widely regarded as the most prestigious, traditionally commenced on the last Monday in June. Glastonbury Festival, a major music festival in the United Kingdom, also takes place in June, attracting over 100,000 attendees.

People

June is a female given name, often given to girls born in this month. In astrology, the Zodiac sign for people born between 21 May and 21 June is Gemini (♊︎); for those born between 22 June and 22 July, it is Cancer (♋︎). The birthstones associated with June in the United States are pearl, moonstone and alexandrite. The birth flowers of June are rose and honeysuckle.

Births

Noteworthy people born in June include:
1st – Frank Whittle, English engineer and Royal Air Force air officer who invented the turbojet engine (1907).
8th – Tim Berners-Lee, English computer scientist who invented the World Wide Web (1955).
9th: Leopold I, Holy Roman Emperor and King of Hungary, Croatia, and Bohemia (1640). Peter the Great, Tsar and later the first Emperor of all Russia (1672).
14th – Che Guevara, Argentine Marxist revolutionary, guerrilla leader, diplomat and military theorist; a major figure of the Cuban Revolution (1928).
17th – Igor Stravinsky, Russian composer (1882).
18th – Paul McCartney, English singer, songwriter and musician, former member of the Beatles (1942).
19th – José Rizal, Filipino nationalist, writer and polymath, a national hero (pambansang bayani) of the Philippines (1861).
24th – Lionel Messi, Argentine footballer (1987).
28th: Henry VIII, King of England known for his six marriages and commencement of the English Reformation (1491). Jean-Jacques Rousseau, Genevan philosopher influential in the Age of Enlightenment (1712).
29th – Yusuf I of Granada, seventh Nasrid ruler of the Emirate of Granada who presided over its golden age (1318).

Deaths

Noteworthy people who died in June include:
1st – Emperor Gaozu of Han, founder and first emperor of the Han dynasty of China (195 BC).
3rd – William Harvey, English physician, first known to describe the circulatory system of the human body (1657).
4th: Antonio José de Sucre, Venezuelan general and politician, influential in the Spanish American wars of independence (1830). Wilhelm II, final German Emperor and King of Prussia (1941).
8th: Andrew Jackson, American lawyer and general who served as the seventh president of the United States (1845). Muhammad, Arab religious, social and political leader, founder of Islam (632).
9th: Nero, Roman emperor, last of the Julio-Claudian dynasty (AD 68). Charles Dickens, English novelist, journalist, short story writer and social critic (1870).
10th – Frederick Barbarossa, Holy Roman Emperor regarded as among the empire's greatest of the medieval era (1190).
10th or 11th – Alexander the Great, King of Macedon, regarded as one of the greatest and most successful military commanders (323 BC).
14th – Max Weber, German sociologist and historian, central figure in the development of sociology and the social sciences (1920).
17th – Uthman, third caliph of the Rashidun Caliphate who ordered the official compilation of the standardised version of the Quran (656).
18th – Leo III the Isaurian, first Byzantine emperor of the Isaurian dynasty (741).
21st: Edward III, King of England who restored royal authority (1377). Niccolò Machiavelli, Florentine diplomat, author, philosopher and historian regarded as the father of modern political philosophy and political science (1527).
24th – Hongwu Emperor, founding emperor of the Ming dynasty of China (1398).
25th – Michael Jackson, American singer, songwriter and dancer, among the best-selling music artists of all time (2009).
27th – Joseph Smith, American religious leader, founder of Mormonism and the Latter Day Saint movement (1844).
28th – James Madison, American Founding Father and fourth president of the United States (1836).
Technology
Months
null
15786
https://en.wikipedia.org/wiki/July
July
July is the seventh month of the year in the Julian and Gregorian calendars. Its length is 31 days. It was named by the Roman Senate in honour of the Roman general Julius Caesar in 44 BC, being the month of his birth. Before then it was called Quintilis, being the fifth month of the calendar that started with March. It is on average the warmest month in most of the Northern Hemisphere, where it is the second month of summer, and the coldest month in much of the Southern Hemisphere, where it is the second month of winter. The second half of the year commences in July. In the Southern Hemisphere, July is the seasonal equivalent of January in the Northern Hemisphere. "Dog days" are considered to begin in early July in the Northern Hemisphere, when the hot sultry weather of summer usually starts. Spring lambs born in late winter or early spring are usually sold before 1 July.

Symbols

July's birthstone is the ruby, which symbolizes contentment. Its birth flowers are the larkspur and the water lily. The zodiac signs are Cancer (until July 22) and Leo (July 23 onward).

Observances

This list does not necessarily imply either official status or general observance.
Season of Emancipation: 14 April to 23 August (Barbados)
Honor America Days: 14 June to 4 July (United States)

Month-long

In Catholic tradition, July is the Month of the Most Precious Blood of Jesus.
National Hot Dog Month (United States)
National Ice Cream Month (United States)
Disability Pride Month (United States)

Non-Gregorian

(All Baha'i, Islamic, and Jewish observances begin at the sundown before the date listed, and end at sundown of the date in question unless otherwise noted.)
List of observances set by the Bahá'í calendar
List of observances set by the Chinese calendar
List of observances set by the Hebrew calendar
List of observances set by the Islamic calendar
List of observances set by the Solar Hijri calendar

Movable

Phi Ta Khon (Dan Sai, Loei province, Isan, Thailand) – dates are selected by village mediums and can take place anywhere between March and July.
Matariki (Māori New Year) – different iwi celebrate according to their own tradition, and the New Zealand Government calculates the public holiday each year according to advice from the Matariki Advisory Committee. Dates can fall from late June to late July.
Ra o te Ui Ariki (Cook Islands) July 6
Collector Car Appreciation Day (United States)
Senior Citizen's Day (Kiribati)
Shark Week (United States)
Earth Overshoot Day
Technology
Months
null
15881
https://en.wikipedia.org/wiki/Java%20%28programming%20language%29
Java (programming language)
Java is a high-level, class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. Java gained popularity shortly after its release and has remained a popular programming language since then. Java was the third most popular programming language according to GitHub. Although Java is still widely used, its use has gradually declined in recent years as other languages that run on the JVM have gained popularity. Java was designed by James Gosling at Sun Microsystems. It was released in May 1995 as a core component of Sun's Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GPL-2.0-only license. Oracle, which bought Sun in 2010, offers its own HotSpot Java Virtual Machine. However, the official reference implementation is the OpenJDK JVM, which is open-source software used by most developers and is the default JVM for almost all Linux distributions. Java 23 is the current version. Java 20 and 22 are no longer maintained. Java 8, 11, 17, and 21 are long-term support versions still under maintenance.

History

James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991. Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time. The language was initially called Oak after an oak tree that stood outside Gosling's office. Later the project went by the name Green and was finally renamed Java, from Java coffee, a type of coffee from Indonesia. Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar. Sun Microsystems released the first public implementation as Java 1.0 in 1996. It promised write once, run anywhere (WORA) functionality, providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification. With the advent of Java 2 (released initially as J2SE 1.2 in December 1998), new versions had multiple configurations built for different types of platforms. J2EE included technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively.
In 1997, Sun Microsystems approached the ISO/IEC JTC 1 standards body and later Ecma International to formalize Java, but it soon withdrew from the process. Java remains a de facto standard, controlled through the Java Community Process. At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. On November 13, 2006, Sun released much of its Java virtual machine (JVM) as free and open-source software (FOSS), under the terms of the GPL-2.0-only license. On May 8, 2007, Sun finished the process, making all of its JVM's core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright. Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an evangelist. Following Oracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the steward of Java technology with a relentless commitment to fostering a community of participation and transparency. This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK (see the Android section). On April 2, 2010, James Gosling resigned from Oracle. In January 2016, Oracle announced that Java run-time environments based on JDK 9 would discontinue the browser plugin. Java software runs on everything from laptops to data centers and from game consoles to scientific supercomputers. Oracle (and others) highly recommend uninstalling outdated and unsupported versions of Java, due to unresolved security issues in older versions.

Principles

There were five primary goals in creating the Java language:
It must be simple, object-oriented, and familiar.
It must be robust and secure.
It must be architecture-neutral and portable.
It must execute with high performance.
It must be interpreted, threaded, and dynamic.

Versions

Java 8, 11, 17, and 21 are supported as long-term support (LTS) versions, with Java 25, scheduled for release in September 2025, as the next LTS version. Oracle released the last zero-cost public update for the legacy version Java 8 LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. Other vendors such as Adoptium continue to offer free builds of OpenJDK's long-term support (LTS) versions. These builds may include additional security patches and bug fixes.

Editions

Sun has defined and supports four editions of Java targeting different application environments and segmented many of its APIs so that they belong to one of the platforms. The platforms are:
Java Card for smart-cards.
Java Platform, Micro Edition (Java ME) – targeting environments with limited resources.
Java Platform, Standard Edition (Java SE) – targeting workstation environments.
Java Platform, Enterprise Edition (Java EE) – targeting large distributed enterprise or Internet environments.
The classes in the Java APIs are organized into separate groups called packages. Each package contains a set of related interfaces, classes, subpackages and exceptions. Sun also provided an edition called Personal Java that has been superseded by later, standards-based Java ME configuration-profile pairings.
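As a brief sketch of how this package organization appears in source code: the package name com.example.shapes and the Circle class below are hypothetical, while java.util.List and java.util.ArrayList are real classes from the core class library.

// File: com/example/shapes/Circle.java
package com.example.shapes;        // declares which package this class belongs to

import java.util.ArrayList;        // imports classes from the java.util package
import java.util.List;

public class Circle {
    private final double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    public double area() {
        return Math.PI * radius * radius;  // Math is in java.lang, imported implicitly
    }

    public static void main(String[] args) {
        List<Double> areas = new ArrayList<>();
        areas.add(new Circle(2.0).area());
        System.out.println(areas);
    }
}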
Execution system

Java JVM and bytecode

One design goal of Java is portability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate run time support. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to architecture-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be executed by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their device for standalone Java applications or a web browser for Java applets. Standard libraries provide a generic way to access host-specific features such as graphics, threading, and networking. The use of universal bytecode makes porting simple. However, the overhead of interpreting bytecode into machine instructions made interpreted programs almost always run more slowly than native executables. Just-in-time (JIT) compilers that compile bytecode to machine code during runtime were introduced from an early stage. Java's HotSpot JVM actually contains two JIT compilers in one, and GraalVM (included in e.g. Java 11, but removed as of Java 16) allows tiered compilation. Java itself is platform-independent and is adapted to the particular platform it is to run on by a Java virtual machine (JVM), which translates the Java bytecode into the platform's machine language.

Performance

Programs written in Java have a reputation for being slower and requiring more memory than those written in C++. However, Java programs' execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1, the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such as HotSpot becoming Sun's default JVM in 2000. With Java 1.5, performance was improved with the addition of the java.util.concurrent package, including lock-free implementations of ConcurrentMap and other multi-core collections, and it was improved further with Java 1.6.

Non-JVM

Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java bytecode in hardware instead of a software Java virtual machine, and some ARM-based processors could have hardware support for executing Java bytecode through their Jazelle option, though support has mostly been dropped in current implementations of ARM.

Automatic memory management

Java uses an automatic garbage collector to manage memory in the object lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector. Something similar to a memory leak may still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. If a method is invoked on a null reference, a null pointer exception is thrown. One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management.
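A minimal sketch of the reachability rule described above; GcDemo and Buffer are illustrative names, not part of any standard API, and System.gc() is a real method but only a hint to the JVM:

public class GcDemo {
    static class Buffer {
        // A large allocation so the effect of collection is visible in heap statistics.
        final byte[] data = new byte[1024 * 1024];
    }

    public static void main(String[] args) {
        Buffer buffer = new Buffer();   // the object is reachable through 'buffer'
        System.out.println("Allocated " + buffer.data.length + " bytes");

        buffer = null;                  // no references remain: the Buffer instance
                                        // is now eligible for garbage collection

        System.gc();                    // only a suggestion; the JVM decides if and
                                        // when collection actually happens
    }
}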
In some languages, memory for the creation of objects is implicitly allocated on the stack or explicitly allocated and deallocated from the heap. In the latter case, the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Garbage collection does not prevent logical memory leaks, i.e. those where the memory is still referenced but never used. Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java. Java does not support C/C++ style pointer arithmetic, where object addresses can be arithmetically manipulated (e.g. by adding or subtracting an offset). This allows the garbage collector to relocate referenced objects and ensures type safety and security. As in C++ and some other object-oriented languages, variables of Java's primitive data types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for non-primitive data types (but see escape analysis). This was a conscious decision by Java's designers for performance reasons. Java contains multiple types of garbage collectors. Since Java 9, HotSpot uses the Garbage First Garbage Collector (G1GC) as the default. However, there are also several other garbage collectors that can be used to manage the heap, such as the Z Garbage Collector (ZGC) introduced in Java 11, and Shenandoah GC, introduced in Java 12 but unavailable in Oracle-produced OpenJDK builds. Shenandoah is instead available in third-party builds of OpenJDK, such as Eclipse Temurin. For most applications in Java, G1GC is sufficient. In prior versions of Java, such as Java 8, the Parallel Garbage Collector was used as the default garbage collector. Having solved the memory management problem does not relieve the programmer of the burden of properly handling other kinds of resources, like network or database connections, file handles, etc., especially in the presence of exceptions.

Syntax

The syntax of Java is largely influenced by C++ and C. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language. All code is written inside classes, and every data item is an object, with the exception of the primitive data types (i.e. integers, floating-point numbers, boolean values, and characters), which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as the printf method). Unlike C++, Java does not support operator overloading or multiple inheritance for classes, though multiple inheritance is supported for interfaces. Java uses comments similar to those of C++. There are three different styles of comments: a single line style marked with two slashes (//), a multiple line style opened with /* and closed with */, and the Javadoc commenting style opened with /** and closed with */.
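A short sketch showing all three comment styles in context (the Greeter class is illustrative):

public class Greeter {
    /**
     * A Javadoc comment documents the declaration that follows it; the Javadoc
     * tool extracts these comments into HTML API documentation.
     *
     * @param name the name to greet
     * @return a greeting string
     */
    public String greet(String name) {
        // A single-line comment runs to the end of the line.
        /* A multi-line comment
           can span several lines. */
        return "Hello, " + name;
    }
}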
The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program and can be read by some integrated development environments (IDEs) such as Eclipse to allow developers to access documentation within the IDE.

Hello world

The following is a simple example of a "Hello, World!" program that writes a message to the standard output:

public class Example {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

Special classes

Applet

Java applets are programs embedded in other applications, mainly in web pages displayed in web browsers. The Java applet API was deprecated with the release of Java 9 in 2017.

Servlet

Java servlet technology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets are server-side Java EE components that generate responses to requests from clients. Most of the time, this means generating HTML pages in response to HTTP requests, although there are a number of other standard servlet classes available, for example for WebSocket communication. The Java servlet API has to some extent been superseded (but is still used under the hood) by two standard Java technologies for web services: the Java API for RESTful Web Services (JAX-RS 2.0), useful for AJAX, JSON and REST services, and the Java API for XML Web Services (JAX-WS), useful for SOAP Web Services. Typical implementations of these APIs on Application Servers or Servlet Containers use a standard servlet for handling all interactions with the HTTP requests and responses that delegate to the web service methods for the actual business logic.

JavaServer Pages

JavaServer Pages (JSP) are server-side Java EE components that generate responses, typically HTML pages, to HTTP requests from clients. JSPs embed Java code in an HTML page by using the special delimiters <% and %>. A JSP is compiled to a Java servlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.

Swing application

Swing is a graphical user interface library for the Java SE platform. It is possible to specify a different look and feel through the pluggable look and feel system of Swing. Clones of Windows, GTK+, and Motif are supplied by Sun. Apple also provides an Aqua look and feel for macOS. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more native GUI widget drawing routines of the underlying platforms.

JavaFX application

JavaFX is a software platform for creating and delivering desktop applications, as well as rich web applications that can run across a wide variety of devices. JavaFX is intended to replace Swing as the standard graphical user interface (GUI) library for Java SE, but since JDK 11 JavaFX has not been in the core JDK and is instead in a separate module. JavaFX has support for desktop computers and web browsers on Microsoft Windows, Linux, and macOS. JavaFX does not have support for native OS look and feels.

Generics

In 2004, generics were added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects.
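A minimal sketch of this problem, and of the generics-based remedy discussed below (ContainerDemo is an illustrative name; the raw List usage compiles with warnings):

import java.util.ArrayList;
import java.util.List;

public class ContainerDemo {
    public static void main(String[] args) {
        // Without generics, a container holds plain Objects: any type can be
        // inserted, and every retrieval needs a cast that may fail at run time.
        List raw = new ArrayList();
        raw.add("hello");
        raw.add(42);                        // accepted by the compiler
        String s = (String) raw.get(1);     // compiles, but throws ClassCastException

        // With generics, the element type is checked at compile time.
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // typed.add(42);                   // rejected by the compiler
        String t = typed.get(0);            // no cast needed
    }
}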
Either the container operates on all subtypes of a class or interface, usually Object, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, generics prevent certain runtime exceptions from occurring by issuing compile-time errors. If Java prevented all runtime type errors (ClassCastExceptions) from occurring, it would be type safe. In 2016, the type system of Java was proven unsound in that it is possible to use generics to construct classes and methods that allow assignment of an instance of one class to a variable of another unrelated class. Such code is accepted by the compiler, but fails at run time with a class cast exception.

Criticism

Criticisms directed at Java include the implementation of generics, speed, the handling of unsigned numbers, the implementation of floating-point arithmetic, and a history of security vulnerabilities in the primary Java VM implementation, HotSpot. Developers have criticized the complexity and verbosity of the Java Persistence API (JPA), a standard part of Java EE. This has led to increased adoption of higher-level abstractions like Spring Data JPA, which aims to simplify database operations and reduce boilerplate code. The growing popularity of such frameworks suggests limitations in the standard JPA implementation's ease-of-use for modern Java development.

Class libraries

The Java Class Library is the standard library, developed to support application development in Java. It is controlled by Oracle in cooperation with others through the Java Community Process program. Companies or individuals participating in this process can influence the design and development of the APIs. This process has been a subject of controversy during the 2010s. The class library contains features such as:
The core libraries, which include:
Input/output (I/O or IO) and non-blocking I/O (NIO), or IO/NIO
Networking (new user agent (HTTP client) since Java 11)
Reflective programming (reflection)
Concurrent computing (concurrency)
Generics
Scripting, Compiler
Functional programming (Lambda, streaming)
Collection libraries that implement data structures such as lists, dictionaries, trees, sets, queues and double-ended queues, or stacks
XML Processing (Parsing, Transforming, Validating) libraries
Security
Internationalization and localization libraries
The integration libraries, which allow the application writer to communicate with external systems.
These libraries include:
The Java Database Connectivity (JDBC) API for database access
Java Naming and Directory Interface (JNDI) for lookup and discovery
Java remote method invocation (RMI) and Common Object Request Broker Architecture (CORBA) for distributed application development
Java Management Extensions (JMX) for managing and monitoring applications
User interface libraries, which include:
The (heavyweight, or native) Abstract Window Toolkit (AWT), which provides GUI components, the means for laying out those components and the means for handling events from those components
The (lightweight) Swing libraries, which are built on AWT but provide (non-native) implementations of the AWT widgetry
APIs for audio capture, processing, and playback
JavaFX
A platform dependent implementation of the Java virtual machine that is the means by which the bytecodes of the Java libraries and third-party applications are executed
Plugins, which enable applets to be run in web browsers
Java Web Start, which allows Java applications to be efficiently distributed to end users across the Internet

Licensing and documentation

Documentation

Javadoc is a comprehensive documentation system, created by Sun Microsystems. It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are /** and */, whereas the normal multi-line comments in Java are delimited by /* and */, and single-line comments start with //.

Implementations

Oracle Corporation owns the official implementation of the Java SE platform, due to its acquisition of Sun Microsystems on January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available for Windows, macOS, Linux, and Solaris. Because Java lacks any formal standardization recognized by Ecma International, ISO/IEC, ANSI, or other third-party standards organizations, the Oracle implementation is the de facto standard. The Oracle implementation is packaged into two different distributions: the Java Runtime Environment (JRE), which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and the Java Development Kit (JDK), which is intended for software developers and includes development tools such as the Java compiler, Javadoc, Jar, and a debugger. Oracle has also released GraalVM, a high performance Java dynamic compiler and interpreter. OpenJDK is another Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation. The goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insisted that all implementations be compatible. This resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did not support Java remote method invocation (RMI) or Java Native Interface (JNI) and had added platform-specific features of their own. Sun sued in 1997, and, in 2001, won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun. As a result, Microsoft no longer ships Java with Windows. Platform-independent Java is essential to Java EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications.
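As a brief illustration of the integration libraries listed above, the following JDBC sketch queries a hypothetical table; the connection URL and schema are assumptions, and a suitable JDBC driver (here, an in-memory H2 database) would need to be on the classpath. It also demonstrates try-with-resources, the idiomatic way to handle the non-memory resources mentioned earlier:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    public static void main(String[] args) throws SQLException {
        // Hypothetical URL; a real application would point at an actual database.
        String url = "jdbc:h2:mem:demo";

        // try-with-resources closes the connection, statement and result set
        // automatically, even if an exception is thrown.
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM users WHERE id = ?")) {
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}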
Use outside the Java platform

The Java programming language requires the presence of a software platform in order for compiled programs to be executed. Oracle supplies the Java platform for use with Java. The Android SDK is an alternative software platform, used primarily for developing Android applications with its own GUI system.

Android

The Java language is a key pillar in Android, an open source mobile operating system. Although Android, built on the Linux kernel, is written largely in C, the Android SDK uses the Java language as the basis for Android applications but does not use any of its standard GUI, SE, ME or other established Java standards. The bytecode language supported by the Android SDK is incompatible with Java bytecode and runs on its own virtual machine, optimized for low-memory devices such as smartphones and tablet computers. Depending on the Android version, the bytecode is either interpreted by the Dalvik virtual machine or compiled into native code by the Android Runtime. Android does not provide the full Java SE standard library, although the Android SDK does include an independent implementation of a large subset of it. It supports Java 6 and some Java 7 features, offering an implementation compatible with the standard library (Apache Harmony).

Controversy

The use of Java-related technology in Android led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices. District Judge William Alsup ruled on May 31, 2012, that APIs cannot be copyrighted, but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014. On May 26, 2016, the district court decided in favor of Google, ruling that the copyright infringement of the Java API in Android constitutes fair use. In March 2018, this ruling was overturned by the Appeals Court, which remanded the case to the federal court in San Francisco to determine damages. Google filed a petition for writ of certiorari with the Supreme Court of the United States in January 2019 to challenge the two rulings that were made by the Appeals Court in Oracle's favor. On April 5, 2021, the Court ruled 6–2 in Google's favor, that its use of Java APIs should be considered fair use. However, the court refused to rule on the copyrightability of APIs, choosing instead to determine their ruling by considering Java's API copyrightable "purely for argument's sake."
Technology
Programming
null
15944
https://en.wikipedia.org/wiki/Jet%20engine
Jet engine
A jet engine is a type of reaction engine, discharging a fast-moving jet of heated gas (usually air) that generates thrust by jet propulsion. While this broad definition may include rocket, water jet, and hybrid propulsion, the term typically refers to an internal combustion air-breathing jet engine such as a turbojet, turbofan, ramjet, pulse jet, or scramjet. In general, jet engines are internal combustion engines. Air-breathing jet engines typically feature a rotating air compressor powered by a turbine, with the leftover power providing thrust through the propelling nozzle—this process is known as the Brayton thermodynamic cycle. Jet aircraft use such engines for long-distance travel. Early jet aircraft used turbojet engines that were relatively inefficient for subsonic flight. Most modern subsonic jet aircraft use more complex high-bypass turbofan engines. They give higher speed and greater fuel efficiency than piston and propeller aeroengines over long distances. A few air-breathing engines made for high-speed applications (ramjets and scramjets) use the ram effect of the vehicle's speed instead of a mechanical compressor. The thrust of a typical jetliner engine grew from that of the de Havilland Ghost turbojet in the 1950s to that of the General Electric GE90 turbofan in the 1990s, and engine reliability went from 40 in-flight shutdowns per 100,000 engine flight hours to less than 1 per 100,000 in the late 1990s. This, combined with greatly decreased fuel consumption, permitted routine transatlantic flight by twin-engined airliners by the turn of the century, where previously a similar journey would have required multiple fuel stops.

History

The principle of the jet engine is not new; however, the technical advances necessary to make the idea work did not come to fruition until the 20th century. A rudimentary demonstration of jet power dates back to the aeolipile, a device described by Hero of Alexandria in 1st-century Egypt. This device directed steam power through two nozzles to cause a sphere to spin rapidly on its axis. It was seen as a curiosity. Meanwhile, practical applications of the turbine can be seen in the water wheel and the windmill. Historians have further traced the theoretical origin of the principles of jet engines to traditional Chinese firework and rocket propulsion systems. Such devices' use for flight is documented in the story of Ottoman soldier Lagâri Hasan Çelebi, who reportedly achieved flight using a cone-shaped rocket in 1633. The earliest attempts at airbreathing jet engines were hybrid designs in which an external power source first compressed air, which was then mixed with fuel and burned for jet thrust. The Italian Caproni Campini N.1 and the Japanese Tsu-11 engine, intended to power Ohka kamikaze planes towards the end of World War II, were unsuccessful. Even before the start of World War II, engineers were beginning to realize that engines driving propellers were approaching limits due to issues related to propeller efficiency, which declined as blade tips approached the speed of sound. If aircraft performance were to increase beyond such a barrier, a different propulsion mechanism was necessary. This was the motivation behind the development of the gas turbine engine, the most common form of jet engine. The key to a practical jet engine was the gas turbine, extracting power from the engine itself to drive the compressor. The gas turbine was not a new idea: the patent for a stationary turbine was granted to John Barber in England in 1791.
The first gas turbine to run self-sustaining was built in 1903 by the Norwegian engineer Ægidius Elling. Such engines did not reach manufacture due to issues of safety, reliability, weight and, especially, sustained operation. The first patent for using a gas turbine to power an aircraft was filed in 1921 by Maxime Guillaume. His engine was an axial-flow turbojet, but it was never constructed, as it would have required considerable advances over the state of the art in compressors. Alan Arnold Griffith published An Aerodynamic Theory of Turbine Design in 1926, leading to experimental work at the RAE. In 1928, RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929, he developed his ideas further. On 16 January 1930, in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A. A. Griffith in a seminal paper in 1926 ("An Aerodynamic Theory of Turbine Design"). Whittle would later concentrate on the simpler centrifugal compressor only. Whittle was unable to interest the government in his invention, and development continued at a slow pace. In Spain, pilot and engineer Virgilio Leret Ruiz was granted a patent for a jet engine design in March 1935. Republican president Manuel Azaña arranged for initial construction at the Hispano-Suiza aircraft factory in Madrid in 1936, but Leret was executed months later by Francoist Moroccan troops after unsuccessfully defending his seaplane base in the first days of the Spanish Civil War. His plans, hidden from the Francoists, were secretly given to the British embassy in Madrid a few years later by his wife, Carlota O'Neill, upon her release from prison. In 1935, Hans von Ohain started work on a similar design to Whittle's in Germany, with both compressor and turbine being radial, on opposite sides of the same disc, initially unaware of Whittle's work. Von Ohain's first device was strictly experimental and could run only under external power, but he was able to demonstrate the basic concept. Ohain was then introduced to Ernst Heinkel, one of the larger aircraft industrialists of the day, who immediately saw the promise of the design. Heinkel had recently purchased the Hirth engine company, and Ohain and his master machinist Max Hahn were set up there as a new division of the Hirth company. They had their first HeS 1 centrifugal engine running by September 1937. Unlike Whittle's design, Ohain used hydrogen as fuel, supplied under external pressure. Their subsequent designs culminated in the gasoline-fuelled HeS 3, which was fitted to Heinkel's simple and compact He 178 airframe and flown by Erich Warsitz in the early morning of August 27, 1939, from Rostock-Marienehe aerodrome, after an impressively short development time. The He 178 was the world's first jet plane. Heinkel applied for a US patent covering the Aircraft Power Plant by Hans Joachim Pabst von Ohain on May 31, 1939; patent number US2256198, with M Hahn referenced as inventor. Von Ohain's design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s. Austrian Anselm Franz of Junkers' engine division (Junkers Motoren or "Jumo") introduced the axial-flow compressor in their jet engine.
Jumo was assigned the next engine number in the RLM 109-0xx numbering sequence for gas turbine aircraft powerplants, "004", and the result was the Jumo 004 engine. After many lesser technical difficulties were solved, mass production of this engine started in 1944 as a powerplant for the world's first jet-fighter aircraft, the Messerschmitt Me 262 (and later the world's first jet-bomber aircraft, the Arado Ar 234). A variety of reasons conspired to delay the engine's availability, causing the fighter to arrive too late to improve Germany's position in World War II; however, this was the first jet engine to be used in service. Meanwhile, in Britain the Gloster E.28/39 had its maiden flight on 15 May 1941 and the Gloster Meteor finally entered service with the RAF in July 1944. These were powered by turbojet engines from Power Jets Ltd., set up by Frank Whittle. The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor, entered service within three months of each other in 1944; the Me 262 in April and the Gloster Meteor in July. The Meteor only saw around 15 aircraft enter World War II action, while up to 1400 Me 262 were produced, with 300 entering combat, delivering the first ground attacks and air combat victories of jet planes. Following the end of the war, the German jet aircraft and jet engines were extensively studied by the victorious Allies and contributed to work on early Soviet and US jet fighters. The legacy of the axial-flow engine is seen in the fact that practically all jet engines on fixed-wing aircraft have had some inspiration from this design. By the 1950s, the jet engine was almost universal in combat aircraft, with the exception of cargo, liaison and other specialty types. By this point, some of the British designs were already cleared for civilian use, and had appeared on early models like the de Havilland Comet and Avro Canada Jetliner. By the 1960s, all large civilian aircraft were also jet powered, leaving the piston engine in low-cost niche roles such as cargo flights. The efficiency of turbojet engines was still rather worse than that of piston engines, but by the 1970s, with the advent of high-bypass turbofan jet engines (an innovation not foreseen by early commentators such as Edgar Buckingham, to whom flight at the high speeds and high altitudes involved seemed absurd), fuel efficiency was about the same as the best piston and propeller engines.

Uses

Jet engines power jet aircraft, cruise missiles and unmanned aerial vehicles. In the form of rocket engines they power model rocketry, spaceflight, and military missiles. Jet engines have propelled high speed cars, particularly drag racers, with the all-time record held by a rocket car. A turbofan powered car, ThrustSSC, currently holds the land speed record. Jet engine designs are frequently modified for non-aircraft applications, as industrial gas turbines or marine powerplants. These are used in electrical power generation, for powering water, natural gas, or oil pumps, and providing propulsion for ships and locomotives. Industrial gas turbines can create up to 50,000 shaft horsepower. Many of these engines are derived from older military turbojets such as the Pratt & Whitney J57 and J75 models. There is also a derivative of the P&W JT8D low-bypass turbofan that creates up to 35,000 horsepower (hp).
Jet engines are also sometimes developed into, or share certain components such as engine cores with, turboshaft and turboprop engines, which are forms of gas turbine engines that are typically used to power helicopters and some propeller-driven aircraft.

Types of jet engine

There are a large number of different types of jet engines, all of which achieve forward thrust from the principle of jet propulsion.

Airbreathing

Commonly aircraft are propelled by airbreathing jet engines. Most airbreathing jet engines that are in use are turbofan jet engines, which give good efficiency at speeds just below the speed of sound.

Turbojet

A turbojet engine is a gas turbine engine that works by compressing air with an inlet and a compressor (axial, centrifugal, or both), mixing fuel with the compressed air, burning the mixture in the combustor, and then passing the hot, high pressure air through a turbine and a nozzle. The compressor is powered by the turbine, which extracts energy from the expanding gas passing through it. The engine converts internal energy in the fuel to increased momentum of the gas flowing through the engine, producing thrust. All the air entering the compressor is passed through the combustor and turbine, unlike the turbofan engine described below.

Turbofan

Turbofans differ from turbojets in that they have an additional fan at the front of the engine, which accelerates air in a duct bypassing the core gas turbine engine. Turbofans are the dominant engine type for medium and long-range airliners. Turbofans are usually more efficient than turbojets at subsonic speeds, but at high speeds their large frontal area generates more drag. Therefore, in supersonic flight, and in military and other aircraft where other considerations have a higher priority than fuel efficiency, fans tend to be smaller or absent. Because of these distinctions, turbofan engine designs are often categorized as low-bypass or high-bypass, depending upon the amount of air which bypasses the core of the engine. Low-bypass turbofans have a bypass ratio of around 2:1 or less.

Propfan

A propfan engine is a type of airbreathing jet engine which combines aspects of the turboprop and the turbofan. Its design consists of a central gas turbine which drives open-air contra-rotating propellers. Unlike turboprop engines, in which the propeller and the engine are considered two separate products, the propfan's gas generator and its unshrouded propeller module are heavily integrated and are considered to be a single product. Additionally, the propfan's short, heavily twisted variable pitch blades closely resemble the ducted fan blades of turbofan engines. Propfans are designed to offer the speed and performance of turbofan engines with the fuel efficiency of turboprops. However, due to low fuel costs and high cabin noise, early propfan projects were abandoned. Very few aircraft have flown with propfans, with the Antonov An-70 being the first and only aircraft to fly while being powered solely by propfan engines.

Advanced technology engine

The term advanced technology engine refers to the modern generation of jet engines. The principle is that a turbine engine will function more efficiently if the various sets of turbines can revolve at their individual optimum speeds, instead of at the same speed. The true advanced technology engine has a triple spool, meaning that instead of having a single drive shaft, there are three, in order that the three sets of blades may revolve at different speeds.
An interim state is a twin-spool engine, allowing only two different speeds for the turbines.

Ram compression

Ram compression jet engines are airbreathing engines similar to gas turbine engines in so far as they both use the Brayton cycle. Gas turbine and ram compression engines differ, however, in how they compress the incoming airflow. Whereas gas turbine engines use axial or centrifugal compressors to compress incoming air, ram engines rely only on air compressed in the inlet or diffuser. A ram engine thus requires a substantial initial forward airspeed before it can function. Ramjets are considered the simplest type of air breathing jet engine because they have no moving parts in the engine proper, only in the accessories. Scramjets differ mainly in the fact that the air does not slow to subsonic speeds. Rather, they use supersonic combustion. They are efficient at even higher speeds. Very few have been built or flown.

Non-continuous combustion

Other types of jet propulsion

Rocket

The rocket engine uses the same basic physical principles of thrust as a form of reaction engine, but is distinct from the jet engine in that it does not require atmospheric air to provide oxygen; the rocket carries all components of the reaction mass. However some definitions treat it as a form of jet propulsion. Because rockets do not breathe air, this allows them to operate at arbitrary altitudes and in space. This type of engine is used for launching satellites, space exploration and crewed access, and permitted landing on the Moon in 1969. Rocket engines are used for high altitude flights, or anywhere where very high accelerations are needed, since rocket engines themselves have a very high thrust-to-weight ratio. However, the high exhaust speed and the heavier, oxidizer-rich propellant results in far more propellant use than turbofans. Even so, at extremely high speeds they become energy-efficient. An approximate equation for the net thrust of a rocket engine is:

$F_N = \dot{m}\, g_0\, I_{sp(vac)} - A_e\, p$

where $F_N$ is the net thrust, $I_{sp(vac)}$ is the (vacuum) specific impulse, $g_0$ is standard gravity, $\dot{m}$ is the propellant flow in kg/s, $A_e$ is the cross-sectional area at the exit of the exhaust nozzle, and $p$ is the atmospheric pressure.

Hybrid

Combined-cycle engines simultaneously use two or more different principles of jet propulsion.

Water jet

A water jet, or pump-jet, is a marine propulsion system that uses a jet of water. The mechanical arrangement may be a ducted propeller with nozzle, or a centrifugal compressor and nozzle. The pump-jet must be driven by a separate engine such as a Diesel or gas turbine.

General physical principles

All jet engines are reaction engines that generate thrust by emitting a jet of fluid rearwards at relatively high speed. The forces on the inside of the engine needed to create this jet give a strong thrust on the engine which pushes the craft forwards. Jet engines make their jet from propellant stored in tanks that are attached to the engine (as in a 'rocket'), or, in duct engines (those commonly used on aircraft), by ingesting an external fluid (very typically air) and expelling it at higher speed.

Propelling nozzle

A propelling nozzle produces a high velocity exhaust jet. Propelling nozzles turn internal and pressure energy into high velocity kinetic energy. The total pressure and temperature don't change through the nozzle but their static values drop as the gas speeds up. The velocity of the air entering the nozzle is low, about Mach 0.4, a prerequisite for minimizing pressure losses in the duct leading to the nozzle.
The temperature entering the nozzle may be as low as sea level ambient for a fan nozzle in the cold air at cruise altitudes. It may be as high as the 1000 K exhaust gas temperature for a supersonic afterburning engine, or 2200 K with the afterburner lit. The pressure entering the nozzle may vary from 1.5 times the pressure outside the nozzle, for a single stage fan, to 30 times for the fastest manned aircraft at Mach 3+. Convergent nozzles are only able to accelerate the gas up to local sonic (Mach 1) conditions. To reach high flight speeds, even greater exhaust velocities are required, and so a convergent-divergent nozzle is needed on high-speed aircraft. The engine thrust is highest if the static pressure of the gas reaches the ambient value as it leaves the nozzle. This only happens if the nozzle exit area is the correct value for the nozzle pressure ratio (npr). Since the npr changes with engine thrust setting and flight speed, this is seldom the case. Also, at supersonic speeds the divergent area is less than required to give complete internal expansion to ambient pressure, as a trade-off with external body drag. Whitford gives the F-16 as an example. Other underexpanded examples were the XB-70 and SR-71. The nozzle size, together with the area of the turbine nozzles, determines the operating pressure of the compressor.

Thrust

Energy efficiency relating to aircraft jet engines

This overview highlights where energy losses occur in complete jet aircraft powerplants or engine installations. A jet engine at rest, as on a test stand, sucks in fuel and generates thrust. How well it does this is judged by how much fuel it uses and what force is required to restrain it. This is a measure of its efficiency. If something deteriorates inside the engine (known as performance deterioration) it will be less efficient, and this will show when the fuel produces less thrust. If a change is made to an internal part which allows the air/combustion gases to flow more smoothly, the engine will be more efficient and use less fuel. A standard definition is used to assess how different things change engine efficiency and also to allow comparisons to be made between different engines. This definition is called specific fuel consumption, or how much fuel is needed to produce one unit of thrust. For example, it will be known for a particular engine design that if some bumps in a bypass duct are smoothed out, the air will flow more smoothly, giving a pressure loss reduction of x% and requiring y% less fuel to get the take-off thrust. This understanding comes under the engineering discipline of jet engine performance. How efficiency is affected by forward speed and by supplying energy to aircraft systems is mentioned later. The efficiency of the engine is controlled primarily by the operating conditions inside the engine, which are the pressure produced by the compressor and the temperature of the combustion gases at the first set of rotating turbine blades. The pressure is the highest air pressure in the engine. The turbine rotor temperature is not the highest in the engine, but is the highest at which energy transfer takes place (higher temperatures occur in the combustor). The above pressure and temperature are shown on a thermodynamic cycle diagram. The efficiency is further modified by how smoothly the air and the combustion gases flow through the engine, and by how well the flow is aligned (known as incidence angle) with the moving and stationary passages in the compressors and turbines.
Non-optimum angles, as well as non-optimum passage and blade shapes, can cause thickening and separation of boundary layers and formation of shock waves. It is important to slow the flow (lower speed means less pressure loss, or pressure drop) when it travels through ducts connecting the different parts. How well the individual components contribute to turning fuel into thrust is quantified by measures like efficiencies for the compressors, turbines and combustor, and pressure losses for the ducts. These are shown as lines on a thermodynamic cycle diagram. The engine efficiency, or thermal efficiency, known as $\eta_{th}$, is dependent on the thermodynamic cycle parameters (maximum pressure and temperature), on the component efficiencies $\eta_{compressor}$, $\eta_{combustion}$ and $\eta_{turbine}$, and on duct pressure losses.

The engine needs compressed air for itself just to run successfully. This air comes from its own compressor and is called secondary air. It does not contribute to making thrust, so makes the engine less efficient. It is used to preserve the mechanical integrity of the engine, to stop parts overheating and to prevent oil escaping from bearings, for example. Only some of this air taken from the compressors returns to the turbine flow to contribute to thrust production. Any reduction in the amount needed improves the engine efficiency. Again, it will be known for a particular engine design that a reduced requirement for cooling flow of x% will reduce the specific fuel consumption by y%. In other words, less fuel will be required to give take-off thrust, for example. The engine is more efficient.

All of the above considerations are basic to the engine running on its own and, at the same time, doing nothing useful, i.e. it is not moving an aircraft or supplying energy for the aircraft's electrical, hydraulic and air systems. In the aircraft the engine gives away some of its thrust-producing potential, or fuel, to power these systems. These requirements, which cause installation losses, reduce its efficiency. It is using some fuel that does not contribute to the engine's thrust.

Finally, when the aircraft is flying the propelling jet itself contains wasted kinetic energy after it has left the engine. This is quantified by the term propulsive, or Froude, efficiency $\eta_p$, and may be reduced by redesigning the engine to give it bypass flow and a lower speed for the propelling jet, for example as a turboprop or turbofan engine. At the same time, forward speed increases the thermal efficiency $\eta_{th}$ by increasing the overall pressure ratio. The overall efficiency of the engine at flight speed is defined as $\eta_o = \eta_p\,\eta_{th}$.

The $\eta_{th}$ at flight speed depends on how well the intake compresses the air before it is handed over to the engine compressors. The intake compression ratio, which can be as high as 32:1 at Mach 3, adds to that of the engine compressor to give the overall pressure ratio and $\eta_{th}$ for the thermodynamic cycle. How well it does this is defined by its pressure recovery, a measure of the losses in the intake. Mach 3 manned flight has provided an interesting illustration of how these losses can increase dramatically in an instant. The North American XB-70 Valkyrie and Lockheed SR-71 Blackbird at Mach 3 each had pressure recoveries of about 0.8, due to relatively low losses during the compression process, i.e. through systems of multiple shocks. During an 'unstart' the efficient shock system would be replaced by a very inefficient single shock beyond the inlet, with an intake pressure recovery of about 0.3 and a correspondingly low pressure ratio.
The propelling nozzle at speeds above about Mach 2 usually has extra internal thrust losses because the exit area is not big enough as a trade-off with external afterbody drag.

Although a bypass engine improves propulsive efficiency, it incurs losses of its own inside the engine itself. Machinery has to be added to transfer energy from the gas generator to a bypass airflow. The low loss from the propelling nozzle of a turbojet is added to with extra losses due to inefficiencies in the added turbine and fan. These may be included in a transmission, or transfer, efficiency $\eta_T$. However, these losses are more than made up for by the improvement in propulsive efficiency. There are also extra pressure losses in the bypass duct and an extra propelling nozzle. With the advent of turbofans with their loss-making machinery, what goes on inside the engine has been separated by Bennett, for example, between gas generator and transfer machinery, giving $\eta_o = \eta_p\,\eta_{th}\,\eta_T$.

The energy efficiency ($\eta_o$) of jet engines installed in vehicles has two main components:

propulsive efficiency ($\eta_p$): how much of the energy of the jet ends up in the vehicle body rather than being carried away as kinetic energy of the jet.
cycle efficiency ($\eta_{th}$): how efficiently the engine can accelerate the jet.

Even though the overall energy efficiency is $\eta_o = \eta_p\,\eta_{th}$, for all jet engines the propulsive efficiency is highest as the exhaust jet velocity gets closer to the vehicle speed, as this gives the smallest residual kinetic energy. For an airbreathing engine, an exhaust velocity equal to the vehicle velocity, or $\eta_p$ equal to one, gives zero thrust with no net momentum change. The formula for air-breathing engines moving at speed $v$ with an exhaust velocity $v_e$, and neglecting fuel flow, is:

$\eta_p = \frac{2}{1 + \frac{v_e}{v}}$

And for a rocket:

$\eta_p = \frac{2\,(v/v_e)}{1 + (v/v_e)^2}$

In addition to propulsive efficiency, another factor is cycle efficiency; a jet engine is a form of heat engine. Heat engine efficiency is determined by the ratio of temperatures reached in the engine to that exhausted at the nozzle. This has improved constantly over time as new materials have been introduced to allow higher maximum cycle temperatures. For example, composite materials, combining metals with ceramics, have been developed for HP turbine blades, which run at the maximum cycle temperature. The efficiency is also limited by the overall pressure ratio that can be achieved. Cycle efficiency is highest in rocket engines (~60+%), as they can achieve extremely high combustion temperatures. Cycle efficiency in turbojets and similar engines is nearer to 30%, due to much lower peak cycle temperatures.

The combustion efficiency of most aircraft gas turbine engines at sea level takeoff conditions is almost 100%. It decreases nonlinearly to 98% at altitude cruise conditions. The air-fuel ratio ranges from 50:1 to 130:1. For any type of combustion chamber there is a rich and a weak limit to the air-fuel ratio, beyond which the flame is extinguished. The range of air-fuel ratio between the rich and weak limits is reduced with an increase of air velocity. If the increasing air mass flow reduces the fuel ratio below a certain value, flame extinction occurs.

Consumption of fuel or propellant

A closely related (but different) concept to energy efficiency is the rate of consumption of propellant mass. Propellant consumption in jet engines is measured by specific fuel consumption, specific impulse, or effective exhaust velocity. They all measure the same thing. Specific impulse and effective exhaust velocity are strictly proportional, whereas specific fuel consumption is inversely proportional to the others.
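A minimal numerical sketch of the efficiency and consumption relations above; the function names and example values are illustrative and not taken from the text:

```python
# Sketch of the propulsive-efficiency and specific-impulse relations above.
# Function names and example values are ours, not from the source.

G0 = 9.80665  # standard gravity, m/s^2

def eta_p_airbreathing(v, v_e):
    """Propulsive efficiency 2 / (1 + v_e / v), neglecting fuel flow."""
    return 2.0 / (1.0 + v_e / v)

def eta_p_rocket(v, v_e):
    """Propulsive efficiency 2 (v / v_e) / (1 + (v / v_e)**2) for a rocket."""
    r = v / v_e
    return 2.0 * r / (1.0 + r * r)

def exhaust_velocity_from_isp(isp_s):
    """Effective exhaust velocity (m/s); strictly proportional to Isp (s)."""
    return isp_s * G0

# An aircraft at 250 m/s with a 300 m/s propelling jet:
print(eta_p_airbreathing(250.0, 300.0))   # ~0.91
print(exhaust_velocity_from_isp(450.0))   # ~4413 m/s for an Isp of 450 s
```

Note how the air-breathing propulsive efficiency approaches one as the jet velocity approaches the flight speed, which is the motivation for bypass engines given above.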
For air-breathing engines such as turbojets, energy efficiency and propellant (fuel) efficiency are much the same thing, since the propellant is a fuel and the source of energy. In rocketry, the propellant is also the exhaust, and this means that a high energy propellant gives better propellant efficiency but can in some cases actually give lower energy efficiency. Subsonic turbofans such as General Electric's CF6 use a lot less fuel to generate thrust for a second than did the Concorde's Rolls-Royce/Snecma Olympus 593 turbojet. However, since energy is force times distance and the distance per second was greater for the Concorde, the actual power generated by the engine for the same amount of fuel was higher for the Concorde at Mach 2 than for the CF6. Thus, the Concorde's engines were more efficient in terms of energy per distance traveled.

Thrust-to-weight ratio

The thrust-to-weight ratio of jet engines with similar configurations varies with scale, but is mostly a function of engine construction technology. For a given engine, the lighter the engine, the better its thrust-to-weight ratio, and the less fuel is used to compensate for drag due to the lift needed to carry the engine weight, or to accelerate the mass of the engine. Rocket engines generally achieve much higher thrust-to-weight ratios than duct engines such as turbojet and turbofan engines. This is primarily because rockets almost universally use dense liquid or solid reaction mass, which gives a much smaller volume, and hence the pressurization system that supplies the nozzle is much smaller and lighter for the same performance. Duct engines have to deal with air, which is two to three orders of magnitude less dense, and this gives pressures over much larger areas, which in turn results in more engineering materials being needed to hold the engine together and for the air compressor.

Comparison of types

Propeller engines handle larger air mass flows, and give them smaller acceleration, than jet engines. Since the increase in air speed is small, at high flight speeds the thrust available to propeller-driven aeroplanes is small. However, at low speeds, these engines benefit from relatively high propulsive efficiency. On the other hand, turbojets accelerate a much smaller mass flow of intake air and burned fuel, but they then reject it at very high speed. When a de Laval nozzle is used to accelerate a hot engine exhaust, the outlet velocity may be locally supersonic. Turbojets are particularly suitable for aircraft travelling at very high speeds.

Turbofans have a mixed exhaust consisting of the bypass air and the hot combustion product gas from the core engine. The amount of air that bypasses the core engine compared to the amount flowing into the engine determines what is called a turbofan's bypass ratio (BPR). While a turbojet engine uses all of the engine's output to produce thrust in the form of a hot high-velocity exhaust gas jet, a turbofan's cool low-velocity bypass air yields between 30% and 70% of the total thrust produced by a turbofan system. The net thrust ($F_N$) generated by a turbofan can also be expanded as:

$F_N = \dot{m}_e v_{he} - \dot{m}_o v_o + BPR\, \dot{m}_c v_f$

where:
$\dot{m}_e$ = the mass rate of hot combusted exhaust flow from the core engine
$\dot{m}_o$ = the mass rate of total air flow entering the turbofan = $\dot{m}_c + \dot{m}_f$
$\dot{m}_c$ = the mass rate of intake air that flows to the core engine
$\dot{m}_f$ = the mass rate of intake air that bypasses the core engine
$v_f$ = the velocity of the air flow bypassed around the core engine
$v_{he}$ = the velocity of the hot exhaust gas from the core engine
$v_o$ = the velocity of the total air intake, equal to the true airspeed of the aircraft
$BPR$ = bypass ratio

Rocket engines have extremely high exhaust velocity and thus are best suited for high speeds (hypersonic) and great altitudes.
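As a rough illustration of the turbofan thrust expansion above, the following sketch evaluates it with made-up example values (all numbers are assumptions, and fuel mass flow is neglected, so the hot exhaust flow is taken equal to the core airflow):

```python
# Illustrative evaluation of the turbofan net-thrust expansion above.
# All numbers are made-up example values; fuel mass flow is neglected.

def turbofan_net_thrust(m_core, bpr, v_hot, v_bypass, v_flight):
    m_total = (1.0 + bpr) * m_core            # total intake flow, core + bypass
    return m_core * v_hot - m_total * v_flight + bpr * m_core * v_bypass

# 100 kg/s core flow, BPR of 8, 500 m/s core jet, 300 m/s bypass jet,
# flying at 250 m/s:
print(turbofan_net_thrust(100.0, 8.0, 500.0, 300.0, 250.0))  # 65000.0 N
```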
At any given throttle, the thrust and efficiency of a rocket motor improve slightly with increasing altitude (because the back-pressure falls, thus increasing net thrust at the nozzle exit plane), whereas with a turbojet (or turbofan) the falling density of the air entering the intake (and the hot gases leaving the nozzle) causes the net thrust to decrease with increasing altitude. Rocket engines are more efficient than even scramjets above roughly Mach 15.

Altitude and speed

With the exception of scramjets, jet engines, deprived of their inlet systems, can only accept air at around half the speed of sound. The inlet system's job for transonic and supersonic aircraft is to slow the air and perform some of the compression. The limit on maximum altitude for engines is set by flammability – at very high altitudes the air becomes too thin to burn, or, after compression, too hot. For turbojet engines, altitudes of about 40 km appear to be possible, whereas for ramjet engines 55 km may be achievable. Scramjets may theoretically manage 75 km. Rocket engines, of course, have no upper limit.

At more modest altitudes, flying faster compresses the air at the front of the engine, and this greatly heats the air. The upper limit is usually thought to be about Mach 5–8, as above about Mach 5.5, the atmospheric nitrogen tends to react due to the high temperatures at the inlet, and this consumes significant energy. The exception to this is scramjets, which may be able to achieve about Mach 15 or more, as they avoid slowing the air; rockets again have no particular speed limit.

Noise

The noise emitted by a jet engine has many sources. These include, in the case of gas turbine engines, the fan, compressor, combustor, turbine and propelling jet(s). The propelling jet produces jet noise, which is caused by the violent mixing action of the high speed jet with the surrounding air. In the subsonic case the noise is produced by eddies, and in the supersonic case by Mach waves. The sound power radiated from a jet varies with the jet velocity raised to the eighth power at lower jet velocities, and with the velocity cubed at higher jet velocities. Thus, the lower speed exhaust jets emitted from engines such as high bypass turbofans are the quietest, whereas the fastest jets, such as rockets, turbojets, and ramjets, are the loudest. For commercial jet aircraft, the jet noise has reduced from the turbojet through bypass engines to turbofans, as a result of a progressive reduction in propelling jet velocities. For example, the JT8D, a bypass engine, has a considerably higher jet velocity than the JT9D turbofan, whose cold (bypass) and hot (core) jet velocities are both lower. The advent of the turbofan replaced the very distinctive jet noise with another sound known as "buzz saw" noise. The origin is the shockwaves originating at the supersonic fan blade tips at takeoff thrust.

Cooling

Adequate heat transfer away from the working parts of the jet engine is critical to maintaining the strength of engine materials and ensuring a long life for the engine. As of 2016, research is ongoing into applying transpiration cooling techniques to jet engine components.

Operation

In a jet engine, each major rotating section usually has a separate gauge devoted to monitoring its speed of rotation. Depending on the make and model, a jet engine may have an N1 gauge that monitors the low-pressure compressor section and/or fan speed in turbofan engines. The gas generator section may be monitored by an N2 gauge, while triple-spool engines may have an N3 gauge as well.
Each engine section rotates at many thousands of RPM. The gauges are therefore calibrated in percent of a nominal speed rather than in actual RPM, for ease of display and interpretation.
JPEG
JPEG (short for Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable trade-off between storage size and image quality. JPEG typically achieves 10:1 compression with a noticeable but widely accepted loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet and, later, social media.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG. The MIME media type for JPEG is "image/jpeg", except in older Internet Explorer versions, which provide a MIME type of "image/pjpeg" when uploading JPEG images. JPEG files usually have a filename extension of "jpg" or "jpeg". JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels for an aspect ratio of 1:1. In 2000, the JPEG group introduced a format intended to be a successor, JPEG 2000, but it was unable to replace the original JPEG as the dominant image standard.

History

Background

The original JPEG specification published in 1992 implements processes from various earlier research papers and patents cited by the CCITT (now ITU-T) and the Joint Photographic Experts Group. The JPEG specification cites patents from several companies. The following patents provided the basis for its arithmetic coding algorithm.

IBM:
February 4, 1986 – Kottappuram M. A. Mohiuddin and Jorma J. Rissanen: Multiplication-free multi-alphabet arithmetic code
February 27, 1990 – G. Langdon, J. L. Mitchell, W. B. Pennebaker, and Jorma J. Rissanen: Arithmetic coding encoder and decoder system
June 19, 1990 – W. B. Pennebaker and J. L. Mitchell: Probability adaptation for arithmetic coders

Mitsubishi Electric:
(1021672) January 21, 1989 – Toshihiro Kimura, Shigenori Kino, Fumitaka Ono, Masayuki Yoshida: Coding system
(2-46275) February 26, 1990 – Tomohiro Kimura, Shigenori Kino, Fumitaka Ono, and Masayuki Yoshida: Coding apparatus and coding method

The JPEG specification also cites three other patents from IBM. Other companies cited as patent holders include AT&T (two patents) and Canon Inc. Absent from the list is a patent filed by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke in October 1986. The patent describes a DCT-based image compression algorithm, and would later be a cause of controversy in 2002 (see Patent controversy below). However, the JPEG specification did cite two earlier research papers by Wen-Hsiung Chen, published in 1977 and 1984.

JPEG standard

"JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and other still picture coding standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. Founded in 1986, the group developed the JPEG standard during the late 1980s. The group published the JPEG standard in 1992.
In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. Currently, on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1), titled Coding of still pictures. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and, in 1994, as ISO/IEC 10918-1.

The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream. The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.

JPEG standards are formally named Information technology – Digital compression and coding of continuous-tone still images. ISO/IEC 10918 consists of several parts. Ecma International TR/98 specifies the JPEG File Interchange Format (JFIF); the first edition was published in June 2009.

Patent controversy

In 2002, Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987, to Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke. While Forgent did not own Compression Labs at the time, Chen later sold Compression Labs to Forgent, before Chen went on to work for Cisco. This led to Forgent acquiring ownership over the patent. Forgent's 2002 announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

The JPEG committee investigated the patent claims in 2002 and was of the opinion that they were invalidated by prior art, a view shared by various experts. Between 2002 and 2004, Forgent was able to obtain about US$105 million by licensing their patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit, with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005. In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation. On May 26, 2006, the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art, yet intentionally avoided telling the Patent Office. This makes any appeal to reinstate the patent highly unlikely to succeed. Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is.

As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard. The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and they have secured appropriate license rights for their JPEG 2000 standard from over 20 large organizations.

Beginning in August 2007, another company, Global Patent Holdings, LLC, claimed that its patent, issued in 1993, is infringed by the downloading of JPEG images on either a website or through e-mail.
If not invalidated, this patent could apply to any website that displays JPEG images. The patent was under reexamination by the U.S. Patent and Trademark Office from 2000 to 2007; in July 2007, the Patent Office revoked all of the original claims of the patent but found that an additional claim proposed by Global Patent Holdings (claim 17) was valid. Global Patent Holdings then filed a number of lawsuits based on claim 17 of its patent.

In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, Officemax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007, in South Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008, in South Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and sought a judicial declaration that the '341 patent is invalid and not infringed.

Global Patent Holdings had also used the '341 patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian and the anonymous operator of a website blog known as the "Patent Troll Tracker." On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art. On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity. In light of the reexamination, the accused infringers in four of the five pending lawsuits filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois, granted the motions in those cases. On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds. On November 24, 2009, a Reexamination Certificate was issued cancelling all claims.

Beginning in 2011 and continuing as of early 2013, an entity known as Princeton Digital Image Corporation, based in Eastern Texas, began suing large numbers of companies for alleged infringement of the '056 patent. Princeton claims that the JPEG image compression standard infringes the '056 patent and has sued large numbers of websites, retailers, camera and device manufacturers and resellers. The patent was originally owned and assigned to General Electric. The patent expired in December 2007, but Princeton has sued large numbers of companies for "past infringement" of this patent. (Under U.S. patent laws, a patent owner can sue for "past infringement" up to six years before the filing of a lawsuit, so Princeton could theoretically have continued suing companies until December 2013.) As of March 2013, Princeton had suits pending in New York and Delaware against more than 55 companies. General Electric's involvement in the suit is unknown, although court records indicate that it assigned the patent to Princeton in 2009 and retains certain rights in the patent.
Typical use

The JPEG compression algorithm operates at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where reducing the amount of data used for an image is important for responsive presentation, JPEG's compression benefits make JPEG popular. JPEG/Exif is also the most common format saved by digital cameras. However, JPEG is not well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard includes a lossless coding mode, but that mode is not supported in most products.

As JPEG is typically used as a lossy compression method, which reduces the image fidelity, it is inappropriate for exact reproduction of imaging data (such as some scientific and medical imaging applications and certain technical image processing work). JPEG is also not well suited to files that will undergo multiple edits, as some image quality is lost each time the image is recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To prevent image information loss during sequential and repetitive editing, the first edit can be saved in a lossless format, subsequently edited in that format, then finally published as JPEG for distribution.

JPEG compression

JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each block of the image from the spatial (2D) domain into the frequency domain (a.k.a. transform domain). A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity and color hue. In the transform domain, the process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade off picture quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application.

The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard. However, this mode is not widely supported in products.

There is also an interlaced progressive JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal.
When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7), the software displays the image only after it has been completely downloaded. There are also many medical imaging, traffic and camera applications that create and process 12-bit JPEG images, both grayscale and color. The 12-bit JPEG format is included in the Extended part of the JPEG specification. The libjpeg codec supports 12-bit JPEG, and there even exists a high-performance version.

Lossless editing

Several alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include:

jpegtran and its GUI, Jpegcrop.
IrfanView, using "JPG Lossless Crop (PlugIn)" and "JPG Lossless Rotation (PlugIn)", which require installing the JPG_TRANSFORM plugin.
FastStone Image Viewer, using "Lossless Crop to File" and "JPEG Lossless Rotate".
XnViewMP, using "JPEG lossless transformations".
ACDSee, which supports lossless rotation (but not lossless cropping) with its "Force lossless JPEG operations" option.

Blocks can be rotated in 90-degree increments, flipped in the horizontal, vertical and diagonal axes, and moved about in the image. Not all blocks from the original image need to be used in the modified one. The top and left edge of a JPEG image must lie on an 8 × 8 pixel block boundary (or 16 × 16 pixel for larger MCU sizes), but the bottom and right edge need not do so. This limits the possible lossless crop operations, and prevents flips and rotations of an image whose bottom or right edge does not lie on a block boundary for all channels (because the edge would end up on top or left, where – as aforementioned – a block boundary is obligatory). Rotations of an image whose width or height is not a multiple of 8 or 16 (which value depends upon the chroma subsampling) are not lossless. Rotating such an image causes the blocks to be recomputed, which results in loss of quality.

When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary, then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered. It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file. Furthermore, several JPEG images can be losslessly joined, as long as they were saved with the same quality and the edges coincide with block boundaries.

JPEG files

The file format known as "JPEG Interchange Format" (JIF) is specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:

Color space definition
Component sub-sampling registration
Pixel aspect ratio definition

Several additional standards have evolved to address these issues. The first of these, released in 1992, was the JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles.
Both of these formats use the actual JIF byte layout, consisting of different markers, but in addition employ one of the JIF standard's extension points, namely the application markers: JFIF uses APP0, while Exif uses APP1. Within these segments of the file, which were left for future use in the JIF standard and are not read by it, these standards add specific metadata. Thus, in some ways, JFIF is a cut-down version of the JIF standard in that it specifies certain constraints (such as not allowing all the different encoding modes), while in other ways, it is an extension of JIF due to the added metadata. The documentation for the original JFIF standard describes it as a minimal file format that enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications.

Image files that employ JPEG compression are commonly called "JPEG files", and are stored in variants of the JIF image format. Most image capture devices (such as digital cameras) that output JPEG are actually creating files in the Exif format, the format that the camera industry has standardized on for metadata interchange. On the other hand, since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format, and includes the APP1 segment from the Exif file to include the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly.

Strictly speaking, the JFIF and Exif standards are incompatible, because each specifies that its marker segment (APP0 or APP1, respectively) appear first. In practice, most JPEG files contain a JFIF marker segment that precedes the Exif header. This allows older readers to correctly handle the older format JFIF segment, while newer readers also decode the following Exif segment, being less strict about requiring it to appear first.

JPEG filename extensions

The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types – TIFF encoded files often embed a JPEG image as a thumbnail of the main image, and MP3 files can contain a JPEG of cover art in the ID3v2 tag.

Color profile

Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops; see gamma curve. If the image doesn't specify color profile information (untagged), the color space is assumed to be sRGB for the purposes of display on webpages.

Syntax and structure

A JPEG image consists of a sequence of segments, each beginning with a marker, each of which begins with a 0xFF byte, followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes (high then low) indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes, although this fill byte padding should only ever take place for markers immediately following entropy-coded scan data (see JPEG specification sections B.1.1.2 and E.1.2 for details; specifically "In all cases where markers are appended after the compressed data, optional 0xFF fill bytes may precede the marker").
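The segment structure just described, together with the byte stuffing covered in the next paragraph, can be walked with a few lines of code. The following Python sketch is illustrative only (the file name is hypothetical, and a production parser would handle more corner cases):

```python
# A rough sketch of walking JPEG marker segments as described above.
# Handles stand-alone markers, length-prefixed segments, fill bytes,
# and the 0xFF 0x00 byte stuffing inside entropy-coded data.

import struct

STANDALONE = {0xD8, 0xD9} | set(range(0xD0, 0xD8))  # SOI, EOI, RST0-RST7

def iter_segments(data: bytes):
    i = 0
    while i < len(data) - 1:
        if data[i] != 0xFF:
            i += 1                       # inside entropy-coded scan data
            continue
        marker = data[i + 1]
        if marker == 0x00:               # stuffed byte, not a marker
            i += 2
            continue
        if marker == 0xFF:               # optional fill byte before a marker
            i += 1
            continue
        if marker in STANDALONE:
            yield marker, b""
            i += 2
        else:                            # length includes its own two bytes
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            yield marker, data[i + 4:i + 2 + length]
            i += 2 + length

with open("example.jpg", "rb") as f:     # hypothetical file name
    for marker, payload in iter_segments(f.read()):
        print(f"FF{marker:02X}: {len(payload)} payload bytes")
```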
Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing (see JPEG specification section F.1.2.3), is only applied to the entropy-coded data, not to marker payload data. Note however that entropy-coded data has a few markers of its own; specifically the Reset markers (0xD0 through 0xD7), which are used to isolate independent chunks of entropy-coded data to allow parallel decoding, and encoders are free to insert these Reset markers at regular intervals (although not all encoders do this). There are other Start Of Frame markers that introduce other kinds of JPEG encodings. Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.

JPEG codec example

Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:

The representation of the colors in the image is converted from RGB to Y′CBCR, consisting of one luma component (Y'), representing brightness, and two chroma components (CB and CR), representing color. This step is sometimes skipped.
The resolution of the chroma data is reduced, usually by a factor of 2 or 3. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
The image is split into blocks of 8×8 pixels, and for each block, each of the Y, CB, and CR data undergoes the discrete cosine transform (DCT). A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum.
The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

The decoding process reverses these steps, except the quantization, because it is irreversible. In the remainder of this section, the encoding and decoding processes are described in more detail.

Encoding

Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue).
This particular option is a lossy data compression method.

Color space transformation

First, the image should be converted from RGB (by default sRGB, but other color spaces are possible) into a different color space called Y′CBCR (or, informally, YCbCr). It has three components Y', CB and CR: the Y' component represents the brightness of a pixel, and the CB and CR components represent the chrominance (split into blue and red components). This is basically the same color space as used by digital color television as well as digital video, including video DVDs. The color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.

A particular conversion to Y′CBCR is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, where the image is stored in separate channels for red, green and blue brightness components. This results in less efficient compression, and would not likely be used when file size is especially important.

Downsampling

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y' component) than in the hue and color saturation of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently. The transformation into the Y′CBCR color model enables the next usual step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling is ordinarily done for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y', Cb and Cr are processed separately and in a very similar manner.

Block splitting

After subsampling, each channel must be split into 8×8 blocks. Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0). In video compression MCUs are called macroblocks. If the data for a channel does not represent an integer number of blocks, then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edges with a fixed color (for example, black) can create ringing artifacts along the visible part of the border; repeating the edge pixels is a common technique that reduces (but does not necessarily eliminate) such artifacts, and more sophisticated border filling techniques can also be applied.

Discrete cosine transform

Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT); see Citation 1 in discrete cosine transform.
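Before turning to the transform itself, here is a minimal sketch of the color transform and 4:2:0 downsampling steps just described, using the JFIF full-range conversion (the function names and stand-in image are ours):

```python
# Sketch of the JFIF color transform and 4:2:0 chroma subsampling.

import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (H, W, 3), values 0-255 (JFIF full range)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def subsample_420(chroma):
    """Halve chroma resolution in both directions by 2x2 averaging."""
    h, w = chroma.shape
    return (chroma[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))

img = np.random.randint(0, 256, (16, 16, 3)).astype(float)  # stand-in image
y, cb, cr = rgb_to_ycbcr(img)
print(y.shape, subsample_420(cb).shape)  # (16, 16) (8, 8)
```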
The DCT is sometimes referred to as "type-II DCT" in the context of a family of transforms as in discrete cosine transform, and the corresponding inverse (IDCT) is denoted as "type-III DCT". As an example, consider some 8×8 8-bit subimage. Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centered on zero. For an 8-bit image, each entry in the original block falls in the range $[0, 255]$. The midpoint of the range (in this case, the value 128) is subtracted from each entry to produce a data range that is centered on zero, so that the modified range is $[-128, 127]$. This step reduces the dynamic range requirements in the DCT processing stage that follows.

The next step is to take the two-dimensional DCT, which is given by:

$G_{u,v} = \frac{1}{4}\,\alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right]$

where

$u$ is the horizontal spatial frequency, for the integers $0 \le u < 8$.
$v$ is the vertical spatial frequency, for the integers $0 \le v < 8$.
$\alpha(u) = \begin{cases} \frac{1}{\sqrt{2}}, & \text{if } u = 0 \\ 1, & \text{otherwise} \end{cases}$ is a normalizing scale factor to make the transformation orthonormal.
$g_{x,y}$ is the pixel value at coordinates $(x, y)$.
$G_{u,v}$ is the DCT coefficient at coordinates $(u, v)$.

Performing this transformation on a block yields an 8×8 array of coefficients (conventionally rounded to two digits beyond the decimal point). Note the top-left corner entry, which typically has a rather large magnitude. This is the DC coefficient (also called the constant component), which defines the basic hue for the entire block. The remaining 63 coefficients are the AC coefficients (also called the alternating components). The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage.

The DCT temporarily increases the bit-depth of the data, since the DCT coefficients of an 8-bit/component image take up to 11 or more bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit numbers to hold these coefficients, doubling the size of the image representation at this point; these values are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, since typically only a very small part of the image is stored in full DCT form at any given time during the image encoding or decoding process.

Quantization

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent. The elements in the quantization matrix control the compression ratio, with larger values producing greater compression.
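The DCT formula above translates directly into code. The following is a deliberately slow but transparent sketch (real codecs use fast factored transforms); the function name is ours:

```python
# Direct implementation of the level shift and the normalized
# 8x8 type-II DCT defined above.

import numpy as np

def dct2_8x8(block):
    """block: 8x8 array of 8-bit samples; returns the coefficients G[u, v]."""
    g = block.astype(float) - 128.0           # shift [0, 255] to [-128, 127]
    G = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            alpha_u = 1.0 / np.sqrt(2) if u == 0 else 1.0
            alpha_v = 1.0 / np.sqrt(2) if v == 0 else 1.0
            s = sum(g[x, y]
                    * np.cos((2 * x + 1) * u * np.pi / 16)
                    * np.cos((2 * y + 1) * v * np.pi / 16)
                    for x in range(8) for y in range(8))
            G[u, v] = 0.25 * alpha_u * alpha_v * s
    return G
```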
A typical quantization matrix (for a quality of 50%, as specified in the original JPEG Standard) is the luminance table given in the standard's Annex K:

$Q = \begin{bmatrix} 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\ 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\ 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\ 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\ 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\ 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\ 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\ 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99 \end{bmatrix}$

The quantized DCT coefficients are computed with

$B_{j,k} = \mathrm{round}\left(\frac{G_{j,k}}{Q_{j,k}}\right) \quad \text{for } j,k = 0, 1, 2, \ldots, 7$

where $G$ is the unquantized DCT coefficients, $Q$ is the quantization matrix above, and $B$ is the quantized DCT coefficients. This quantization matrix is applied entry by entry to the DCT coefficient matrix obtained above. For example, using −415 (the DC coefficient) and rounding to the nearest integer:

$B_{0,0} = \mathrm{round}\left(\frac{-415}{16}\right) = \mathrm{round}(-25.94) = -26$

Notice that most of the higher-frequency elements of the sub-block (i.e., those with an x or y spatial frequency greater than 4) are quantized into zero values.

Entropy coding

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length coding zeros, and then using Huffman coding on what is left. The JPEG standard also allows, but does not require, decoders to support the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by patents requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5–7% smaller.

The previous quantized DC coefficient is used to predict the current quantized DC coefficient. The difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing. The zigzag sequence for the above quantized coefficients is shown below. (The format shown is just for ease of understanding/viewing.)

{| style="text-align: right"
|-
| style="width: 2em"| −26 || style="width: 2em"| || style="width: 2em"| || style="width: 2em"| || style="width: 2em"| || style="width: 2em"| || style="width: 2em"| || style="width: 2em"|
|-
| −3 || 0
|-
| −3 || −2 || −6
|-
| 2 || −4 || 1 || −3
|-
| 1 || 1 || 5 || 1 || 2
|-
| −1 || 1 || −1 || 2 || 0 || 0
|-
| 0 || 0 || 0 || −1 || −1 || 0 || 0
|-
| 0 || 0 || 0 || 0 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 0 || 0
|-
| 0 || 0 || 0 || 0
|-
| 0 || 0 || 0
|-
| 0 || 0
|-
| 0
|}

If the $i$-th block is represented by $B_i$ and positions within each block are represented by $(p, q)$, where $p = 0, 1, \ldots, 7$ and $q = 0, 1, \ldots, 7$, then any coefficient in the DCT image can be represented as $B_i(p, q)$. Thus, in the above scheme, the order of encoding pixels (for the $i$-th block) is $B_i(0,0)$, $B_i(0,1)$, $B_i(1,0)$, $B_i(2,0)$, $B_i(1,1)$, $B_i(0,2)$, $B_i(0,3)$, $B_i(1,2)$, and so on.

This encoding mode is called baseline sequential encoding. Baseline JPEG also supports progressive encoding. While sequential encoding encodes coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes similar-positioned batches of coefficients of all blocks in one go (called a scan), followed by the next batch of coefficients of all blocks, and so on. For example, if the image is divided into N 8×8 blocks $B_1, B_2, \ldots, B_N$, then a 3-scan progressive encoding encodes the DC component $B_i(0,0)$ for all blocks, i.e., for all $i$, in the first scan. This is followed by the second scan, which encodes a few more components (assuming four more components, they are $B_i(0,1)$ to $B_i(1,1)$, still in a zigzag manner) of all blocks, followed by all the remaining coefficients of all blocks in the last scan. Once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above.
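The quantization and zigzag steps above, together with the (RUNLENGTH, SIZE)(AMPLITUDE) symbol scheme explained in the next paragraphs, can be sketched as follows (this is our own illustration, not the standard's reference code; Huffman coding of the symbols is omitted):

```python
# Sketch of quantization, zigzag scan, and AC run-length symbol generation.

import numpy as np

Q50 = np.array([  # the Annex K luminance table shown above
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quantize(G, Q=Q50):
    """B[j,k] = round(G[j,k] / Q[j,k])."""
    return np.rint(G / Q).astype(int)

# Zigzag order: walk the anti-diagonals, alternating direction.
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else p[1]))

def ac_symbols(B):
    """Emit (RUNLENGTH, SIZE)(AMPLITUDE) pairs for the 63 AC coefficients."""
    zz = [int(B[i, j]) for i, j in ZIGZAG]
    ac, symbols, run = zz[1:], [], 0      # zz[0] is the DC coefficient
    last = max((k for k, v in enumerate(ac) if v != 0), default=-1)
    for v in ac[:last + 1]:
        if v == 0:
            run += 1
            if run == 16:                 # sixteen zeros: special (15, 0) symbol
                symbols.append(((15, 0), 0))
                run = 0
        else:
            symbols.append(((run, abs(v).bit_length()), v))
            run = 0
    if last < len(ac) - 1:
        symbols.append(((0, 0), None))    # End-of-Block
    return zz[0], symbols
```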
It has been found that baseline progressive JPEG encoding usually gives better compression than baseline sequential JPEG, due to the ability to use different Huffman tables (see below) tailored for different frequencies on each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not too large. In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in the images being encoded. The process of encoding the zig-zag quantized data begins with a run-length encoding explained below, where:

$x$ is the non-zero, quantized AC coefficient.
RUNLENGTH is the number of zeroes that came before this non-zero AC coefficient.
SIZE is the number of bits required to represent $x$.
AMPLITUDE is the bit-representation of $x$.

The run-length encoding works by examining each non-zero AC coefficient and determining how many zeroes came before it, since the previous non-zero AC coefficient. With this information, two symbols are created:

{| style="text-align: center" class="wikitable"
|-
! Symbol 1 || Symbol 2
|-
| (RUNLENGTH, SIZE) || (AMPLITUDE)
|}

Both RUNLENGTH and SIZE rest on the same byte, meaning that each only contains four bits of information. The higher bits deal with the number of zeroes, while the lower bits denote the number of bits necessary to encode the value of $x$. This has the immediate implication of Symbol 1 being only able to store information regarding the first 15 zeroes preceding the non-zero AC coefficient. However, JPEG defines two special Huffman code words. One is for ending the sequence prematurely when the remaining coefficients are zero (called "End-of-Block" or "EOB"), and another is for when the run of zeroes goes beyond 15 before reaching a non-zero AC coefficient. In such a case where 16 zeroes are encountered before a given non-zero AC coefficient, Symbol 1 is encoded "specially" as: (15, 0)(0).

The overall process continues until "EOB", denoted by (0, 0), is reached. With this in mind, the sequence from earlier becomes:

(0, 2)(-3);(1, 2)(-3);(0, 2)(-2);(0, 3)(-6);(0, 2)(2);(0, 3)(-4);(0, 1)(1);(0, 2)(-3);(0, 1)(1);(0, 1)(1);
(0, 3)(5);(0, 1)(1);(0, 2)(2);(0, 1)(-1);(0, 1)(1);(0, 1)(-1);(0, 2)(2);(5, 1)(-1);(0, 1)(-1);(0, 0);

(The first value in the matrix, −26, is the DC coefficient; it is not encoded the same way. See above.)

From here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.

Compression ratio and artifacts

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten to one compression usually results in an image that cannot be distinguished by eye from the original. A compression ratio of 100:1 is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.
Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or "blocky" images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in MPEG video are referred to as mosquito noise, as the resulting "edge busyness" and spurious dots, which change over time, resemble mosquitoes swarming around the object. These artifacts can be reduced by choosing a lower level of compression; they may be completely avoided by saving an image using a lossless file format, though this will result in a larger file size.

Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an edge detection processing step. Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding of the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

Decoding

Decoding to display the image consists of doing all the above in reverse. Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in) and taking the entry-for-entry product with the quantization matrix from above results in a matrix which closely resembles the original DCT coefficient matrix for the top-left portion. The next step is to take the two-dimensional inverse DCT (a 2D type-III DCT), which is given by:

$g_{x,y} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} \alpha(u)\,\alpha(v)\, G_{u,v} \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right]$

where

$x$ is the pixel row, for the integers $0 \le x < 8$.
$y$ is the pixel column, for the integers $0 \le y < 8$.
$\alpha(u)$ is the normalizing scale factor defined earlier, for the integers $0 \le u < 8$.
$G_{u,v}$ is the approximated DCT coefficient at coordinates $(u, v)$.
$g_{x,y}$ is the reconstructed pixel value at coordinates $(x, y)$.

Rounding the output to integer values (since the original had integer values) results in an image with values still shifted down by 128; adding 128 to each entry yields the decompressed subimage. In general, the decompression process may produce values outside the original input range of $[0, 255]$. If this occurs, the decoder needs to clip the output values so as to keep them within that range, to prevent overflow when storing the decompressed image with the original bit depth.

The decompressed subimage can be compared to the original subimage by taking the difference (original − uncompressed), which in this example gives an average absolute error of about 5 values per pixel. The error is most noticeable in the bottom-left corner, where the bottom-left pixel becomes darker than the pixel to its immediate right.
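The decoding steps just described can be sketched in a few lines; this mirrors the inverse DCT formula above (the function names are ours, and as with the forward transform, real decoders use fast factored algorithms):

```python
# Sketch of decoding: dequantize, inverse DCT, undo level shift, clip.

import numpy as np

def idct2_8x8(G):
    """Type-III (inverse) DCT of an 8x8 coefficient block."""
    alpha = lambda k: 1.0 / np.sqrt(2) if k == 0 else 1.0
    g = np.zeros((8, 8))
    for x in range(8):
        for y in range(8):
            g[x, y] = 0.25 * sum(alpha(u) * alpha(v) * G[u, v]
                                 * np.cos((2 * x + 1) * u * np.pi / 16)
                                 * np.cos((2 * y + 1) * v * np.pi / 16)
                                 for u in range(8) for v in range(8))
    return g

def decode_block(B, Q):
    """B: quantized coefficients, Q: quantization matrix."""
    G = B * Q                                  # entry-for-entry dequantization
    pixels = np.rint(idct2_8x8(G)) + 128.0     # round, then undo the level shift
    return np.clip(pixels, 0, 255).astype(np.uint8)  # clip to [0, 255]
```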
Required precision

The required implementation precision of a JPEG codec is implicitly defined through the requirements formulated for compliance to the JPEG standard. These requirements are specified in ITU-T Recommendation T.83 | ISO/IEC 10918-2. Unlike MPEG standards and many later JPEG standards, the above document defines both required implementation precisions for the encoding and the decoding process of a JPEG codec, by means of a maximal tolerable error of the forward and inverse DCT in the DCT domain as determined by reference test streams. For example, the output of a decoder implementation must not exceed an error of one quantization unit in the DCT domain when applied to the reference testing codestreams provided as part of the above standard. While unusual, and unlike many other and more modern standards, ITU-T T.83 | ISO/IEC 10918-2 does not formulate error bounds in the image domain.

Effects of JPEG compression

JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become more fuzzy. A very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (for a human eye) than the precision of contours (based on luminance). This justifies the fact that images should first be transformed into a color model separating the luminance from the chromatic information, before subsampling the chromatic planes (which may also use lower quality quantization), in order to preserve the precision of the luminance plane with more information bits.

Sample photographs

For information, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The filesizes indicated below include the internal JPEG information headers and some metadata. For the highest quality images (Q=100), about 8.25 bits per color pixel is required. On grayscale images, a minimum of 6.5 bits per pixel is enough (a comparable Q=100 quality color information requires about 25% more encoded bits). The highest quality image below (Q=100) is encoded at nine bits per color pixel; the medium quality image (Q=25) uses one bit per color pixel. For most applications, the quality factor should not go below 0.75 bit per pixel (Q=12.5), as demonstrated by the low quality image. The image at lowest quality uses only 0.13 bit per pixel, and displays very poor color. This is useful when the image will be displayed in a significantly scaled-down size. A method for creating better quantization matrices for a given image quality using PSNR instead of the Q factor is described in Minguillón & Pujol (2001).

{| class="wikitable"
|+ align="bottom"| Note: The above images are not IEEE / CCIR / EBU test images, and the encoder settings are not specified or available.
|-
! Image !! Quality !! Size (bytes) !! Compression ratio !! Comment
|-
|
| Highest quality (Q = 100)
| 81,447
| 2.7:1
| Extremely minor artifacts
|-
|
| High quality (Q = 50)
| 14,679
| 15:1
| Initial signs of subimage artifacts
|-
|
| Medium quality (Q = 25)
| 9,407
| 23:1
| Stronger artifacts; loss of high-frequency information
|-
|
| Low quality (Q = 10)
| 4,787
| 46:1
| Severe high-frequency loss leads to obvious artifacts on subimage boundaries ("macroblocking")
|-
|
| Lowest quality (Q = 1)
| 1,523
| 144:1
| Extreme loss of color and detail; the leaves are nearly unrecognizable
|}
The medium-quality photo uses only 4.3% of the storage space required for the uncompressed image, but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate–distortion theory for a mathematical explanation of this threshold effect. A particular limitation of JPEG in this regard is its non-overlapped 8×8 block transform structure. More modern designs such as JPEG 2000 and JPEG XR exhibit a more graceful degradation of quality as the bit usage decreases, by using transforms with a larger spatial extent for the lower-frequency coefficients and by using overlapping transform basis functions. Lossless further compression From 2004 to 2008, new research emerged on ways to further compress the data contained in JPEG images without modifying the represented image. This has applications in scenarios where the original image is only available in JPEG format and its size needs to be reduced for archiving or transmission. Standard general-purpose compression tools cannot significantly compress JPEG files. Typically, such schemes take advantage of improvements to the naive scheme for coding DCT coefficients, which fails to take into account: correlations between magnitudes of adjacent coefficients in the same block; correlations between magnitudes of the same coefficient in adjacent blocks; and correlations between magnitudes of the same coefficient/block in different channels. Furthermore, the DC coefficients, taken together, resemble a downscaled version of the original image multiplied by a scaling factor, so well-known schemes for lossless coding of continuous-tone images can be applied to them, achieving somewhat better compression than the Huffman-coded DPCM used in JPEG. Some standard but rarely used options already exist in JPEG to improve the efficiency of coding DCT coefficients: the arithmetic coding option, and the progressive coding option (which produces lower bitrates because values for each coefficient are coded independently, and each coefficient has a significantly different distribution). Modern methods have improved on these techniques by reordering coefficients to group coefficients of larger magnitude together; using adjacent coefficients and blocks to predict new coefficient values; dividing blocks or coefficients up among a small number of independently coded models based on their statistics and adjacent values; and, most recently, by decoding blocks, predicting subsequent blocks in the spatial domain, and then encoding these to generate predictions for DCT coefficients. Typically, such methods can compress existing JPEG files by between 15 and 25 percent, and for JPEGs compressed at low-quality settings can produce improvements of up to 65%. A freely available tool called packJPG is based on the 2007 paper "Improved Redundancy Reduction for JPEG Files". As of version 2.5k of 2016, it reports a typical 20% reduction by transcoding.
JPEG XL (ISO/IEC 18181), whose development began in 2018, reports a similar reduction from its lossless JPEG transcoding mode. Derived formats for stereoscopic 3D JPEG Stereoscopic JPS is a stereoscopic JPEG image used for creating 3D effects from 2D images. It contains two static images, one for the left eye and one for the right eye, encoded as two side-by-side images in a single JPG file. JPEG Stereoscopic (JPS, extension .jps) is a JPEG-based format for stereoscopic images. It has a range of configurations stored in the JPEG APP3 marker field, but usually contains one image of double width, representing two images of identical size in cross-eyed (i.e. left frame on the right half of the image and vice versa) side-by-side arrangement. This file format can be viewed as a JPEG without any special software, or can be processed for rendering in other modes. JPEG Multi-Picture Format JPEG Multi-Picture Format (MPO, extension .mpo) is a JPEG-based format for storing multiple images in a single file. It contains two or more JPEG files concatenated together. It also defines a JPEG APP2 marker segment for image description. Various devices use it to store 3D images, such as Fujifilm FinePix Real 3D W1, HTC Evo 3D, JVC GY-HMZ1U AVCHD/MVC extension camcorder, Nintendo 3DS, Panasonic Lumix DMC-TZ20, DMC-TZ30, DMC-TZ60, DMC-TS4 (FT4), and Sony DSC-HX7V. Other devices use it to store "preview images" that can be displayed on a TV. In recent years, owing to the growing use of stereoscopic images, much effort has been spent by the scientific community on developing algorithms for stereoscopic image compression. Implementations A very important implementation of a JPEG codec is the free programming library libjpeg of the Independent JPEG Group. It was first published in 1991 and was key to the success of the standard. This library has been used in countless applications. Development went quiet in 1998; when libjpeg resurfaced with the 2009 version 7, it broke ABI compatibility with previous versions. Version 8 of 2010 introduced non-standard extensions, a decision criticized by the original IJG leader Tom Lane. libjpeg-turbo, forked from the 1998 libjpeg 6b, improves on libjpeg with SIMD optimizations. Originally seen as a maintained fork of libjpeg, it has become more popular since the incompatible changes of 2009. In 2019, it became the joint ITU-T | ISO/IEC reference implementation, published as ISO/IEC 10918-7 and ITU-T T.873. The Joint Photographic Experts Group maintains the other reference software implementation under the JPEG XT heading. It can encode both base JPEG (ISO/IEC 10918-1 and 18477-1) and JPEG XT extensions (ISO/IEC 18477 Parts 2 and 6–9), as well as JPEG-LS (ISO/IEC 14495). In 2016, "JPEG on steroids" was introduced as an option for the ISO JPEG XT reference implementation. There is persistent interest in encoding JPEG in unconventional ways that maximize image quality for a given file size. In 2014, Mozilla created MozJPEG from libjpeg-turbo, a slower but higher-quality encoder intended for web images. In March 2017, Google released the open-source project Guetzli, which trades a much longer encoding time for a smaller file size (similar to what Zopfli does for PNG and other lossless data formats). In April 2024, Google introduced Jpegli, a new JPEG coding library that offers enhanced capabilities and a 35% compression ratio improvement at high-quality compression settings, while its coding speed is comparable to MozJPEG's.
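The quality/size trade-off shown in the table above can be reproduced with any libjpeg-family encoder. The sketch below uses the Pillow imaging library, a Python binding over libjpeg/libjpeg-turbo; the input file name is a placeholder, and the exact sizes will depend on the image and the encoder build.

```python
# Sketch: measure JPEG file size at several quality factors with Pillow,
# a Python binding over libjpeg/libjpeg-turbo. "photo.png" is a placeholder.
import io
from PIL import Image

img = Image.open("photo.png").convert("RGB")
pixels = img.width * img.height

for q in (100, 50, 25, 10, 1):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)  # quantization tables scale with q
    size = buf.tell()
    print(f"Q={q:3d}: {size:8d} bytes, "
          f"{8 * size / pixels:.2f} bits per pixel, "
          f"{3 * pixels / size:.1f}:1 vs. uncompressed 24-bit RGB")
```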
Successors The Joint Photographic Experts Group has developed several newer standards meant to complement or replace the functionality of the original JPEG format. JPEG LS Originating in 1993 and published as ISO/IEC 14495-1 | ITU-T T.87, JPEG LS offers a low-complexity lossless compression format that is more efficient than JPEG's original lossless implementation. It also features a near-lossless lossy mode. Its functionality is largely limited to these modes, and it shares most of the original JPEG's limitations in other respects. JPEG 2000 JPEG 2000 was published as ISO/IEC 15444 in December 2000. It is based on a discrete wavelet transform (DWT) and was designed to completely replace the original JPEG standard and exceed it in every way. It allows up to 38 bits per colour channel and 16,384 channels, more than any other format, with a multitude of colour spaces, and thus high dynamic range (HDR). Furthermore, it supports alpha transparency coding, lossless compression, and images of billions by billions of pixels, which is also more than any other format. It offers a significantly improved lossy compression ratio, with far less visible artefacts at strong compression levels. JPEG XT JPEG XT (ISO/IEC 18477) was published in June 2015; it extends the base JPEG format with support for higher integer bit depths (up to 16 bits), high-dynamic-range imaging and floating-point coding, lossless coding, and alpha channel coding. The extensions are backward compatible with the base JPEG/JFIF file format and its 8-bit lossy compressed image. JPEG XT uses an extensible file format based on JFIF; extension layers are used to modify the JPEG 8-bit base layer and restore the high-resolution image. Existing software is forward compatible and can read the JPEG XT binary stream, though it would only decode the base 8-bit layer. JPEG XL JPEG XL (ISO/IEC 18181) was published in 2021–2022. It replaces the JPEG format with a new DCT-based royalty-free format and allows efficient transcoding as a storage option for traditional JPEG images. The new format is designed to exceed the still-image compression performance shown by HEIF HM, Daala and WebP. It supports billions-by-billions pixel images, up to 32-bit-per-component high dynamic range with the appropriate transfer functions (PQ and HLG), patch encoding of synthetic images such as bitmap fonts and gradients, animated images, alpha channel coding, and a choice of RGB/YCbCr/ICtCp color encoding.
Technology
File formats
null
16188
https://en.wikipedia.org/wiki/Jackal
Jackal
Jackals are canids native to Africa and Eurasia. While the word "jackal" has historically been used for many canines of the subtribe Canina, in modern use it most commonly refers to three species: the closely related black-backed jackal (Lupulella mesomelas) and side-striped jackal (Lupulella adusta) of Central and Southern Africa, and the golden jackal (Canis aureus) of south-central Europe and Asia. The African golden wolf (Canis lupaster) was also formerly considered a jackal. While they do not form a monophyletic clade, all jackals are opportunistic omnivores, predators of small to medium-sized animals and proficient scavengers. Their long legs and curved canine teeth are adapted for hunting small mammals, birds, and reptiles, and their large feet and fused leg bones give them a physique well suited for long-distance running, maintaining speed for extended periods of time. Jackals are crepuscular, most active at dawn and dusk. Their most common social unit is a monogamous pair, which defends its territory from other pairs by vigorously chasing intruders and marking landmarks around the territory with urine and feces. The territory may be large enough to hold some young adults, which stay with their parents until they establish their own territories. Jackals may occasionally assemble in small packs, for example to scavenge a carcass, but they normally hunt either alone or in pairs. Etymology The English word "jackal" dates back to 1600 and derives from the French chacal, itself from Ottoman Turkish çakal, itself from Persian šaḡāl, from or cognate with Sanskrit sṛgāla, meaning "the howler". Taxonomy and relationships Similarities between jackals and coyotes led Lorenz Oken, in the third volume of his Lehrbuch der Naturgeschichte (1815), to place these species into a new separate genus, Thos, named after thōs, the classical Greek word for "jackal", but his theory had little immediate impact on taxonomy at the time. Angel Cabrera, in his 1932 monograph on the mammals of Morocco, questioned whether or not the presence of a cingulum on the upper molars of the jackals, and its corresponding absence in the rest of Canis, could justify a subdivision of that genus. In practice, Cabrera chose the undivided-genus alternative and referred to the jackals as Canis instead of Thos. Oken's Thos theory had been revived in 1914 by Edmund Heller, who embraced the separate-genus theory. Heller's names and the designations he gave to various jackal species and subspecies live on in current taxonomy, although the genus has been changed from Thos to Canis. The wolf-like canids are a group of large carnivores that are genetically closely related. They all have 78 chromosomes. The group includes the genera Canis, Cuon, and Lycaon. The members are the dog (C. lupus familiaris), gray wolf (C. lupus), coyote (C. latrans), golden jackal (C. aureus), Ethiopian wolf (C. simensis), black-backed jackal (C. mesomelas), side-striped jackal (C. adustus), dhole (Cuon alpinus), and African wild dog (Lycaon pictus). The latest recognized member is the African wolf (C. lupaster), which was once thought to be an African branch of the golden jackal. As they possess 78 chromosomes, all members of the genus Canis are karyologically indistinguishable from each other, and from the dhole and the African hunting dog. The two African jackals are shown to be the most basal members of this clade, indicating that the clade originated in Africa. Canis arnensis arrived in Mediterranean Europe 1.9 million years ago and is probably the ancestor of modern jackals.
The paraphyletic nature of Canis with respect to Lycaon and Cuon has led to suggestions that the two African jackals should be assigned to different genera, Schaeffia for the side-striped jackal and Lupulella for the black-backed jackal, or Lupulella for both. The intermediate size and shape of the Ethiopian wolf has at times led it to be regarded as a jackal, and it has thus also been called the "red jackal" or the "Simien jackal". Species Folklore and literature Like foxes and coyotes, jackals are often depicted as clever sorcerers in the myths and legends of their regions. The jackal is mentioned roughly 14 times in the Bible, where it is frequently used as a literary device to illustrate desolation, loneliness, and abandonment, with reference to its habit of living in the ruins of former cities and other areas abandoned by humans. It is rendered as "wild dog" in several translations of the Bible. In the King James Bible, Isaiah 13:21 refers to "doleful creatures", which some commentators suggest are either jackals or hyenas. In the Indian Panchatantra stories, the jackal is portrayed as wily and wise. In the Bengali tantrik tradition, jackals represent the goddess Kali, who is said to appear in the form of jackals when meat is offered to her. The Serer religion and creation myth posit that jackals were among the first animals created by Roog, the supreme deity of the Serer people.
Biology and health sciences
Carnivora
null
16217
https://en.wikipedia.org/wiki/Jaguar
Jaguar
The jaguar (Panthera onca) is a large cat species and the only living member of the genus Panthera that is native to the Americas. It is the biggest cat species in the Americas and the third largest in the world. Its distinctively marked coat features pale yellow to tan colored fur covered by spots that transition to rosettes on the sides, although a melanistic black coat appears in some individuals. The jaguar's powerful bite allows it to pierce the carapaces of turtles and tortoises, and to employ an unusual killing method: it bites directly through the skull of mammalian prey between the ears to deliver a fatal blow to the brain. The modern jaguar's ancestors probably entered the Americas from Eurasia during the Early Pleistocene via the land bridge that once spanned the Bering Strait. Today, the jaguar's range extends from the Southwestern United States across Mexico and much of Central America, the Amazon rainforest and south to Paraguay and northern Argentina. It inhabits a variety of forested and open terrains, but its preferred habitat is tropical and subtropical moist broadleaf forest, wetlands and wooded regions. It is adept at swimming and is largely a solitary, opportunistic, stalk-and-ambush apex predator. As a keystone species, it plays an important role in stabilizing ecosystems and in regulating prey populations. The jaguar is threatened by habitat loss, habitat fragmentation, poaching for trade in its body parts and killings in human–wildlife conflict situations, particularly with ranchers in Central and South America. It has been listed as Near Threatened on the IUCN Red List since 2002. The wild population is thought to have declined since the late 1990s. Priority areas for jaguar conservation comprise 51 Jaguar Conservation Units (JCUs), defined as large areas inhabited by at least 50 breeding jaguars. The JCUs are located in 36 geographic regions ranging from Mexico to Argentina. The jaguar has featured prominently in the mythology of indigenous peoples of the Americas, including those of the Aztec and Maya civilizations. Etymology The word "jaguar" is possibly derived from the Tupi–Guarani word yaguara, meaning "wild beast that overcomes its prey at a bound". In North America the word is pronounced with two syllables, while in British English it is pronounced with three. Because that word also applies to other animals, indigenous peoples in Guyana add the suffix -eté, meaning "true beast". "Onca" is derived from the Portuguese name for a spotted cat that is larger than a lynx; cf. ounce. The word "panther" is derived from classical Latin panthera, itself from the ancient Greek pánthēr. Taxonomy and evolution Taxonomy In 1758, Carl Linnaeus described the jaguar in his work Systema Naturae and gave it the scientific name Felis onca. In the 19th and 20th centuries, several jaguar type specimens formed the basis for descriptions of subspecies. In 1939, Reginald Innes Pocock recognized eight subspecies based on the geographic origins and skull morphology of these specimens. Pocock did not have access to sufficient zoological specimens to critically evaluate their subspecific status but expressed doubt about the status of several. Later consideration of his work suggested only three subspecies should be recognized. The description of P. o. palustris was based on a fossil skull. By 2005, nine subspecies were considered to be valid taxa: P. o. onca was a jaguar from Brazil. P. o. peruviana was a jaguar skull from Peru.
P. o. hernandesii was a jaguar from Mazatlán in Mexico. P. o. palustris was a fossil jaguar mandible excavated in the Sierras Pampeanas of Córdova District, Argentina. P. o. centralis was a skull of a male jaguar from Talamanca, Costa Rica. P. o. goldmani was a jaguar skin from Yohatlan in Campeche, Mexico. P. o. paraguensis was a skull of a male jaguar from Paraguay. P. o. arizonensis was a skin and skull of a male jaguar from the vicinity of Cibecue, Arizona. P. o. veraecrucis was a skull of a male jaguar from San Andrés Tuxtla in Mexico. Reginald Innes Pocock placed the jaguar in the genus Panthera and observed that it shares several morphological features with the leopard (P. pardus). He therefore concluded that they are most closely related to each other. Results of morphological and genetic research indicate a clinal north–south variation between populations, but no evidence for subspecific differentiation. DNA analysis of 84 jaguar samples from South America revealed that the gene flow between jaguar populations in Colombia was high in the past. Since 2017, the jaguar has been considered a monotypic taxon, though the modern Panthera onca onca is still distinguished from two fossil subspecies, Panthera onca augusta and Panthera onca mesembrina. However, a 2024 study suggested that the validity of the subspecific assignments of both P. o. augusta and P. o. mesembrina remains unresolved, since both fossil and living jaguars show considerable variation in morphometry. Evolution The Panthera lineage is estimated to have genetically diverged from the common ancestor of the Felidae several million years ago. Some genetic analyses place the jaguar as a sister species of the lion, but other studies place the lion closer to the leopard. The lineage of the jaguar appears to have originated in Africa and spread to Eurasia 1.95–1.77 million years ago. The living jaguar species is often suggested to have descended from the Eurasian Panthera gombaszoegensis. The ancestor of the jaguar entered the American continent via Beringia, the land bridge that once spanned the Bering Strait. Some authors have disputed the close relationship between P. gombaszoegensis (which is primarily known from Europe) and the modern jaguar. The oldest fossils of modern jaguars (P. onca) found in North America date to between 850,000 and 820,000 years ago. Results of mitochondrial DNA analysis of 37 jaguars indicate that current populations evolved between 510,000 and 280,000 years ago in northern South America and subsequently recolonized North and Central America after the extinction of jaguars there during the Late Pleistocene. Two extinct subspecies of jaguar are recognized in the fossil record: the North American P. o. augusta and the South American P. o. mesembrina. Description The jaguar is a compact and muscular animal. It is the largest cat native to the Americas and the third largest in the world, exceeded in size only by the tiger and the lion. Its size and weight vary considerably depending on sex and region, with exceptionally big males recorded in some regions and the smallest females found in Middle America. It is sexually dimorphic, with females typically being 10–20% smaller than males. The tail is the shortest of any big cat. Its muscular legs are shorter than the legs of other Panthera species with similar body weight.
Size tends to increase from north to south. Jaguars in the Chamela-Cuixmala Biosphere Reserve on the Pacific coast of central Mexico are comparatively light, while jaguars in Venezuela and Brazil are much larger on average, in both males and females. The jaguar's coat ranges from pale yellow to tan or reddish-yellow, with a whitish underside, and is covered in black spots. The spots and their shapes vary: on the sides, they become rosettes which may include one or several dots. The spots on the head and neck are generally solid, as are those on the tail, where they may merge to form bands near the end and create a black tip. They are elongated on the middle of the back, often connecting to create a median stripe, and blotchy on the belly. These patterns serve as camouflage in areas with dense vegetation and patchy shadows. Jaguars living in forests are often darker and considerably smaller than those living in open areas, possibly due to the smaller numbers of large, herbivorous prey in forest areas. The jaguar closely resembles the leopard but is generally more robust, with stockier limbs and a more square head. The rosettes on a jaguar's coat are larger, darker, fewer in number and have thicker lines, with a small spot in the middle. It has powerful jaws with the third-highest bite force of all felids, after the tiger and the lion: its average bite force at the canine tip is 887.0 newtons, with a bite force quotient at the canine tip of 118.6, and the bite is stronger still at the carnassial notch than at the canine teeth. Color variation Melanistic jaguars are also known as black panthers. The black morph is less common than the spotted one. Black jaguars have been documented in Central and South America. Melanism in the jaguar is caused by deletions in the melanocortin 1 receptor gene and is inherited through a dominant allele. Black jaguars occur at higher densities in tropical rainforest and are more active during the daytime. This suggests that melanism provides camouflage in dense vegetation with high illumination. In 2004, a camera trap in the Sierra Madre Occidental mountains photographed the first documented black jaguar in Northern Mexico. Black jaguars were also photographed in Costa Rica's Alberto Manuel Brenes Biological Reserve, in the mountains of the Cordillera de Talamanca, in Barbilla National Park and in eastern Panama. Distribution and habitat The jaguar's historic range at the turn of the 20th century was estimated in 1999 to stretch from the southern United States through Central America to southern Argentina. By the turn of the 21st century, its global range had decreased considerably, with most declines occurring in the southern United States, northern Mexico, northern Brazil, and southern Argentina. Its present range extends from Mexico through Central America to South America, comprising Belize, Guatemala, Honduras, Nicaragua, Costa Rica (particularly on the Osa Peninsula), Panama, Colombia, Venezuela, Guyana, Suriname, French Guiana, Ecuador, Peru, Bolivia, Brazil, Paraguay and Argentina. It is considered to be locally extinct in El Salvador and Uruguay. Jaguars have been occasionally sighted in Arizona, New Mexico and Texas, with 62 accounts reported in the 20th century. Between 2012 and 2015, a male vagrant jaguar was recorded in 23 locations in the Santa Rita Mountains. Eight jaguars were photographed in the southwestern US between 1996 and 2024.
The jaguar prefers dense forest and typically inhabits dry deciduous forests, tropical and subtropical moist broadleaf forests, rainforests and cloud forests in Central and South America; open, seasonally flooded wetlands, dry grassland and historically also oak forests in the United States. It has been recorded at elevations up to but avoids montane forests. It favors riverine habitat and swamps with dense vegetation cover. In the Mayan forests of Mexico and Guatemala, 11 GPS-collared jaguars preferred undisturbed dense habitat away from roads; females avoided even areas with low levels of human activity, whereas males appeared less disturbed by human population density. A young male jaguar was also recorded in the semi-arid Sierra de San Carlos at a waterhole. Former range In the 19th century, the jaguar was still sighted at the North Platte River north of Longs Peak in Colorado, in coastal Louisiana, northern Arizona and New Mexico. Multiple verified zoological reports of the jaguar are known in California, two as far north as Monterey in 1814 and 1826. The only record of an active jaguar den with breeding adults and kittens in the United States was in the Tehachapi Mountains of California prior to 1860. The jaguar persisted in California until about 1860. The last confirmed jaguar in Texas was shot in 1948, southeast of Kingsville, Texas. In Arizona, a female was shot in the White Mountains in 1963. By the late 1960s, the jaguar was thought to have been extirpated in the United States. Arizona outlawed jaguar hunting in 1969, but by then no females remained, and over the next 25 years only two males were sighted and killed in the state. In 1996, a rancher and hunting guide from Douglas, Arizona came across a jaguar in the Peloncillo Mountains and became a researcher on jaguars, placing trail cameras, which recorded four more jaguars. Behavior and ecology The jaguar is mostly active at night and during twilight. However, jaguars living in densely forested regions of the Amazon Rainforest and the Pantanal are largely active by day, whereas jaguars in the Atlantic Forest are primarily active by night. The activity pattern of the jaguar coincides with the activity of its main prey species. Jaguars are good swimmers and play and hunt in the water, possibly more than tigers. They have been recorded moving between islands and the shore. Jaguars are also good at climbing trees but do so less often than cougars. Ecological role The adult jaguar is an apex predator, meaning it is at the top of the food chain and is not preyed upon in the wild. The jaguar has also been termed a keystone species, as it is assumed that it controls the population levels of prey such as herbivorous and seed-eating mammals and thus maintains the structural integrity of forest systems. However, field work has shown this may be natural variability, and the population increases may not be sustained. Thus, the keystone predator hypothesis is not accepted by all scientists. The jaguar is sympatric with the cougar. In central Mexico, both prey on white-tailed deer, which makes up 54% and 66% of jaguar and cougar's prey, respectively. In northern Mexico, the jaguar and the cougar share the same habitat, and their diet overlaps dependent on prey availability. Jaguars seemed to prefer deer and calves. In Mexico and Central America, neither of the two cats are considered to be the dominant predator. In South America, the jaguar is larger than the cougar and tends to take larger prey, usually over . 
The cougar's prey usually weighs between , which is thought to be the reason for its smaller size. This situation may be advantageous to the cougar. Its broader prey niche, including its ability to take smaller prey, may give it an advantage over the jaguar in human-altered landscapes. Hunting and diet The jaguar is an obligate carnivore and depends solely on flesh for its nutrient requirements. An analysis of 53 studies documenting the diet of the jaguar revealed that its prey ranges in weight from ; it prefers prey weighing , with the capybara and the giant anteater being the most selected. When available, it also preys on marsh deer, southern tamandua, collared peccary and black agouti. In floodplains, jaguars opportunistically take reptiles such as turtles and caimans. Consumption of reptiles appears to be more frequent in jaguars than in other big cats. One remote population in the Brazilian Pantanal is recorded to primarily feed on aquatic reptiles and fish. The jaguar also preys on livestock in cattle ranching areas where wild prey is scarce. The daily food requirement of a captive jaguar weighing was estimated at of meat. The jaguar's bite force allows it to pierce the carapaces of the yellow-spotted Amazon river turtle and the yellow-footed tortoise. It employs an unusual killing method: it bites mammalian prey directly through the skull between the ears to deliver a fatal bite to the brain. It kills capybara by piercing its canine teeth through the temporal bones of its skull, breaking its zygomatic arch and mandible and penetrating its brain, often through the ears. It has been hypothesized to be an adaptation to cracking open turtle shells; armored reptiles may have formed an abundant prey base for the jaguar following the late Pleistocene extinctions. However, this is disputed, as even in areas where jaguars prey on reptiles, they are still taken relatively infrequently compared to mammals in spite of their greater abundance. Between October 2001 and April 2004, 10 jaguars were monitored in the southern Pantanal. In the dry season from April to September, they killed prey at intervals ranging from one to seven days; and ranging from one to 16 days in the wet season from October to March. The jaguar uses a stalk-and-ambush strategy when hunting rather than chasing prey. The cat will slowly walk down forest paths, listening for and stalking prey before rushing or ambushing. The jaguar attacks from cover and usually from a target's blind spot with a quick pounce; the species' ambushing abilities are considered nearly peerless in the animal kingdom by both indigenous people and field researchers and are probably a product of its role as an apex predator in several different environments. The ambush may include leaping into water after prey, as a jaguar is quite capable of carrying a large kill while swimming; its strength is such that carcasses as large as a heifer can be hauled up a tree to avoid flood levels. After killing prey, the jaguar will drag the carcass to a thicket or other secluded spot. It begins eating at the neck and chest. The heart and lungs are consumed, followed by the shoulders. Social activity The jaguar is generally solitary except for females with cubs. In 1977, groups consisting of a male, female and cubs, and two females with two males were sighted several times in a study area in the Paraguay River valley. 
In some areas, males may form paired coalitions which together mark, defend and invade territories, find and mate with the same females and search for and share prey. A radio-collared female moved in a home range of , which partly overlapped with another female. The home range of the male in this study area overlapped with several females. The jaguar uses scrape marks, urine, and feces to mark its territory. The size of home ranges depends on the level of deforestation and human population density. The home ranges of females vary from in the Pantanal to in the Amazon to in the Atlantic Forest. Male jaguar home ranges vary from in the Pantanal to in the Amazon to in the Atlantic Forest and in the Cerrado. Studies employing GPS telemetry in 2003 and 2004 found densities of only six to seven jaguars per in the Pantanal region, compared with 10 to 11 using traditional methods; this suggests the widely used sampling methods may inflate the actual numbers of individuals in a sampling area. Fights between males occur but are rare, and avoidance behavior has been observed in the wild. In one wetland population with degraded territorial boundaries and more social proximity, adults of the same sex are more tolerant of each other and engage in more friendly and co-operative interactions. The jaguar roars/grunts for long-distance communication; intensive bouts of counter-calling between individuals have been observed in the wild. This vocalization is described as "hoarse" with five or six guttural notes. Chuffing is produced by individuals when greeting, during courting, or by a mother comforting her cubs. This sound is described as low intensity snorts, possibly intended to signal tranquility and passivity. Cubs have been recorded bleating, gurgling and mewing. Reproduction and life cycle In captivity, the female jaguar is recorded to reach sexual maturity at the age of about 2.5 years. Estrus lasts 7–15 days with an estrus cycle of 41.8 to 52.6 days. During estrus, she exhibits increased restlessness with rolling and prolonged vocalizations. She is an induced ovulator but can also ovulate spontaneously. Gestation lasts 91 to 111 days. The male is sexually mature at the age of three to four years. His mean ejaculate volume is 8.6±1.3 ml. Generation length of the jaguar is 9.8 years. In the Pantanal, breeding pairs were observed to stay together for up to five days. Females had one to two cubs. The young are born with closed eyes but open them after two weeks. Cubs are weaned at the age of three months but remain in the birth den for six months before leaving to accompany their mother on hunts. Jaguars remain with their mothers for up to two years. They appear to rarely live beyond 11 years, but captive individuals may live 22 years. In 2001, a male jaguar killed and partially consumed two cubs in Emas National Park. DNA paternity testing of blood samples revealed that the male was the father of the cubs. Two more cases of infanticide were documented in the northern Pantanal in 2013. To defend against infanticide, the female may hide her cubs and distract the male with courtship behavior. Attacks on humans The Spanish conquistadors feared the jaguar. According to Charles Darwin, the indigenous peoples of South America stated that people did not need to fear the jaguar as long as capybaras were abundant. The first official record of a jaguar killing a human in Brazil dates to June 2008. Two children were attacked by jaguars in Guyana. 
The majority of known attacks on people happened when it had been cornered or wounded. Threats The jaguar is threatened by loss and fragmentation of habitat, illegal killing in retaliation for livestock depredation and for illegal trade in jaguar body parts. It is listed as Near Threatened on the IUCN Red List since 2002, as the jaguar population has probably declined by 20–25% since the mid-1990s. Deforestation is a major threat to the jaguar across its range. Habitat loss was most rapid in drier regions such as the Argentine pampas, the arid grasslands of Mexico and the southwestern United States. In 2002, it was estimated that the range of the jaguar had declined to about 46% of its range in the early 20th century. In 2018, it was estimated that its range had declined by 55% in the last century. The only remaining stronghold is the Amazon rainforest, a region that is rapidly being fragmented by deforestation. Between 2000 and 2012, forest loss in the jaguar range amounted to , with fragmentation increasing in particular in corridors between Jaguar Conservation Units (JCUs). By 2014, direct linkages between two JCUs in Bolivia were lost, and two JCUs in northern Argentina became completely isolated due to deforestation. In Mexico, the jaguar is primarily threatened by poaching. Its habitat is fragmented in northern Mexico, in the Gulf of Mexico and the Yucatán Peninsula, caused by changes in land use, construction of roads and tourism infrastructure. In Panama, 220 of 230 jaguars were killed in retaliation for predation on livestock between 1998 and 2014. In Venezuela, the jaguar was extirpated in about 26% of its range in the country since 1940, mostly in dry savannas and unproductive scrubland in the northeastern region of Anzoátegui. In Ecuador, the jaguar is threatened by reduced prey availability in areas where the expansion of the road network facilitated access of human hunters to forests. In the Alto Paraná Atlantic forests, at least 117 jaguars were killed in Iguaçu National Park and the adjacent Misiones Province between 1995 and 2008. Some Afro-Colombians in the Colombian Chocó Department hunt jaguars for consumption and sale of meat. Between 2008 and 2012, at least 15 jaguars were killed by livestock farmers in central Belize. The international trade of jaguar skins boomed between the end of the Second World War and the early 1970s. Significant declines occurred in the 1960s, as more than 15,000 jaguars were yearly killed for their skins in the Brazilian Amazon alone; the trade in jaguar skins decreased since 1973 when the Convention on International Trade in Endangered Species was enacted. Interview surveys with 533 people in the northwestern Bolivian Amazon revealed that local people killed jaguars out of fear, in retaliation, and for trade. Between August 2016 and August 2019, jaguar skins and body parts were seen for sale in tourist markets in the Peruvian cities of Lima, Iquitos and Pucallpa. Human-wildlife conflict, opportunistic hunting and hunting for trade in domestic markets are key drivers for killing jaguars in Belize and Guatemala. Seizure reports indicate that at least 857 jaguars were involved in trade between 2012 and 2018, including 482 individuals in Bolivia alone; 31 jaguars were seized in China. Between 2014 and early 2019, 760 jaguar fangs were seized that originated in Bolivia and were destined for China. Undercover investigations revealed that the smuggling of jaguar body parts is run by Chinese residents in Bolivia. 
Conservation The jaguar is listed on CITES Appendix I, which means that all international commercial trade in jaguars or their body parts is prohibited. Hunting jaguars is prohibited in Argentina, Brazil, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Suriname, the United States, and Venezuela. Hunting jaguars is restricted in Guatemala and Peru. In Ecuador, hunting jaguars is prohibited, and it is classified as threatened with extinction. In Guyana, it is protected as an endangered species, and hunting it is illegal. In 1986, the Cockscomb Basin Wildlife Sanctuary was established in Belize as the world's first protected area for jaguar conservation. Jaguar Conservation Units In 1999, field scientists from 18 jaguar range countries determined the most important areas for long-term jaguar conservation based on the status of jaguar population units, stability of prey base and quality of habitat. These areas, called "Jaguar Conservation Units" (JCUs), are large enough for at least 50 breeding individuals and range in size from ; 51 JCUs were designated in 36 geographic regions including: the Sierra Madre Occidental and Sierra de Tamaulipas in Mexico the Selva Maya tropical forests extending over Mexico, Belize and Guatemala the Chocó–Darién moist forests from Honduras and Panama to Colombia Venezuelan Llanos northern Cerrado and Amazon basin in Brazil Tropical Andes in Bolivia and Peru Misiones Province in Argentina Optimal routes of travel between core jaguar population units were identified across its range in 2010 to implement wildlife corridors that connect JCUs. These corridors represent areas with the shortest distance between jaguar breeding populations, require the least possible energy input of dispersing individuals and pose a low mortality risk. They cover an area of and range in length from in Mexico and Central America and from in South America. Cooperation with local landowners and municipal, state, or federal agencies is essential to maintain connected populations and prevent fragmentation in both JCUs and corridors. Seven of 13 corridors in Mexico are functioning with a width of at least and a length of no more than . The other corridors may hamper passage, as they are narrower and longer. In August 2012, the United States Fish and Wildlife Service set aside in Arizona and New Mexico for the protection of the jaguar. The Jaguar Recovery Plan was published in April 2019, in which Interstate 10 is considered to form the northern boundary of the Jaguar Recovery Unit in Arizona and New Mexico. In Mexico, a national conservation strategy was developed from 2005 on and published in 2016. The Mexican jaguar population increased from an estimated 4,000 individuals in 2010 to about 4,800 individuals in 2018. This increase is seen as a positive effect of conservation measures that were implemented in cooperation with governmental and non-governmental institutions and landowners. An evaluation of JCUs from Mexico to Argentina revealed that they overlap with high-quality habitats of about 1,500 mammals to varying degrees. Since co-occurring mammals benefit from the JCU approach, the jaguar has been called an umbrella species. Central American JCUs overlap with the habitat of 187 of 304 regional endemic amphibian and reptile species, of which 19 amphibians occur only in the jaguar range. 
Approaches In setting up protected reserves, efforts generally also have to be focused on the surrounding areas, as jaguars are unlikely to confine themselves to the bounds of a reservation, especially if the population is increasing in size. Human attitudes in the areas surrounding reserves, and laws and regulations to prevent poaching, are essential to make conservation areas effective. To estimate population sizes within specific areas and to keep track of individual jaguars, camera trapping and wildlife tracking telemetry are widely used, and feces are sought out with the help of detection dogs to study jaguar health and diet. Current conservation efforts often focus on educating ranch owners and promoting ecotourism. Ecotourism setups are used to generate public interest in charismatic animals such as the jaguar, while at the same time generating revenue that can be used in conservation efforts. A key concern in jaguar ecotourism is the considerable habitat space the species requires. If ecotourism is used to aid in jaguar conservation, some consideration needs to be given to how existing ecosystems will be kept intact, or how new ecosystems that are large enough to support a growing jaguar population will be put into place. Conservationists and professionals in Mexico and the United States have established the Northern Jaguar Reserve in northern Mexico. Advocacy for reintroduction of the jaguar to its former range in Arizona and New Mexico has been supported by documentation of natural migrations by individual jaguars into the southern reaches of both states, by the recency of its extirpation from those regions by human action, and by supportive arguments pertaining to biodiversity, ecological, human, and practical considerations. In culture and mythology In the pre-Columbian Americas, the jaguar was a symbol of power and strength. In the Andes, a jaguar cult disseminated by the early Chavín culture became accepted over most of today's Peru by 900 BC. The later Moche culture in northern Peru used the jaguar as a symbol of power in many of their ceramics. In the Muisca religion of the Altiplano Cundiboyacense, the jaguar was considered a sacred animal, and people dressed in jaguar skins during religious rituals. The skins were traded with peoples in the nearby Orinoquía Region. The name of the Muisca ruler Nemequene was derived from the Chibcha words nymy and quyne, meaning "force of the jaguar". Sculptures with "Olmec were-jaguar" motifs were found on the Yucatán Peninsula and in Veracruz and Tabasco; they show stylized jaguars with half-human faces. In the later Maya civilization, the jaguar was known as balam or bolom in many of the Mayan languages, and it was used to symbolize warriors and the elite class, being regarded as brave, fierce and strong. The cat was associated with the underworld, and its image was used to decorate tombs and grave-good vessels. The Aztec civilization called the jaguar ocelotl and considered it to be the king of the animals. It was believed to be fierce and courageous, but also wise, dignified and careful. The military had two classes of warriors, the ocelotl or jaguar warriors and the cuauhtli or eagle warriors, and each dressed like their representative animal. In addition, members of the royal class would adorn themselves with jaguar skins. The jaguar was considered to be the totem animal of the powerful deities Tezcatlipoca and Tepeyollotl. A conch shell gorget depicting a jaguar was found in a burial mound in Benton County, Missouri.
The gorget shows evenly engraved lines. Rock drawings made by the Hopi, Anasazi and Pueblo all over the desert and chaparral regions of the American Southwest show an explicitly spotted cat, presumably a jaguar, as it is drawn much larger than an ocelot. The jaguar is also used as a symbol in contemporary culture. It is the national animal of Guyana and is featured in its coat of arms.
Biology and health sciences
Carnivora
null
16225
https://en.wikipedia.org/wiki/Jacquard%20machine
Jacquard machine
The Jacquard machine () is a device fitted to a loom that simplifies the process of manufacturing textiles with such complex patterns as brocade, damask and matelassé. The resulting ensemble of the loom and Jacquard machine is then called a Jacquard loom. The machine was patented by Joseph Marie Jacquard in 1804, based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740). The machine was controlled by a "chain of cards"; a number of punched cards laced together into a continuous sequence. Multiple rows of holes were punched on each card, with one complete card corresponding to one row of the design. Both the Jacquard process and the necessary loom attachment are named after their inventor. This mechanism is probably one of the most important weaving innovations, as Jacquard shedding made possible the automatic production of unlimited varieties of complex pattern weaving. The term "Jacquard" is not specific or limited to any particular loom, but rather refers to the added control mechanism that automates the patterning. The process can also be used for patterned knitwear and machine-knitted textiles such as jerseys. This use of replaceable punched cards to control a sequence of operations is considered an important step in the history of computing hardware, having inspired Charles Babbage's Analytical Engine. History Traditionally, figured designs were made on a drawloom. The heddles with warp ends to be pulled up were manually selected by a second operator, the draw boy, not the weaver. The work was slow and labour-intensive, and the complexity of the pattern was limited by practical factors. The first prototype of a Jacquard-type loom was made in the second half of the 15th century by an Italian weaver from Calabria, Jean le Calabrais, who was invited to Lyon by Louis XI. He introduced a new kind of machine which was able to work the yarns faster and more precisely. Over the years, improvements to the loom were ongoing. An improvement of the draw loom took place in 1725, when Basile Bouchon introduced the principle of applying a perforated band of paper. A continuous roll of paper was punched by hand, in sections, each of which represented one lash or tread, and the length of the roll was determined by the number of shots in each repeat of pattern. The Jacquard machine then evolved from this approach. Joseph Marie Jacquard saw that a mechanism could be developed for the production of sophisticated patterns. He possibly combined mechanical elements of other inventors, but certainly innovated. His machine was generally similar to Vaucanson's arrangement, but he made use of Jean-Baptiste Falcon's individual pasteboard cards and his square prism (or card "cylinder"): he is credited with having fully perforated each of its four sides, replacing Vaucanson's perforated "barrel". Jacquard's machine contained eight rows of needles and uprights, where Vaucanson had a double row. This modification enabled him to increase the figuring capacity of the machine. In his first machine, he supported the harness by knotted cords, which he elevated by a single trap board. One of the chief advantages claimed for the Jacquard machine was that unlike previous damask-weaving machines, in which the figuring shed was usually drawn once for every four shots, with the new apparatus, it could be drawn on every shot, thus producing a fabric with greater definition of outline. Jacquard's invention had a deep influence on Charles Babbage. 
In that respect, he is viewed by some authors as a precursor of modern computing technology. Principles of operation As shown in the diagram, the cards are fastened into a continuous chain (1) which passes over a square box. At each quarter rotation, a new card is presented to the Jacquard head; each card represents one row (one "pick" of the shuttle carrying the weft). The box swings from the right to the position shown and presses against the control rods (2). For each hole in the card, a rod passes through and is unmoved; where there is no hole, the rod is pushed to the left. Each rod acts upon a hook (3). When a rod is pushed in, its hook moves out of position to the left; a rod that is not pushed in leaves its hook in place. A beam (4) then rises under the hooks, and the hooks in the rest position are raised. The hooks that have been displaced are not moved by the beam. Each hook can have multiple cords (5). Each cord passes through a guide (6) and is attached to a corresponding heddle (7) and return weight (8). The heddles raise the warp to create the shed through which the shuttle carrying the weft will pass. A loom with a 400-hook head might have four threads connected to each hook, resulting in a fabric that is 1600 warp ends wide with four repeats of the weave going across. The term "Jacquard loom" is somewhat inaccurate: it is the Jacquard head, fitted to any of a great many looms, that allows the weaving machine to create the intricate patterns often seen in Jacquard weaving. Jacquard-driven looms, although relatively common in the textile industry, are not as ubiquitous as dobby looms, which are usually faster and much cheaper to operate. However, dobby looms are not capable of producing many different weaves from one warp. Modern Jacquard machines are controlled by computers in place of the original punched cards and can have thousands of hooks. The threading of a Jacquard machine is so labor-intensive that many looms are threaded only once. Subsequent warps are then tied into the existing warp with the help of a knotting robot which ties on each new thread individually. Even for a small loom with only a few thousand warp ends, the process of re-threading can take days. Mechanical Jacquard devices Originally, Jacquard machines were mechanical, and the fabric design was stored on a series of punched cards which were joined to form a continuous chain. The Jacquards were often small and controlled relatively few warp ends. This required a number of repeats across the loom width. Larger-capacity machines, or the use of multiple machines, allowed greater control with fewer repeats; hence, larger designs could be woven across the loom width. A factory must choose looms and shedding mechanisms to suit its commercial requirements. As a rule, greater warp control means greater expense, so it is not economical to purchase Jacquard machines if one can make do with a dobby mechanism. Beyond the capital expense, Jacquard machines cost more to maintain, as they are complex, require highly skilled operators, and use expensive systems to prepare designs for the loom. Thus, they are more likely to produce faults than dobby or cam shedding. Also, the looms will not run as quickly, and down-time will increase, because it takes time to change the continuous chain of cards when a design changes. It is therefore best to weave larger batches with mechanical Jacquards.
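The hole-selects-hook logic described under Principles of operation above can be modelled in a few lines of code. This is purely an illustrative sketch: the eight-hook head, the card patterns and the cords-per-hook figure are assumptions chosen to mirror, at small scale, the 400-hook example in the text.

```python
# Illustrative sketch of one Jacquard card row selecting heddles.
# A 1 models a punched hole (the rod passes through, so the hook stays
# in the rest position and is lifted by the beam); a 0 models no hole
# (the rod pushes the hook aside, so it is not lifted).
# The 8-hook head and the card patterns are arbitrary assumptions.

def lifted_warp_ends(card_row: list[int], cords_per_hook: int = 4) -> list[int]:
    """Return indices of warp ends raised for this pick (shuttle pass)."""
    raised = []
    for hook, hole in enumerate(card_row):
        if hole:  # hook remains in rest position, so the beam raises it
            # each hook lifts one heddle in each repeat across the width
            raised.extend(hook + repeat * len(card_row)
                          for repeat in range(cords_per_hook))
    return sorted(raised)

# One card per pick; the chain of cards encodes the whole pattern.
card_chain = [
    [1, 0, 1, 0, 1, 0, 1, 0],   # pick 1
    [0, 1, 0, 1, 0, 1, 0, 1],   # pick 2
]
for pick, card in enumerate(card_chain, start=1):
    print(f"pick {pick}: raise warp ends {lifted_warp_ends(card)}")
```

With eight hooks and four cords per hook, the sketch drives 32 warp ends with four repeats across the width, in the same proportion as the 400-hook, 1600-end example above.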
Electronic Jacquard machines In 1855, a Frenchman adapted the Jacquard mechanism to a system by which it could be worked by electro-magnets. There was significant interest, but trials were not successful, and the development was soon forgotten. Bonas Textile Machinery NV launched the first successful electronic Jacquard at ITMA Milan in 1983. Although the machines were initially small, modern technology has allowed Jacquard machine capacity to increase significantly, and single-end warp control can extend to more than 10,000 warp ends. This eliminates the need for repeats and symmetrical designs and invites almost infinite versatility. The computer-controlled machines significantly reduce the down-time associated with changing punched cards, thereby allowing smaller batch sizes. However, electronic Jacquards are costly and may not be necessary in a factory weaving large batch sizes and smaller designs. Larger machines accommodating single-end warp control are very expensive and can only be justified when great versatility or very specialized designs are required. For example, they are an ideal tool to increase the ability and versatility of the niche linen Jacquard weavers who remain active in Europe and the West, while most large-batch commodity weaving has moved to low-cost production. Linen products associated with Jacquard weaving are linen damask napery, Jacquard apparel fabrics and damask bed linen. Jacquard weaving uses all sorts of fibers and blends of fibers, and it is used in the production of fabrics for many end uses. Jacquard weaving can also be used to create fabrics that have a matelassé or a brocade pattern. The woven silk prayer book A pinnacle of production using a Jacquard machine is a prayer book woven in silk. All 58 pages of the prayer book were woven silk, made with a Jacquard machine using black and gray thread at 160 threads per cm (about 400 threads per inch). The pages have elaborate borders with text and pictures of saints. An estimated 200,000 to 500,000 punched cards were necessary to encode the pages. The book was issued in 1886 and 1887 in Lyon, France, and was publicly displayed at the 1889 Exposition Universelle (World's Fair). It was designed by R. P. J. Hervier, woven by J. A. Henry, and published by A. Roux. It took two years and almost 50 trials to get right. An estimated 50 or 60 copies were produced. Importance in computing The Jacquard head used replaceable punched cards to control a sequence of operations. It is considered an important step in the history of computing hardware. The ability to change the pattern of the loom's weave by simply changing cards was an important conceptual precursor to the development of computer programming and data entry. Charles Babbage knew of Jacquard machines and planned to use cards to store programs in his Analytical Engine. In the late 19th century, Herman Hollerith took the idea of using punched cards to store information a step further when he created a punched-card tabulating machine which he used to input data for the 1890 U.S. Census. A large data processing industry using punched-card technology developed in the first half of the twentieth century, dominated initially by the International Business Machines Corporation (IBM) with its line of unit record equipment. The cards were used for data, however, with programming done by plugboards. Some early computers, such as the 1944 IBM Automatic Sequence Controlled Calculator (Harvard Mark I), received program instructions from a paper tape punched with holes, similar to Jacquard's string of cards.
Later computers executed programs from higher-speed memory, though cards were commonly used to load the programs into memory. Punched cards remained in use in computing up until the mid-1980s.
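The hook-selection logic described above under Principles of operation is simple enough to sketch in a few lines of code. The following Python fragment is an illustration only (the card encoding and hook count are invented for the example, not taken from any historical machine): a 1 stands for a hole, which leaves the rod and hook undisturbed so the beam lifts the hook and every warp end tied to it.

# A minimal sketch (not a historical emulation) of the Jacquard selection logic:
# one card row per pick; a hole (1) leaves its hook in the beam's path, so that
# hook and the warp ends tied to it are raised; a blank (0) pushes the hook aside.

def lifted_warp_ends(card_row, cords_per_hook=4):
    """Return the indices of warp ends raised for one pick."""
    raised = []
    for hook, hole in enumerate(card_row):
        if hole:  # the rod passes through the hole; the hook stays in place
            for repeat in range(cords_per_hook):
                raised.append(hook + repeat * len(card_row))
    return raised

# An 8-hook head with 4 cords per hook controls 32 warp ends.
card = [1, 0, 1, 1, 0, 0, 1, 0]
print(lifted_warp_ends(card))

With a 400-hook head and four cords per hook, the same loop reproduces the arithmetic given earlier: a fabric 1600 warp ends wide with four repeats of the weave across the width.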
Technology
Weaving
null
16290
https://en.wikipedia.org/wiki/Jerk%20%28physics%29
Jerk (physics)
Jerk (also known as jolt) is the rate of change of an object's acceleration over time. It is a vector quantity (having both magnitude and direction). Jerk is most commonly denoted by the symbol j and expressed in m/s³ (SI units) or standard gravities per second (g₀/s). Expressions As a vector, jerk j can be expressed as the first time derivative of acceleration, second time derivative of velocity, and third time derivative of position: j(t) = da(t)/dt = d²v(t)/dt² = d³r(t)/dt³, where a is acceleration, v is velocity, r is position and t is time. Third-order differential equations of the form J(d³x/dt³, d²x/dt², dx/dt, x) = 0 are sometimes called jerk equations. When converted to an equivalent system of three ordinary first-order non-linear differential equations, jerk equations are the minimal setting for solutions showing chaotic behaviour. This condition generates mathematical interest in jerk systems. Systems involving fourth-order derivatives or higher are accordingly called hyperjerk systems. Physiological effects and human perception Human body position is controlled by balancing the forces of antagonistic muscles. In balancing a given force, such as holding up a weight, the postcentral gyrus establishes a control loop to achieve the desired equilibrium. If the force changes too quickly, the muscles cannot relax or tense fast enough and overshoot in either direction, causing a temporary loss of control. The reaction time for responding to changes in force depends on physiological limitations and the attention level of the brain: an expected change will be stabilized faster than a sudden decrease or increase of load. To avoid vehicle passengers losing control over body motion and getting injured, it is necessary to limit the exposure to both the maximum force (acceleration) and maximum jerk, since time is needed to adjust muscle tension and adapt to even limited stress changes. Sudden changes in acceleration can cause injuries such as whiplash. Excessive jerk may also result in an uncomfortable ride, even at levels that do not cause injury. Engineers expend considerable design effort minimizing "jerky motion" on elevators, trams, and other conveyances. For example, consider the effects of acceleration and jerk when riding in a car: Skilled and experienced drivers can accelerate smoothly, but beginners often provide a jerky ride. When changing gears in a car with a foot-operated clutch, the accelerating force is limited by engine power, but an inexperienced driver can cause severe jerk because of intermittent force closure over the clutch. The feeling of being pressed into the seats in a high-powered sports car is due to the acceleration. As the car launches from rest, there is a large positive jerk as its acceleration rapidly increases. After the launch, there is a small, sustained negative jerk as the force of air resistance increases with the car's velocity, gradually decreasing acceleration and reducing the force pressing the passenger into the seat. When the car reaches its top speed, the acceleration has reached 0 and remains constant, after which there is no jerk until the driver decelerates or changes direction. When braking suddenly or during collisions, passengers whip forward with an initial acceleration that is larger than during the rest of the braking process because muscle tension regains control of the body quickly after the onset of braking or impact. These effects are not modeled in vehicle testing because cadavers and crash test dummies do not have active muscle control.
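The definition of jerk as the third time derivative of position can be checked numerically. The sketch below (Python with numpy; the cubic test trajectory is an arbitrary choice for illustration) recovers velocity, acceleration, and jerk from sampled positions by repeated finite differences.

import numpy as np

# Estimate v, a, j from sampled positions by repeated finite differences.
# Test trajectory x(t) = t**3, for which a(t) = 6t and the jerk is constant: j = 6.
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
x = t**3

v = np.gradient(x, dt)   # first derivative: velocity
a = np.gradient(v, dt)   # second derivative: acceleration
j = np.gradient(a, dt)   # third derivative: jerk

print(j[100:105])  # approximately [6. 6. 6. 6. 6.] away from the endpoints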
To minimize the jerk, curves along roads are designed to be clothoids as are railroad curves and roller coaster loops. Force, acceleration, and jerk For a constant mass m, acceleration is directly proportional to force according to Newton's second law of motion: F(t) = m a(t). In classical mechanics of rigid bodies, there are no forces associated with the derivatives of acceleration; however, physical systems experience oscillations and deformations as a result of jerk. In designing the Hubble Space Telescope, NASA set limits on both jerk and jounce. The Abraham–Lorentz force is the recoil force on an accelerating charged particle emitting radiation. This force is proportional to the particle's jerk and to the square of its charge. The Wheeler–Feynman absorber theory is a more advanced theory, applicable in a relativistic and quantum environment, and accounting for self-energy. In an idealized setting Discontinuities in acceleration do not occur in real-world environments because of deformation, quantum mechanics effects, and other causes. However, a jump-discontinuity in acceleration and, accordingly, unbounded jerk are feasible in an idealized setting, such as an idealized point mass moving along a piecewise smooth path that is continuous as a whole. The jump-discontinuity occurs at points where the path is not smooth. Extrapolating from these idealized settings, one can qualitatively describe, explain and predict the effects of jerk in real situations. Jump-discontinuity in acceleration can be modeled using a Dirac delta function in jerk, scaled to the height of the jump. Integrating jerk over time across the Dirac delta yields the jump-discontinuity. For example, consider a path along an arc of radius r, which tangentially connects to a straight line. The whole path is continuous, and its pieces are smooth. Now assume a point particle moves with constant speed along this path, so its tangential acceleration is zero. The centripetal acceleration, given by v²/r, is normal to the arc and directed inward. When the particle passes the connection of pieces, it experiences a jump-discontinuity in acceleration of magnitude v²/r, and it undergoes a jerk that can be modeled by a Dirac delta, scaled to the jump-discontinuity. For a more tangible example of discontinuous acceleration, consider an ideal spring–mass system with the mass oscillating on an idealized surface with friction. The force on the mass is equal to the vector sum of the spring force and the kinetic frictional force. When the velocity changes sign (at the maximum and minimum displacements), the magnitude of the force on the mass changes by twice the magnitude of the frictional force, because the spring force is continuous and the frictional force reverses direction with velocity. The jump in acceleration equals the force on the mass divided by the mass. That is, each time the mass passes through a minimum or maximum displacement, the mass experiences a discontinuous acceleration, and the jerk contains a Dirac delta until the mass stops. The static friction force adapts to the residual spring force, establishing equilibrium with zero net force and zero velocity. Consider the example of a braking and decelerating car. The brake pads generate kinetic frictional forces and constant braking torques on the disks (or drums) of the wheels. Rotational velocity decreases linearly to zero with constant angular deceleration. The frictional force, torque, and car deceleration suddenly reach zero, which indicates a Dirac delta in physical jerk.
The Dirac delta is smoothed down by the real environment, the cumulative effects of which are analogous to damping of the physiologically perceived jerk. This example neglects the effects of tire sliding, suspension dipping, real deflection of all ideally rigid mechanisms, etc. Another example of significant jerk, analogous to the first example, is the cutting of a rope with a particle on its end. Assume the particle is oscillating in a circular path with non-zero centripetal acceleration. When the rope is cut, the particle's path changes abruptly to a straight path, and the force in the inward direction changes suddenly to zero. Imagine a monomolecular fiber cut by a laser; the particle would experience very high rates of jerk because of the extremely short cutting time. In rotation Consider a rigid body rotating about a fixed axis in an inertial reference frame. If its angular position as a function of time is θ(t), the angular velocity, acceleration, and jerk can be expressed as follows: Angular velocity, ω(t) = dθ(t)/dt, is the time derivative of θ(t). Angular acceleration, α(t) = dω(t)/dt, is the time derivative of ω(t). Angular jerk, ζ(t) = dα(t)/dt, is the time derivative of α(t). Angular acceleration equals the torque acting on the body, divided by the body's moment of inertia with respect to the momentary axis of rotation. A change in torque results in angular jerk. The general case of a rotating rigid body can be modeled using kinematic screw theory, which includes one axial vector, the angular velocity ω, and one polar vector, the linear velocity v; from these, the angular acceleration and the angular jerk of the body are obtained by successive time differentiation, as above. For example, consider a Geneva drive, a device used for creating intermittent rotation of a driven wheel (the blue wheel in the animation) by continuous rotation of a driving wheel (the red wheel in the animation). During one cycle of the driving wheel, the driven wheel's angular position changes by 90 degrees and then remains constant. Because of the finite thickness of the driving wheel's fork (the slot for the driving pin), this device generates a discontinuity in the angular acceleration α, and an unbounded angular jerk in the driven wheel. Jerk does not preclude the Geneva drive from being used in applications such as movie projectors and cams. In movie projectors, the film advances frame-by-frame, but the projector operation has low noise and is highly reliable because of the low film load (only a small section of film weighing a few grams is driven), the moderate speed (2.4 m/s), and the low friction. With cam drive systems, use of a dual cam can avoid the jerk of a single cam; however, the dual cam is bulkier and more expensive. The dual-cam system has two cams on one axle that shifts a second axle by a fraction of a revolution. The graphic shows step drives of one-sixth and one-third rotation per one revolution of the driving axle. There is no radial clearance because two arms of the stepped wheel are always in contact with the double cam. Generally, combined contacts may be used to avoid the jerk (and wear and noise) associated with a single follower: for example, the jerk caused by a single follower gliding along a slot and changing its contact point from one side of the slot to the other can be avoided by using two followers sliding along the same slot, one on each side.
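The chain of derivatives θ → ω → α → ζ can be verified symbolically. The following sketch (Python with sympy; the sinusoidal angular position is an assumed example, such as a rocking mechanism, not taken from the text) differentiates an angular position three times:

import sympy as sp

# Symbolic check of the chain theta -> omega -> alpha -> zeta (angular jerk)
# for an assumed angular position theta(t) = sin(w*t).
t, w = sp.symbols('t w', real=True)
theta = sp.sin(w * t)

omega = sp.diff(theta, t)   # angular velocity
alpha = sp.diff(omega, t)   # angular acceleration
zeta = sp.diff(alpha, t)    # angular jerk

print(omega, alpha, zeta)   # w*cos(t*w), -w**2*sin(t*w), -w**3*cos(t*w)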
In elastically deformable matter An elastically deformable mass deforms under an applied force (or acceleration); the deformation is a function of its stiffness and the magnitude of the force. If the change in force is slow, the jerk is small, and the propagation of deformation is considered instantaneous as compared to the change in acceleration. The distorted body acts as if it were in a quasistatic regime, and only a changing force (nonzero jerk) can cause propagation of mechanical waves (or electromagnetic waves for a charged particle); therefore, for nonzero to high jerk, a shock wave and its propagation through the body should be considered. The propagation of deformation is shown in the graphic "Compression wave patterns" as a compressional plane wave through an elastically deformable material. Also shown, for angular jerk, are the deformation waves propagating in a circular pattern, which causes shear stress and possibly other modes of vibration. The reflection of waves along the boundaries causes constructive interference patterns (not pictured), producing stresses that may exceed the material's limits. The deformation waves may cause vibrations, which can lead to noise, wear, and failure, especially in cases of resonance. The graphic captioned "Pole with massive top" shows a block connected to an elastic pole and a massive top. The pole bends when the block accelerates, and when the acceleration stops, the top will oscillate (damped) at a rate governed by the stiffness of the pole. One could argue that a greater (periodic) jerk might excite a larger amplitude of oscillation because small oscillations are damped before reinforcement by a shock wave. One can also argue that a larger jerk might increase the probability of exciting a resonant mode because the larger wave components of the shock wave have higher frequencies and Fourier coefficients. To reduce the amplitude of excited stress waves and vibrations, one can limit jerk by shaping motion and making the acceleration continuous with slopes as flat as possible. Due to limitations of abstract models, algorithms for reducing vibrations include higher derivatives, such as jounce, or suggest continuous regimes for both acceleration and jerk. One concept for limiting jerk is to shape acceleration and deceleration sinusoidally with zero acceleration in between (see graphic captioned "Sinusoidal acceleration profile"), making the speed appear sinusoidal with constant maximum speed. The jerk, however, will remain discontinuous at the points where acceleration enters and leaves the zero phases. In the geometric design of roads and tracks Roads and tracks are designed to limit the jerk caused by changes in their curvature. Design standards for high-speed rail vary from 0.2 m/s³ to 0.6 m/s³. Track transition curves limit the jerk when transitioning from a straight line to a curve, or vice versa. Recall that in constant-speed motion along an arc, acceleration is zero in the tangential direction and nonzero in the inward normal direction. Transition curves gradually increase the curvature and, consequently, the centripetal acceleration. An Euler spiral, the theoretically optimum transition curve, linearly increases centripetal acceleration and results in constant jerk (see graphic). In real-world applications, the plane of the track is inclined (cant) along the curved sections. The incline causes vertical acceleration, which is a design consideration for wear on the track and embankment.
The Wiener Kurve (Viennese Curve) is a patented curve designed to minimize this wear. Rollercoasters are also designed with track transitions to limit jerk. When entering a loop, acceleration values can reach around 4g (40 m/s²), and riding in this high acceleration environment is only possible with track transitions. S-shaped curves, such as figure eights, also use track transitions for smooth rides. In motion control In motion control, the design focus is on straight, linear motion, with the need to move a system from one steady position to another (point-to-point motion). The design concern from a jerk perspective is vertical jerk; the jerk from tangential acceleration is effectively zero since linear motion is non-rotational. Motion control applications include passenger elevators and machining tools. Limiting vertical jerk is considered essential for elevator riding convenience. ISO 8100-34 specifies measurement methods for elevator ride quality with respect to jerk, acceleration, vibration, and noise; however, the standard does not specify levels for acceptable or unacceptable ride quality. It is reported that most passengers rate a vertical jerk of 2 m/s³ as acceptable and 6 m/s³ as intolerable. For hospitals, 0.7 m/s³ is the recommended limit. A primary design goal for motion control is to minimize the transition time without exceeding speed, acceleration, or jerk limits. Consider a third-order motion-control profile with quadratic ramping and deramping phases in velocity (see figure). This motion profile consists of the following seven segments: Acceleration build up — positive jerk limit; linear increase in acceleration to the positive acceleration limit; quadratic increase in velocity Upper acceleration limit — zero jerk; linear increase in velocity Acceleration ramp down — negative jerk limit; linear decrease in acceleration; (negative) quadratic increase in velocity, approaching the desired velocity limit Velocity limit — zero jerk; zero acceleration Deceleration build up — negative jerk limit; linear decrease in acceleration to the negative acceleration limit; (negative) quadratic decrease in velocity Lower deceleration limit — zero jerk; linear decrease in velocity Deceleration ramp down — positive jerk limit; linear increase in acceleration to zero; quadratic decrease in velocity; approaching the desired position at zero speed and zero acceleration Segment four's time period (constant velocity) varies with distance between the two positions. If this distance is so small that omitting segment four would not suffice, then segments two and six (constant acceleration) could be equally reduced, and the constant velocity limit would not be reached. If this modification does not sufficiently reduce the crossed distance, then segments one, three, five, and seven could be shortened by an equal amount, and the constant acceleration limits would not be reached. Other motion profile strategies are used, such as minimizing the square of jerk for a given transition time and, as discussed above, sinusoidal-shaped acceleration profiles. Motion profiles are tailored for specific applications including machines, people movers, chain hoists, automobiles, and robotics. In manufacturing Jerk is an important consideration in manufacturing processes. Rapid changes in acceleration of a cutting tool can lead to premature tool wear and result in uneven cuts; consequently, modern motion controllers include jerk limitation features.
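The seven-segment profile described above can be reproduced by integrating a piecewise-constant jerk schedule. The sketch below (Python; the limits and segment durations are hand-picked example values, not the output of a real motion planner) shows that the schedule returns the system to zero acceleration and zero velocity at the target position.

# Integrate the seven-segment jerk schedule described above (assumed symmetric:
# jerk limit 1 m/s^3, acceleration limit 1 m/s^2, durations chosen by hand).
J, dt = 1.0, 1e-3
segments = [  # (duration in s, jerk in m/s^3)
    (1.0, +J),   # 1 acceleration build-up
    (1.0, 0.0),  # 2 constant acceleration
    (1.0, -J),   # 3 acceleration ramp-down
    (2.0, 0.0),  # 4 constant velocity
    (1.0, -J),   # 5 deceleration build-up
    (1.0, 0.0),  # 6 constant deceleration
    (1.0, +J),   # 7 deceleration ramp-down
]

a = v = x = 0.0
for duration, jerk in segments:
    for _ in range(int(duration / dt)):
        a += jerk * dt   # jerk integrates to acceleration
        v += a * dt      # acceleration integrates to velocity
        x += v * dt      # velocity integrates to position
print(f"final a={a:.3f} m/s^2, v={v:.3f} m/s, x={x:.3f} m")  # a and v return to 0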
In mechanical engineering, jerk, in addition to velocity and acceleration, is considered in the development of cam profiles because of tribological implications and the ability of the actuated body to follow the cam profile without chatter. Jerk is often considered when vibration is a concern. A device that measures jerk is called a "jerkmeter". Further derivatives Further time derivatives have also been named, as snap or jounce (fourth derivative), crackle (fifth derivative), and pop (sixth derivative). Continuing the pattern, the seventh derivative is known as "bang", the eighth has been referred to as "boom", and the ninth as "crash". However, time derivatives of position of higher order than four appear rarely. The terms snap, crackle, and pop (for the fourth, fifth, and sixth derivatives of position) were inspired by the advertising mascots Snap, Crackle, and Pop.
Physical sciences
Classical mechanics
Physics
16327
https://en.wikipedia.org/wiki/Joule
Joule
The joule (symbol: J) is the unit of energy in the International System of Units (SI). It is equal to the amount of work done when a force of one newton displaces a mass through a distance of one metre in the direction of that force. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule (1818–1889). Definition In terms of SI base units and in terms of SI derived units with special names, the joule is defined as J = kg⋅m²⋅s⁻² = N⋅m = W⋅s = C⋅V. One joule is also equivalent to any of the following: The work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (C⋅V). This relationship can be used to define the volt. The work required to produce one watt of power for one second, or one watt-second (W⋅s) (compare kilowatt-hour, which is 3.6 megajoules). This relationship can be used to define the watt. History The CGS system had been declared official in 1881, at the first International Electrical Congress. The erg was adopted as its unit of energy in 1882. Wilhelm Siemens, in his inauguration speech as chairman of the British Association for the Advancement of Science (23 August 1882), first proposed the joule as a unit of heat, to be derived from the electromagnetic units ampere and ohm, in cgs units equivalent to 10⁷ erg. The naming of the unit in honour of James Prescott Joule (1818–1889), at the time retired and aged 63, followed the recommendation of Siemens. At the second International Electrical Congress, on 31 August 1889, the joule was officially adopted alongside the watt and the quadrant (later renamed to henry). Joule died in the same year, on 11 October 1889. At the fourth congress (1893), the "international ampere" and "international ohm" were defined, with slight changes in the specifications for their measurement, with the "international joule" being the unit derived from them. In 1935, the International Electrotechnical Commission (as the successor organisation of the International Electrical Congress) adopted the "Giorgi system", which by virtue of assuming a defined value for the magnetic constant also implied a redefinition of the joule. The Giorgi system was approved by the International Committee for Weights and Measures in 1946. The joule was now no longer defined based on electromagnetic units, but instead as the unit of work performed by one unit of force (at the time not yet named newton) over the distance of 1 metre. The joule was explicitly intended as the unit of energy to be used in both electromagnetic and mechanical contexts. The ratification of the definition at the ninth General Conference on Weights and Measures, in 1948, added the specification that the joule was also to be preferred as the unit of heat in the context of calorimetry, thereby officially deprecating the use of the calorie. This is the definition declared in the modern International System of Units in 1960. The definition of the joule as J = kg⋅m²⋅s⁻² has remained unchanged since 1946, but the joule as a derived unit has inherited changes in the definitions of the second (in 1960 and 1967), the metre (in 1983) and the kilogram (in 2019). Practical examples One joule represents (approximately): The typical energy released as heat by a person at rest every 1/60 s (~, basal metabolic rate); about / day. The amount of electricity required to run a device for .
The energy required to accelerate a mass at through a distance of . The kinetic energy of a mass travelling at , or a mass travelling at . The energy required to lift an apple up 1 m, assuming the apple has a mass of 101.97 g. The heat required to raise the temperature of 0.239 g of water from 0 °C to 1 °C. The kinetic energy of a human moving very slowly (). The kinetic energy of a tennis ball moving at . The food energy (kcal) in slightly more than half of an ordinary-sized sugar crystal (/crystal). Multiples . The minimal energy needed to change a bit of data in computation at around room temperature – approximately – is given by the Landauer limit. is about the kinetic energy of a flying mosquito. The Large Hadron Collider (LHC) produces collisions of the microjoule order (7 TeV) per particle. Nutritional food labels in most countries express energy in kilojoules (kJ). One square metre of the Earth receives about of solar radiation every second in full daylight. A human in a sprint has approximately 3 kJ of kinetic energy, while a cheetah in a (76 mph) sprint has approximately 20 kJ. . The megajoule is approximately the kinetic energy of a one megagram (tonne) vehicle moving at (100 mph). The energy required to heat of liquid water at constant pressure from to is approximately . . is about the chemical energy of combusting of petroleum. 2 GJ is about the Planck energy unit. . The terajoule is about (which is often used in energy tables). About of energy was released by Little Boy. The International Space Station, with a mass of approximately and orbital velocity of , has a kinetic energy of roughly . In 2017, Hurricane Irma was estimated to have a peak wind energy of . . is about of TNT, which is the amount of energy released by the Tsar Bomba, the largest man-made explosion ever. . The 2011 Tōhoku earthquake and tsunami in Japan had of energy according to its rating of 9.0 on the moment magnitude scale. Yearly U.S. energy consumption amounts to roughly , and the world final energy consumption was in 2021. One petawatt-hour of electricity, or any other form of energy, is . The zettajoule is somewhat more than the amount of energy required to heat the Baltic Sea by 1 °C, assuming properties similar to those of pure water. Human annual world energy consumption is approximately . The energy to raise the temperature of Earth's atmosphere 1 °C is approximately . The yottajoule is a little less than the amount of energy required to heat the Indian Ocean by 1 °C, assuming properties similar to those of pure water. The thermal output of the Sun is approximately per second. Conversions 1 joule is equal to (approximately unless otherwise stated): (exactly) (foot-pound) (foot-poundal) Units with exact equivalents in joules include: 1 thermochemical calorie = 4.184J 1 International Table calorie = 4.1868J 1W⋅h = 1kW⋅h = 1W⋅s = 1ton TNT = 1foe = Newton-metre and torque In mechanics, the concept of force (in some direction) has a close analogue in the concept of torque (about some angle): A result of this similarity is that the SI unit for torque is the newton-metre, which works out algebraically to have the same dimensions as the joule, but they are not interchangeable. The General Conference on Weights and Measures has given the unit of energy the name joule, but has not given the unit of torque any special name, hence it is simply the newton-metre (N⋅m) – a compound name derived from its constituent parts. 
The use of newton-metres for torque but joules for energy is helpful to avoid misunderstandings and miscommunication. The distinction may be seen also in the fact that energy is a scalar quantity – the dot product of a force vector and a displacement vector. By contrast, torque is a vector – the cross product of a force vector and a distance vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is (the vector magnitude of) torque, and θ is the angle swept (in radians). Since plane angles are dimensionless, it follows that torque and energy have the same dimensions. Watt-second A watt-second (symbol W s or W⋅s) is a derived unit of energy equivalent to the joule. The watt-second is the energy equivalent to the power of one watt sustained for one second. While the watt-second is equivalent to the joule in both units and meaning, there are some contexts in which the term "watt-second" is used instead of "joule", such as in the rating of photographic electronic flash units.
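The exact factors listed under Conversions, together with the torque-energy relation E = τθ, can be exercised in a few lines. The Python sketch below uses only values stated in this article plus the exact watt-hour relation (1 W⋅h = 3600 J); it is illustrative, not a units library.

import math

# Unit conversions around the joule, using exact factors from this article.
CAL_TH = 4.184       # J per thermochemical calorie (exact)
CAL_IT = 4.1868      # J per International Table calorie (exact)
WATT_HOUR = 3600.0   # J per watt-hour (exact)

def kwh_to_megajoules(kwh):
    """Kilowatt-hours to megajoules."""
    return kwh * 1000 * WATT_HOUR / 1e6

print(kwh_to_megajoules(1))   # 3.6 MJ, matching the kilowatt-hour noted above
print(1000 * CAL_TH)          # one food Calorie (kcal) = 4184 J

# Torque-energy relation E = tau * theta: a 2 N.m torque applied through a
# quarter turn (pi/2 rad) does pi joules of work.
print(2 * (math.pi / 2))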
Physical sciences
Energy, power, force and pressure
null
16389
https://en.wikipedia.org/wiki/Java%20virtual%20machine
Java virtual machine
A Java virtual machine (JVM) is a virtual machine that enables a computer to run Java programs as well as programs written in other languages that are also compiled to Java bytecode. The JVM is detailed by a specification that formally describes what is required in a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations so that program authors using the Java Development Kit (JDK) need not worry about idiosyncrasies of the underlying hardware platform. The JVM reference implementation is developed by the OpenJDK project as open source code and includes a JIT compiler called HotSpot. The commercially supported Java releases available from Oracle are based on the OpenJDK runtime. Eclipse OpenJ9 is another open source JVM for OpenJDK. JVM specification The Java virtual machine is an abstract (virtual) computer defined by a specification. It is a part of the Java runtime environment. The garbage collection algorithm used and any internal optimization of the Java virtual machine instructions (their translation into machine code) are not specified. The main reason for this omission is to not unnecessarily constrain implementers. Any Java application can be run only inside some concrete implementation of the abstract specification of the Java virtual machine. Starting with Java Platform, Standard Edition (J2SE) 5.0, changes to the JVM specification have been developed under the Java Community Process as JSR 924. Changes to the specification to support changes proposed to the class file format (JSR 202) are being done as a maintenance release of JSR 924. The specification for the JVM was published as the blue book. One of Oracle's JVMs is named HotSpot; the other, inherited from BEA Systems, is JRockit. Oracle owns the Java trademark and may allow its use to certify implementation suites as fully compatible with Oracle's specification. Class loader One of the organizational units of JVM byte code is a class. A class loader implementation must be able to recognize and load anything that conforms to the Java class file format. Any implementation is free to recognize other binary forms besides class files, but it must recognize class files. The class loader performs three basic activities in this strict order: Loading: finds and imports the binary data for a type Linking: performs verification, preparation, and (optionally) resolution Verification: ensures the correctness of the imported type Preparation: allocates memory for class variables and initializes the memory to default values Resolution: transforms symbolic references from the type into direct references. Initialization: invokes Java code that initializes class variables to their proper starting values. In general, there are three types of class loader: bootstrap class loader, extension class loader and system/application class loader. Every Java virtual machine implementation must have a bootstrap class loader that is capable of loading trusted classes, as well as an extension class loader or application class loader. The Java virtual machine specification does not specify how a class loader should locate classes. Virtual machine architecture The JVM operates on specific types of data as specified in the Java Virtual Machine specification. The data types can be divided into primitive types (integers, floating-point numbers, long, etc.) and reference types. Earlier JVMs were only 32-bit machines.
long and double types, which are 64 bits, are supported natively, but consume two units of storage in a frame's local variables or operand stack, since each unit is 32 bits. boolean, byte, short, and char types are all sign-extended (except char, which is zero-extended) and operated on as 32-bit integers, the same as int types. The smaller types only have a few type-specific instructions for loading, storing, and type conversion. boolean is operated on as 8-bit byte values, with 0 representing false and 1 representing true. (Although boolean has been treated as a type since The Java Virtual Machine Specification, Second Edition clarified this issue, in compiled and executed code there is little difference between a boolean and a byte except for name mangling in method signatures and the type of boolean arrays. booleans in method signatures are mangled as Z while bytes are mangled as B. Boolean arrays carry the type boolean[] but use 8 bits per element, and the JVM has no built-in capability to pack booleans into a bit array, so except for the type they perform and behave the same as byte arrays. In all other uses, the boolean type is effectively unknown to the JVM as all instructions to operate on booleans are also used to operate on bytes.) However, newer JVM releases, such as the OpenJDK HotSpot JVM, support 64-bit architectures. Consequently, either a 32-bit or a 64-bit JVM can be installed on a 64-bit operating system. The primary advantage of running Java in a 64-bit environment is the larger address space. This allows for a much larger Java heap size and an increased maximum number of Java threads, which is needed for certain kinds of large applications; however, there is a performance hit when using a 64-bit JVM compared with a 32-bit JVM. The JVM has a garbage-collected heap for storing objects and arrays. Code, constants, and other class data are stored in the "method area". The method area is logically part of the heap, but implementations may treat the method area separately from the heap, and for example might not garbage collect it. Each JVM thread also has its own call stack (called a "Java Virtual Machine stack" for clarity), which stores frames. A new frame is created each time a method is called, and the frame is destroyed when that method exits. Each frame provides an "operand stack" and an array of "local variables". The operand stack is used for operands to run computations and for receiving the return value of a called method, while local variables serve the same purpose as registers and are also used to pass method arguments. Thus, the JVM is both a stack machine and a register machine. In practice, HotSpot eliminates every stack besides the native thread/call stack even when running in interpreted mode, as its template interpreter technically functions as a compiler. Bytecode instructions The JVM has instructions for the following groups of tasks: load and store, arithmetic, type conversion, object creation and manipulation, operand stack management, control transfer, and method invocation and return. The aim is binary compatibility. Each particular host operating system needs its own implementation of the JVM and runtime. These JVMs interpret the bytecode semantically the same way, but the actual implementation may be different. More complex than just emulating bytecode is implementing the Java core API compatibly and efficiently, as it must be mapped to each host operating system. These instructions operate on a set of common abstracted data types rather than the native data types of any specific instruction set architecture.
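The two-slot rule for long and double can be mimicked with a toy model. The following Python sketch is a simplified illustration of frame layout, not JVM code, and the helper name is invented; it allocates 32-bit local-variable slots the way a compiler assigns local variable indices.

# Toy model of a JVM frame's local-variable array: every slot is one 32-bit
# unit, and (as described above) long and double values occupy two slots.
SLOTS_PER_TYPE = {"int": 1, "float": 1, "reference": 1, "long": 2, "double": 2}

def layout_locals(types):
    """Map each declared local variable to its starting slot index."""
    index, layout = 0, []
    for i, jvm_type in enumerate(types):
        layout.append((f"local{i}:{jvm_type}", index))
        index += SLOTS_PER_TYPE[jvm_type]
    return layout, index

layout, total = layout_locals(["int", "long", "reference", "double"])
print(layout)  # [('local0:int', 0), ('local1:long', 1), ('local2:reference', 3), ('local3:double', 4)]
print(total)   # 6 slots consumed by 4 local variables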
JVM languages A JVM language is any language with functionality that can be expressed in terms of a valid class file which can be hosted by the Java Virtual Machine. A class file contains Java Virtual Machine instructions (Java byte code) and a symbol table, as well as other ancillary information. The class file format is the hardware- and operating system-independent binary format used to represent compiled classes and interfaces. There are several JVM languages, both old languages ported to the JVM and completely new languages. JRuby and Jython are perhaps the most well-known ports of existing languages, i.e. Ruby and Python respectively. Of the new languages that have been created from scratch to compile to Java bytecode, Clojure, Groovy, Scala and Kotlin may be the most popular ones. A notable feature of the JVM languages is that they are compatible with each other, so that, for example, Scala libraries can be used with Java programs and vice versa. The Java 7 JVM implements JSR 292: Supporting Dynamically Typed Languages on the Java Platform, a new feature which supports dynamically typed languages in the JVM. This feature is developed within the Da Vinci Machine project whose mission is to extend the JVM so that it supports languages other than Java. Bytecode verifier A basic philosophy of Java is that it is inherently safe from the standpoint that no user program can crash the host machine or otherwise interfere inappropriately with other operations on the host machine, and that it is possible to protect certain methods and data structures belonging to trusted code from access or corruption by untrusted code executing within the same JVM. Furthermore, common programmer errors that often led to data corruption or unpredictable behavior such as accessing off the end of an array or using an uninitialized pointer are not allowed to occur. Several features of Java combine to provide this safety, including the class model, the garbage-collected heap, and the verifier. The JVM verifies all bytecode before it is executed. This verification consists primarily of three types of checks: Branches are always to valid locations Data is always initialized and references are always type-safe Access to private or package private data and methods is rigidly controlled The first two of these checks take place primarily during the verification step that occurs when a class is loaded and made eligible for use. The third is primarily performed dynamically, when data items or methods of a class are first accessed by another class. The verifier permits only some bytecode sequences in valid programs, e.g. a jump (branch) instruction can only target an instruction within the same method. Furthermore, the verifier ensures that any given instruction operates on a fixed stack location, allowing the JIT compiler to transform stack accesses into fixed register accesses. Because of this, the fact that the JVM is a stack architecture does not imply a speed penalty for emulation on register-based architectures when using a JIT compiler. In the face of the code-verified JVM architecture, it makes no difference to a JIT compiler whether it gets named imaginary registers or imaginary stack positions that must be allocated to the target architecture's registers. In fact, code verification makes the JVM different from a classic stack architecture, for which efficient emulation with a JIT compiler is more complicated and is typically carried out by a slower interpreter.
Additionally, the interpreter used by the default JVM is a special type known as a template interpreter, which translates bytecode directly to native, register-based machine language rather than emulating a stack like a typical interpreter. In many respects the HotSpot interpreter can be considered a JIT compiler rather than a true interpreter, meaning the stack architecture that the bytecode targets is not actually used in the implementation, but is merely a specification for the intermediate representation that can equally well be implemented on a register-based architecture. Another instance of a stack architecture being merely a specification, while being implemented in a register-based virtual machine, is the Common Language Runtime. The original specification for the bytecode verifier used natural language that was incomplete or incorrect in some respects. A number of attempts have been made to specify the JVM as a formal system. By doing this, the security of current JVM implementations can be analyzed more thoroughly, and potential security exploits prevented. It will also be possible to optimize the JVM by skipping unnecessary safety checks, if the application being run is proven to be safe. Secure execution of remote code A virtual machine architecture allows very fine-grained control over the actions that code within the machine is permitted to take. It assumes the code is "semantically" correct, that is, that it has successfully passed the (formal) bytecode verification process, carried out by a tool, possibly off-board the virtual machine. This is designed to allow safe execution of untrusted code from remote sources, a model used by Java applets and other secure code downloads. Once bytecode-verified, the downloaded code runs in a restricted "sandbox", which is designed to protect the user from misbehaving or malicious code. As an addition to the bytecode verification process, publishers can purchase a certificate with which to digitally sign applets as safe, giving them permission to ask the user to let the applet break out of the sandbox and access the local file system or clipboard, execute external pieces of software, or use the network. Formal proofs of bytecode verifiers have been produced by the Java Card industry (Formal Development of an Embedded Verifier for Java Card Byte Code). Bytecode interpreter and just-in-time compiler For each hardware architecture a different Java bytecode interpreter is needed. When a computer has a Java bytecode interpreter, it can run any Java bytecode program, and the same program can be run on any computer that has such an interpreter. When Java bytecode is executed by an interpreter, the execution will always be slower than the execution of the same program compiled into native machine language. This problem is mitigated by just-in-time (JIT) compilers for executing Java bytecode. A JIT compiler may translate Java bytecode into native machine language while executing the program. The translated parts of the program can then be executed much more quickly than they could be interpreted. This technique is applied to those parts of a program that are frequently executed. This way a JIT compiler can significantly speed up the overall execution time. There is no necessary connection between the Java programming language and Java bytecode. A program written in Java can be compiled directly into the machine language of a real computer, and programs written in languages other than Java can be compiled into Java bytecode. Java bytecode is intended to be platform-independent and secure.
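The dispatch loop of a bytecode interpreter is easy to sketch. The Python fragment below implements a generic stack machine with invented opcode names, not real JVM bytecode, but JVM opcodes such as iconst, iload, imul, and iadd follow the same pattern of popping operands from and pushing results onto the operand stack.

# A toy stack-machine interpreter loop, illustrating how bytecode for a stack
# architecture is dispatched. The instruction names here are invented.
def interpret(code, locals_):
    stack = []
    for op, *args in code:
        if op == "push":          # push a constant onto the operand stack
            stack.append(args[0])
        elif op == "load":        # push a local variable onto the operand stack
            stack.append(locals_[args[0]])
        elif op == "add":         # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":         # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "return":      # return the top of the operand stack
            return stack.pop()

# Computes x * 2 + 1, roughly the shape of code a compiler would emit.
program = [("load", 0), ("push", 2), ("mul",), ("push", 1), ("add",), ("return",)]
print(interpret(program, [20]))   # 41

A JIT compiler, by contrast, would map each verified stack position of this program to a machine register and compute the same result without maintaining a runtime stack.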
Some JVM implementations do not include an interpreter, but consist only of a just-in-time compiler. JVM in the web browser At the start of the Java platform's lifetime, the JVM was marketed as a web technology for creating rich web applications. Most web browsers and operating systems bundling web browsers do not ship with a Java plug-in, nor do they permit side-loading any non-Flash plug-in. The Java browser plugin was deprecated in JDK 9. The NPAPI Java browser plug-in was designed to allow the JVM to execute so-called Java applets embedded into HTML pages. For browsers with the plug-in installed, the applet is allowed to draw into a rectangular region on the page assigned to it. Because the plug-in includes a JVM, Java applets are not restricted to the Java programming language; any language targeting the JVM may run in the plug-in. A restricted set of APIs allows applets access to the user's microphone or 3D acceleration, although applets are not able to modify the page outside their rectangular region. Adobe Flash Player, the main competing technology, works in the same way in this respect. According to W3Techs, Java applet and Silverlight use had fallen to 0.1% each for all web sites, while Flash had fallen to 10.8%. JavaScript JVMs and interpreters Since May 2016, JavaPoly allows users to import unmodified Java libraries, and invoke them directly from JavaScript. JavaPoly allows websites to use unmodified Java libraries, even if the user does not have Java installed on their computer. Transpilation to JavaScript With the continuing improvements in JavaScript execution speed, combined with the increased use of mobile devices whose web browsers do not implement support for plugins, there are efforts to target those users through transpilation to JavaScript. It is possible to transpile either the source code or the JVM bytecode to JavaScript. Compiling the JVM bytecode, which is universal across JVM languages, allows building upon the language's existing compiler to bytecode. The main JVM bytecode to JavaScript transpilers are TeaVM, the compiler contained in Dragome Web SDK, Bck2Brwsr, and j2js-compiler. Leading transpilers from JVM languages to JavaScript include the Java-to-JavaScript transpiler contained in Google Web Toolkit, Clojurescript (Clojure), GrooScript (Apache Groovy), Scala.js (Scala) and others.
Technology
Virtualization
null
16421
https://en.wikipedia.org/wiki/Kennedy%20Space%20Center
Kennedy Space Center
The John F. Kennedy Space Center (KSC, originally known as the NASA Launch Operations Center), located on Merritt Island, Florida, is one of the National Aeronautics and Space Administration's (NASA) ten field centers. Since 1968, KSC has been NASA's primary launch center of American spaceflight, research, and technology. Launch operations for the Apollo, Skylab and Space Shuttle programs were carried out from Kennedy Space Center Launch Complex 39 and managed by KSC. Located on the east coast of Florida, KSC is adjacent to Cape Canaveral Space Force Station (CCSFS). The management of the two entities work very closely together, share resources, and operate facilities on each other's property. Though the first Apollo flights and all Project Mercury and Project Gemini flights took off from the then-Cape Canaveral Air Force Station, the launches were managed by KSC and its previous organization, the Launch Operations Directorate. Starting with the fourth Gemini mission, the NASA launch control center in Florida (Mercury Control Center, later the Launch Control Center) began handing off control of the vehicle to the Mission Control Center in Houston, shortly after liftoff; in prior missions it held control throughout the entire mission. Additionally, the center manages launch of robotic and commercial crew missions and researches food production and in-situ resource utilization for off-Earth exploration. Since 2010, the center has worked to become a multi-user spaceport through industry partnerships, even adding a new launch pad (LC-39C) in 2015. There are about 700 facilities and buildings grouped throughout the center's . Among the unique facilities at KSC are the tall Vehicle Assembly Building for stacking NASA's largest rockets, the Launch Control Center, which conducts space launches at KSC, the Operations and Checkout Building, which houses the astronauts' dormitories and suit-up area, a Space Station factory, and a long Shuttle Landing Facility. There is also a Visitor Complex on site that is open to the public. Formation Since 1949, the military had been performing launch operations at what would become Cape Canaveral Space Force Station. In December 1959, the Department of Defense transferred 5,000 personnel and the Missile Firing Laboratory to NASA to become the Launch Operations Directorate under NASA's Marshall Space Flight Center. President John F. Kennedy's 1961 goal of a crewed lunar landing by 1970 required an expansion of launch operations. On July 1, 1962, the Launch Operations Directorate was separated from MSFC to become the Launch Operations Center (LOC). Also, Cape Canaveral was inadequate to host the new launch facility design required for the mammoth tall, thrust Saturn V rocket, which would be assembled vertically in a large hangar and transported on a mobile platform to one of several launch pads. Therefore, the decision was made to build a new LOC site located adjacent to Cape Canaveral on Merritt Island. NASA began land acquisition in 1962, buying title to and negotiating with the state of Florida for an additional . The major buildings in KSC's Industrial Area were designed by architect Charles Luckman. Construction began in November 1962, and Kennedy visited the site twice in 1962, and again just a week before his assassination on November 22, 1963. On November 29, 1963, the facility was named by President Lyndon B. Johnson under Executive Order 11129. 
Johnson's order joined both the civilian LOC and the military Cape Canaveral station ("the facilities of Station No. 1 of the Atlantic Missile Range") under the designation "John F. Kennedy Space Center", spawning some confusion joining the two in the public mind. NASA administrator James E. Webb clarified this by issuing a directive stating the Kennedy Space Center name applied only to the LOC, while the Air Force issued a general order renaming the military launch site Cape Kennedy Air Force Station. Location Located on Merritt Island, Florida, the center is north-northwest of Cape Canaveral on the Atlantic Ocean, midway between Miami and Jacksonville on Florida's Space Coast, due east of Orlando. It is long and roughly wide, covering . KSC is a major central Florida tourist destination and is approximately one hour's drive from the Orlando area. The Kennedy Space Center Visitor Complex offers public tours of the center and Cape Canaveral Space Force Station. Historical programs Apollo program From 1967 through 1973, there were 13 Saturn V launches, including the ten remaining Apollo missions after Apollo 7. The first of two uncrewed flights, Apollo 4 (Apollo-Saturn 501) on November 9, 1967, was also the first rocket launch from KSC. The Saturn V's first crewed launch on December 21, 1968, was Apollo 8's lunar orbiting mission. The next two missions tested the Lunar Module: Apollo 9 (Earth orbit) and Apollo 10 (lunar orbit). Apollo 11, launched from Pad A on July 16, 1969, made the first Moon landing on July 20. The Apollo 11 launch included crewmembers Neil Armstrong, Michael Collins, and Buzz Aldrin, and attracted a record-breaking 650 million television viewers. Apollo 12 followed four months later. From 1970 to 1972, the Apollo program concluded at KSC with the launches of missions 13 through 17. Skylab On May 14, 1973, the last Saturn V launch put the Skylab space station in orbit from Pad 39A. By this time, the Cape Kennedy pads 34 and 37 used for the Saturn IB were decommissioned, so Pad 39B was modified to accommodate the Saturn IB, and used to launch three crewed missions to Skylab that year, as well as the final Apollo spacecraft for the Apollo–Soyuz Test Project in 1975. Space Shuttle As the Space Shuttle was being designed, NASA received proposals for building alternative launch-and-landing sites at locations other than KSC, which demanded study. KSC had important advantages, including its existing facilities; location on the Intracoastal Waterway; and its southern latitude, which gives a velocity advantage to missions launched in easterly near-equatorial orbits. Disadvantages included: its inability to safely launch military missions into polar orbit, since spent boosters would be likely to fall on the Carolinas or Cuba; corrosion from the salt air; and frequent cloudy or stormy weather. Although building a new site at White Sands Missile Range in New Mexico was seriously considered, NASA announced its decision in April 1972 to use KSC for the shuttle. Since the Shuttle could not be landed automatically or by remote control, the launch of Columbia on April 12, 1981 for its first orbital mission STS-1, was NASA's first crewed launch of a vehicle that had not been tested in prior uncrewed launches. In 1976, the VAB's south parking area was the site of Third Century America, a science and technology display commemorating the U.S. bicentennial. Concurrent with this event, the U.S. flag was painted on the south side of the VAB. 
During the late 1970s, LC-39 was reconfigured to support the Space Shuttle. Two Orbiter Processing Facilities were built near the VAB as hangars, with a third added in the 1980s. KSC's Shuttle Landing Facility (SLF) was the orbiters' primary end-of-mission landing site, although the first KSC landing did not take place until the tenth flight, when Challenger completed STS-41-B on February 11, 1984; the primary landing site until then was Edwards Air Force Base in California, subsequently used as a backup landing site. The SLF also provided a return-to-launch-site (RTLS) abort option, which was never utilized. The SLF is among the longest runways in the world. Constellation On October 28, 2009, the Ares I-X launch from Pad 39B was the first uncrewed launch from KSC since the Skylab workshop in 1973. Expendable launch vehicles (ELVs) Beginning in 1958, NASA and the military worked side by side on robotic mission launches (previously referred to as unmanned), cooperating as they broke ground in the field. In the early 1960s, NASA had as many as two robotic mission launches a month. The high frequency of flights allowed for quick evolution of the vehicles, as engineers gathered data, learned from anomalies and implemented upgrades. In 1963, with the intent of KSC ELV work focusing on the ground support equipment and facilities, a separate Atlas/Centaur organization was formed under NASA's Lewis Center (now Glenn Research Center (GRC)), taking that responsibility from the Launch Operations Center (later renamed KSC). Though almost all robotic missions launched from the Cape Canaveral Space Force Station (CCSFS), KSC "oversaw the final assembly and testing of rockets as they arrived at the Cape." In 1965, KSC's Unmanned Launch Operations directorate became responsible for all NASA uncrewed launch operations, including those at Vandenberg Space Force Base. From the 1950s to 1978, KSC chose the rocket and payload processing facilities for all robotic missions launching in the U.S., overseeing their near launch processing and checkout. In addition to government missions, KSC performed this service for commercial and foreign missions as well, though non-U.S.-government entities provided reimbursement. NASA also funded Cape Canaveral Space Force Station launch pad maintenance and launch vehicle improvements. All this changed with the Commercial Space Launch Act of 1984, after which NASA only coordinated its own and National Oceanic and Atmospheric Administration (NOAA) ELV launches. Companies were able to "operate their own launch vehicles" and utilize NASA's launch facilities. Payload processing handled by private firms also started to occur outside of KSC. Reagan's 1988 space policy furthered the movement of this work from KSC to commercial companies. That same year, launch complexes on Cape Canaveral Air Force Station started transferring from NASA to Air Force Space Command management. In the 1990s, though KSC was not performing the hands-on ELV work, engineers still maintained an understanding of ELVs and had contracts allowing them insight into the vehicles so they could provide knowledgeable oversight. KSC also worked on ELV research and analysis, and the contractors were able to utilize KSC personnel as a resource for technical issues. KSC, with the payload and launch vehicle industries, developed advances in automation of ELV launch and ground operations to enable U.S. rockets to compete in the global market.
In 1998, the Launch Services Program (LSP) formed at KSC, pulling together programs (and personnel) that already existed at KSC, GRC, Goddard Space Flight Center, and elsewhere to manage the launch of NASA and NOAA robotic missions. Cape Canaveral Space Force Station and VAFB are the primary launch sites for LSP missions, though other sites are occasionally used. LSP payloads such as the Mars Science Laboratory have been processed at KSC before being transferred to a launch pad on Cape Canaveral Space Force Station. Artemis program On November 16, 2022, at 06:47:44 UTC, the Space Launch System (SLS) was launched from Complex 39B as part of the Artemis I mission. Space station processing As the design of the International Space Station modules began in the early 1990s, KSC began to work with other NASA centers and international partners to prepare for processing before launch onboard the Space Shuttles. KSC utilized its hands-on experience processing the 22 Spacelab missions in the Operations and Checkout Building to anticipate the needs of ISS processing. These experiences were incorporated into the design of the Space Station Processing Facility (SSPF), which began construction in 1991. The Space Station Directorate formed in 1996. KSC personnel were embedded at station module factories for insight into their processes. From 1997 to 2007, KSC planned and performed ground integration tests and checkouts of station modules: three Multi-Element Integration Testing (MEIT) sessions and the Integration Systems Test (IST). Numerous issues were found and corrected that would have been difficult or nearly impossible to fix on orbit. Today, KSC continues to process ISS payloads from across the world before launch and develops its own experiments for use on orbit. The proposed Lunar Gateway would be manufactured and processed at the Space Station Processing Facility. Current programs and initiatives The following are current programs and initiatives at Kennedy Space Center: Commercial Crew Program Exploration Ground Systems Program Launch Services Program Educational Launch of Nanosatellites (ELaNa) Research and Technology Artemis program Lunar Gateway International Space Station Payloads Camp KSC: educational camps for schoolchildren in spring and summer, with a focus on space, aviation and robotics. Facilities The KSC Industrial Area, where many of the center's support facilities are located, is south of LC-39. It includes the Headquarters Building, the Operations and Checkout Building and the Central Instrumentation Facility. The astronaut crew quarters are in the O&C; before it was completed, the astronaut crew quarters were located in Hangar S at the Cape Canaveral Missile Test Annex (now Cape Canaveral Space Force Station). Located at KSC was the Merritt Island Spaceflight Tracking and Data Network station (MILA), a key radio communications and spacecraft tracking complex. Facilities at the Kennedy Space Center directly support its mission of launching and recovering spacecraft, and are available to prepare and maintain spacecraft and payloads for flight. The Headquarters (HQ) Building houses offices for the Center Director, a library, film and photo archives, a print shop and security. When the KSC Library first opened, it was part of the Army Ballistic Missile Agency. However, in 1965, the library moved into three separate sections in the newly opened NASA headquarters before eventually becoming a single unit in 1970.
The library contains over four million items related to the history and work at Kennedy. As one of ten NASA center libraries in the country, its collection focuses on engineering, science, and technology. The archives contain planning documents, film reels, and original photographs covering the history of KSC. The library is not open to the public but is available to KSC, Space Force, and Navy employees who work on site. Many of the media items from the collection are digitized and available through NASA's KSC Media Gallery or through its more up-to-date Flickr gallery. A new Headquarters Building was completed in 2019 as part of the Central Campus consolidation; ground was broken in 2014. The center operated its own short-line railroad until the operation was discontinued in 2015, with the sale of its final two locomotives. A third had already been donated to a museum. The line was costing $1.3 million annually to maintain. Payload manufacture and processing The Neil Armstrong Operations and Checkout Building (O&C) (previously known as the Manned Spacecraft Operations Building) is a historic site on the U.S. National Register of Historic Places. Dating back to the 1960s, it was used to receive, process, and integrate payloads for the Gemini and Apollo programs, the Skylab program in the 1970s, and for initial segments of the International Space Station through the 1990s. The Apollo and Space Shuttle astronauts would board the astronaut transfer van to Launch Complex 39 from the O&C building. The three-story Space Station Processing Facility (SSPF) consists of two enormous processing bays, an airlock, operational control rooms, laboratories, logistics areas and office space for support of non-hazardous Space Station and Shuttle payloads to ISO 14644-1 class 5 standards. Opened in 1994, it is the largest factory building in the KSC industrial area. The Vertical Processing Facility (VPF) features a door where payloads that are processed in the vertical position are brought in and manipulated with two overhead cranes and a hoist capable of lifting up to . The Hypergolic Maintenance and Checkout Facility (HMCF) comprises three buildings that are isolated from the rest of the industrial area because of the hazardous materials handled there. Hypergolic-fueled modules that made up the Space Shuttle Orbiter's reaction control system, orbital maneuvering system and auxiliary power units were stored and serviced in the HMCF. The Multi-Payload Processing Facility is a building used for Orion spacecraft and payload processing. The Payload Hazardous Servicing Facility (PHSF) contains a service bay, with a , hook height. It also contains a payload airlock. Its temperature is maintained at . The Blue Origin rocket manufacturing facility is located immediately south of the KSC visitor complex. Completed in 2019, it serves as the company's factory for the manufacture of New Glenn orbital rockets. Launch Complex 39 Launch Complex 39 (LC-39) was originally built for the Saturn V, the largest and most powerful operational launch vehicle until the Space Launch System, for the Apollo crewed Moon landing program. Since the end of the Apollo program in 1972, LC-39 has been used to launch every NASA human space flight, including Skylab (1973), the Apollo–Soyuz Test Project (1975), and the Space Shuttle program (1981–2011). Since December 1968, all launch operations have been conducted from launch pads A and B at LC-39. Both pads are on the ocean, east of the VAB. 
From 1969 to 1972, LC-39 was the "Moonport" for all six Apollo crewed Moon landing missions using the Saturn V, and it was used from 1981 to 2011 for all Space Shuttle launches. Human missions to the Moon required the large three-stage Saturn V rocket, which was tall and in diameter. At KSC, Launch Complex 39 was built on Merritt Island to accommodate the new rocket. Construction of the $800 million project began in November 1962. LC-39 pads A and B were completed by October 1965 (planned Pads C, D and E were canceled), the VAB was completed in June 1965, and the supporting infrastructure by late 1966. The complex includes: the Vehicle Assembly Building (VAB), a hangar capable of holding four Saturn Vs and the largest structure in the world by volume when completed in 1965; a transporter capable of carrying 5,440 tons along a crawlerway to either of two launch pads; a mobile service structure, with three Mobile Launcher Platforms, each containing a fixed launch umbilical tower; the Launch Control Center; and a news media facility. Launch Complex 48 Launch Complex 48 (LC-48) is a multi-user launch site under construction for small launchers and spacecraft. It will be located between Launch Complex 39A to the north and Space Launch Complex 41 to the south. LC-48 will be constructed as a "clean pad" to support multiple launch systems with differing propellant needs. While initially planned with only a single pad, the complex can be expanded to two at a later date. Commercial leasing As a part of promoting commercial space industry growth in the area and the overall center as a multi-user spaceport, KSC leases some of its properties. Some major examples: Exploration Park to multiple users (partnership with Space Florida) Shuttle Landing Facility to Space Florida (who contracts use to private companies) Orbiter Processing Facility (OPF)-3 to Boeing (for CST-100 Starliner) Launch Complex 39A, Launch Control Center Firing Room 4 and land for SpaceX's Roberts Road facility (Hangar X) to SpaceX O&C High Bay to Lockheed Martin (for Orion processing) Land for FPL's Space Coast Next Generation Solar Energy Center to Florida Power and Light (FPL) Hypergolic Maintenance Facility (HMF) to United Paradyne Corporation (UPC) Visitor complex The Kennedy Space Center Visitor Complex, operated by Delaware North since 1995, has a variety of exhibits, artifacts, displays and attractions on the history and future of human and robotic spaceflight. Bus tours of KSC originate from here. The complex also includes the separate Apollo/Saturn V Center, north of the VAB, and the United States Astronaut Hall of Fame, six miles west near Titusville. The complex had 1.5 million visitors in 2009 and some 700 employees. It was announced on May 29, 2015, that the Astronaut Hall of Fame exhibit would be moved to another location within the Visitor Complex to make room for an upcoming high-tech attraction entitled "Heroes and Legends". The attraction, designed by Orlando-based design firm Falcon's Treehouse, opened November 11, 2016. In March 2016, the visitor center unveiled the new location of the iconic countdown clock at the complex's entrance; previously, the clock was located with a flagpole at the press site. The clock was originally built and installed in 1969 and listed with the flagpole in the National Register of Historic Places in January 2000. In 2019, NASA celebrated the 50th anniversary of the Apollo program, including the launch of Apollo 10 on May 18. 
In the summer of 2019, Lunar Module 9 (LM-9) was relocated to the Apollo/Saturn V Center as part of an initiative to rededicate the center and celebrate the 50th anniversary of the Apollo program. Historic locations NASA lists the following Historic Districts at KSC; each district has multiple associated facilities: Launch Complex 39: Pad A Historic District Launch Complex 39: Pad B Historic District Shuttle Landing Facility (SLF) Area Historic District Orbiter Processing Historic District Solid Rocket Booster (SRB) Disassembly and Refurbishment Complex Historic District NASA KSC Railroad System Historic District NASA-owned Cape Canaveral Space Force Station Industrial Area Historic District There are 24 historic properties outside of these historic districts, including the Space Shuttle Atlantis, the Vehicle Assembly Building, the Crawlerway, and the Operations and Checkout Building. KSC has one National Historic Landmark, 78 National Register of Historic Places (NRHP) listed or eligible sites, and 100 archaeological sites. Other facilities The Rotation, Processing and Surge Facility (RPSF) is responsible for the preparation of solid rocket booster segments for transportation to the Vehicle Assembly Building (VAB). The RPSF was built in 1984 to perform SRB operations that had previously been conducted in high bays 2 and 4 of the VAB at the beginning of the Space Shuttle program. It was used until the Space Shuttle's retirement, and will be used in the future by the Space Launch System (SLS) and OmegA rockets. Weather Florida's peninsular shape and the temperature contrasts between land and ocean provide ideal conditions for electrical storms, earning Central Florida its reputation as the "lightning capital of the United States". This makes extensive lightning protection and detection systems necessary to protect employees, structures and spacecraft on launch pads. On November 14, 1969, Apollo 12 was struck by lightning just after lift-off from Pad 39A, but the flight continued safely. The most powerful lightning strike recorded at KSC occurred at LC-39B on August 25, 2006, while shuttle Atlantis was being prepared for STS-115. NASA managers were initially concerned that the lightning strike had damaged Atlantis, but no damage was found. On September 7, 2004, Hurricane Frances directly hit the area with sustained winds of and gusts up to , the most damaging storm to date. The Vehicle Assembly Building lost 1,000 exterior panels, each x in size. This exposed of the building to the elements. Damage occurred to the south and east sides of the VAB. The shuttle's Thermal Protection System Facility suffered extensive damage: the roof was partially torn off and the interior suffered water damage. Several rockets on display in the center were toppled. Further damage to KSC was caused by Hurricane Wilma in October 2005. A conservative NASA estimate is that the Space Center will experience 5 to 8 inches of sea level rise by the 2050s. Launch Complex 39A, the site of the Apollo 11 launch, is the most vulnerable to flooding, with a 14% annual risk of flooding beginning in 2020. KSC directors Since KSC's formation, ten NASA officials have served as directors, including three former astronauts (Crippen, Bridges and Cabana). In popular culture In addition to being frequently featured in documentaries, Kennedy Space Center has been portrayed on film many times. Some studio movies have even gained access and filmed scenes within the gates of the space center. 
If extras are needed in those scenes, space center employees are recruited, using personal time during filming. Films with scenes at KSC include: Moonraker Stowaway to the Moon SpaceCamp Apollo 13 Contact Armageddon Space Cowboys Swades Transformers: Dark of the Moon Tomorrowland Sharknado 3: Oh Hell No! First Man Geostorm Men in Black 3 Fly Me to the Moon The location appears as a major plot point in the finale of Stone Ocean, the sixth part of the manga and anime series JoJo's Bizarre Adventure. KSC is also one of the two primary settings of the 1965–1970 television series I Dream of Jeannie (along with a home in nearby Cocoa Beach), though the series was filmed entirely in Los Angeles.
Jet stream
Jet streams are fast flowing, narrow air currents in the Earth's atmosphere. The main jet streams are located near the altitude of the tropopause and are westerly winds, flowing west to east around the globe. The northern hemisphere and the southern hemisphere each have a polar jet around their respective polar vortex at around above sea level, typically travelling at around although often considerably faster. Closer to the equator, somewhat higher and somewhat weaker, is a subtropical jet. The northern polar jet flows over the middle to northern latitudes of North America, Europe, and Asia and their intervening oceans, while the southern hemisphere polar jet mostly circles Antarctica. Jet streams may start, stop, split into two or more parts, combine into one stream, or flow in various directions, including opposite to the direction of the remainder of the jet. The El Niño–Southern Oscillation affects the location of the jet streams, which in turn affect the weather over the tropical Pacific Ocean and the climate of much of the tropics and subtropics, and can affect weather in higher-latitude regions. The term "jet stream" is also applied to some other winds at varying levels in the atmosphere, some global (such as the higher-level polar-night jet), some local (such as the African easterly jet). Meteorologists use the location of some of the jet streams as an aid in weather forecasting. Airlines use them to reduce some flight times and fuel consumption. Scientists have considered whether the jet streams might be harnessed for power generation. In World War II, the Japanese used the jet stream to carry Fu-Go balloon bombs across the Pacific Ocean to launch small attacks on North America. Jet streams have been detected in the atmospheres of Venus, Jupiter, Saturn, Uranus, and Neptune. Discovery The first indications of this phenomenon came from American professor Elias Loomis (1811–1889), who proposed the hypothesis of a powerful air current in the upper air blowing west to east across the United States as an explanation for the behaviour of major storms. After the 1883 eruption of the Krakatoa volcano, weather watchers tracked and mapped the effects on the sky over several years. They labelled the phenomenon the "equatorial smoke stream". In the 1920s Japanese meteorologist Wasaburo Oishi detected the jet stream from a site near Mount Fuji. He tracked pilot balloons ("pibals"), used to measure wind speed and direction, as they rose in the air. Oishi's work largely went unnoticed outside Japan because it was published in Esperanto, though chronologically he deserves credit for the scientific discovery of jet streams. American pilot Wiley Post (1898–1935), the first man to fly around the world solo in 1933, is often given some credit for the discovery of jet streams. Post invented a pressurized suit that let him fly above . In the year before his death, Post made several attempts at a high-altitude transcontinental flight, and noticed that at times his ground speed greatly exceeded his air speed. German meteorologist Heinrich Seilkopf is credited with coining a special term, Strahlströmung (literally "jet current"), for the phenomenon in 1939. Many sources credit real understanding of the nature of jet streams to regular and repeated flight-path traversals during World War II. Flyers consistently noticed westerly tailwinds in excess of in flights, for example, from the US to the UK. 
Similarly, in 1944 a team of American meteorologists in Guam, including Reid Bryson, had enough observations to forecast very high west winds that would slow bombers raiding Japan. Description The polar and subtropical jet streams are the product of two factors: the atmospheric heating by solar radiation that produces the large-scale polar, Ferrel, and Hadley circulation cells, and the action of the Coriolis force on those moving masses. The Coriolis force is caused by the planet's rotation on its axis. The polar jet stream forms near the interface of the polar and Ferrel circulation cells; the subtropical jet forms near the boundary of the Ferrel and Hadley circulation cells. Polar jet streams are typically located near the 250 hPa (about 1/4 atmosphere) pressure level, or above sea level, while the weaker subtropical jet streams are somewhat higher. The polar jets, at lower altitudes and often intruding into mid-latitudes, strongly affect weather and aviation. The polar jet stream is most commonly found between latitudes 30° and 60° (closer to 60°), while the subtropical jet streams are located close to latitude 30°. These two jets merge at some locations and times, while at other times they are well separated. The northern polar jet stream is said to "follow the sun", as it slowly migrates northward as that hemisphere warms, and southward again as it cools. The width of a jet stream is typically a few hundred kilometres and its vertical thickness often less than . Jet streams are typically continuous over long distances, but discontinuities are also common. The path of the jet typically has a meandering shape, and these meanders themselves propagate eastward, at lower speeds than that of the actual wind within the flow. Further, the meanders can split or form eddies. Each large meander, or wave, within the jet stream is known as a Rossby wave (planetary wave). Rossby waves are caused by changes in the Coriolis effect with latitude. Shortwave troughs are smaller-scale waves, with a scale of long, superimposed on the Rossby waves; they move through the flow pattern around the large-scale, or longwave, "ridges" and "troughs" within Rossby waves. The wind speeds are greatest where temperature differences between air masses are greatest, and often exceed . Speeds of have been measured. The jet stream moves from west to east, bringing changes of weather. The path of jet streams affects cyclonic storm systems at lower levels in the atmosphere, and so knowledge of their course has become an important part of weather forecasting. For example, in 2007 and 2012, Britain experienced severe flooding as a result of the polar jet staying south for the summer. Cause In general, winds are strongest immediately under the tropopause (except locally, during tornadoes, tropical cyclones or other anomalous situations). If two air masses of different temperatures or densities meet, the resulting pressure difference caused by the density difference (which ultimately causes wind) is highest within the transition zone. The wind does not flow directly from the hot to the cold area, but is deflected by the Coriolis effect and flows along the boundary of the two air masses. All these facts are consequences of the thermal wind relation. The balance of forces acting on an atmospheric air parcel in the vertical direction is primarily between the gravitational force acting on the mass of the parcel and the buoyancy force, or the difference in pressure between the top and bottom surfaces of the parcel. Any imbalance between these forces results in the acceleration of the parcel in the direction of the imbalance: upward if the buoyant force exceeds the weight, and downward if the weight exceeds the buoyancy force. The balance in the vertical direction is referred to as hydrostatic. Beyond the tropics, the dominant forces act in the horizontal direction, and the primary struggle is between the Coriolis force and the pressure gradient force. Balance between these two forces is referred to as geostrophic. Given both hydrostatic and geostrophic balance, one can derive the thermal wind relation: the vertical gradient of the horizontal wind is proportional to the horizontal temperature gradient. 
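A minimal formal sketch of this relation, in standard textbook meteorological notation (the symbols below are conventional assumptions, not drawn from this article): writing hydrostatic balance in pressure coordinates and combining it with the geostrophic wind gives the thermal wind relation for the zonal (west-to-east) wind component.

```latex
% Hydrostatic balance in pressure coordinates
% (\Phi = geopotential, R = specific gas constant, T = temperature):
\frac{\partial \Phi}{\partial \ln p} = -RT
% Geostrophic balance for the zonal wind
% (f = Coriolis parameter, y = northward distance):
u_g = -\frac{1}{f}\,\frac{\partial \Phi}{\partial y}
% Differentiating u_g with respect to \ln p and substituting yields
% the thermal wind relation:
\frac{\partial u_g}{\partial \ln p} = \frac{R}{f}\,\frac{\partial T}{\partial y}
```

In the northern hemisphere, temperature decreases poleward (the gradient term is negative) and f is positive, so the westerly wind strengthens as pressure decreases, that is, with height, peaking near the tropopause, consistent with the jets described above.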
If two air masses in the northern hemisphere, one cold and dense to the north and the other hot and less dense to the south, are separated by a vertical boundary and that boundary is removed, the difference in densities will result in the cold air mass slipping under the hotter and less dense air mass. The Coriolis effect will then cause poleward-moving mass to deviate to the east, while equatorward-moving mass will deviate toward the west. The general trend in the atmosphere is for temperatures to decrease in the poleward direction. As a result, winds develop an eastward component and that component grows with altitude. Therefore, the strong eastward-moving jet streams are in part a simple consequence of the fact that the Equator is warmer than the north and south poles. Polar jet stream The thermal wind relation does not explain why the winds are organized into tight jets, rather than distributed more broadly over the hemisphere. One factor that contributes to the creation of a concentrated polar jet is the undercutting of sub-tropical air masses by the denser polar air masses at the polar front. This causes a sharp north–south pressure (south–north potential vorticity) gradient in the horizontal plane, an effect which is most significant during double Rossby wave breaking events. At high altitudes, lack of friction allows air to respond freely to the steep pressure gradient, with low pressure at high altitude over the pole. This results in the formation of planetary wind circulations that experience a strong Coriolis deflection and thus can be considered 'quasi-geostrophic'. The polar front jet stream is closely linked to the frontogenesis process in midlatitudes, as the acceleration/deceleration of the air flow induces areas of low/high pressure respectively, which link to the formation of cyclones and anticyclones along the polar front in a relatively narrow region. Subtropical jet A second factor which contributes to a concentrated jet is more applicable to the subtropical jet, which forms at the poleward limit of the tropical Hadley cell; to first order this circulation is symmetric with respect to longitude. Tropical air rises to the tropopause and moves poleward before sinking; this is the Hadley cell circulation. As it does so, it tends to conserve angular momentum, since friction with the ground is slight. Air masses that begin moving poleward are deflected eastward by the Coriolis force (true for either hemisphere), which for poleward-moving air implies an increased westerly component of the winds. Effects Hurricane protection The subtropical jet stream rounding the base of the mid-oceanic upper trough is thought to be one of the reasons most of the Hawaiian Islands have been resistant to the long list of Hawaii hurricanes that have approached. 
For example, when Hurricane Flossie (2007) approached and dissipated just before reaching landfall, the U.S. National Oceanic and Atmospheric Administration (NOAA) cited vertical wind shear as the cause. Uses The northern polar jet stream is the most important one for aviation and weather forecasting, as it is much stronger and at a much lower altitude than the subtropical jet streams and also covers many countries in the northern hemisphere, while the southern polar jet stream mostly circles Antarctica and sometimes the southern tip of South America. Aviation The location of the jet stream is important for aviation. Aircraft flight time can be dramatically affected by flying either with the flow or against it. Often, airlines work to fly with the jet stream to obtain significant fuel cost and time savings. Commercial use of the jet stream began on 18 November 1952, when Pan Am flew from Tokyo to Honolulu at an altitude of . It cut the trip time by over one-third, from 18 to 11.5 hours. Within North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jet stream. Across the Atlantic Ocean, the North Atlantic Tracks service allows airlines and air traffic control to accommodate the jet stream for the benefit of airlines and other users. Associated with jet streams is a phenomenon known as clear-air turbulence (CAT), caused by vertical and horizontal wind shear at the margins of the jet stream. The CAT is strongest on the cold-air side of the jet, next to and just under the axis of the jet. Clear-air turbulence can cause aircraft to plunge and so presents a passenger safety hazard that has caused fatal accidents, such as the death of one passenger on United Airlines Flight 826 in 1997. Unusually high wind speeds in the jet stream in late February 2024 pushed commercial jets to ground speeds in excess of . Possible future power generation Scientists are investigating ways to harness the wind energy within the jet stream. According to one estimate of the potential wind energy in the jet stream, only one percent of it would be needed to meet the world's current energy needs. In the late 2000s it was estimated that the required technology would take 10–20 years to develop. There are two major but divergent scientific articles about jet stream power. Archer & Caldeira claim that the Earth's jet streams could generate a total power of 1700 terawatts (TW) and that the climatic impact of harnessing this amount would be negligible. However, Miller, Gans, & Kleidon claim that the jet streams could generate a total power of only 7.5 TW, which lacks the potential to make a significant contribution to renewable energy. 
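To put those two divergent estimates side by side, here is a small illustrative calculation; the world primary power demand figure of roughly 18 TW is an assumed ballpark used only for this sketch, not a number given in this article:

```python
# Illustrative comparison of the two published jet-stream power estimates.
# WORLD_PRIMARY_POWER_TW is an assumed ballpark, not taken from the article.
ESTIMATES_TW = {
    "Archer & Caldeira": 1700.0,       # claimed total jet-stream power (TW)
    "Miller, Gans & Kleidon": 7.5,     # claimed total jet-stream power (TW)
}
WORLD_PRIMARY_POWER_TW = 18.0          # assumed world primary power demand (TW)

for source, total_tw in ESTIMATES_TW.items():
    one_percent = 0.01 * total_tw
    share = one_percent / WORLD_PRIMARY_POWER_TW
    print(f"{source}: 1% of {total_tw:g} TW = {one_percent:g} TW "
          f"~ {share:.0%} of assumed world demand")
# Under the higher estimate, tapping about 1% would roughly cover world demand
# (consistent with the "one percent" claim above); under the lower estimate,
# even the full 7.5 TW would fall well short.
```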
Unpowered aerial attack Near the end of World War II, from late 1944 until early 1945, the Japanese Fu-Go balloon bomb, a type of fire balloon, was designed as a cheap weapon intended to make use of the jet stream over the Pacific Ocean to reach the west coast of Canada and the United States. Relatively ineffective as weapons, they were used in one of the few attacks on North America during World War II, causing six deaths and a small amount of damage. American scientists studying the balloons thought the Japanese might be preparing a biological attack. Changes due to climate cycles Effects of ENSO El Niño–Southern Oscillation (ENSO) influences the average location of upper-level jet streams, and leads to cyclical variations in precipitation and temperature across North America, as well as affecting tropical cyclone development across the eastern Pacific and Atlantic basins. Combined with the Pacific Decadal Oscillation, ENSO can also impact cold-season rainfall in Europe. Changes in ENSO also change the location of the jet stream over South America, which partially affects precipitation distribution over the continent. El Niño During El Niño events, increased precipitation is expected in California due to a more southerly, zonal storm track. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. Snowfall is greater than average across the southern Rockies and the Sierra Nevada mountain range, and is well below normal across the Upper Midwest and Great Lakes states. The northern tier of the lower 48 states exhibits above-normal temperatures during the fall and winter, while the Gulf coast experiences below-normal temperatures during the winter season. The subtropical jet stream across the deep tropics of the northern hemisphere is enhanced due to increased convection in the equatorial Pacific, which decreases tropical cyclogenesis within the Atlantic tropics below normal, and increases tropical cyclone activity across the eastern Pacific. In the southern hemisphere, the subtropical jet stream is displaced equatorward, or north, of its normal position, which diverts frontal systems and thunderstorm complexes from reaching central portions of the continent. La Niña Across North America during La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track and jet stream. The storm track shifts far enough northward to bring wetter than normal winter conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. Snowfall is above normal across the Pacific Northwest and western Great Lakes. Across the North Atlantic, the jet stream is stronger than normal, directing stronger systems with increased precipitation towards Europe. Dust Bowl Evidence suggests the jet stream was at least partly responsible for the widespread drought conditions during the 1930s Dust Bowl in the Midwest United States. Normally, the jet stream flows east over the Gulf of Mexico and turns northward, pulling up moisture and dumping rain onto the Great Plains. During the Dust Bowl, the jet stream weakened and changed course, traveling farther south than normal. This starved the Great Plains and other areas of the Midwest of rainfall, causing extraordinary drought conditions. Longer-term climatic changes Since the early 2000s, climate models have consistently indicated that global warming will gradually push jet streams poleward. In 2008, this was supported by observational evidence, which showed that from 1979 to 2001 the northern jet stream moved northward at an average rate of per year, with a similar trend in the southern hemisphere jet stream. Climate scientists have hypothesized that the jet stream will also gradually weaken as a result of global warming. 
Trends such as Arctic sea ice decline, reduced snow cover, evapotranspiration patterns, and other weather anomalies have caused the Arctic to heat up faster than other parts of the globe, in what is known as Arctic amplification. In 2021–2022, it was found that since 1979, the warming within the Arctic Circle has been nearly four times faster than the global average, and some hotspots in the Barents Sea area warmed up to seven times faster than the global average. While the Arctic remains one of the coldest places on Earth today, the temperature gradient between it and the warmer parts of the globe will continue to diminish with every decade of global warming as a result of this amplification. If this gradient has a strong influence on the jet stream, then the jet stream will eventually become weaker and more variable in its course, which would allow more cold air from the polar vortex to leak into the mid-latitudes and slow the progression of Rossby waves, leading to more persistent and more extreme weather. The hypothesis above is closely associated with Jennifer Francis, who first proposed it in a 2012 paper co-authored by Stephen J. Vavrus. While some paleoclimate reconstructions, dating back to 1997, had suggested that the polar vortex becomes more variable and causes more unstable weather during periods of warming, this was contradicted by climate modelling, with PMIP2 simulations finding in 2010 that the Arctic Oscillation (AO) was much weaker and more negative during the Last Glacial Maximum, suggesting that warmer periods have a stronger positive-phase AO, and thus less frequent leaks of polar vortex air. However, a 2012 review in the Journal of the Atmospheric Sciences noted that "there [has been] a significant change in the vortex mean state over the twenty-first century, resulting in a weaker, more disturbed vortex", which contradicted the modelling results but fit the Francis–Vavrus hypothesis. Additionally, a 2013 study noted that the then-current CMIP5 models tended to strongly underestimate winter blocking trends, and other 2012 research had suggested a connection between declining Arctic sea ice and heavy snowfall during midlatitude winters. In 2013, further research from Francis connected reductions in Arctic sea ice to extreme summer weather in the northern mid-latitudes, while other research from that year identified potential linkages between Arctic sea ice trends and more extreme rainfall in the European summer. At the time, it was also suggested that this connection between Arctic amplification and jet stream patterns was involved in the formation of Hurricane Sandy and played a role in the early 2014 North American cold wave. In 2015, Francis' next study concluded that highly amplified jet-stream patterns had occurred more frequently over the previous two decades. Hence, continued heat-trapping emissions favour the increased formation of extreme events caused by prolonged weather conditions. Studies published in 2017 and 2018 identified stalling patterns of Rossby waves in the northern hemisphere jet stream as the culprit behind other almost stationary extreme weather events, such as the 2018 European heatwave, the 2003 European heat wave, the 2010 Russian heat wave and the 2010 Pakistan floods, and suggested that these patterns were all connected to Arctic amplification. 
Further work from Francis and Vavrus that year suggested that amplified Arctic warming is strongest in the lower atmosphere because the expansion of warming air raises pressure levels aloft, which decreases the poleward geopotential height gradient. As these gradients drive west-to-east winds through the thermal wind relationship, declining wind speeds are usually found south of the areas with geopotential increases. In 2017, Francis explained her findings to Scientific American: "A lot more water vapor is being transported northward by big swings in the jet stream. That's important because water vapor is a greenhouse gas just like carbon dioxide and methane. It traps heat in the atmosphere. That vapor also condenses as droplets we know as clouds, which themselves trap more heat. The vapor is a big part of the amplification story—a big reason the Arctic is warming faster than anywhere else." In a 2017 study conducted by climatologist Judah Cohen and several of his research associates, Cohen wrote that "[the] shift in polar vortex states can account for most of the recent winter cooling trends over Eurasian midlatitudes". A 2018 paper from Vavrus and others linked Arctic amplification to more persistent hot-dry extremes during midlatitude summers, as well as midlatitude winter continental cooling. Another 2017 paper estimated that when the Arctic experiences anomalous warming, primary production in North America goes down by between 1% and 4% on average, with some states suffering up to 20% losses. A 2021 study found that a stratospheric polar vortex disruption is linked with extreme cold winter weather across parts of Asia and North America, including the February 2021 North American cold wave. Another 2021 study identified a connection between Arctic sea ice loss and the increased size of wildfires in the Western United States. However, because the specific observations are short-term, there is considerable uncertainty in the conclusions. Climatology observations require several decades to definitively distinguish various forms of natural variability from climate trends. This point was stressed by reviews in 2013 and in 2017. A study in 2014 concluded that Arctic amplification significantly decreased cold-season temperature variability over the northern hemisphere in recent decades. Cold Arctic air intrudes into the warmer lower latitudes more rapidly today during autumn and winter, a trend projected to continue in the future except during summer, thus calling into question whether winters will bring more cold extremes. A 2019 analysis of a data set collected from 35,182 weather stations worldwide, including 9,116 whose records go beyond 50 years, found a sharp decrease in northern midlatitude cold waves since the 1980s. Moreover, a range of long-term observational data collected during the 2010s and published in 2020 suggests that the intensification of Arctic amplification since the early 2010s was not linked to significant changes in mid-latitude atmospheric patterns. State-of-the-art modelling research from PAMIP (Polar Amplification Model Intercomparison Project) improved upon the 2010 findings of PMIP2; it found that sea ice decline would weaken the jet stream and increase the probability of atmospheric blocking, but the connection was very minor and typically insignificant next to interannual variability. 
In 2022, a follow-up study found that while the PAMIP average had likely underestimated the weakening caused by sea ice decline by a factor of 1.2 to 3, even the corrected connection still amounts to only 10% of the jet stream's natural variability. Additionally, a 2021 study found that while jet streams had indeed slowly moved poleward since 1960, as predicted by models, they had not weakened, in spite of a small increase in waviness. A 2022 re-analysis of aircraft observational data collected over 2002–2020 suggested that the North Atlantic jet stream had actually strengthened. Finally, a 2021 study was able to reconstruct jet stream patterns over the past 1,250 years based on Greenland ice cores, and found that all of the recently observed changes remain within the range of natural variability: the earliest likely time of divergence is 2060, under Representative Concentration Pathway 8.5, which implies continually accelerating greenhouse gas emissions. Other upper-level jets Polar night jet The polar-night jet stream forms mainly during the winter months, when the nights are much longer (hence the name referencing polar nights), in their respective hemispheres at around 60° latitude. The polar night jet moves at a greater height (about ) than it does during the summer. During these dark months the air high over the poles becomes much colder than the air over the Equator. This difference in temperature gives rise to extreme air pressure differences in the stratosphere which, when combined with the Coriolis effect, create the polar night jets, which race eastward at an altitude of about . The polar vortex is circled by the polar night jet. The warmer air can only move along the edge of the polar vortex, but cannot enter it. Within the vortex, the cold polar air becomes increasingly cold, due to a lack of warmer air from lower latitudes as well as a lack of energy from the Sun during the polar night. Low-level jets There are wind maxima at lower levels of the atmosphere that are also referred to as jets. Barrier jet A barrier jet in the low levels forms just upstream of mountain chains, with the mountains forcing the jet to be oriented parallel to the mountains. The mountain barrier increases the strength of the low-level wind by 45 percent. In the North American Great Plains a southerly low-level jet helps fuel overnight thunderstorm activity during the warm season, normally in the form of mesoscale convective systems which form during the overnight hours. A similar phenomenon develops across Australia, which pulls moisture poleward from the Coral Sea towards cut-off lows which form mainly across southwestern portions of the continent. Coastal jet Coastal low-level jets are related to a sharp contrast between high temperatures over land and lower temperatures over the sea, and play an important role in coastal weather, giving rise to strong coast-parallel winds. Most coastal jets are associated with oceanic high-pressure systems and a thermal low over land, and are mainly located along cold eastern-boundary marine currents, in upwelling regions offshore of California, Peru–Chile, Benguela, Portugal, the Canaries and West Australia, and offshore of Yemen–Oman. Valley exit jet A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of the valley and its adjacent plain. These winds frequently reach speeds of up to at heights of above the ground. 
Surface winds below the jet tend to be substantially weaker, even when the jet aloft is strong enough to sway vegetation. Valley exit jets are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. Deep valleys that terminate abruptly at a plain are more impacted by these factors than are those that gradually become shallower as downvalley distance increases. Africa There are several important low-level jets in Africa. Numerous low-level jets form in the Sahara, and are important for raising dust off the desert surface. This includes a low-level jet in Chad, which is responsible for dust emission from the Bodélé Depression, the world's most important single source of dust emission. The Somali Jet, which forms off the East African coast, is an important component of the global Hadley circulation and supplies water vapour to the Asian Monsoon. Easterly low-level jets forming in valleys within the East African Rift System help account for the low rainfall in East Africa and support high rainfall in the Congo Basin rainforest. The formation of the thermal low over northern Africa leads to a low-level westerly jet stream from June into October, which provides the moist inflow to the West African monsoon. While not technically a low-level jet, the mid-level African easterly jet (at 3000–4000 m above the surface) is also an important climate feature in Africa. It occurs during the northern hemisphere summer between 10°N and 20°N, above the Sahel region of West Africa. It is considered to play a crucial role in the West African monsoon, and helps form the tropical waves which move across the tropical Atlantic and eastern Pacific oceans during the warm season. Other planets For other planets, internal heat rather than solar heating is believed to drive their jet streams. Jupiter's atmosphere has multiple jet streams caused by convection cells driven by internal heating. These form the familiar banded color structure.
Kilogram
The kilogram (also spelled kilogramme) is the base unit of mass in the International System of Units (SI), having the unit symbol kg. 'Kilogram' means 'one thousand grams' and is colloquially abbreviated to kilo. The kilogram is an SI base unit, defined ultimately in terms of three defining constants of the SI, namely a specific transition frequency of the caesium-133 atom, the speed of light, and the Planck constant. A properly equipped metrology laboratory can calibrate a mass measurement instrument such as a Kibble balance as a primary standard for the kilogram mass. The kilogram was originally defined in 1795 during the French Revolution as the mass of one litre of water. The current definition of a kilogram agrees with this original definition to within 30 parts per million. In 1799, the platinum Kilogramme des Archives replaced it as the standard of mass. In 1889, a cylinder composed of platinum–iridium, the International Prototype of the Kilogram (IPK), became the standard of the unit of mass for the metric system and remained so for 130 years, before the current standard was adopted in 2019. Definition The kilogram is defined in terms of three defining constants: a specific atomic transition frequency ΔνCs, which defines the duration of the second; the speed of light c, which, when combined with the second, defines the length of the metre; and the Planck constant h, which, when combined with the metre and second, defines the mass of the kilogram. The formal definition according to the General Conference on Weights and Measures (CGPM) is that the kilogram is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the metre and the second are defined in terms of c and ΔνCs. Defined in terms of those units, the kg is formulated as 1 kg = (h / (6.62607015 × 10⁻³⁴)) m⁻² s ≈ 1.475521 × 10⁴⁰ h ΔνCs / c². This definition is generally consistent with previous definitions: the mass remains within 30 ppm of the mass of one litre of water. Timeline of previous definitions 1793: The grave (the precursor of the kilogram) was defined as the mass of 1 litre (dm3) of water, which was determined to be 18841 grains. 1795: The gram (1/1000 of a kilogram) was provisionally defined as the mass of one cubic centimetre of water at the melting point of ice. 1799: The Kilogramme des Archives was manufactured as a prototype. It had a mass equal to the mass of 1 dm3 of water at the temperature of its maximum density, which is approximately 4 °C. 1875–1889: The Metre Convention was signed in 1875, leading to the production of the International Prototype of the Kilogram (IPK) in 1879 and its adoption in 1889. 2019: The kilogram was defined in terms of the Planck constant, the speed of light and the hyperfine transition frequency of 133Cs, as approved by the General Conference on Weights and Measures (CGPM) on 16 November 2018. Name and terminology The kilogram is the only base SI unit with an SI prefix (kilo) as part of its name. The word kilogramme or kilogram is derived from the French , which itself was a learned coinage, prefixing the Greek stem of "a thousand" to , a Late Latin term for "a small weight", itself from Greek . The word was written into French law in 1795, in the Decree of 18 Germinal, which revised the provisional system of units introduced by the French National Convention two years earlier, where the had been defined as the weight () of a cubic centimetre of water, equal to 1/1000 of a . In the decree of 1795, the term thus replaced , and replaced . The French spelling was adopted in Great Britain when the word was used for the first time in English in 1795, with the spelling kilogram being adopted in the United States. 
In the United Kingdom both spellings are used, with "kilogram" having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word , a shortening of , was imported into the English language, where it has been used to mean both kilogram and kilometre. While kilo as an alternative is acceptable to some, The Economist for example, the Canadian government's Termium Plus system states that "SI (International System of Units) usage, followed in scientific and technical writing" does not allow its usage, and it is described as "a common informal name" in Russ Rowlett's Dictionary of Units of Measurement. When the United States Congress gave the metric system legal status in 1866, it permitted the use of the word kilo as an alternative to the word kilogram, but in 1990 revoked the status of the word kilo. The SI system was introduced in 1960, and in 1970 the BIPM started publishing the SI Brochure, which contains all relevant decisions and recommendations by the CGPM concerning units. The SI Brochure states that "It is not permissible to use abbreviations for unit symbols or unit names ...". For use with east Asian character sets, the SI symbol is encoded as a single Unicode character, in the CJK Compatibility block. Redefinition based on fundamental constants The replacement of the International Prototype of the Kilogram (IPK) as the primary standard was motivated by evidence, accumulated over a long period of time, that the mass of the IPK and its replicas had been changing; the IPK had diverged from its replicas by approximately 50 micrograms since their manufacture late in the 19th century. This led to several competing efforts to develop measurement technology precise enough to warrant replacing the kilogram artefact with a definition based directly on physical fundamental constants. The International Committee for Weights and Measures (CIPM) approved a revision in November 2018 that defines the kilogram by defining the Planck constant to be exactly , effectively defining the kilogram in terms of the second and the metre. The new definition took effect on 20 May 2019. Prior to the redefinition, the kilogram and several other SI units based on the kilogram were defined by a man-made metal artifact: the Kilogramme des Archives from 1799 to 1889, and the IPK from 1889 to 2019. In 1960, the metre, which had previously been similarly defined with reference to a single platinum-iridium bar with two marks on it, was redefined in terms of an invariant physical constant (the wavelength of a particular emission of light emitted by krypton, and later the speed of light) so that the standard could be independently reproduced in different laboratories by following a written specification. At the 94th Meeting of the CIPM in 2005, it was recommended that the same be done with the kilogram. In October 2010, the CIPM voted to submit a resolution for consideration at the General Conference on Weights and Measures (CGPM), to "take note of an intention" that the kilogram be defined in terms of the Planck constant (which has dimensions of energy times time, thus mass × length² / time) together with other physical constants. This resolution was accepted by the 24th conference of the CGPM in October 2011 and further discussed at the 25th conference in 2014. 
Although the Committee recognised that significant progress had been made, it concluded that the data did not yet appear sufficiently robust to adopt the revised definition, and that work should continue to enable adoption at the 26th meeting, scheduled for 2018. Such a definition would theoretically permit any apparatus capable of delineating the kilogram in terms of the Planck constant to be used, as long as it possessed sufficient precision, accuracy and stability. The Kibble balance is one way to do this. As part of this project, a variety of very different technologies and approaches were considered and explored over many years. Some of these approaches were based on equipment and procedures that would enable the reproducible production of new, kilogram-mass prototypes on demand (albeit with extraordinary effort) using measurement techniques and material properties that are ultimately based on, or traceable to, physical constants. Others were based on devices that measured either the acceleration or weight of hand-tuned kilogram test masses and that expressed their magnitudes in electrical terms via special components that permit traceability to physical constants. All approaches depend on converting a weight measurement to a mass and therefore require precise measurement of the strength of gravity in laboratories (gravimetry). All approaches would have precisely fixed one or more constants of nature at a defined value. SI multiples Because an SI unit may not have multiple prefixes (see SI prefix), prefixes are added to gram, rather than the base unit kilogram, which already has a prefix as part of its name. For instance, one-millionth of a kilogram is 1 mg (one milligram), not 1 μkg (one microkilogram). Practical issues with SI weight names Serious medication errors have been made by confusing milligrams and micrograms when micrograms has been abbreviated. The abbreviation "mcg" rather than the SI symbol "μg" is formally mandated for medical practitioners in the US by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). In the United Kingdom, the National Institute for Health and Care Excellence and the Scottish Palliative Care Guidelines state that "micrograms" and "nanograms" must both be written in full, and never abbreviated as "mcg" or "μg". The hectogram (100 g) (Italian: ettogrammo or etto) is a very commonly used unit in the retail food trade in Italy. The former standard spelling and abbreviation "deka-" and "dk" produced abbreviations such as "dkm" (dekametre) and "dkg" (dekagram). The abbreviation "dkg" (10 g) is still used in parts of central Europe in retail for some foods such as cheese and meat. The unit name megagram is rarely used, and even then typically only in technical fields in contexts where especially rigorous consistency with the SI standard is desired. For most purposes, the name tonne is instead used. The tonne and its symbol, "t", were adopted by the CIPM in 1879. It is a non-SI unit accepted by the BIPM for use with the SI. According to the BIPM, "This unit is sometimes referred to as 'metric ton' in some English-speaking countries."
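As a closing illustration of the Kibble balance mentioned above, here is a minimal sketch of its measurement principle, in standard physics notation not drawn from this article: in weighing mode, the force on a current-carrying coil in a magnetic field balances the weight of the test mass; in moving mode, the voltage induced across the same coil is measured while it moves at a known velocity. Eliminating the coil's geometric factor between the two modes leaves a virtual power balance:

```latex
% Weighing mode: coil force balances the weight
% (B = magnetic flux density, L = wire length, I = current):
m g = B L I
% Moving mode: the coil moving at velocity v induces a voltage U:
U = B L v
% Eliminating the geometric factor BL relates the mass to electrical
% quantities and local gravity:
m g v = U I \quad\Longrightarrow\quad m = \frac{U I}{g\,v}
```

Because U and I can be realized through the Josephson and quantum Hall effects, whose governing constants involve h, this ties the mass to the fixed Planck constant; the local gravitational acceleration g must still be measured independently, which is why the text above notes the need for precise gravimetry.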
Knitting
Knitting is a method for producing textile fabrics by interlacing yarn loops with loops of the same or other yarns. It is used to create many types of garments. Knitting may be done by hand or by machine. Knitting creates stitches: loops of yarn in a row. They can be worked either flat on straight needles or in the round on circular needles, whose two tips are connected by a flexible (often plastic) tube. There are usually many active stitches on the knitting needle at one time. Knitted fabric consists of a number of consecutive rows of connected loops that intermesh with the next and previous rows. As each row is formed, each newly created loop is pulled through one or more loops from the prior row and placed on the gaining needle, so that the loops from the prior row can be pulled off the other needle without unraveling. Differences in yarn (varying in fibre type, weight, uniformity and twist), needle size, and stitch type allow for a variety of knitted fabrics with different properties, including color, texture, thickness, heat retention, water resistance, and integrity. A small sample of knitwork is known as a swatch. Structure Courses and wales Like weaving, knitting is a technique for producing a two-dimensional fabric made from a one-dimensional yarn or thread. In weaving, threads are always straight, running parallel either lengthwise (warp threads) or crosswise (weft threads). By contrast, the yarn in knitted fabrics follows a meandering path (a course), forming loops (also called bights) symmetrically above and below the mean path of the yarn. These meandering loops can be easily stretched in different directions, giving knit fabrics much more elasticity than woven fabrics. Depending on the yarn and knitting pattern, knitted garments can stretch as much as 500%. For this reason, knitting was initially developed for garments that must be elastic or stretch in response to the wearer's motions, such as socks and hosiery. For comparison, woven garments stretch mainly along one or other of a related pair of directions that lie roughly diagonally between the warp and the weft, while contracting in the other direction of the pair (stretching and contracting with the bias), and are not very elastic unless they are woven from stretchable material such as spandex. Knitted garments are often more form-fitting than woven garments, since their elasticity allows them to contour to the body's outline more closely; by contrast, curvature is introduced into most woven garments only with sewn darts, flares, gussets and gores, the seams of which lower the elasticity of the woven fabric still further. Extra curvature can be introduced into knitted garments without seams, as in the heel of a sock; the effect of darts, flares, etc. can be obtained with short rows or by increasing or decreasing the number of stitches. Thread used in weaving is usually much finer than the yarn used in knitting, which can give the knitted fabric more bulk and less drape than a woven fabric. If they are not secured, the loops of a knitted course will come undone when their yarn is pulled; this is known as ripping out, unravelling knitting, or humorously, frogging (because you 'rip it', and this sounds like a frog croaking: 'rib-bit'). To secure a stitch, at least one new loop is passed through it. Although the new stitch is itself unsecured ("active" or "live"), it secures the stitch(es) suspended from it. A sequence of stitches in which each stitch is suspended from the next is called a wale. 
To secure the initial stitches of a knitted fabric, a method for casting on is used; to secure the final stitches in a wale, one uses a method of binding off (casting off). During knitting, the active stitches are secured mechanically, either by individual hooks (in knitting machines) or by a knitting needle or frame in hand-knitting. Weft and warp knitting There are two major varieties of knitting: weft knitting and warp knitting. In the more common weft knitting, the wales are perpendicular to the course of the yarn. In warp knitting, the wales and courses run roughly parallel. In weft knitting, the entire fabric may be produced from a single yarn, by adding stitches to each wale in turn, moving across the fabric as in a raster scan. By contrast, in warp knitting, one yarn is required for every wale. Since a typical piece of knitted fabric may have hundreds of wales, warp knitting is typically done by machine, whereas weft knitting is done by both hand and machine. Warp-knitted fabrics such as tricot and milanese are resistant to runs, and are commonly used in lingerie. Weft-knit fabrics may also be knit with multiple yarns, usually to produce interesting color patterns. The two most common approaches are intarsia and stranded colorwork. In intarsia, the yarns are used in well-segregated regions, e.g., a red apple on a field of green; in that case, the yarns are kept on separate spools and only one is knitted at any time. In the more complex stranded approach, two or more yarns alternate repeatedly within one row and all the yarns must be carried along the row, as seen in Fair Isle sweaters. Double knitting can produce two separate knitted fabrics simultaneously (e.g., two socks); however, the two fabrics are usually integrated into one, giving the fabric great warmth and excellent drape. Knit and purl stitches In securing the previous stitch in a wale, the next stitch can pass through the previous loop from either below or above. If the former, the stitch is denoted as a 'knit stitch' or a 'plain stitch'; if the latter, as a 'purl stitch'. The two stitches are related in that a knit stitch seen from one side of the fabric appears as a purl stitch on the other side. The two types of stitches have a different visual effect: the knit stitches look like 'V's stacked vertically, whereas the purl stitches look like a wavy horizontal line across the fabric. Patterns and pictures can be created in knitted fabrics by using knit and purl stitches as "pixels"; however, such pixels are usually rectangular rather than square, depending on the gauge/tension of the knitting. Individual stitches, or rows of stitches, may be made taller by drawing more yarn into the new loop (an elongated stitch), which is the basis for uneven knitting: a row of tall stitches may alternate with one or more rows of short stitches for an interesting visual effect. Short and tall stitches may also alternate within a row, forming a fish-like oval pattern. In the simplest of hand-knitted fabrics, all the stitches in every row are knit (or all purl); this creates a garter stitch fabric. Alternating rows of all knit stitches and all purl stitches creates a stockinette stitch/stocking stitch pattern. Vertical stripes (ribbing) are possible by having alternating wales of knit and purl stitches. For example, a common choice is 2x2 ribbing, in which two wales of knit stitches are followed by two wales of purl stitches, etc. Horizontal striping (welting) is also possible, by alternating rows of knit and purl stitches. 
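As an informal illustration of the stitches-as-pixels idea above, the short Python sketch below prints charts for a few of the fabrics just described, as seen from the front of the fabric; the chart layout and function names are this sketch's own conventions, not standard knitting notation:

```python
# Illustrative only: render knit (K) and purl (P) stitches as "pixels",
# as they would appear on the front face of the fabric.

def chart(rows, cols, is_knit):
    """Build a stitch chart; is_knit(row, col) decides knit vs purl."""
    return "\n".join(
        "".join("K" if is_knit(r, c) else "P" for c in range(cols))
        for r in range(rows)
    )

# Stockinette: every stitch appears knit on the front face.
stockinette = chart(4, 8, lambda r, c: True)
# Garter stitch (worked flat, every row knit): the front face shows
# alternating rows of knit and purl loops.
garter = chart(4, 8, lambda r, c: r % 2 == 0)
# 2x2 ribbing: two knit wales alternate with two purl wales.
rib_2x2 = chart(4, 8, lambda r, c: (c // 2) % 2 == 0)

for name, fabric in (("stockinette", stockinette),
                     ("garter", garter),
                     ("2x2 rib", rib_2x2)):
    print(name, fabric, sep="\n", end="\n\n")
```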
Checkerboard patterns (basketweave) are also possible, the smallest of which is known as seed/moss stitch: the stitches alternate between knit and purl in every wale and along every row. Fabrics in which each knitted row is followed by a purled row, such as in stockinette/stocking stitch, have a tendency to curl: the top and bottom curl toward the front (or knitted side), while the sides curl toward the back (or purled side). By contrast, those in which knit and purl stitches are arranged symmetrically (such as ribbing, garter stitch or seed/moss stitch) have more texture and tend to lie flat. Wales of purl stitches have a tendency to recede, whereas those of knit stitches tend to come forward, giving the fabric more stretchability. Thus, the purl wales in ribbing tend to be invisible, since the neighboring knit wales come forward. Conversely, rows of purl stitches tend to form an embossed ridge relative to a row of knit stitches. This is the basis of shadow knitting, in which the appearance of a knitted fabric changes when viewed from different directions.

Typically, a new stitch is passed through a single unsecured ('active') loop, thus lengthening that wale by one stitch. However, this need not be so; the new loop may be passed through an already secured stitch lower down on the fabric, or even between secured stitches (a dip stitch). Depending on the distance between where the loop is drawn through the fabric and where it is knitted, dip stitches can produce a subtle stippling or long lines across the surface of the fabric, e.g., the lower leaves of a flower. The new loop may also be passed between two stitches in the 'present' row, thus clustering the intervening stitches; this approach is often used to produce a smocking effect in the fabric. The new loop may also be passed through 'two or more' previous stitches, producing a decrease and merging wales together. The merged stitches need not be from the same row; for example, a tuck can be formed by knitting stitches together from two different rows, producing a raised horizontal welt on the fabric.

Not every stitch in a row need be knitted; some may be 'missed' (unknitted and passed to the active needle) and knitted on a subsequent row. This is known as slip-stitch knitting. The slipped stitches are naturally longer than the knitted ones. For example, a stitch slipped for one row before knitting would be roughly twice as tall as its knitted counterparts. This can produce interesting visual effects, although the resulting fabric is more rigid because the slipped stitch 'pulls' on its neighbours and is less deformable. Mosaic knitting is a form of slip-stitch knitting that knits alternate colored rows and uses slip stitches to form patterns; mosaic-knit fabrics tend to be stiffer than patterned fabrics produced by other methods such as Fair-Isle knitting. In some cases, a stitch may be deliberately left unsecured by a new stitch and its wale allowed to disassemble. This is known as drop-stitch knitting, and produces a vertical ladder of see-through holes in the fabric, corresponding to where the wale had been.

Differences between knitting and crocheting

When knitting by hand, two needles are usually used to hold the live stitches, whereas crochet uses a single hook, usually creating one stitch at a time and finishing each stitch before creating the next. Knitted fabric tends to be flexible and flowing, the stitches forming a shape that is similar to a "V".
Crochet fabric has a more structured feel, each stitch consisting of several loops entwined. Each textile has its own specialties and methods. Because of the different nature of each stitch, crochet fabric uses more yarn per stitch, is more structured, and is more flexible in the structures that can be created, not being constrained to work each stitch into the stitch below it. Knitted fabric tends to be thinner and more flexible, and its patterns are usually easier to follow, because each new stitch must go into the next stitch. Because of the differences in how the fabrics are created, knitting was mechanized early (the stocking frame dates to 1589), while a machine that can produce crochet fabric has yet to be built. Although the methods differ, they can create similar projects using the same fibers and yarns.

Right- and left-plaited stitches

Both knit and purl stitches may be twisted: usually once if at all, but sometimes twice and (very rarely) thrice. When seen from above, the twist can be clockwise (right yarn over left) or counterclockwise (left yarn over right); these are denoted as right- and left-plaited stitches, respectively. Hand-knitters generally produce right-plaited stitches by knitting or purling through the back loops, i.e., passing the needle through the initial stitch in an unusual way, but wrapping the yarn as usual. By contrast, the left-plaited stitch is generally formed by hand-knitters by wrapping the yarn in the opposite way, rather than by any change in the needle. Although they are mirror images in form, right- and left-plaited stitches are functionally equivalent. Both types of plaited stitches give a subtle but interesting visual texture, and tend to draw the fabric inwards, making it stiffer. Plaited stitches are a common method for knitting jewelry from fine metal wire.

Edges and joins between fabrics

The initial and final edges of a knitted fabric are known as the cast-on and bound/cast-off edges. The side edges are known as the selvages; the word derives from "self-edges", meaning that the stitches do not need to be secured by anything else. Many types of selvages have been developed, with different elastic and ornamental properties. Vertical and horizontal edges can be introduced within a knitted fabric, e.g., for buttonholes, by binding/casting off and casting on again (horizontal) or by knitting the fabrics on either side of a vertical edge separately. Two knitted fabrics can be joined by embroidery-based grafting methods, most commonly the Kitchener stitch. New wales can be begun from any of the edges of a knitted fabric; this is known as picking up stitches and is the basis for entrelac, in which the wales run perpendicular to one another in a checkerboard pattern.

Cables, increases, and lace

Ordinarily, stitches are knitted in the same order in every row, and the wales of the fabric run parallel and vertically along the fabric. However, this need not be so, since the order in which stitches are knitted may be permuted so that wales cross over one another, forming a cable pattern. Cable patterns tend to draw the fabric together, making it denser and less elastic; Aran sweaters are a common form of knitted cabling. Arbitrarily complex braid patterns can be done in cable knitting, with the proviso that the wales must move ever upwards; it is generally impossible for a wale to move up and then down the fabric. Knitters have developed methods for giving the illusion of a circular wale, such as appear in Celtic knots, but these are inexact approximations.
However, such circular wales are possible using Swiss darning, a form of embroidery, or by knitting a tube separately and attaching it to the knitted fabric. A wale can split into two or more wales using increases, most commonly involving a yarn over. Depending on how the increase is done, there is often a hole in the fabric at the point of the increase. This is used to great effect in lace knitting, which consists of making patterns and pictures using such holes, rather than with the stitches themselves. The many large holes in lacy knitting make it extremely elastic; for example, some Shetland "wedding-ring" shawls are so fine that they may be drawn through a wedding ring. By combining increases and decreases, it is possible to make the direction of a wale slant away from vertical, even in weft knitting. This is the basis for bias knitting, and can be used for visual effect, similar to the direction of a brush-stroke in oil painting.

Ornamentations and additions

Various point-like ornaments may be added to knitting for their look or to improve the wear of the fabric. Examples include various types of bobbles, sequins and beads. Long loops can also be drawn out and secured, forming a "shaggy" texture to the fabric; this is known as loop knitting. Additional patterns can be made on the surface of the knitted fabric using embroidery; if the embroidery resembles knitting, it is often called Swiss darning. Various closures for the garments, such as frogs and buttons, can be added; usually buttonholes are knitted into the garment, rather than cut. Ornamental pieces may also be knitted separately and then attached using appliqué. For example, differently colored leaves and petals of a flower could be knit separately and attached to form the final picture. Separately knitted tubes can be applied to a knitted fabric to form complex Celtic knots and other patterns that would be difficult to knit. Unknitted yarns may be worked into knitted fabrics for warmth, as is done in tufting and "weaving" (also known as "couching").

History and culture

The word is derived from knot and ultimately from the Old English cnyttan, to knot. The exact origins of knitting are unknown, the earliest known examples being cotton socks dating from the 11th century, found in the remains of the city of Fustat, now part of Cairo. Nålebinding (Danish: literally "binding with a needle" or "needle-binding") is a fabric creation technique predating both knitting and crochet. The first commercial knitting guilds appeared in Western Europe in the fifteenth century (Tournai in 1429, Barcelona in 1496). The Guild of Saint Fiacre was founded in Paris in 1527, but the archives mention an organization (not necessarily a guild) of knitters from 1268. The occupation of "cap knitter" is recorded for Margaret Yeo of London in 1473. With the invention in 1589 of the stocking frame, an early form of knitting machine, knitting "by hand" became a craft used by country people with easy access to fiber. Similar to quilting, spinning, and needlepoint, hand knitting became a leisure activity for the wealthy. English Roman Catholic priest and former Anglican bishop Richard Rutt authored a history of the craft in A History of Hand Knitting (Batsford, 1987). His collection of books about knitting is now housed at the Winchester School of Art (University of Southampton).

Properties of fabrics

The topology of a knitted fabric is relatively complex.
Unlike woven fabrics, where strands usually run straight horizontally and vertically, yarn that has been knitted follows a looped path along its row, with the loops of one row all pulled through the loops of the row below it. Because there is no single straight line of yarn anywhere in the pattern, a knitted piece of fabric can stretch in all directions. This elasticity is all but unavailable in woven fabrics, which stretch mainly along the bias. Many modern stretchy garments, even as they rely on elastic synthetic materials for some stretch, also achieve at least some of their stretch through knitted patterns.

The basic knitted fabric (usually called a stocking or stockinette pattern) has a definite "right side" and "wrong side". On the right side, the visible portions of the loops are the verticals connecting two rows, arranged in a grid of V shapes. On the wrong side, the ends of the loops are visible, both the tops and bottoms, creating a much more bumpy texture sometimes called reverse stockinette. (Despite being the "wrong side", reverse stockinette is frequently used as a pattern in its own right.) Because the yarn holding rows together is all on the front, and the yarn holding side-by-side stitches together is all on the back, stockinette fabric has a strong tendency to curl toward the front on the top and bottom, and toward the back on the left and right side.

Stitches can be worked from either side, and various patterns are created by mixing regular knit stitches with the "wrong side" stitches, known as purl stitches, either in columns (ribbing), rows (garter, welting), or more complex patterns. Each fabric has different properties: a garter stitch has much more vertical stretch, while ribbing stretches much more horizontally. Because of their front-back symmetry, these two fabrics have little curl, making them popular as edging, even when their stretch properties are not desired. The basic knitted fabrics are referred to by different names in the setting of industrial manufacture. The fabric known by hand knitters as stockinette is called plain knit or jersey, and the fabric known by hand knitters as garter is called purl knitting or links-and-links. Different combinations of knit and purl stitches, along with more advanced techniques, generate fabrics of considerably variable consistency, from gauzy to very dense, from highly stretchy to relatively stiff, from flat to tightly curled, and so on.

Texture

The most common texture for a knitted garment is that generated by the flat stockinette stitch (as seen, though very small, in machine-made stockings and T-shirts), which is worked in the round as nothing but knit stitches, and worked flat as alternating rows of knit and purl. Other simple textures can be made with nothing but knit and purl stitches, including garter stitch, ribbing, and moss and seed stitches. Adding a "slip stitch" (where a loop is passed from one needle to the other) allows for a wide range of textures, including heel and linen stitches, as well as a number of more complicated patterns. Some more advanced knitting techniques create a surprising variety of complex textures. Combining certain increases, which can create small eyelet holes in the resulting fabric, with assorted decreases is key to creating knitted lace, a very open fabric resembling needle or bobbin lace. Open vertical stripes can be created using the drop-stitch knitting technique.
Changing the order of stitches from one row to the next, usually with the help of a cable needle or stitch holder, is key to cable knitting, producing an endless variety of cables, honeycombs, ropes, and Aran sweater patterning. Entrelac forms a rich checkerboard texture by knitting small squares, picking up their side edges, and knitting more squares to continue the piece. Fair Isle knitting uses two or more colored yarns to create patterns and forms a thicker and less flexible fabric. The appearance of a garment is also affected by the weight of the yarn, which describes the thickness of the spun fibre. The thicker the yarn, the more visible and apparent the stitches will be; the thinner the yarn, the finer the texture.

Color

Plenty of finished knitting projects never use more than a single color of yarn, but there are many ways to work in multiple colors. Some yarns are dyed to be either variegated (changing color every few stitches in a random fashion) or self-striping (changing every few rows). More complicated techniques permit large fields of color (intarsia, for example), busy small-scale patterns of color (such as Fair Isle), or both (double knitting and slip-stitch color, for example). A yarn with multiple shades of the same hue is called ombré, while a yarn with multiple hues may be known by a given colorway; a green, red and yellow yarn might be dubbed the "Parrot Colorway" by its manufacturer, for example. Heathered yarns contain small amounts of fibre of different colours, while tweed yarns may have greater amounts of different colored fibres.

Hand knitting process

There are many hundreds of different knitting stitches used by hand knitters. A piece of hand knitting begins with the process of casting on, which involves the initial creation of the stitches on the needle. Different methods of casting on are used for different effects: one may be stretchy enough for lace, while another provides a decorative edging. Provisional cast-ons are used when the knitting will continue in both directions from the cast-on. There are various methods employed to cast on, such as the "thumb method" (also known as the "slingshot" or "long-tail" cast-on), where the stitches are created by a series of loops that will, when knitted, give a very loose edge ideal for "picking up stitches" and knitting a border; the "double needle method" (also known as "knit-on" or "cable cast-on"), whereby each loop placed on the needle is then "knitted on", which produces a firmer edge ideal on its own as a border; and many more. The number of active stitches remains the same as when cast on unless stitches are added (an increase) or removed (a decrease).

Most Western-style hand knitters follow either the English style (in which the yarn is held in the right hand) or the Continental style (in which the yarn is held in the left hand). There are also different ways to insert the needle into the stitch. Knitting through the front of a stitch is called Western knitting. Going through the back of a stitch is called Eastern knitting. A third method, called combination knitting, goes through the front of a knit stitch and the back of a purl stitch. Once the hand knitted piece is finished, the remaining live stitches are "cast off". Casting (or "binding") off loops the stitches across each other so they can be removed from the needle without unravelling the item. Although the mechanics are different from casting on, there is a similar variety of methods.
In hand knitting certain articles of clothing, especially larger ones like sweaters, the final knitted garment will be made of several knitted pieces, with individual sections of the garment hand knitted separately and then sewn together. Seamless knitting, where a whole garment is hand knit as a single piece, is also possible. Elizabeth Zimmermann is probably the best-known proponent of seamless or circular hand knitting techniques. Smaller items, such as socks and hats, are usually knit in one piece on double-pointed needles or circular needles. Hats in particular can be started "top down" on double-pointed needles, with increases added until the preferred size is achieved and a switch made to an appropriate circular needle once enough stitches have been added. Care must be taken to bind off at a tension that will allow the "give" needed to comfortably fit on the head. (See Circular knitting.)

Machine knitting

Knitting can also be performed by machines. The first knitting machine, known as the stocking frame, was invented in England in 1589. Modern knitting machines, both domestic and industrial, are either flat-bed or circular. Flat-bed knitting machines knit back and forth, producing a flat piece of fabric. Flat-bed machines can produce uniform-width fabric which can be cut and sewn into garments, or they can produce shaped pieces which can be seamed to make garments without cutting. The latter is known as full-fashioned knitting. Circular knitting machines knit in a continuous circle, producing a tubular piece of fabric. Similarly to knitted fabrics manufactured on flat-bed machines, a tube of uniform-width fabric may be cut along one side to produce flat fabric which can be cut and sewn into garments. Fabric produced in this way can be cheaper than fabric produced on a flat-bed machine, as circular machines can operate at higher speed. Circular knitting machines can also be used to create shaped, finished articles, such as socks.

Materials

Yarn

Yarn for hand-knitting is usually sold as balls or skeins (hanks), and it may also be wound on spools or cones. Skeins and balls are generally sold with a yarn-band, a label that describes the yarn's weight, length, dye lot, fiber content, washing instructions, suggested needle size, likely gauge/tension, etc. It is common practice to save the yarn band for future reference, especially if additional skeins must be purchased. Knitters generally ensure that the yarn for a project comes from a single dye lot. The dye lot specifies a group of skeins that were dyed together and thus have precisely the same color; skeins from different dye lots, even if very similar in color, are usually slightly different and may produce a visible horizontal stripe when knitted together. If a knitter buys insufficient yarn of a single dye lot to complete a project, additional skeins of the same dye lot can sometimes be obtained from other yarn stores or online. Otherwise, knitters can alternate skeins every few rows to help the dye lots blend together more easily.

The thickness or weight of the yarn is a significant factor in determining the gauge/tension, i.e., how many stitches and rows are required to cover a given area for a given stitch pattern. Thicker yarns generally require thicker knitting needles, whereas thinner yarns may be knit with thick or thin needles. Hence, thicker yarns generally require fewer stitches, and therefore less time, to knit up a given garment.
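Gauge is ultimately simple arithmetic: stitches and rows counted over a measured swatch give densities, from which stitch counts for a target size follow. A minimal sketch, with all numbers invented for illustration:

```python
# Illustrative only: the arithmetic behind gauge/tension. A swatch is
# measured, stitches and rows per unit length are derived, and the
# cast-on count for a target width is estimated. Numbers are made up.

swatch_stitches = 22        # stitches counted across the swatch
swatch_width_cm = 10.0      # measured width of those stitches
swatch_rows = 30            # rows counted in the swatch
swatch_height_cm = 10.0     # measured height of those rows

stitches_per_cm = swatch_stitches / swatch_width_cm
rows_per_cm = swatch_rows / swatch_height_cm

target_width_cm = 50.0      # e.g. the width of one sweater piece
cast_on = round(stitches_per_cm * target_width_cm)

print(f"gauge: {stitches_per_cm:.1f} sts/cm, {rows_per_cm:.1f} rows/cm")
print(f"cast on about {cast_on} stitches for {target_width_cm} cm")
```

The same density figures explain the remark above: a thicker yarn lowers the stitches-per-centimetre count, so fewer stitches cover the same garment.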
Patterns and motifs are coarser with thicker yarns; thicker yarns produce bold visual effects, whereas thinner yarns are best for refined patterns. Yarns are grouped by thickness into the following categories: lace, superfine (fingering or sock), fine (sport), light (double knit or DK), medium (worsted and aran), bulky, superbulky, and jumbo; quantitatively, thickness is measured by the number of wraps per inch (WPI). In the British Commonwealth (outside North America), yarns are measured as 1 ply, 2 ply, 3 ply, 4 ply, 5 ply, 8 ply (or double knit), 10 ply and 12 ply (triple knit). The related weight per unit length is usually measured in tex or denier.

Before knitting, the knitter will typically transform a hank/skein into a ball where the yarn emerges from the center of the ball; this makes the knitting easier by preventing the yarn from becoming tangled. The transformation may be done by hand, or with a device known as a ballwinder. When knitting, some knitters enclose their balls in jars to keep them clean and untangled with other yarns; the free yarn passes through a small hole in the jar-lid.

A yarn's usefulness for a knitting project is judged by several factors, such as its loft (its ability to trap air), its resilience (elasticity under tension), its washability and colorfastness, its hand (its feel, particularly softness vs. scratchiness), its durability against abrasion, its resistance to pilling, its hairiness (fuzziness), its tendency to twist or untwist, its overall weight and drape, its blocking and felting qualities, its comfort (breathability, moisture absorption, wicking properties) and of course its look, which includes its color, sheen, smoothness and ornamental features. Other factors include allergenicity; speed of drying; resistance to chemicals, moths, and mildew; melting point and flammability; retention of static electricity; and the propensity to become stained and to accept dyes. Different factors may be more significant than others for different knitting projects, so there is no one "best" yarn.

The resilience and propensity to (un)twist are general properties that affect the ease of hand-knitting. More resilient yarns are more forgiving of irregularities in tension; highly twisted yarns are sometimes difficult to knit, whereas untwisting yarns can lead to split stitches, in which not all the yarn is knitted into a stitch. A key factor in knitting is stitch definition, corresponding to how well complicated stitch patterns can be seen when made from a given yarn. Smooth, highly spun yarns are best for showing off stitch patterns; at the other extreme, very fuzzy yarns or eyelash yarns have poor stitch definition, and any complicated stitch pattern would be invisible.

Although knitting may be done with ribbons, metal wire or more exotic filaments, most yarns are made by spinning fibers. In spinning, the fibers are twisted so that the yarn resists breaking under tension; the twisting may be done in either direction, resulting in a Z-twist or S-twist yarn. If the fibers are first aligned by combing them, the yarn is smoother and called a worsted; by contrast, if the fibers are carded but not combed, the yarn is fuzzier and called woolen-spun. The fibers making up a yarn may be continuous filament fibers such as silk and many synthetics, or they may be staples (fibers of an average length, typically a few inches); natural filament fibers are sometimes cut up into staples before spinning.
The strength of the spun yarn against breaking is determined by the amount of twist, the length of the fibers and the thickness of the yarn. In general, yarns become stronger with more twist, longer fibers and thicker yarns (more fibers); for example, thinner yarns require more twist than do thicker yarns to resist breaking under tension. The thickness of the yarn may vary along its length; a slub is a much thicker section in which a mass of fibers is incorporated into the yarn.

The spun fibers are generally divided into animal, plant and synthetic fibers. These fiber types are chemically different, corresponding to proteins, carbohydrates and synthetic polymers, respectively. Animal fibers include silk, but generally are long hairs of animals such as sheep (wool), goat (from the angora or cashmere goat), rabbit (angora), llama, alpaca, dog, cat, camel, yak, and muskox (qiviut). Plants used for fibers include cotton, flax (for linen), bamboo, ramie, hemp, jute, nettle, raffia, yucca, coconut husk, banana fiber, soy and corn. Rayon and acetate fibers are also produced from cellulose mainly derived from trees. Common synthetic fibers include acrylics, polyesters such as dacron and ingeo, nylon and other polyamides, and olefins such as polypropylene. Of these types, wool is generally favored for knitting, chiefly owing to its superior elasticity, warmth and (sometimes) felting. It is also common to blend different fibers in the yarn, e.g., 85% alpaca and 15% silk. Even within a type of fiber, there can be great variety in the length and thickness of the fibers; for example, Merino wool and Egyptian cotton are favored because they produce exceptionally long, thin (fine) fibers for their type.

A single spun yarn may be knitted as is, or braided or plied with another. In plying, two or more yarns are spun together, almost always in the opposite sense from which they were spun individually; for example, two Z-twist yarns are usually plied with an S-twist. The opposing twist relieves some of the yarns' tendency to curl up and produces a thicker, balanced yarn. Plied yarns may themselves be plied together, producing cabled yarns or multi-stranded yarns. Sometimes, the yarns being plied are fed at different rates, so that one yarn loops around the other, as in bouclé. The single yarns may be dyed separately before plying, or afterwards to give the yarn a uniform look.

The dyeing of yarns is a complex art that has a long history; however, yarns need not be dyed. They may be dyed just one color, or a great variety of colors. Dyeing may be done industrially, by hand or even hand-painted onto the yarn. A great variety of synthetic dyes have been developed since the first synthetic dyes of the mid-19th century; natural dyes are also possible, although they are generally less brilliant. The color scheme of a yarn is sometimes called its colorway. Variegated yarns can produce interesting visual effects, such as diagonal stripes; conversely, a variegated yarn may obscure a detailed knitting design, such as a cable or lace pattern.

Metal wire

There are multiple commercial applications for knit fabric made of metal wire by knitting machines. Steel wire of various sizes may be used for electric and magnetic shielding due to its conductivity. Stainless steel may be used in a coffee press for its rust resistance. Metal wire can also be used as jewelry.

Glass and wax

Knitted glass combines knitting with wax strands, lost-wax casting, mold-making, and kiln-casting.
Tools

The process of knitting has three basic tasks: the active (unsecured) stitches must be held so they don't drop; these stitches must be released sometime after they are secured; and new bights of yarn must be passed through the fabric, usually through active stitches, thus securing them. In very simple cases, knitting can be done without tools, using only the fingers to do these tasks; however, knitting is usually carried out using tools such as knitting needles, knitting machines or rigid frames. Depending on their size and shape, the rigid frames are called stocking frames, knitting boards, knitting rings (also called knitting looms) or knitting spools (also known as knitting knobbies, knitting nancies, or corkers). There is also a technique called knooking, in which knitting is done with a crochet hook that has a cord attached to the end to hold the stitches while they are being worked. Other tools are used to prepare yarn for knitting, to measure and design knitted garments, or to make knitting easier or more comfortable.

Needles

There are three basic types of knitting needles (also called "knitting pins"). The first and most common type consists of two slender, straight sticks tapered to a point at one end, and with a knob at the other end to prevent stitches from slipping off. Such needles are usually long but, due to the compressibility of knitted fabrics, may be used to knit pieces significantly wider. The most important property of needles is their diameter, which ranges from below 2 to 25 mm (roughly 1 inch). The diameter affects the size of stitches, which affects the gauge/tension of the knitting and the elasticity of the fabric. Thus, a simple way to change gauge/tension is to use different needles, which is the basis of uneven knitting. Although the diameter of the knitting needle is often measured in millimeters, there are several measurement systems, particularly those specific to the United States, the United Kingdom and Japan; a conversion table is given at knitting needle. Knitting needles may be made out of many materials, but the most common are metal, wood, bamboo, and plastic. Different materials have different friction and grip the yarn differently; slick needles such as metallic needles are useful for swift knitting, whereas rougher needles such as bamboo offer more friction and are therefore less prone to dropping stitches. The knitting of new stitches occurs only at the tapered ends. Needles with lighted tips have been sold to allow knitters to knit in the dark.

The second type of knitting needle is the straight, double-pointed knitting needle (also called a "DPN"). Double-pointed needles are tapered at both ends, which allows them to be used to knit from either end. DPNs are typically used for circular knitting, especially smaller tube-shaped pieces such as sleeves, collars, and socks; usually one needle is active while the others hold the remaining active stitches. DPNs are somewhat shorter (typically 7 inches) and are usually sold in sets of four or five.

The third needle type consists of circular needles, which are long, flexible double-pointed needles. The two tapered ends (typically long) are rigid and straight, allowing for easy knitting; however, the two ends are connected by a flexible strand (usually nylon) that allows them to be brought together. Circular needles are typically 24-60 inches long, and are usually used singly or in pairs; again, the width of the knitted piece may be significantly greater than the length of the circular needle.
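As a rough illustration of how the sizing systems mentioned above line up, the sketch below encodes a commonly published US-to-metric pairing. The values follow widely used charts but are quoted here as an assumption, not an authoritative table; sizing has historically varied between manufacturers, so a physical needle gauge remains the authority:

```python
# Illustrative sketch of a needle-size conversion table. The pairings
# below follow commonly published US-to-metric charts (assumed here);
# always verify against a needle gauge.

US_TO_MM = {
    0: 2.0, 1: 2.25, 2: 2.75, 3: 3.25, 4: 3.5, 5: 3.75,
    6: 4.0, 7: 4.5, 8: 5.0, 9: 5.5, 10: 6.0, 11: 8.0,
    13: 9.0, 15: 10.0,
}

def us_to_mm(us_size):
    """Return the metric diameter for a US needle size, if listed."""
    return US_TO_MM.get(us_size)

def mm_to_us(mm):
    """Return the US size whose listed diameter is closest to `mm`."""
    return min(US_TO_MM, key=lambda us: abs(US_TO_MM[us] - mm))

print(us_to_mm(8))    # 5.0 (mm)
print(mm_to_us(4.5))  # 7
```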
Interchangeable needles are a subset of circular needles. They are kits consisting of pairs of needle tips and cables or cords, usually of nylon. The cables/cords are screwed into the needle tips, allowing the knitter to form either flexible straight needles or circular needles, and to change the diameter and length of the needles as needed. The needles must be screwed on tightly, otherwise yarn can snag and become damaged. The ability to work from either end of one needle is convenient in several types of knitting, such as slip-stitch versions of double knitting. Circular needles may be used for flat or circular knitting.

Cable needles are a special case of DPNs, although they are usually not straight but dimpled in the middle; often they have the form of a hook, since a hook is easier to grab and hold the yarn with when cabling a knitted piece. Cable needles are typically very short (a few inches) and are used to hold stitches temporarily while others are being knitted. When in use, the cable needle is used at the same time as two regular needles: at points indicated by the knitting pattern, stitches are slipped onto the cable needle and held while the following stitches are worked from the regular needles, and the held stitches are then worked in turn, creating the twisting motif of a knitted cable. Cable needles are made in different sizes, which produce cables of different widths.

Ancillary tools

Various tools have been developed to make hand-knitting easier. Tools for measuring needle diameter and yarn properties have been discussed above, as well as the yarn swift, ballwinder and "yarntainers". Crochet hooks and a darning needle are often useful in binding/casting off or in joining two knitted pieces edge-to-edge. The darning needle is used in duplicate stitch (also known as Swiss darning). The crochet hook is also essential for repairing dropped stitches and some specialty stitches such as tufting. Other tools, such as knitting spools or pom-pom makers, are used to prepare specific ornaments. For large or complex knitting patterns, it is sometimes difficult to keep track of which stitch should be knit in a particular way; therefore, several tools have been developed to identify the number of a particular row or stitch, including circular stitch markers, hanging markers, extra yarn and row counters. A second potential difficulty is that the knitted piece will slide off the tapered end of the needles when unattended; this is prevented by "point protectors" that cap the tapered ends. Another problem is that too much knitting may lead to hand and wrist troubles; for this, special stress-relieving gloves are available. In traditional Shetland knitting, a special belt is often used to support the end of one needle, allowing the knitter greater speed. Finally, there are sundry bags and containers for holding knitting, yarns and needles.

Knitting styles/holds

Continental/German style

Continental knitting is achieved by holding the yarn in the left hand for both knitting and purling. Patterns are created on the outside (public-facing) side of the piece.

Norwegian style

While knit stitches are worked as in the classic Continental style, the purl is worked by leaving the yarn at the back and moving the needle.

Russian style

Another variation on Continental knitting, this style is achieved by "picking" up the yarn by moving the needle head into it.
The yarn is wrapped around the index finger of the left hand, coming over the top of the finger, back around underneath it, and over the top of the middle finger, so that the index finger ends up very close to the back of the left-hand needle. In Russian knitting, it is common to slip the first stitch of every row.

English style

English-style knitting is achieved by holding the yarn in the right hand. Patterns are created on the outside (public-facing) side of the piece.

Portuguese/Greek/Incan/Turkish style

This style is achieved by carrying the yarn around the neck or from a necklace-style hook, allowing the knitter to knit on the reverse (purl) side, i.e. "inside out" compared to Western knitting techniques. Patterns are typically created by stranding the yarn on the outside of the piece. This is an ancient style of knitting, which spread from Arabic culture to the Iberian peninsula during its occupation by Muslims, and was in turn taught to Indigenous South Americans during conquest by Spanish and Portuguese colonists.

Knitting techniques

Armenian

The Armenian knitting technique tacks the non-working yarn to the piece regularly to limit floats, with the non-working yarn tacked down approximately every three stitches.

Double knitting

A technique used to create a flat, smooth, reversible fabric that looks like stockinette or jersey on both sides, rather than having a knit face and a purl reverse side.

Fair Isle

A method by which many different yarns are used throughout the row and, when not being used, are floated on the wrong side of the piece.

Mega knitting

Mega knitting is a recently coined term for knitting with needles greater than or equal to half an inch in diameter. Mega knitting uses the same stitches and techniques as conventional knitting, except that hooks are carved into the ends of the needles. The hooked needles greatly enhance control of the work, catching the stitches and preventing them from slipping off. It was the development of the knitting machine that introduced hooked needles and enabled faultless, automated knitting. The hook catches the loop of yarn as each stitch is knitted, meaning that wrists and fingers do not have to work so hard and there is less chance of stitches slipping off the needle. The position of the hook is most important: the left (non-working) hook is turned to face away at all times, while the right (working) hook is turned upward whilst knitting (plain stitch) and away whilst purling. Mega knitting produces a chunky, bulky fabric or an open lacy weave, depending on the weight and type of yarn used.

Micro knitting

Micro knitting or miniature knitting uses extremely fine threads and needles. Althea Crome created 14 tiny sweaters used in the stop-motion animated film Coraline and has made objects at 60 or 80 stitches per inch, making her own needles from fine surgical steel wire. She has published Bugknits: Extreme knitting for hobbyists, artists and knitters (2009, Blurb). Annelies de Kort has knitted on an even smaller scale and has used needles of 0.4 mm.

Short row

In short row knitting, the work is turned before a row is fully knitted. There are several ways to achieve this.

Wrap and turn

Just before the work is turned, the working yarn is passed around the next unknitted stitch, forming a "wrap". Later, this "wrap" is picked up and knitted into a stitch, concealing it from view.
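Whichever turning method is used, including the German and Japanese variants described next, the pattern must decide where the turns fall. One simple scheme, sketched below with invented numbers purely for illustration (this is generic bookkeeping, not a prescription from any particular pattern), spreads the turns as evenly as possible across the stitches being shaped:

```python
# Illustrative bookkeeping for short-row shaping (numbers are made up).
# To shape a slope over `stitches` stitches using `turns` short rows,
# spread the turns as evenly as possible across the row.

def turn_points(stitches, turns):
    """Return the cumulative stitch count at each successive turn."""
    base, extra = divmod(stitches, turns)
    # The first `extra` segments get one extra stitch each.
    segments = [base + 1] * extra + [base] * (turns - extra)
    points, worked = [], 0
    for seg in segments:
        worked += seg
        points.append(worked)
    return points

# e.g. shaping a hypothetical shoulder of 24 or 26 stitches with 4 turns:
print(turn_points(24, 4))   # [6, 12, 18, 24]
print(turn_points(26, 4))   # [7, 14, 20, 26]
```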
German short row

In German short rows, the work is turned and the last stitch worked is slipped purlwise, with yarn in front, to the right needle. Finally, the working yarn is pulled over the top of the needle to the back, which rotates the stitch on the needle so that it tips backwards, forming what appears to be a double stitch, sometimes referred to as a "German double stitch". The working yarn stays at the back for the next stitch if it is to be knitted, or is rotated below the right needle and pulled to the front if it is to be purled, both of which maintain the proper ("tipped back") orientation of the German double stitch. Eventually, this German double stitch is worked like a single stitch, which masks its appearance so that, viewed from the right side, it looks like a regular stitch.

Japanese short row

In Japanese short rows, a locking stitch marker is used to hold the loop of the working yarn at the turning point. Eventually, the loop is picked up (and the stitch marker removed) and worked together with the stitch on the other side of the gap. Japanese short rows usually result in tidier turning points with less extraneous yarn bulk compared to German short rows and the wrap-and-turn technique.

Twined knitting

Twined knitting, also known as two-end knitting, is a traditional Scandinavian technique in which two strands of yarn are knitted into the fabric alternately, each twisted once, always in the same direction, before every stitch. This produces a firmer and more durable fabric with greater thermal insulation than conventional one-end knitting.

Commercial applications

Industrially, metal wire is also knitted into a metal fabric for a wide range of uses, including the filter material in cafetieres and catalytic converters for cars. These fabrics are usually manufactured on circular knitting machines that would be recognized by conventional knitters as sock machines.

Knitting mills are factories that produce knitted fabrics or knitted apparel. Knitted fabrics are used in the manufacture of highly fitted garments such as athletic wear and athleisure. The stretch properties of knitted fabrics may be enhanced by the inclusion of fibers such as spandex. In addition to athletic-type garments, knitted fabric may be used in fashion garments, and many fashion designers make heavy use of knitted fabric in their collections. Gordana Gelhausen, who appeared in season six of the television show Project Runway, is primarily a knit designer. Other designers and labels that make heavy use of knitting include Michael Kors, Fendi, and Marc Jacobs.

Knitting mills can also produce completed knitted apparel, such as sweaters, socks, T-shirts, and underwear. Beginning in the 1990s, seamless three-dimensional whole-garment knitting machines have increased the range of finished garments that can be produced in knitting mills. These machines have also enabled the production of knitted shoe uppers. For individual hobbyists, websites such as Etsy, Big Cartel and Ravelry have made it easy to sell knitting patterns on a small scale, in a way similar to eBay.

Graffiti

In the 2000s, a practice called knitting graffiti, guerilla knitting, or yarn bombing (the use of knitted or crocheted cloth to modify and beautify one's surroundings, usually outdoors) emerged in the U.S. and spread worldwide. Magda Sayeg is credited with starting the movement in the US, and Knit the City are a prominent group of graffiti knitters in the United Kingdom.
Yarn bombers sometimes target existing pieces of graffiti for beautification. For instance, Dave Cole is a contemporary sculptor who practiced knitting as graffiti for a large-scale public art installation in Melbourne, Australia, for the Big West Arts Festival in 2009; the work was vandalized the night of its completion. A movie, shot by a Tasmanian filmmaker on a set made almost entirely out of yarn, was partially inspired by "knitted graffiti".

Yarn crawl

Many major metropolitan cities across the US and Europe host annual yarn crawls. A yarn crawl is typically a multi-day event that caters to knitters, crocheters and yarn enthusiasts and supports the local crafting community. Over the multi-day period, multiple local yarn and knit shops participate, offering store discounts, free exclusive patterns, classes and trunk shows, and conducting raffles for prizes. Participants receive a passport and get it stamped at each store they visit along the crawl. Traditionally, those who get their passports fully stamped are eligible to win a larger gift basket filled with yarn, knitting and crochet goodies. Some local crawls also provide a knit-along (KAL) or crochet-along (CAL), where attendees follow a specific pattern prior to the crawl and then proudly wear the result during the crawl for others to see.

Charity

Hand knitting garments for free distribution to others has become common practice among hand knitting groups. Girls and women hand knitted socks, sweaters, scarves, mittens, gloves, and hats for soldiers in the Crimean War, the American Civil War, and the Boer Wars; this practice continued in World War I, World War II and the Korean War, and continues for soldiers in Iraq and Afghanistan. The Australian charity Wrap with Love continues to provide blankets hand knitted by volunteers to people most in need around the world who have been affected by war. In the historical projects, yarn companies provided knitting patterns approved by the various branches of the armed services; often they were distributed by local chapters of the American Red Cross. Modern projects usually entail the hand knitting of hats or helmet liners; the liners provided for soldiers must be of 100% worsted weight wool and be crafted using specific colors.

Clothing and afghans are frequently made for children, the elderly, and the economically disadvantaged in various countries. Pine Ridge Indian Reservation accepts donations for the Lakota people in the United States. Prayer shawls, or shawls in which the crafter meditates or says prayers of their faith while hand knitting with the intent of comforting the recipient, are donated to those experiencing loss or stress. Many knitters today hand knit and donate "chemo caps", soft caps for cancer patients who lose their hair during chemotherapy; yarn companies offer free knitting patterns for these caps. Penguin sweaters were hand knitted by volunteers for the rehabilitation of penguins contaminated by exposure to oil slicks; the project is now complete. Chicken sweaters were also hand knitted to aid battery hens that had lost their feathers; the organization is not currently accepting donations, but maintains a list of volunteers. Originally started after the 2004 Indonesian tsunami, Knitters Without Borders is a charity challenge issued by knitting personality Stephanie Pearl-McPhee that encourages hand knitters to donate to Médecins Sans Frontières (Doctors Without Borders).
Instead of hand knitting for charity, knitters are encouraged to donate a week's worth of disposable income, including money that otherwise might have been spent on yarn. Knitted items are occasionally offered as prizes to donors. As of September 2011, Knitters Without Borders donors had contributed CAD$1,062,217. Security blankets can also be made through the Project Linus organization, which helps needy children. There are organizations that help reach other countries in need, such as afghans for Afghans, which describes itself as "a humanitarian and educational people-to-people project that sends hand-knit and crocheted blankets and sweaters, vests, hats, mittens, and socks to the beleaguered people of Afghanistan." The knitters of the Little Yellow Duck Project craft small yellow ducks which are left for others to find, as a random act of kindness and to raise awareness of blood donation and organ donation. The project was started in memory of a young woman who had collected plastic toy ducks and who died from cystic fibrosis while waiting for a lung transplant. Finders of the ducks are encouraged to log them on a website, which shows that 12,265 ducks have been found in 106 countries.

Health benefits

Studies have shown that hand knitting, along with other forms of needlework, provides several significant health benefits. These studies have found that the rhythmic and repetitive action of hand knitting can help prevent and manage stress, pain and depression, which in turn strengthens the body's immune system, and can create a relaxation response in the body, which can decrease blood pressure and heart rate, help prevent illness, and have a calming effect. Pain specialists have also found that hand knitting changes brain chemistry, resulting in an increase in "feel good" hormones (i.e. serotonin and dopamine) and a decrease in stress hormones.

Knitting can improve dexterity in the hands and fingers. This keeps the fingers limber and can be especially helpful for those with arthritis; knitting has been reported to reduce the pain of arthritis when made a daily habit. Hand knitting, along with other leisure activities, has been linked to reducing the risk of developing dementia by preventing memory loss. Much like physical activity strengthens the body, mental exercise makes the human brain more resilient. Knitting can be done almost anywhere and requires only minimal materials, making it a simple and portable hobby.

Knitting also helps in the area of social interaction; it provides people with opportunities to socialize and build community. One way to increase social interaction with knitting is inviting friends over to knit and chat. Many public libraries and yarn stores host knitting groups where knitters can meet locally to engage with others interested in hand crafts. Knitting has been shown to be an effective form of art therapy for coping with trauma or grief. Whether the knitting is done individually or in a knitting group, the creativity and creation process, along with the repetitive physical motion, has been shown to be effective. A repository of research into the effect of hand knitting on health can be found at Stitchlinks, an organization founded in Bath, England.
Notable knitters

Cat Bordhi - pioneered the teaching of new and efficient knitting techniques
Kaffe Fassett - American-born, British-based artist known for his colorful designs in the decorative arts
Stephanie Pearl-McPhee - writer, knitter, and knit-wear designer
Magda Sayeg - creator of the Knitta Please knit graffiti movement
Barbara G. Walker - author of several encyclopedic knitting references
Stephen West - American knitter, fashion designer, educator, and author known for his knitting patterns and strong use of color
Elizabeth Zimmermann - British-born hand knitting teacher and designer
Tom Daley - British Olympic gold medallist and knitting and crochet designer; founder of Made With Love by Tom Daley
Elisabetta Matsumoto - American physicist whose scientific interests include the study of knitted fabrics' special mathematical and mechanical properties
Technology
Techniques_2
null
16629
https://en.wikipedia.org/wiki/KDE
KDE
KDE is an international free software community that develops free and open-source software. As a central development hub, it provides tools and resources that enable collaborative work on its projects. Its products include the Plasma Desktop, KDE Frameworks, and a range of applications such as Kate, digiKam, and Krita. Many KDE applications are cross-platform and can run on Unix and Unix-like operating systems, Microsoft Windows, and Android. KDE is legally represented by KDE e.V., based in Germany, which also owns the KDE trademarks and funds the project.

Origins

KDE was founded in 1996 by Matthias Ettrich, a student at the University of Tübingen. At the time, he was troubled by certain aspects of the Unix desktop. Among his concerns was that none of the applications looked or behaved alike. In his opinion, desktop applications of the time were too complicated for end users. In order to solve the issue, he proposed the creation of a desktop environment in which users could expect the applications to be consistent and easy to use. His initial Usenet post spurred significant interest, and the KDE project was born.

The name KDE was intended as a wordplay on the existing Common Desktop Environment, available for Unix systems. CDE was an X11-based user environment jointly developed by HP, IBM, and Sun through the X/Open consortium, with an interface and productivity tools based on the Motif graphical widget toolkit. It was supposed to be an intuitively easy-to-use desktop computer environment. The K was originally suggested to stand for "Kool", but it was quickly decided that the K should stand for nothing in particular. The KDE initialism was therefore expanded to "K Desktop Environment" before being dropped altogether in favor of simply KDE in a 2009 rebranding effort.

In the beginning, Matthias Ettrich chose Trolltech's Qt framework for the KDE project. Other programmers quickly started developing KDE/Qt applications, and by early 1997, a few applications were being released. On 12 July 1998, the first version of the desktop environment, called KDE 1.0, was released. The original GPL-licensed version of this toolkit only existed for platforms that used the X11 display server, but with the release of Qt 4, LGPL-licensed versions became available for more platforms. This allowed KDE software based on Qt 4 or newer versions to theoretically be distributed on Microsoft Windows and OS X.

The KDE Marketing Team announced a rebranding of the KDE project components on 24 November 2009. Motivated by the perceived shift in objectives, the rebranding focused on emphasizing both the community of software creators and the various tools supplied by KDE, rather than just the desktop environment. What was previously known as KDE 4 was split into KDE Plasma Workspaces, KDE Applications, and KDE Platform (now KDE Frameworks), bundled as KDE Software Compilation 4. Since 2009, the name KDE no longer stands for K Desktop Environment, but for the community that produces the software.

Software releases

KDE Projects

The KDE community maintains multiple free-software projects. The project formerly referred to as KDE (or KDE SC, Software Compilation) nowadays consists of three parts:

KDE Plasma, a graphical desktop environment with customizable layouts and panels, supporting virtual desktops and widgets, written with Qt and KDE Frameworks.
KDE Frameworks, a collection of libraries and software frameworks built on top of Qt (formerly known as 'kdelibs' or 'KDE Platform').
KDE Gear, utility applications (like Kdenlive or Krita) mostly built on KDE Frameworks, which are often part of the official KDE Applications release.

Other projects

KDE neon

KDE neon is a software repository that uses Ubuntu LTS as a core. It aims to provide the users with rapidly updated Qt and KDE software, while updating the rest of the OS components from the Ubuntu repositories at the normal pace. KDE maintains that it is not a "KDE distribution", but rather an up-to-date archive of KDE and Qt packages.

Subtitle Composer

Subtitle Composer is an open-source subtitle editor for the Linux and Microsoft Windows operating systems, based on Qt and KDE Frameworks. The project became part of KDE in December 2019. It supports the most common text and bitmap-based subtitle formats, video previewing, audio waveform, speech recognition, timing synchronization, subtitle translation, OCR and JavaScript macros/scripting. Subtitle Composer is free software released under the GNU General Public License.

WikiToLearn

WikiToLearn, abbreviated WTL, is one of KDE's newer endeavors. It is a wiki (based on MediaWiki, like Wikipedia) that provides a platform to create and share open source textbooks. The idea is to have a massive library of textbooks for anyone and everyone to use and create. Its roots lie in the University of Milan, where a group of physics majors wanted to share notes and then decided to open the project to everyone rather than keep it to their internal group of friends. WikiToLearn has become an official KDE project with several universities backing it.

Contributors

Developing KDE software is primarily a volunteer effort, although various companies, such as Novell, Nokia, and Blue Systems, have employed developers to work on various parts of the project. Since a large number of individuals contribute to KDE in various ways (e.g. code, translation, artwork), organization of such a project is complex. A mentor program helps beginners to get started with developing and communicating within KDE projects and communities. Communication within the community takes place via mailing lists, IRC, blogs, forums, news announcements, wikis and conferences. The community has a Code of Conduct for acceptable behavior.

Development

Currently the KDE community uses the Git version control system. The KDE GitLab instance (named Invent) gives an overview of all projects hosted by KDE's Git repository system. Phabricator is used for task management. On 20 July 2009, KDE announced that the one millionth commit had been made to its Subversion repository. On 11 October 2009, Cornelius Schumacher, a main developer within KDE, wrote about the estimated cost (using the COCOMO model with SLOCCount) to develop the KDE software package: with 4,273,291 lines of code, it would be about US$175,364,716 (a sketch of this calculation appears below). This estimate does not include Qt, Calligra Suite, Amarok, digiKam, and other applications that are not part of KDE core.

Core team

The overall direction is set by the KDE Core Team. These are developers who have made significant contributions within KDE over a long period of time. This team communicates using the kde-core-devel mailing list, which is publicly archived and readable, but joining requires approval. KDE does not have a single central leader who can veto important decisions. Instead, the KDE core team consists of several dozen contributors who make decisions not by a formal vote, but through discussions. The developers also organize alongside topical teams.
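The sketch referred to above: SLOCCount applies the basic COCOMO "organic" model; the salary and overhead values below are SLOCCount's documented defaults, assumed here for illustration rather than taken from Schumacher's announcement itself.

```python
# A sketch of the cost estimate quoted above, using the basic COCOMO
# "organic" model as applied by SLOCCount. The salary and overhead
# figures are SLOCCount's defaults, assumed here for illustration.

sloc = 4_273_291                      # lines of code reported for KDE
ksloc = sloc / 1000.0

effort_pm = 2.4 * ksloc ** 1.05       # effort in person-months
schedule_m = 2.5 * effort_pm ** 0.38  # nominal schedule in months

annual_salary = 56_286                # USD/year, SLOCCount default
overhead = 2.4                        # SLOCCount default overhead factor

cost = effort_pm / 12.0 * annual_salary * overhead

print(f"effort:   {effort_pm:,.0f} person-months "
      f"(~{effort_pm / 12:,.0f} person-years)")
print(f"schedule: {schedule_m:,.1f} months")
print(f"cost:     ${cost:,.0f}")      # close to the cited US$175 million
```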
Among these topical teams, the KDE Edu team, for example, develops free educational software. The teams work mostly independently and do not all follow a common release schedule. Each team has its own messaging channels, both on IRC and on the mailing lists.

KDE Patrons

A KDE Patron is an individual or organization supporting the KDE community by donating at least 5000 euros (depending on the company's size) to KDE e.V. As of February 2024, there are nine such patrons: Blue Systems, Canonical Ltd., Google, GnuPG, Kubuntu Focus, Slimbook, SUSE, The Qt Company, and TUXEDO Computers.

Community structure

Mascot

The KDE community's mascot is a green dragon named Konqi. Konqi's appearance was officially redesigned with the coming of Plasma 5, with Tyson Tan's entry winning the redesign competition on the KDE Forums. Katie is a female dragon. She was presented in 2010 and is appointed as a mascot for the KDE women's community. Other dragons with different colors and professions were added to Konqi as part of the Tyson Tan redesign concept; each dragon has a pair of letter-shaped antlers that reflect their role in the KDE community. Kandalf the wizard was the former mascot for the KDE community during its 1.x and 2.x versions. Kandalf's similarity to the character of Gandalf led to speculation that the mascot was switched to Konqi due to copyright infringement concerns, but this has never been confirmed by KDE.

KDE e.V. organization

The financial and legal matters of KDE are handled by KDE e.V., a German non-profit organization. Among other things, it owns the KDE trademark and the corresponding logo. It also accepts donations on behalf of the KDE community, helps to run the servers, and assists in organizing and financing conferences and meetings, but does not influence software development directly.

Local communities

In many countries, KDE has local branches. These are either informal organizations (KDE India) or, like KDE e.V. itself, given a legal form (KDE France). The local organizations host and maintain regional websites, and organize local events, such as tradeshows, contributor meetings and social community meetings.

Identity

KDE has community identity guidelines (CIG) for definitions and recommendations which help the community to establish a unique, characteristic, and appealing design. The official KDE logo displays the white trademarked K-Gear shape on a blue square with mitred corners. Copying of the KDE logo is subject to the LGPL. Some local community logos are derivations of the official logo. Many KDE applications have a K in the name, mostly as an initial letter. The K in many KDE applications is obtained by spelling a word which originally begins with C or Q differently, for example Konsole and Kaffeine, while some others prefix a commonly used word with a K, for instance KGet. However, the trend is not to have a K in the name at all, such as with Stage, Spectacle, Discover and Dolphin.

Collaborations with other organizations

Wikimedia

On 23 June 2005, the chairman of the Wikimedia Foundation announced that the KDE community and the Wikimedia Foundation had begun efforts towards cooperation. Fruits of that cooperation are MediaWiki syntax highlighting in Kate and access to Wikipedia content within KDE applications, such as Amarok and Marble. On 4 April 2008, KDE e.V. and Wikimedia Deutschland opened shared offices in Frankfurt.

Free Software Foundation Europe

In May 2006, KDE e.V. became an Associate Member of the Free Software Foundation Europe (FSFE). On 22 August 2008, KDE e.V.
and FSFE jointly announced that, after working with FSFE's Freedom Task Force for one and a half years, KDE had adopted FSFE's Fiduciary Licence Agreement. Using it, KDE developers can – on a voluntary basis – assign their copyrights to KDE e.V. In September 2009, KDE e.V. and FSFE moved into shared offices in Berlin. Commercial enterprises Several companies actively contribute to KDE, like Collabora, Erfrakon, Intevation GmbH, Kolab Konsortium, Klarälvdalens Datakonsult AB (KDAB), Blue Systems, and KO GmbH. Nokia used Calligra Suite as the base for its Office Viewer application for Maemo/MeeGo, and contracted KO GmbH to bring MS Office 2007 file format filters to Calligra. Nokia also employed several KDE developers directly – either to use KDE software for MeeGo (e.g. KCal) or as sponsorship. The software development and consulting companies Intevation GmbH of Germany and the Swedish KDAB use Qt and KDE software – especially Kontact and Akonadi for Kolab – for their services and products, and therefore both employ KDE developers. Others KDE participates in freedesktop.org, an effort to standardize Unix desktop interoperability. In 2009 and 2011, GNOME and KDE co-hosted their conferences Akademy and GUADEC under the Desktop Summit label. In December 2010, KDE e.V. became a licensee of the Open Invention Network. Many Linux distributions and other free operating systems are involved in the development and distribution of the software, and are therefore also active in the KDE community. These include commercial distributors such as SUSE/Novell or Red Hat, but also government-funded non-commercial organizations such as the Scientific and Technological Research Council of Turkey with its Linux distribution Pardus. In October 2018, Red Hat declared that KDE Plasma would no longer be supported in future updates of Red Hat Enterprise Linux, though it continues to be part of Fedora. The announcement came shortly after IBM announced its acquisition of Red Hat for close to US$34 billion. As a result, Fedora now makes KDE Plasma and other KDE software available to Red Hat Enterprise Linux users as well, through its Extra Packages for Enterprise Linux (EPEL) project. Activities The two most important conferences of KDE are Akademy and Camp KDE. Each event is on a large scale, both thematically and geographically. Akademy-BR and Akademy-es are local community events. Akademy Akademy is the annual world summit, held each summer at varying venues in Europe. The primary goals of Akademy are to act as a community-building event, to communicate the achievements of the community, and to provide a platform for collaboration with community and industry partners. Secondary goals are to engage local people, and to provide space for getting together to write code. KDE e.V. assists with procedures, advice and organization. Akademy includes a conference, the KDE e.V. general assembly, marathon coding sessions, BoFs (birds-of-a-feather sessions) and a social program. BoFs meet to discuss specific sub-projects or issues. The first conference that the KDE community held was KDE One, in Arnsberg, Germany, in 1997, to discuss the first KDE release. Initially, each conference was numbered after the release and not held regularly. Since 2003, the conference has been held once a year, and since 2004 it has been named Akademy. The yearly Akademy conference presents the Akademy Awards, which the KDE community gives to KDE contributors in recognition of outstanding contributions to KDE.
There are three awards: best application, best non-application and jury's award. The winners are chosen by the previous year's winners. The first winners received a framed picture of Konqi signed by all attending KDE developers. Camp KDE Camp KDE is another annual contributors' conference of the KDE community. The event provides a regional opportunity for contributors and enthusiasts to gather and share their experiences. It is free to all participants. It is intended to ensure that KDE is not seen as Euro-centric. KDE e.V. helps with travel and accommodation subsidies for presenters, BoF leaders, organizers and core contributors. It has been held in North America since 2009. In January 2008, the KDE 4.0 Release Event was held at the Google headquarters in Mountain View, California, US, to celebrate the release of KDE SC 4.0. The community realized that there was a strong demand for KDE events in the Americas, and so Camp KDE was created. Camp KDE 2009, the premiere meeting of the KDE Americas, was held at the Travellers Beach Resort in Negril, Jamaica, sponsored by Google, Intel, iXsystems, KDE e.V. and Kitware. The event included 1–2 days of presentations, BoF meetings and hackathon sessions. Camp KDE 2010 took place at the University of California, San Diego (UCSD) in La Jolla, US. The schedule included presentations, BoFs, hackathons and a day trip. It started with a short introduction by Jeff Mitchell, the principal organizer of the conference, who gave a bit of history about Camp KDE and some statistics about the KDE community. With around 70 participants, the talks of the event were relatively well attended. On 19 January, the social event was a tour of a local brewery. Camp KDE 2011 was held at the Hotel Kabuki in San Francisco, US, co-located with the Linux Foundation Collaboration Summit. The schedule included presentations, hackathons and a party at Noisebridge. The conference opened with an introduction by Celeste Lyn Paul. SoK (Season of KDE) Season of KDE is an outreach program hosted by the KDE community. Students are appointed mentors from the KDE community who help bring their projects to fruition. Other community events conf.kde.in was the first KDE and Qt conference in India. The conference, organized by KDE India, was held at R.V. College of Engineering in Bangalore, India. The first three days of the event had talks, tutorials, and interactive sessions. The last two days were a focused code sprint. The conference was opened by its main organizer, Pradeepto Bhattacharya. Over 300 people were at the opening talks. The Lighting of the Auspicious Lamp ceremony was performed to open the conference. The first session was by Lydia Pintscher, who spoke on "So much to do – so little time". At the event, the return of Project Neon was announced on March 11, 2011, with the project providing nightly builds of the KDE Software Compilation. Closing the conference was keynote speaker and long-time KDE developer Sirtaj. Día KDE (KDE Day) is an Argentinian event focused on KDE, featuring talks and workshops. The purposes of the event are to spread the free software movement among the population of Argentina, introducing the KDE community and the environment it develops; to make known and strengthen KDE-AR; and, generally, to bring the community together to have fun. The event is free. A release party is a party celebrating the release of a new version of the KDE SC (twice a year).
KDE also participates in other conferences that revolve around free software. Notable uses Brazil's primary school education system operates computers running KDE software, with more than 42,000 schools in 4,000 cities, thus serving nearly 52 million children. The base distribution is called Educational Linux and is based on Kubuntu. Besides this, thousands more students in Brazil use KDE products in their universities. KDE software is also running on computers in Portuguese and Venezuelan schools, with 700,000 and one million systems reached, respectively. Through Pardus, a local Linux distribution, many sections of the Turkish government make use of KDE software, including the Turkish Armed Forces, Ministry of Foreign Affairs, Ministry of National Defence, Turkish Police, and the SGK (Social Security Institution of Turkey), although these departments often do not exclusively use Pardus as their operating system. CERN (European Organization for Nuclear Research) uses KDE software. Germany uses KDE software in its embassies around the world, representing around 11,000 systems. NASA used the Plasma Desktop during the Mars Mission. Valve Corporation's handheld gaming computer, the Steam Deck, uses the KDE Plasma desktop environment when in desktop mode.
Technology
System
null
16745
https://en.wikipedia.org/wiki/Kenyanthropus
Kenyanthropus
Kenyanthropus is a genus of extinct hominin identified from the Lomekwi site by Lake Turkana, Kenya, dated to 3.3 to 3.2 million years ago during the Middle Pliocene. It contains one species, K. platyops, but may also include the 2 million year old Homo rudolfensis, or K. rudolfensis. Before its naming in 2001, Australopithecus afarensis was widely regarded as the only australopithecine to exist during the Middle Pliocene, but Kenyanthropus evinces a greater diversity than once acknowledged. Kenyanthropus is most recognisable by an unusually flat face and small teeth for such an early hominin, with values on the extremes of or beyond the range of variation for australopithecines in regard to these features. Multiple australopithecine species may have coexisted by foraging for different food items (niche partitioning), which may be the reason why these apes anatomically differ in features related to chewing. The Lomekwi site also yielded the earliest stone tool industry, the Lomekwian, characterised by the rudimentary production of simple flakes by pounding a core against an anvil or with a hammerstone. It may have been manufactured by Kenyanthropus, but it is unclear if multiple species were present at the site or not. The knappers were using volcanic rocks collected no more than from the site. Kenyanthropus seems to have lived in a lakeside or floodplain environment featuring forests and grasslands. Taxonomy Discovery In August 1998, field technician Blasto Onyango discovered a hominin partial left maxilla (upper jaw), specimen KNM-WT 38350, at the Kenyan Lomekwi dig site by Lake Turkana, overseen by the prominent paleoanthropologists Louise and Meave Leakey. In August 1999 at the Lomekwi site, research assistant Justus Erus discovered an uncharacteristically flat-faced australopithecine skull, specimen KNM-WT 40000. The 1998–1999 field season subsequently uncovered 34 more craniodental hominin specimens, but the research team was unable to determine whether these could be placed into the same species as the former two specimens (that is, whether multiple species were present at the site). Age The specimens were recovered near the Nabetili tributary of the Lomekwi river in a mudstone layer of the Nachukui Formation. KNM-WT 40000 was recovered from the Kataboi Member, below the 3.4 million year old Tulu Bor Tuff and above the 3.57 million year old Lokochot Tuff. By linear interpolation, KNM-WT 40000 is approximately 3.5 million years old, dating back to the Middle Pliocene. Only three more specimens were recovered from the Kataboi Member at around the same level, the deepest, KNM-WT 38341, probably sitting on 3.53 million year old sediments. KNM-WT 38350 was recovered from the Lomekwi Member above Tulu Bor, and is approximately 3.3 million years old. The other specimens from this member sit above Tulu Bor and are roughly 3.3 million years old as well. The highest specimens—KNM-WT 38344, -55, and -56—may be around 3.2 million years old. Classification In 2001, Meave Leakey and colleagues assigned the Lomekwi remains to a new genus and species, Kenyanthropus platyops, with KNM-WT 40000 the holotype and KNM-WT 38350 a paratype. The genus name honours Kenya, where Lomekwi and a slew of other major human-ancestor sites have been identified. The species name derives from Ancient Greek platus "flat" and opsis "face", in reference to the unusually flat face for such an early hominin. The classification of early hominins, with their widely varying anatomy, has been a difficult subject matter.
The 20th century generated an overabundance of hominin genera, plunging the field into taxonomic turmoil, until German evolutionary biologist Ernst Mayr, surveying a "bewildering diversity of names", decided to recognise only a single genus, Homo, containing a few species. Though other genera and species have since become popular, his more conservative view of hominin diversity has become the mainstay, and the acceptance of further genera is usually met with great resistance. Since Mayr, hominins have been classified into Australopithecus, which gave rise to Homo (which includes modern humans) and the robust Paranthropus (which is sometimes not recognised as its own genus), a scheme which by definition leaves Australopithecus polyphyletic (a non-natural group which does not comprise a common ancestor and all of its descendants). In addition to Kenyanthropus, the 1990s saw the introduction of A. bahrelghazali, Ardipithecus, Orrorin, and Sahelanthropus, which has complicated discussions of hominin diversity, though the latter three have not been met with much resistance on account of their greater age (all predating Australopithecus). At the time Kenyanthropus was discovered, Australopithecus afarensis was the only recognised australopithecine to have existed between 4 and 3 million years ago, aside from its probable ancestor A. anamensis, making A. afarensis the likely progenitor of all other australopithecines as they diversified in the late Pliocene and into the Pleistocene. Leakey and colleagues considered Kenyanthropus to be evidence of a greater diversity of Pliocene australopithecines than previously acknowledged. In 2015, Ethiopian palaeoanthropologist Yohannes Haile-Selassie and colleagues erected a new species, A. deyiremeda, which lived in the same time and region as Kenyanthropus and A. afarensis. Meave Leakey and colleagues drew attention namely to the flat face and small cheek teeth, in addition to several other traits, to distinguish the genus from the earlier Ardipithecus, the contemporary and later Australopithecus, and the later Paranthropus. Kenyanthropus lacks any of the derived traits seen in Homo. They conceded that Kenyanthropus could be subsumed into Australopithecus if the widest definition of the latter is used, but this conservative approach to hominin diversity leaves Australopithecus a grade taxon, a non-natural grouping of similar-looking species whereby it effectively encompasses all hominins not classifiable into Ardipithecus or Homo, regardless of how they may be related to each other. Leakey and colleagues further drew parallels between KNM-WT 40000 and the 2 million year old KNM-ER 1470 assigned to Homo rudolfensis, attributing differences in braincase and nasal anatomy to archaicness. They suggested H. rudolfensis may be better classified as K. rudolfensis. In 2003, American palaeoanthropologist Tim D. White was concerned that KNM-WT 40000 was far too distorted to obtain any accurate metrics for classification purposes, especially because the skull was splintered into over 1,100 pieces often measuring less than across. Because such damage is rarely even seen, he argued that the skull could not be reliably reconstructed. Because the skulls of modern ape species vary widely, he suggested further fossil discoveries in the region may prove the Lomekwi hominins to be a local variant of A. afarensis rather than a distinct genus or species.
In response, anthropologist Fred Spoor and Meave and Louise Leakey produced much more detailed digital topographical scans of the KNM-WT 40000 maxilla in 2010, permitting the comparison of many more anatomical landmarks on the maxillae of all other early hominins, modern humans, chimpanzees, and gorillas, in order to more accurately correct the distortion. The new reconstruction more convincingly verifies the distinctness of Kenyanthropus. In 2003, Spanish writer Camilo José Cela Conde and evolutionary biologist Francisco J. Ayala proposed resurrecting the genus "Praeanthropus" to house all australopithecines which are not Ardipithecus, Paranthropus, or A. africanus, though they opted to synonymise Kenyanthropus with Homo as "H. platyops". Their recommendations have been largely rejected. Anatomy KNM-WT 40000 was heavily distorted during the fossilisation process, the braincase shifted downwards and backwards, the nasal region to the right, and the mouth and cheek region forward. It is unclear whether the specimen represents a male or a female. Kenyanthropus has a relatively flat face, including subnasally, between the nose and the mouth (the nasoalveolar clivus). The clivus inclines at 45° (there is relaxed sub-nasal prognathism), steeper than in almost all other australopithecine specimens (on the upper end of variation for Paranthropus) and more comparable to H. rudolfensis and H. habilis. This is the earliest example of a flat face in the hominin fossil record. Unlike A. afarensis, Kenyanthropus lacks anterior pillars, the bony columns running down from the nasal aperture (nose hole). It is also one of the longest early hominin clivi discovered at . The nasal aperture is narrow compared to that of Australopithecus and Paranthropus. The cheekbones are tall and steep, and the anterior surface (where the cheek juts out the most) is positioned above the premolars, a configuration more frequently seen in Paranthropus than in other hominins. The zygomaticoalveolar crest (stretching between the cheek and the teeth) is low and curved. Overall, the face resembles that of H. rudolfensis, though it has longer nasal bones, a narrower nasal aperture, a shorter postcanine (the molars and premolars) tooth row, and a less steeply inclined (less flat, more prognathic) midfacial region. Much later Paranthropus are also characterised by relatively flat faces, but this is generally considered to be an adaptation to maximise bite force through enormous teeth, which Kenyanthropus enigmatically does not have. Among all the specimens, only the M2 (2nd upper left molar) and the tooth sockets of the left side of the mouth of KNM-WT 40000 are preserved well enough to measure and study. With dimensions of , a surface area of , it is the smallest M2 ever discovered for an early hominin. For comparison, those of A. afarensis in the comparative sample Leakey and colleagues used ranged from about , H. habilis and H. rudolfensis , and the robust P. boisei (with the largest molars among hominins) about . The reconstructed dimensions of KNM-WT 38350's M1 are for a surface area of , which is on the lower end of variation for A. anamensis, A. afarensis and H. habilis. The thick molar enamel is on par with that of A. anamensis and A. afarensis.
KNM-WT 40000 retains the ancestral ape premolar tooth root morphology, with a single lingual root (on the tongue side) and two buccal roots (towards the cheeks), though the P4 of KNM-WT 38350 may have only a single buccal root; the ancestral pattern is frequent in Paranthropus and variable in Australopithecus. Individuals of more derived species typically have single-rooted premolars. The canine jugum (a ridge of bone in the maxilla corresponding to the canine tooth root) is not visible, which may mean the canines were not that large. The cross-sectional area of the I2 (2nd upper incisor) is 90% the size of that of the I1, whereas it is usually 50 to 70% in other great apes. The tooth roots of the incisors do not appear to be orientated outwards (there was probably no alveolar prognathism; the front teeth did not jut forward). Brain volume is incalculable due to distortion of the braincase, but it was probably similar to that of Australopithecus and Paranthropus. A sample of five A. afarensis specimens averaged 445 cc. Like Paranthropus, there is no frontal trigon (a triangle formed by the conjunction of the temporal lines behind the brow ridge). Unlike H. habilis but like H. rudolfensis, there is no sulcus (trench) behind the brow ridge. The degree of postorbital constriction, the narrowing of the braincase in the frontal lobe region, is on par with that of Australopithecus, H. rudolfensis, and H. habilis, but less than that of P. boisei. Like the earlier A. anamensis and Ar. ramidus, the tympanic bone retains the ancestral hominin ear morphology, lacking the petrous crest and bearing a narrow ear canal with a small opening. The foramen magnum, where the skull connects to the spine, was probably oval shaped as opposed to the heart-shaped one of P. boisei. Technology In 2015, French archaeologist Sonia Harmand and colleagues identified the Lomekwian stone-tool industry at the Lomekwi site. The tools are attributed to Kenyanthropus as it is the only hominin identified at the site, but in 2015 anthropologist Fred Spoor suggested that at least some of the indeterminate specimens may be assignable to A. deyiremeda, as the two species have somewhat similar maxillary anatomy. At 3.3 million years old, it is the oldest proposed industry. The assemblage comprises 83 cores, 35 flakes, 7 possible anvils, 7 possible hammerstones, 5 pebbles (which may have also been used as hammers), and 12 indeterminate fragments. Of these, 52 were sourced from basalt, 51 from phonolite, 35 from trachyphonolite (an intermediate composition of phonolite and trachyte), 3 from vesicular basalt, 2 from trachyte, and 6 from indeterminate material. These materials could have originated at a conglomerate only from the site. The cores are large and heavy, averaging and . Flakes ranged in length, normally shorter than later Oldowan industry flakes. Anvils were heavy, up to . Flakes seem to have been cleaved off primarily using the passive hammer technique (directly striking the core on the anvil) and/or the bipolar method (placing the core on the anvil and striking it with a hammerstone). The knappers produced both unifaces (the flake was worked on one side) and bifaces (both sides were worked). Though they may have been shaping cores beforehand to make them easier to work, the knappers more often than not executed the technique poorly, producing incomplete fractures and fissures on several cores, or requiring multiple blows to flake off a piece.
Harmand and colleagues suggested such rudimentary skills may place the Lomekwian as an intermediate industry between the simple pounding techniques probably used by earlier hominins and the flaking Oldowan industry developed by Homo. It is typically assumed that early hominins were using stone tools to cut meat in addition to other organic materials. Wild chimpanzees and black-striped capuchins have been observed to make flakes by accident while using hammerstones to crack nuts on anvils, but the Lomekwi knappers were producing multiple flakes from the same core and flipped flakes over to work the other side, which speaks to the intentionality of their production. In 2016, Spanish archaeologists Manuel Domínguez-Rodrigo and Luis Alcalá argued that Harmand and colleagues did not convincingly justify that the tools were discovered in situ; that is, the tools may be much younger and may have been reworked into an older layer. If the date of 3.3 million years is accepted, then there is a 700,000 year gap before the next solid evidence of stone tools: the Oldowan industry at Ledi-Geraru, associated with the earliest Homo specimen LD 350-1 and reported by American palaeoanthropologist David Braun and colleagues in 2019. This gap can be interpreted either as the loss and reinvention of stone tool technology, or as preservation bias (tools from this time gap either did not preserve for whatever reason, or sit undiscovered), the latter implying the Lomekwian evolved into the Oldowan. Palaeoecology From 4.5 to 4 million years ago, Lake Turkana may have swelled to upwards of , in comparison to today's ; the lake at what is now the Koobi Fora site possibly sat at minimum below the surface. Volcanic hills by Lomekwi pushed basalt into the lake sediments. The lake broke up, and from 3.6 to 3.2 million years ago the region was probably characterised by a series of much smaller lakes, each covering no more than . Similarly, the bovid remains at Lomekwi are suggestive of a wet mosaic environment featuring both grasslands and forests on a lakeside or floodplain. Theropithecus brumpti is the most common monkey at the site, as well as in the rest of the Turkana Basin at this time; this species tends to live in more forested and closed environments. At the fossiliferous A. afarensis Hadar site in Ethiopia, Theropithecus darti is the most common monkey, which tends to prefer drier conditions conducive to wood- or grassland environments. Leakey and colleagues argued this distribution means Kenyanthropus was living in somewhat more forested environments than the more northerly A. afarensis. Kenyanthropus, A. afarensis, and A. deyiremeda all coexisted in the same time and region, and, because their anatomy largely diverges in areas relevant to chewing, they may have practised niche partitioning and foraged for different food items.
Biology and health sciences
Australopithecines
Biology
16759
https://en.wikipedia.org/wiki/Kevlar
Kevlar
Kevlar (para-aramid) is a strong, heat-resistant synthetic fiber, related to other aramids such as Nomex and Technora. Developed by Stephanie Kwolek at DuPont in 1965, the high-strength material was first used commercially in the early 1970s as a replacement for steel in racing tires. It is typically spun into ropes or fabric sheets that can be used as such, or as an ingredient in composite material components. Kevlar has many applications, ranging from bicycle tires and racing sails to bulletproof vests, all due to its high tensile strength-to-weight ratio; by this measure it is five times stronger than steel. It is also used to make modern marching drumheads that withstand high impact, and for mooring lines and other underwater applications. A similar fiber called Twaron with the same chemical structure was developed by Akzo in the 1970s; commercial production started in 1986, and Twaron is manufactured by Teijin Aramid. History Poly-paraphenylene terephthalamide (K29) – branded Kevlar – was invented by the Polish-American chemist Stephanie Kwolek while working for DuPont, in anticipation of a gasoline shortage. In 1964, her group began searching for a new lightweight, strong fiber to use for light but strong tires. The polymers she had been working with, poly-p-phenylene-terephthalate and polybenzamide, formed liquid crystals in solution, something unique to polymers at the time. The solution was "cloudy, opalescent upon being stirred, and of low viscosity" and usually was thrown away. However, Kwolek persuaded the technician, Charles Smullen, who ran the spinneret, to test her solution, and was amazed to find that the fiber did not break, unlike nylon. Her supervisor and her laboratory director understood the significance of her discovery, and a new field of polymer chemistry quickly arose. By 1971, modern Kevlar was introduced. However, Kwolek was not very involved in developing the applications of Kevlar. In 1971, Lester Shubin, who was then the Director of Science and Technology for the National Institute for Law Enforcement and Criminal Justice, suggested using Kevlar to replace nylon in bullet-proof vests. Prior to the introduction of Kevlar, flak jackets made of nylon had provided much more limited protection to users. Shubin later recalled how the idea developed: "We folded it over a couple of times and shot at it. The bullets didn't go through." In tests, they strapped Kevlar onto anesthetized goats and shot at their hearts, spinal cords, livers and lungs. They monitored the goats' heart rate and blood gas levels to check for lung injuries. After 24 hours, one goat died and the others had wounds that were not life-threatening. Shubin received a $5 million grant to research the use of the fabric in bullet-proof vests. Kevlar 149 was invented by Jacob Lahijani of DuPont in the 1980s. Production Kevlar is synthesized in solution from the monomers 1,4-phenylene-diamine (para-phenylenediamine) and terephthaloyl chloride in a condensation reaction, yielding hydrochloric acid as a byproduct. The result has liquid-crystalline behavior, and mechanical drawing orients the polymer chains in the fiber's direction. Hexamethylphosphoramide (HMPA) was the solvent initially used for the polymerization, but for safety reasons, DuPont replaced it with a solution of N-methyl-pyrrolidone and calcium chloride. As this process had been patented by Akzo (see above) in the production of Twaron, a patent war ensued.
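In outline, this step-growth polycondensation can be written as the following simplified overall scheme (idealized stoichiometry, end groups omitted):

n ClOC–C6H4–COCl + n H2N–C6H4–NH2 → [–OC–C6H4–CO–NH–C6H4–NH–]n + 2n HCl

Each acyl chloride group reacts with an amine group to form an amide link, releasing one molecule of HCl, which is why hydrochloric acid appears as the byproduct mentioned above.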
Kevlar production is expensive because of the difficulties arising from using concentrated sulfuric acid, needed to keep the water-insoluble polymer in solution during its synthesis and spinning. Several grades of Kevlar are available:
Kevlar K-29 – used in industrial applications, such as cables, asbestos replacement, tires, and brake linings.
Kevlar K49 – high modulus, used in cable and rope products.
Kevlar K100 – colored version of Kevlar.
Kevlar K119 – higher elongation, flexible and more fatigue-resistant.
Kevlar K129 – higher tenacity for ballistic applications.
Kevlar K149 – highest tenacity for ballistic, armor, and aerospace applications.
Kevlar AP – 15% higher tensile strength than K-29.
Kevlar XP – lighter-weight resin and KM2 plus fiber combination.
Kevlar KM2 – enhanced ballistic resistance for armor applications.
The ultraviolet component of sunlight degrades and decomposes Kevlar, a problem known as UV degradation, and so it is rarely used outdoors without protection against sunlight. Structure and properties When Kevlar is spun, the resulting fiber has a tensile strength of about , and a relative density of 1.44 (0.052 lb/in^3). The polymer owes its high strength to the many inter-chain bonds. These inter-molecular hydrogen bonds form between the carbonyl groups and NH centers. Additional strength is derived from aromatic stacking interactions between adjacent strands. These interactions have a greater influence on Kevlar than the van der Waals interactions and chain length that typically influence the properties of other synthetic polymers and fibers such as ultra-high-molecular-weight polyethylene. The presence of salts and certain other impurities, especially calcium, could interfere with the strand interactions, so care is taken to avoid their inclusion during production. Kevlar's structure consists of relatively rigid molecules which tend to form mostly planar sheet-like structures, rather like silk protein. Thermal properties Kevlar maintains its strength and resilience down to cryogenic temperatures (): in fact, it is slightly stronger at low temperatures. At higher temperatures the tensile strength is immediately reduced by about 10–20%, and after some hours the strength progressively reduces further. For example: enduring for 500 hours, its strength is reduced by about 10%; and enduring for 70 hours, its strength is reduced by about 50%. Applications Science Kevlar is often used in the field of cryogenics for its low thermal conductivity and high strength relative to other materials for suspension purposes. It is most often used to suspend a paramagnetic salt enclosure from a superconducting magnet mandrel in order to minimize any heat leaks to the paramagnetic material. It is also used as a thermal standoff or structural support where low heat leaks are desired. A thin Kevlar window has been used by the NA48 experiment at CERN to separate a vacuum vessel from a vessel at nearly atmospheric pressure, both in diameter. The window has provided vacuum tightness combined with a reasonably small amount of material (only 0.3% to 0.4% of a radiation length). Protection Kevlar is a well-known component of personal armor such as combat helmets, ballistic face masks, and ballistic vests. The PASGT helmet and vest used by United States military forces had Kevlar as a key component in their construction. Other military uses include bulletproof face masks and spall liners used to protect the crews of armoured fighting vehicles.
Nimitz-class aircraft carriers use Kevlar reinforcement in vital areas. Civilian applications include high-heat-resistance uniforms worn by firefighters and body armour worn by police officers, security personnel, and police tactical teams such as SWAT. Kevlar is used to manufacture gloves, sleeves, jackets, chaps and other articles of clothing designed to protect users from cuts, abrasions and heat. Kevlar-based protective gear is often considerably lighter and thinner than equivalent gear made of more traditional materials. It is used for motorcycle safety clothing, especially in the areas featuring padding such as the shoulders and elbows. In the sport of fencing it is used in the protective jackets, breeches, plastrons and the bib of the masks. It is increasingly being used in the peto, the padded covering which protects the picadors' horses in the bullring. Speed skaters also frequently wear an under-layer of Kevlar fabric to prevent potential wounds from skates in the event of a fall or collision. Sport In kyudo, or Japanese archery, it may be used for bowstrings, as an alternative to the more expensive hemp. It is one of the main materials used for paraglider suspension lines. It is used as an inner lining for some bicycle tires to prevent punctures. In table tennis, plies of Kevlar are added to custom ply blades, or paddles, in order to increase bounce and reduce weight. Tennis racquets are sometimes strung with Kevlar. It is used in sails for high-performance racing boats. In 2013, with advancements in technology, Nike used Kevlar in shoes for the first time. It launched the Elite II Series, with enhancements to its earlier version of basketball shoes, using Kevlar in the anterior of the shoe as well as in the laces. This was done to decrease the elasticity of the tip of the shoe, in contrast to the conventionally used nylon, as Kevlar expanded by about 1% compared to nylon's roughly 30%. Shoes in this range included the LeBron, HyperDunk and Zoom Kobe VII. However, these shoes were launched at prices much higher than the average cost of basketball shoes. Kevlar was also used in the laces for the Adidas F50 adiZero Prime football boot. Several companies, including Continental AG, manufacture cycle tires with Kevlar to protect against punctures. Folding-bead bicycle tires, introduced to cycling by Tom Ritchey in 1984, use Kevlar as a bead in place of steel for weight reduction and strength. A side effect of the folding bead is a reduction in the shelf and floor space needed to display cycle tires in a retail environment, as they are folded and placed in small boxes. Music Kevlar has also been found to have useful acoustic properties for loudspeaker cones, specifically for bass and mid-range drive units. Additionally, Kevlar has been used as a strength member in fiber optic cables such as the ones used for audio data transmissions. Kevlar can be used as an acoustic core on bows for string instruments. Kevlar's physical properties provide strength, flexibility, and stability for the bow's user. To date, the only manufacturer of this type of bow is CodaBow. Kevlar is also presently used as a material for tailcords (a.k.a. tailpiece adjusters), which connect the tailpiece to the endpin of bowed string instruments. Kevlar is sometimes used as a material on marching snare drums. It allows for an extremely high amount of tension, resulting in a cleaner sound. There is usually a resin poured onto the Kevlar to make the head airtight, and a nylon top layer to provide a flat striking surface.
This is one of the primary types of marching snare drum heads. Remo's Falam Slam patch is made with Kevlar and is used to reinforce bass drum heads where the beater strikes. Kevlar is used in the woodwind reeds of Fibracell. The material of these reeds is a composite of aerospace materials designed to duplicate the way nature constructs cane reed: very stiff but sound-absorbing Kevlar fibers are suspended in a lightweight resin formulation. Motor vehicles Kevlar is sometimes used in structural components of cars, especially high-value performance cars such as the Ferrari F40. The chopped fiber has been used as a replacement for asbestos in brake pads. Aramids such as Kevlar release fewer airborne fibres than asbestos brakes and do not have the carcinogenic properties associated with asbestos. Other uses Wicks for fire dancing props are made of composite materials with Kevlar in them. Kevlar by itself does not absorb fuel very well, so it is blended with other materials such as fiberglass or cotton. Kevlar's high heat resistance allows the wicks to be reused many times. Kevlar is sometimes used as a substitute for Teflon in non-stick frying pans. Kevlar fiber is used in rope and in cable, where the fibers are kept parallel within a polyethylene sleeve. The cables have been used in suspension bridges such as the bridge at Aberfeldy, Scotland. They have also been used to stabilize cracking concrete cooling towers by circumferential application followed by tensioning to close the cracks. Kevlar is widely used as a protective outer sheath for optical fiber cable, as its strength protects the cable from damage and kinking. When used in this application it is commonly known by the trademarked name Parafil. Kevlar was used by scientists at the Georgia Institute of Technology as a base textile for an experiment in electricity-producing clothing. This was done by weaving zinc oxide nanowires into the fabric. If successful, the new fabric would generate about 80 milliwatts per square meter. A retractable roof of over of Kevlar was a key part of the design of the Olympic Stadium, Montreal for the 1976 Summer Olympics. It was spectacularly unsuccessful, as it was completed 10 years late and replaced just 10 years later in May 1998 after a series of problems. Kevlar can be found as a reinforcing layer in rubber bellows expansion joints and rubber hoses for use in high-temperature applications, and for its high strength. It is also found as a braid layer used on the outside of hose assemblies, to add protection against sharp objects. Some cellphones (including the Motorola RAZR family, the Motorola Droid Maxx, OnePlus 2 and Pocophone F1) have a Kevlar backplate, chosen over other materials such as carbon fiber due to its resilience and lack of interference with signal transmission. Kevlar fiber/epoxy-matrix composite materials can be used in marine current turbines (MCTs) or wind turbines due to their high specific strength and light weight compared to other fibers. Composite materials Aramid fibers are widely used for reinforcing composite materials, often in combination with carbon fiber and glass fiber. The matrix for high-performance composites is usually epoxy resin. Typical applications include monocoque bodies for Formula 1 cars, helicopter rotor blades, tennis, table tennis, badminton and squash rackets, kayaks, cricket bats, and field hockey, ice hockey and lacrosse sticks.
Kevlar 149, the strongest and most crystalline Kevlar fiber, is an alternative in certain parts of aircraft construction. The wing leading edge is one application, Kevlar being less prone than carbon or glass fiber to breaking in bird collisions.
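To illustrate the strength-to-weight comparison made at the top of this article, the short sketch below computes specific strength (tensile strength divided by density) for Kevlar and a steel. The figures used are representative textbook values chosen for illustration, not numbers taken from this article, and the resulting multiple depends heavily on which steel grade is used for the comparison.

# Specific strength comparison; the property values are assumed, illustrative figures.
materials = {
    # name: (tensile strength in MPa, density in kg/m^3)
    "Kevlar 29": (3600, 1440),
    "high-strength steel wire": (2000, 7850),
}

for name, (strength_mpa, density) in materials.items():
    specific = strength_mpa * 1e6 / density / 1000  # kN*m/kg
    print(f"{name}: {specific:,.0f} kN*m/kg")
# Kevlar 29: 2,500 kN*m/kg; steel wire: ~255 kN*m/kg. On an equal-weight
# basis the fiber comes out several times stronger, in the spirit of the
# "five times stronger than steel" comparison cited above.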
Technology
Fabrics and fibers
null
16794
https://en.wikipedia.org/wiki/Kilobyte
Kilobyte
The kilobyte is a multiple of the unit byte for digital information. The International System of Units (SI) defines the prefix kilo as a multiplication factor of 1000 (10^3); therefore, one kilobyte is 1000 bytes. The internationally recommended unit symbol for the kilobyte is kB. In some areas of information technology, particularly in reference to random-access memory capacity, kilobyte instead typically refers to 1024 (2^10) bytes. This arises from the prevalence of sizes that are powers of two in modern digital memory architectures, coupled with the coincidence that 2^10 differs from 10^3 by less than 2.5%. A kibibyte is 1024 bytes. Definitions and usage Decimal (1000 bytes) In the International System of Units (SI) the metric prefix kilo means 1,000 (10^3); therefore, one kilobyte is 1000 bytes. The unit symbol is kB. This is the definition recommended by the International Electrotechnical Commission (IEC). This definition, and the related definitions of the prefixes mega (), giga (), etc., are most commonly used for data transfer rates in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard disk drives, flash-based storage, and DVDs. It is also consistent with the other uses of the metric prefixes in computing, such as CPU clock speeds or measures of performance. The international standard IEC 80000-13 uses the term "byte" to mean eight bits (1 B = 8 bit). Therefore, 1 kB = 8000 bit. One thousand kilobytes (1000 kB) is equal to one megabyte (1 MB), where 1 MB is one million bytes. Binary (1024 bytes) The term 'kilobyte' has traditionally been used to refer to 1024 bytes (2^10 B). The usage of the metric prefix kilo for binary multiples arose as a convenience, because 1024 is approximately 1000. The binary interpretation of metric prefixes is still prominently used by the Microsoft Windows operating system. Binary interpretation is also used for random-access memory capacity, such as main memory and CPU cache size, due to the prevalent binary addressing of memory. The binary meaning of the kilobyte for 1024 bytes typically uses the symbol KB, with an uppercase letter K. The B is sometimes omitted in informal use. For example, a processor with 65,536 bytes of cache memory might be said to have "64 K" of cache. In this convention, one thousand and twenty-four kilobytes (1024 KB) is equal to one megabyte (1 MB), where 1 MB is 1024^2 bytes. In December 1998, the IEC addressed such multiple usages and definitions by creating prefixes such as kibi, mebi, gibi, etc., to unambiguously denote powers of 1024. Thus the kibibyte, symbol KiB, represents 2^10 bytes = 1024 bytes. These prefixes are now part of IEC 80000-13. The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. The International System of Units restricts the use of the SI prefixes strictly to powers of 10. Use of term The Shugart SA-400 5¼-inch floppy disk (1976) held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". On the other hand, the Tandon 5¼-inch DD floppy format (1978) held 368,640 (which is 360×1024) bytes, but was advertised as "360 KB", following the 1024 convention. Early home computer systems would often advertise using the 1024 convention, hence the naming of the Commodore 64, Commodore 128, and the Amstrad CPC 464.
On modern systems, all versions of Microsoft Windows, including the newest, Windows 10, divide by 1024 and represent a 65,536-byte file as "64 KB". Conversely, Mac OS X Snow Leopard and newer represent this as 66 kB, rounding to the nearest 1000 bytes; file sizes are reported with decimal prefixes. The binary interpretation was still used in marketing and billing by some telecommunication companies, such as Vodafone, AT&T, Orange and Telstra. Data examples "This is an example of a text which is exactly a kilobyte (kB) large. In this case, this text is 10^3 bytes long, but if you were to add 24 extra characters to this string, it would be 2^10 bytes long, which is used in some fields such as information technology. Each character in this string (which includes the quotation marks at the end, by the way) is exactly one byte long, which is 8 bits. The bit is the fundamental unit of information, which represents a single yes or no. So, one could measure the amount of information in a single letter with 5 bits (because 2^5 is 32, there are 26 letters in the English language), but because we also use other characters, like numbers, capital/lowercase letters, symbols (like {}$#*&%!`~), spaces, and more. In a computer's memory, these may be represented with some binary string, such as 01010100 (which represents the letter T), usually this is distinguished from one million ten thousand one hundred by appending '0b' to the front, like 0b01010100." The Lord's Prayer, in Latin, is 296 bytes. The short story The Cask of Amontillado by Edgar Allan Poe, hosted on Project Gutenberg as an uncompressed plain text file, is 12,843 bytes: this is 12.8 kilobytes (divided by 1,000) or 12.5 kibibytes (divided by 1,024). The novel The Picture of Dorian Gray, by Oscar Wilde, hosted on Project Gutenberg as an uncompressed plain text file, is 428,952 bytes; this is 428.95 kilobytes (divided by 1,000) and 418.90 kibibytes (divided by 1,024). Great Expectations is 994,639 bytes, and Moby Dick is 1,191,763 bytes.
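A minimal sketch of the two conventions described above; the helper names are our own, not from any standard library:

# Decimal (SI) vs binary (IEC) interpretation of a byte count.
def si_kilobytes(n_bytes):
    return n_bytes / 1000   # kB, SI prefix: 10^3 bytes

def kibibytes(n_bytes):
    return n_bytes / 1024   # KiB, IEC binary prefix: 2^10 bytes

size = 65_536                            # the 65,536-byte file mentioned above
print(f"{si_kilobytes(size):.0f} kB")    # ~66 kB, the macOS-style reading
print(f"{kibibytes(size):.0f} KiB")      # 64 KiB, the Windows-style "64 KB"

print(f"{si_kilobytes(12_843):.1f} kB")  # 12.8 kB for The Cask of Amontillado
print(f"{kibibytes(12_843):.1f} KiB")    # 12.5 KiB for the same file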
Physical sciences
Information
Basics and measurement
16796
https://en.wikipedia.org/wiki/Kuiper%20belt
Kuiper belt
The Kuiper belt is a circumstellar disc in the outer Solar System, extending from the orbit of Neptune at 30 astronomical units (AU) to approximately 50 AU from the Sun. It is similar to the asteroid belt, but is far larger—20 times as wide and 20–200 times as massive. Like the asteroid belt, it consists mainly of small bodies or remnants from when the Solar System formed. While many asteroids are composed primarily of rock and metal, most Kuiper belt objects are composed largely of frozen volatiles (termed "ices"), such as methane, ammonia, and water. The Kuiper belt is home to most of the objects that astronomers generally accept as dwarf planets: Orcus, Pluto, Haumea, Quaoar, and Makemake. Some of the Solar System's moons, such as Neptune's Triton and Saturn's Phoebe, may have originated in the region. The Kuiper belt is named in honor of the Dutch astronomer Gerard Kuiper, who conjectured the existence of the belt in 1951. There were researchers before and after him who also speculated on its existence, such as Kenneth Edgeworth in the 1930s. The astronomer Julio Ángel Fernández published a paper in 1980 suggesting the existence of a comet belt beyond Neptune which could serve as a source for short-period comets. In 1992, minor planet (15760) Albion was discovered, the first Kuiper belt object (KBO) since Pluto (in 1930) and Charon (in 1978). Since its discovery, the number of known KBOs has increased to thousands, and more than 100,000 KBOs over in diameter are thought to exist. The Kuiper belt was initially thought to be the main repository for periodic comets, those with orbits lasting less than 200 years. Studies since the mid-1990s have shown that the belt is dynamically stable and that comets' true place of origin is the scattered disc, a dynamically active zone created by the outward motion of Neptune 4.5 billion years ago; scattered disc objects such as Eris have extremely eccentric orbits that take them as far as 100 AU from the Sun. The Kuiper belt is distinct from the hypothesized Oort cloud, which is believed to be a thousand times more distant and mostly spherical. The objects within the Kuiper belt, together with the members of the scattered disc and any potential Hills cloud or Oort cloud objects, are collectively referred to as trans-Neptunian objects (TNOs). Pluto is the largest and most massive member of the Kuiper belt and the largest and second-most-massive known TNO, surpassed only by Eris in the scattered disc. Originally considered a planet, Pluto's status as part of the Kuiper belt caused it to be reclassified as a dwarf planet in 2006. It is compositionally similar to many other objects of the Kuiper belt, and its orbital period is characteristic of a class of KBOs, known as "plutinos", that share the same 2:3 resonance with Neptune. The Kuiper belt and Neptune may be treated as a marker of the extent of the Solar System, alternatives being the heliopause and the distance at which the Sun's gravitational influence is matched by that of other stars (estimated to be between and ). History After the discovery of Pluto in 1930, many speculated that it might not be alone. The region now called the Kuiper belt was hypothesized in various forms for decades. It was only in 1992 that the first direct evidence for its existence was found. The number and variety of prior speculations on the nature of the Kuiper belt have led to continued uncertainty as to who deserves credit for first proposing it.
Hypotheses The first astronomer to suggest the existence of a trans-Neptunian population was Frederick C. Leonard. Soon after Pluto's discovery by Clyde Tombaugh in 1930, Leonard pondered whether it was "likely that in Pluto there has come to light the first of a series of ultra-Neptunian bodies, the remaining members of which still await discovery but which are destined eventually to be detected". That same year, astronomer Armin O. Leuschner suggested that Pluto "may be one of many long-period planetary objects yet to be discovered." In 1943, in the Journal of the British Astronomical Association, Kenneth Edgeworth hypothesized that, in the region beyond Neptune, the material within the primordial solar nebula was too widely spaced to condense into planets, and so rather condensed into a myriad of smaller bodies. From this he concluded that "the outer region of the solar system, beyond the orbits of the planets, is occupied by a very large number of comparatively small bodies" and that, from time to time, one of their number "wanders from its own sphere and appears as an occasional visitor to the inner solar system", becoming a comet. In 1951, in a paper in Astrophysics: A Topical Symposium, Gerard Kuiper speculated on a similar disc having formed early in the Solar System's evolution and concluded that the disc consisted of "remnants of original clusterings which have lost many members that became stray asteroids, much as has occurred with open galactic clusters dissolving into stars." In another paper, based upon a lecture Kuiper gave in 1950, also called On the Origin of the Solar System, Kuiper wrote about the "outermost region of the solar nebula, from 38 to 50 astr. units (i.e., just outside proto-Neptune)" where "condensation products (ices of H2O, NH3, CH4, etc.) must have formed, and the flakes must have slowly collected and formed larger aggregates, estimated to range up to 1 km. or more in size." He continued to write that "these condensations appear to account for the comets, in size, number and composition." According to Kuiper, "the planet Pluto, which sweeps through the whole zone from 30 to 50 astr. units, is held responsible for having started the scattering of the comets throughout the solar system." It is said that Kuiper was operating on the assumption, common in his time, that Pluto was the size of Earth and had therefore scattered these bodies out toward the Oort cloud or out of the Solar System; there would not be a Kuiper belt today if this were correct. The hypothesis took many other forms in the following decades. In 1962, physicist A. G. W. Cameron postulated the existence of "a tremendous mass of small material on the outskirts of the solar system". In 1964, Fred Whipple, who popularised the famous "dirty snowball" hypothesis for cometary structure, thought that a "comet belt" might be massive enough to cause the purported discrepancies in the orbit of Uranus that had sparked the search for Planet X, or, at the very least, massive enough to affect the orbits of known comets. Observations ruled out this hypothesis. In 1977, Charles Kowal discovered 2060 Chiron, an icy planetoid with an orbit between Saturn and Uranus. He used a blink comparator, the same device that had allowed Clyde Tombaugh to discover Pluto nearly 50 years before. In 1992, another object, 5145 Pholus, was discovered in a similar orbit. Today, an entire population of comet-like bodies, called the centaurs, is known to exist in the region between Jupiter and Neptune.
The centaurs' orbits are unstable and have dynamical lifetimes of a few million years. From the time of Chiron's discovery in 1977, astronomers have speculated that the centaurs therefore must be frequently replenished by some outer reservoir. Further evidence for the existence of the Kuiper belt later emerged from the study of comets. That comets have finite lifespans has been known for some time. As they approach the Sun, its heat causes their volatile surfaces to sublimate into space, gradually dispersing them. In order for comets to continue to be visible over the age of the Solar System, they must be replenished frequently. A proposal for such an area of replenishment is the Oort cloud, possibly a spherical swarm of comets extending beyond 50,000 AU from the Sun, first hypothesised by Dutch astronomer Jan Oort in 1950. The Oort cloud is thought to be the point of origin of long-period comets, which are those, like Hale–Bopp, with orbits lasting thousands of years. There is another comet population, known as short-period or periodic comets, consisting of those comets that, like Halley's Comet, have orbital periods of less than 200 years. By the 1970s, the rate at which short-period comets were being discovered was becoming increasingly inconsistent with their having emerged solely from the Oort cloud. For an Oort cloud object to become a short-period comet, it would first have to be captured by the giant planets. In a paper published in Monthly Notices of the Royal Astronomical Society in 1980, Uruguayan astronomer Julio Fernández stated that for every short-period comet to be sent into the inner Solar System from the Oort cloud, 600 would have to be ejected into interstellar space. He speculated that a comet belt between 35 and 50 AU would be required to account for the observed number of comets. Following up on Fernández's work, in 1988 the Canadian team of Martin Duncan, Tom Quinn and Scott Tremaine ran a number of computer simulations to determine if all observed comets could have arrived from the Oort cloud. They found that the Oort cloud could not account for all short-period comets, particularly as short-period comets are clustered near the plane of the Solar System, whereas Oort-cloud comets tend to arrive from any point in the sky. With a "belt", as Fernández described it, added to the formulations, the simulations matched observations. Reportedly because the words "Kuiper" and "comet belt" appeared in the opening sentence of Fernández's paper, Tremaine named this hypothetical region the "Kuiper belt". Discovery In 1987, astronomer David Jewitt, then at MIT, became increasingly puzzled by "the apparent emptiness of the outer Solar System". He encouraged then-graduate student Jane Luu to aid him in his endeavour to locate another object beyond Pluto's orbit, because, as he told her, "If we don't, nobody will." Using telescopes at the Kitt Peak National Observatory in Arizona and the Cerro Tololo Inter-American Observatory in Chile, Jewitt and Luu conducted their search in much the same way as Clyde Tombaugh and Charles Kowal had, with a blink comparator. Initially, examination of each pair of plates took about eight hours, but the process was sped up with the arrival of electronic charge-coupled devices or CCDs, which, though their field of view was narrower, were not only more efficient at collecting light (they retained 90% of the light that hit them, rather than the 10% achieved by photographs) but allowed the blinking process to be done virtually, on a computer screen.
Today, CCDs form the basis for most astronomical detectors. In 1988, Jewitt moved to the Institute for Astronomy at the University of Hawaii. Luu later joined him to work at the University of Hawaii's 2.24 m telescope at Mauna Kea. Eventually, the field of view for CCDs had increased to 1024 by 1024 pixels, which allowed searches to be conducted far more rapidly. Finally, after five years of searching, Jewitt and Luu announced on 30 August 1992 the "Discovery of the candidate Kuiper belt object 1992 QB1". This object would later be named 15760 Albion. Six months later, they discovered a second object in the region, (181708) 1993 FW. By 2018, over 2000 Kuiper belt objects had been discovered. Over one thousand bodies were found in the belt in the twenty years (1992–2012) after the discovery of 1992 QB1 (named 15760 Albion in 2018), showing a vast belt of bodies in addition to Pluto and Albion. Even in the 2010s the full extent and nature of Kuiper belt bodies was largely unknown. Finally, the unmanned spacecraft New Horizons conducted the first KBO flybys, providing much closer observations of the Plutonian system (2015) and then Arrokoth (2019). Studies conducted since the trans-Neptunian region was first charted have shown that the region now called the Kuiper belt is not the point of origin of short-period comets, but that they instead derive from a linked population called the scattered disc. The scattered disc was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disc). Because the scattered disc is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disc is now seen as the most likely point of origin for periodic comets. Name Astronomers sometimes use the alternative name Edgeworth–Kuiper belt to credit Edgeworth, and KBOs are occasionally referred to as EKOs. Brian G. Marsden claims that neither deserves true credit: "Neither Edgeworth nor Kuiper wrote about anything remotely like what we are now seeing, but Fred Whipple did". David Jewitt comments: "If anything ... Fernández most nearly deserves the credit for predicting the Kuiper Belt." KBOs are sometimes called "kuiperoids", a name suggested by Clyde Tombaugh. The term "trans-Neptunian object" (TNO) is recommended for objects in the belt by several scientific groups because the term is less controversial than all others—it is not an exact synonym though, as TNOs include all objects orbiting the Sun past the orbit of Neptune, not just those in the Kuiper belt. Structure At its fullest extent (but excluding the scattered disc), including its outlying regions, the Kuiper belt stretches from roughly 30 to 55 AU. The main body of the belt is generally accepted to extend from the 2:3 mean-motion resonance (see below) at 39.5 AU to the 1:2 resonance at roughly 48 AU. The Kuiper belt is quite thick, with the main concentration extending as much as ten degrees outside the ecliptic plane and a more diffuse distribution of objects extending several times farther. Overall it more resembles a torus or doughnut than a belt. Its mean position is inclined to the ecliptic by 1.86 degrees. The presence of Neptune has a profound effect on the Kuiper belt's structure due to orbital resonances.
Over a timescale comparable to the age of the Solar System, Neptune's gravity destabilises the orbits of any objects that happen to lie in certain regions, and either sends them into the inner Solar System or out into the scattered disc or interstellar space. This causes the Kuiper belt to have pronounced gaps in its current layout, similar to the Kirkwood gaps in the asteroid belt. In the region between 40 and 42 AU, for instance, no objects can retain a stable orbit over such times, and any observed in that region must have migrated there relatively recently. Classical belt Between the 2:3 and 1:2 resonances with Neptune, at approximately 42–48 AU, the gravitational interactions with Neptune occur over an extended timescale, and objects can exist with their orbits essentially unaltered. This region is known as the classical Kuiper belt, and its members comprise roughly two thirds of KBOs observed to date. Because the first modern KBO discovered (Albion, but long called (15760) 1992 QB1) is considered the prototype of this group, classical KBOs are often referred to as cubewanos ("Q-B-1-os"). The guidelines established by the IAU demand that classical KBOs be given names of mythological beings associated with creation. The classical Kuiper belt appears to be a composite of two separate populations. The first, known as the "dynamically cold" population, has orbits much like the planets': nearly circular, with an orbital eccentricity of less than 0.1, and with relatively low inclinations up to about 10° (they lie close to the plane of the Solar System rather than at an angle). The cold population also contains a concentration of objects, referred to as the kernel, with semi-major axes at 44–44.5 AU. The second, the "dynamically hot" population, has orbits much more inclined to the ecliptic, by up to 30°. The two populations have been named this way not because of any major difference in temperature, but from analogy to particles in a gas, which increase their relative velocity as they are heated. Not only are the two populations in different orbits; the cold population also differs in color and albedo (being redder and brighter), has a larger fraction of binary objects, has a different size distribution, and lacks very large objects. The mass of the dynamically cold population is roughly one-thirtieth the mass of the hot population. The difference in colors may be a reflection of different compositions, which suggests they formed in different regions. The hot population is proposed to have formed near Neptune's original orbit and to have been scattered out during the migration of the giant planets. The cold population, on the other hand, has been proposed to have formed more or less in its current position because the loose binaries would be unlikely to survive encounters with Neptune. Although the Nice model appears to be able to at least partially explain a compositional difference, it has also been suggested the color difference may reflect differences in surface evolution. Resonances When an object's orbital period is an exact ratio of Neptune's (a situation called a mean-motion resonance), it can become locked in a synchronised motion with Neptune and avoid being perturbed away if their relative alignments are appropriate.
If, for instance, an object orbits the Sun twice for every three Neptune orbits, and if it reaches perihelion with Neptune a quarter of an orbit away from it, then whenever it returns to perihelion, Neptune will always be in about the same relative position as it began, because it will have completed 1½ orbits in the same time. This is known as the 2:3 (or 3:2) resonance, and it corresponds to a characteristic semi-major axis of about 39.4 AU. This 2:3 resonance is populated by about 200 known objects, including Pluto together with its moons. In recognition of this, the members of this family are known as plutinos. Many plutinos, including Pluto, have orbits that cross that of Neptune, although their resonance means they can never collide. Plutinos have high orbital eccentricities, suggesting that they are not native to their current positions but were instead thrown haphazardly into their orbits by the migrating Neptune. IAU guidelines dictate that all plutinos must, like Pluto, be named for underworld deities. The 1:2 resonance (whose objects complete half an orbit for each of Neptune's) corresponds to semi-major axes of ~47.7 AU, and is sparsely populated. Its residents are sometimes referred to as twotinos. Other resonances also exist at 3:4, 3:5, 4:7, and 2:5. Neptune has a number of trojan objects, which occupy its Lagrangian points, gravitationally stable regions leading and trailing it in its orbit. Neptune trojans are in a 1:1 mean-motion resonance with Neptune and often have very stable orbits. Additionally, there is a relative absence of objects with semi-major axes below 39 AU that apparently cannot be explained by the present resonances. The currently accepted hypothesis for the cause of this is that as Neptune migrated outward, unstable orbital resonances moved gradually through this region, and thus any objects within it were swept up or gravitationally ejected from it. Kuiper cliff The 1:2 resonance at 47.8 AU appears to be an edge beyond which few objects are known. It is not clear whether it is actually the outer edge of the classical belt or just the beginning of a broad gap. Objects have been detected at the 2:5 resonance at roughly 55 AU, well outside the classical belt; predictions of a large number of bodies in classical orbits between these resonances have not been verified through observation. Based on estimations of the primordial mass required to form Uranus and Neptune, as well as bodies as large as Pluto (see Mass and size distribution below), earlier models of the Kuiper belt had suggested that the number of large objects would increase by a factor of two beyond 50 AU, so this sudden drastic falloff, known as the Kuiper cliff, was unexpected, and to date its cause is unknown. Bernstein, Trilling, et al. (2003) found evidence that the rapid decline in objects of 100 km or more in radius beyond 50 AU is real, and not due to observational bias. Possible explanations include that material at that distance was too scarce or too scattered to accrete into large objects, or that subsequent processes removed or destroyed those that did. Patryk Lykawka of Kobe University claimed that the gravitational attraction of an unseen large planetary object, perhaps the size of Earth or Mars, might be responsible.
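The resonance locations quoted in this section follow from Kepler's third law, which ties a resonant object's semi-major axis to Neptune's through the period ratio. A small sketch in Python (Neptune's semi-major axis of roughly 30.1 AU is the only physical input; the printed values are approximate):

```python
# Kepler's third law: P^2 is proportional to a^3, so for a given period
# ratio, a = a_neptune * (P / P_neptune)^(2/3).
A_NEPTUNE = 30.1  # AU, approximate semi-major axis of Neptune

def resonance_semi_major_axis(neptune_orbits, object_orbits):
    """Semi-major axis (AU) where the object completes `object_orbits`
    revolutions for every `neptune_orbits` revolutions of Neptune."""
    period_ratio = neptune_orbits / object_orbits  # object P / Neptune P
    return A_NEPTUNE * period_ratio ** (2.0 / 3.0)

for label, (nep, obj) in {"3:2 (plutinos)": (3, 2),
                          "2:1 (twotinos)": (2, 1),
                          "5:2": (5, 2)}.items():
    print(f"{label}: {resonance_semi_major_axis(nep, obj):.1f} AU")
# Prints roughly 39.4 AU, 47.8 AU and 55.4 AU, matching the locations
# of the plutinos, the twotinos/Kuiper cliff, and the 2:5 objects above.
```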
An analysis of the TNO data available prior to September 2023 shows that the distribution of objects at the outer rim of the classical Kuiper belt resembles that of the outer main asteroid belt, with a gap at about 72 AU, far from any mean-motion resonances with Neptune; the outer main asteroid belt exhibits a gap induced by the 5:6 mean-motion resonance with Jupiter at 5.875 AU. Origin The precise origins of the Kuiper belt and its complex structure are still unclear, and astronomers are awaiting the completion of several wide-field survey telescopes such as Pan-STARRS and the future LSST, which should reveal many currently unknown KBOs. These surveys will provide data that will help determine answers to these questions. Pan-STARRS 1 finished its primary science mission in 2014, and the full data from the Pan-STARRS 1 surveys were published in 2019, helping reveal many more KBOs. The Kuiper belt is thought to consist of planetesimals, fragments from the original protoplanetary disc around the Sun that failed to fully coalesce into planets and instead formed into smaller bodies, the largest of them at most a few thousand kilometers in diameter. Studies of the crater counts on Pluto and Charon revealed a scarcity of small craters, suggesting that such objects formed directly as sizeable objects in the range of tens of kilometers in diameter rather than being accreted from much smaller, roughly kilometer-scale bodies. Hypothetical mechanisms for the formation of these larger bodies include the gravitational collapse of clouds of pebbles concentrated between eddies in a turbulent protoplanetary disk or in streaming instabilities. These collapsing clouds may fragment, forming binaries. Modern computer simulations show the Kuiper belt to have been strongly influenced by Jupiter and Neptune, and also suggest that neither Uranus nor Neptune could have formed in their present positions, because too little primordial matter existed at that range to produce objects of such high mass. Instead, these planets are estimated to have formed closer to Jupiter. Scattering of planetesimals early in the Solar System's history would have led to migration of the orbits of the giant planets: Saturn, Uranus, and Neptune drifted outwards, whereas Jupiter drifted inwards. Eventually, the orbits shifted to the point where Jupiter and Saturn reached an exact 1:2 resonance; Jupiter orbited the Sun twice for every one Saturn orbit. The gravitational repercussions of such a resonance ultimately destabilized the orbits of Uranus and Neptune, causing them to be scattered outward onto high-eccentricity orbits that crossed the primordial planetesimal disc. While Neptune's orbit was highly eccentric, its mean-motion resonances overlapped and the orbits of the planetesimals evolved chaotically, allowing planetesimals to wander outward as far as Neptune's 1:2 resonance to form a dynamically cold belt of low-inclination objects. Later, after its eccentricity decreased, Neptune's orbit expanded outward toward its current position. Many planetesimals were captured into and remain in resonances during this migration; others evolved onto higher-inclination and lower-eccentricity orbits and escaped from the resonances onto stable orbits. Many more planetesimals were scattered inward, with small fractions being captured as Jupiter trojans, as irregular satellites orbiting the giant planets, and as outer belt asteroids.
The remainder were scattered outward again by Jupiter and in most cases ejected from the Solar System, reducing the primordial Kuiper belt population by 99% or more. The original version of the currently most popular model, the "Nice model", reproduces many characteristics of the Kuiper belt such as the "cold" and "hot" populations, resonant objects, and a scattered disc, but it still fails to account for some of the characteristics of their distributions. The model predicts a higher average eccentricity in classical KBO orbits than is observed (0.10–0.13 versus 0.07) and its predicted inclination distribution contains too few high-inclination objects. In addition, the frequency of binary objects in the cold belt, many of which are far apart and loosely bound, also poses a problem for the model. These are predicted to have been separated during encounters with Neptune, leading some to propose that the cold disc formed at its current location, representing the only truly local population of small bodies in the Solar System. A recent modification of the Nice model has the Solar System begin with five giant planets, including an additional ice giant, in a chain of mean-motion resonances. About 400 million years after the formation of the Solar System, the resonance chain is broken. Instead of being scattered into the disc, the ice giants first migrate outward several AU. This divergent migration eventually leads to a resonance crossing, destabilizing the orbits of the planets. The extra ice giant encounters Saturn and is scattered inward onto a Jupiter-crossing orbit and, after a series of encounters, is ejected from the Solar System. The remaining planets then continue their migration until the planetesimal disc is nearly depleted, with small fractions remaining in various locations. As in the original Nice model, objects are captured into resonances with Neptune during its outward migration. Some remain in the resonances, others evolve onto higher-inclination, lower-eccentricity orbits, and are released onto stable orbits forming the dynamically hot classical belt. The hot belt's inclination distribution can be reproduced if Neptune migrated from 24 AU to 30 AU on a 30-Myr timescale. When Neptune migrates to 28 AU, it has a gravitational encounter with the extra ice giant. Objects captured from the cold belt into the 1:2 mean-motion resonance with Neptune are left behind as a local concentration at 44 AU when this encounter causes Neptune's semi-major axis to jump outward. The objects deposited in the cold belt include some loosely bound 'blue' binaries originating from closer than the cold belt's current location. If Neptune's eccentricity remains small during this encounter, the chaotic evolution of orbits of the original Nice model is avoided and a primordial cold belt is preserved. In the later phases of Neptune's migration, a slow sweeping of mean-motion resonances removes the higher-eccentricity objects from the cold belt, truncating its eccentricity distribution. Composition Being distant from the Sun and major planets, Kuiper belt objects are thought to be relatively unaffected by the processes that have shaped and altered other Solar System objects; thus, determining their composition would provide substantial information on the makeup of the earliest Solar System. Due to their small size and extreme distance from Earth, the chemical makeup of KBOs is very difficult to determine. The principal method by which astronomers determine the composition of a celestial object is spectroscopy.
When an object's light is broken into its component colors, an image akin to a rainbow is formed. This image is called a spectrum. Different substances absorb light at different wavelengths, and when the spectrum for a specific object is unravelled, dark lines (called absorption lines) appear where the substances within it have absorbed that particular wavelength of light. Every element or compound has its own unique spectroscopic signature, and by reading an object's full spectral "fingerprint", astronomers can determine its composition. Analysis indicates that Kuiper belt objects are composed of a mixture of rock and a variety of ices such as water, methane, and ammonia. The temperature of the belt is only about 50 K, so many compounds that would be gaseous closer to the Sun remain solid. The densities and rock–ice fractions are known for only a small number of objects for which the diameters and the masses have been determined. The diameter can be determined by imaging with a high-resolution telescope such as the Hubble Space Telescope, by the timing of an occultation when an object passes in front of a star or, most commonly, by using the albedo of an object calculated from its infrared emissions. The masses are determined using the semi-major axes and periods of satellites, which are therefore known only for a few binary objects. The densities range from less than 0.4 to 2.6 g/cm3. The least dense objects are thought to be largely composed of ice and have significant porosity. The densest objects are likely composed of rock with a thin crust of ice. There is a trend of low densities for small objects and high densities for the largest objects. One possible explanation for this trend is that ice was lost from the surface layers when differentiated objects collided to form the largest objects. Initially, detailed analysis of KBOs was impossible, and so astronomers were only able to determine the most basic facts about their makeup, primarily their color. These first data showed a broad range of colors among KBOs, ranging from neutral grey to deep red. This suggested that their surfaces were composed of a wide range of compounds, from dirty ices to hydrocarbons. This diversity was startling, as astronomers had expected KBOs to be uniformly dark, having lost most of the volatile ices from their surfaces to the effects of cosmic rays. Various solutions were suggested for this discrepancy, including resurfacing by impacts or outgassing. Jewitt and Luu's spectral analysis of the known Kuiper belt objects in 2001 found that the variation in color was too extreme to be easily explained by random impacts. The radiation from the Sun is thought to have chemically altered methane on the surface of KBOs, producing products such as tholins. Makemake has been shown to possess a number of hydrocarbons derived from the radiation-processing of methane, including ethane, ethylene and acetylene. Although to date most KBOs still appear spectrally featureless due to their faintness, there have been a number of successes in determining their composition. In 1996, Robert H. Brown et al. acquired spectroscopic data on the KBO 1993 SC, which revealed that its surface composition is markedly similar to that of Pluto, as well as Neptune's moon Triton, with large amounts of methane ice. For the smaller objects, only colors and in some cases the albedos have been determined. These objects largely fall into two classes: gray with low albedos, or very red with higher albedos. 
The difference in colors and albedos is hypothesized to be due to the retention or the loss of hydrogen sulfide (H2S) on the surface of these objects, with the surfaces of those that formed far enough from the Sun to retain H2S being reddened due to irradiation. The largest KBOs, such as Pluto and Quaoar, have surfaces rich in volatile compounds such as methane, nitrogen and carbon monoxide; the presence of these molecules is likely due to their moderate vapor pressure in the 30–50 K temperature range of the Kuiper belt. This allows them to occasionally boil off their surfaces and then fall again as snow, whereas compounds with higher boiling points would remain solid. The relative abundances of these three compounds in the largest KBOs are directly related to their surface gravity and ambient temperature, which determine which of them they can retain. Water ice has been detected in several KBOs, including members of the Haumea family, mid-sized objects such as 38628 Huya and 20000 Varuna, and also on some small objects. The presence of crystalline ice on large and mid-sized objects, including 50000 Quaoar where ammonia hydrate has also been detected, may indicate past tectonic activity aided by melting point lowering due to the presence of ammonia. Mass and size distribution Despite its vast extent, the collective mass of the Kuiper belt is relatively low. The total mass of the dynamically hot population is estimated to be 1% the mass of the Earth. The dynamically cold population is estimated to be much smaller, with only 0.03% the mass of the Earth. While the dynamically hot population is thought to be the remnant of a much larger population that formed closer to the Sun and was scattered outward during the migration of the giant planets, the dynamically cold population is thought to have formed at its current location. The most recent estimate (2018) puts the total mass of the Kuiper belt at a few hundredths of an Earth mass, based on the influence that it exerts on the motion of planets. The small total mass of the dynamically cold population presents some problems for models of the Solar System's formation, because a sizable mass is required for the accretion of the larger KBOs. If the cold classical Kuiper belt had always had its current low density, these large objects simply could not have formed by the collision and mergers of smaller planetesimals. Moreover, the eccentricity and inclination of current orbits make the encounters quite "violent", resulting in destruction rather than accretion. The removal of a large fraction of the mass of the dynamically cold population is thought to be unlikely. Neptune's current influence is too weak to explain such a massive "vacuuming", and the extent of mass loss by collisional grinding is limited by the presence of loosely bound binaries in the cold disk, which are likely to be disrupted in collisions. Instead of forming from the collisions of smaller planetesimals, the larger objects may have formed directly from the collapse of clouds of pebbles. The size distributions of the Kuiper belt objects follow a number of power laws. A power law describes the relationship between N(D) (the number of objects with diameter greater than D) and D. The number of objects is inversely proportional to some power of the diameter D: dN/dD ∝ D^(−q), where the exponent q is referred to as the brightness slope. Integrating yields (assuming q is not 1) N(D) ∝ D^(1−q) + C, where the constant C may be non-zero only if the power law doesn't apply at high values of D.
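As a sketch of what such a power law implies, the ratio of counts between two diameters depends only on the slope q (the helper below is illustrative, not taken from any survey pipeline):

```python
def count_ratio(d_small, d_large, q):
    """Ratio of cumulative counts N(>d_small) / N(>d_large) for a pure
    power law N(>D) proportional to D**(1 - q)."""
    return (d_small / d_large) ** (1.0 - q)

# For q = 4, each factor-of-two step down in diameter multiplies the
# count by 2**(q - 1) = 8, the figure quoted in the text below.
print(count_ratio(100.0, 200.0, q=4))  # -> 8.0
```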
Early estimates that were based on measurements of the apparent magnitude distribution found a value of q = 4 ± 0.5, which implied that there are 8 (= 2³) times more objects in the 100–200 km range than in the 200–400 km range. Recent research has revealed that the size distributions of the hot classical and cold classical objects have differing slopes. The slope for the hot objects is q = 5.3 at large diameters and q = 2.0 at small diameters, with the change in slope at 110 km. The slope for the cold objects is q = 8.2 at large diameters and q = 2.9 at small diameters, with a change in slope at 140 km. The size distributions of the scattering objects, the plutinos, and the Neptune trojans have slopes similar to the other dynamically hot populations, but may instead have a divot, a sharp decrease in the number of objects below a specific size. This divot is hypothesized to be due either to the collisional evolution of the population, or to the population having formed with no objects below this size, with the smaller objects being fragments of the original objects. The smallest known Kuiper belt objects, with radii below 1 km, have only been detected by stellar occultations, as they are far too dim (magnitude 35) to be seen directly by telescopes such as the Hubble Space Telescope. The first reports of these occultations were from Schlichting et al. in December 2009, who announced the discovery of a small, sub-kilometre-radius Kuiper belt object in archival Hubble photometry from March 2007. The object was detected by Hubble's star-tracking system when it briefly occulted a star for 0.3 seconds. In a subsequent study published in December 2012, Schlichting et al. performed a more thorough analysis of archival Hubble photometry and reported another occultation event by a sub-kilometre-sized Kuiper belt object. From the occultation events detected in 2009 and 2012, Schlichting et al. determined the Kuiper belt object size distribution slope to be q = 3.6 ± 0.2 or q = 3.8 ± 0.2, with the assumptions of a single power law and a uniform ecliptic latitude distribution. Their result implies a strong deficit of sub-kilometer-sized Kuiper belt objects compared to extrapolations from the population of larger Kuiper belt objects with diameters above 90 km. Observations made by NASA's New Horizons Venetia Burney Student Dust Counter showed "higher than model-predicted dust fluxes" as far out as 55 AU, not explained by any existing model. Scattered objects The scattered disc is a sparsely populated region, overlapping with the Kuiper belt but extending to beyond 100 AU. Scattered disc objects (SDOs) have very elliptical orbits, often also very inclined to the ecliptic. Most models of Solar System formation show both KBOs and SDOs first forming in a primordial belt, with later gravitational interactions, particularly with Neptune, sending the objects outward, some into stable orbits (the KBOs) and some into unstable orbits, the scattered disc. Due to its unstable nature, the scattered disc is suspected to be the point of origin of many of the Solar System's short-period comets. Their dynamic orbits occasionally force them into the inner Solar System, first becoming centaurs, and then short-period comets. According to the Minor Planet Center, which officially catalogues all trans-Neptunian objects, a KBO is any object that orbits exclusively within the defined Kuiper belt region, regardless of origin or composition.
Objects found outside the belt are classed as scattered objects. In some scientific circles the term "Kuiper belt object" has become synonymous with any icy minor planet native to the outer Solar System assumed to have been part of that initial class, even if its orbit during the bulk of Solar System history has been beyond the Kuiper belt (e.g. in the scattered-disc region). They often describe scattered disc objects as "scattered Kuiper belt objects". Eris, which is known to be more massive than Pluto, is often referred to as a KBO, but is technically an SDO. A consensus among astronomers as to the precise definition of the Kuiper belt has yet to be reached. The centaurs, which are not normally considered part of the Kuiper belt, are also thought to be scattered objects, the only difference being that they were scattered inward, rather than outward. The Minor Planet Center groups the centaurs and the SDOs together as scattered objects. Triton During its period of migration, Neptune is thought to have captured a large KBO, Triton, which is the only large moon in the Solar System with a retrograde orbit (that is, it orbits opposite to Neptune's rotation). This suggests that, unlike the large moons of Jupiter, Saturn and Uranus, which are thought to have coalesced from rotating discs of material around their young parent planets, Triton was a fully formed body that was captured from surrounding space. Gravitational capture of an object is not easy: it requires some mechanism to slow down the object enough to be caught by the larger object's gravity. A possible explanation is that Triton was part of a binary when it encountered Neptune. (Many KBOs are members of binaries. See below.) Ejection of the other member of the binary by Neptune could then explain Triton's capture. Triton is only 14% larger than Pluto, and spectral analysis of both worlds shows that their surfaces are largely composed of similar materials, such as methane and carbon monoxide. All this points to the conclusion that Triton was once a KBO that was captured by Neptune during its outward migration. Largest KBOs Since 2000, a number of KBOs with diameters between 500 km and more than half that of Pluto (diameter 2370 km) have been discovered. Quaoar, a classical KBO discovered in 2002, is over 1,200 km across. Makemake and Haumea, both announced on 29 July 2005, are larger still. Other large objects include 28978 Ixion (discovered in 2001) and 20000 Varuna (discovered in 2000). Pluto The discovery of these large KBOs in orbits similar to Pluto's led many to conclude that, aside from its relative size, Pluto was not particularly different from other members of the Kuiper belt. Not only are these objects similar to Pluto in size, but many also have natural satellites, and are of similar composition (methane and carbon monoxide have been found both on Pluto and on the largest KBOs). Thus, just as Ceres was considered a planet before the discovery of its fellow asteroids, some began to suggest that Pluto might also be reclassified. The issue was brought to a head by the discovery of Eris, an object in the scattered disc far beyond the Kuiper belt, that is now known to be 27% more massive than Pluto. (Eris was originally thought to be larger than Pluto by volume, but the New Horizons mission found this not to be the case.)
In response, the International Astronomical Union (IAU) was forced to define what a planet is for the first time, and in so doing included in their definition that a planet must have "cleared the neighbourhood around its orbit". As Pluto shares its orbit with many other sizable objects, it was deemed not to have cleared its orbit and was thus reclassified from a planet to a dwarf planet, making it a member of the Kuiper belt. It is not clear how many KBOs are large enough to be dwarf planets. Consideration of the surprisingly low densities of many dwarf-planet candidates suggests that not many are. Pluto, Haumea, and Makemake, among others, are accepted by most astronomers; several further bodies have also been proposed. Satellites The six largest TNOs (Eris, Pluto, Gonggong, Makemake, Haumea and Quaoar) are all known to have satellites, and two of them have more than one. A higher percentage of the larger KBOs have satellites than the smaller objects in the Kuiper belt, suggesting that a different formation mechanism was responsible. There are also a high number of binaries (two objects close enough in mass to be orbiting "each other") in the Kuiper belt. The most notable example is the Pluto–Charon binary, but it is estimated that around 11% of KBOs exist in binaries. Exploration On 19 January 2006, New Horizons, the first spacecraft to explore the Kuiper belt, was launched; it flew by Pluto on 14 July 2015. Beyond the Pluto flyby, the mission's goal was to locate and investigate other, farther objects in the Kuiper belt. On 15 October 2014, it was revealed that Hubble had uncovered three potential targets, provisionally designated PT1 ("potential target 1"), PT2 and PT3 by the New Horizons team. The objects' diameters were estimated to be in the 30–55 km range (too small to be seen by ground telescopes), at distances from the Sun of 43–44 AU, which would put the encounters in the 2018–2019 period. The initial estimated probabilities that these objects were reachable within New Horizons' fuel budget were 100%, 7%, and 97%, respectively. All were members of the "cold" (low-inclination, low-eccentricity) classical Kuiper belt, and thus very different from Pluto. PT1 (given the temporary designation "1110113Y" on the HST web site), the most favorably situated object, was magnitude 26.8, 30–45 km in diameter, and was encountered in January 2019. Once sufficient orbital information was provided, the Minor Planet Center gave official designations to the three target KBOs. By the fall of 2014, a possible fourth target had been eliminated by follow-up observations. PT2 was out of the running before the Pluto flyby. On 26 August 2015, the first target, nicknamed "Ultima Thule" and later named 486958 Arrokoth, was chosen. Course adjustment took place in late October and early November 2015, leading to a flyby in January 2019. On 1 July 2016, NASA approved additional funding for New Horizons to visit the object. On 2 December 2015, New Horizons detected at long range the object later named 15810 Arawn. On 1 January 2019, New Horizons successfully flew by Arrokoth, returning data showing Arrokoth to be a contact binary 32 km long by 16 km wide. The Ralph instrument aboard New Horizons confirmed Arrokoth's red color. Data from the fly-by continued to be downloaded over the following 20 months. No follow-up missions for New Horizons are planned, though at least two concepts for missions that would return to orbit or land on Pluto have been studied.
Beyond Pluto, there exist many large KBOs that cannot be visited with New Horizons, such as the dwarf planets Makemake and Haumea. New missions would be tasked to explore and study these objects in detail. Thales Alenia Space has studied the logistics of an orbiter mission to Haumea, a high-priority scientific target due to its status as the parent body of a collisional family that includes several other TNOs, as well as Haumea's ring and two moons. The lead author, Joel Poncy, has advocated for new technology that would allow spacecraft to reach and orbit KBOs in 10–20 years or less. New Horizons Principal Investigator Alan Stern has informally suggested missions that would fly by the planets Uranus or Neptune before visiting new KBO targets, thus furthering the exploration of the Kuiper belt while also visiting these ice giant planets for the first time since the Voyager 2 flybys in the 1980s. Design studies and concept missions Quaoar has been considered as a flyby target for a probe tasked with exploring the interstellar medium, as it currently lies near the heliospheric nose; Pontus Brandt at Johns Hopkins Applied Physics Laboratory and his colleagues have studied a probe that would fly by Quaoar in the 2030s before continuing to the interstellar medium through the heliospheric nose. Their interests in Quaoar include its likely disappearing methane atmosphere and cryovolcanism. The mission studied by Brandt and his colleagues would launch using SLS and achieve 30 km/s using a Jupiter flyby. Alternatively, for an orbiter mission, a study published in 2012 concluded that Ixion and Huya are among the most feasible targets. For instance, the authors calculated that an orbiter mission could reach Ixion after 17 years of cruise time if launched in 2039. Extrasolar Kuiper belts By 2006, astronomers had resolved dust discs thought to be Kuiper belt-like structures around nine stars other than the Sun. They appear to fall into two categories: wide belts, with radii of over 50 AU, and narrow belts (tentatively like that of the Solar System) with radii of between 20 and 30 AU and relatively sharp boundaries. Beyond this, 15–20% of solar-type stars have an observed infrared excess that is suggestive of massive Kuiper-belt-like structures. Most known debris discs around other stars are fairly young, but the two debris discs imaged by the Hubble Space Telescope in January 2006 (shown on the right) are old enough (roughly 300 million years) to have settled into stable configurations. The left image is a "top view" of a wide belt, and the right image is an "edge view" of a narrow belt. Computer simulations of dust in the Kuiper belt suggest that when it was younger, it may have resembled the narrow rings seen around younger stars.
Physical sciences
Solar System
null
16803
https://en.wikipedia.org/wiki/Ketone
Ketone
In organic chemistry, a ketone is an organic compound with the structure R−C(=O)−R', where R and R' can be a variety of carbon-containing substituents. Ketones contain a carbonyl group (a carbon-oxygen double bond, C=O). The simplest ketone is acetone (where R and R' are both methyl), with the formula (CH3)2CO. Many ketones are of great importance in biology and industry. Examples include many sugars (ketoses), many steroids (e.g., testosterone), and the solvent acetone. Nomenclature and etymology The word ketone is derived from Aketon, an old German word for acetone. According to the rules of IUPAC nomenclature, ketone names are derived by changing the suffix -ane of the parent alkane to -anone. Typically, the position of the carbonyl group is denoted by a number, but traditional nonsystematic names are still generally used for the most important ketones, for example acetone and benzophenone. These nonsystematic names are considered retained IUPAC names, although some introductory chemistry textbooks use systematic names such as "2-propanone" or "propan-2-one" for the simplest ketone (CH3COCH3) instead of "acetone". The derived names of ketones are obtained by writing separately the names of the two alkyl groups attached to the carbonyl group, followed by "ketone" as a separate word. Traditionally the names of the alkyl groups were written in order of increasing complexity, for example methyl ethyl ketone. However, according to the rules of IUPAC nomenclature, the alkyl groups are written alphabetically, for example ethyl methyl ketone. When the two alkyl groups are the same, the prefix "di-" is added before the name of the alkyl group. The positions of other groups are indicated by Greek letters, the α-carbon being the atom adjacent to the carbonyl group. Although used infrequently, oxo is the IUPAC prefix for the oxo group (=O), used when the ketone does not have the highest priority. Other prefixes, however, are also used. For some common chemicals (mainly in biochemistry), keto refers to the ketone functional group. Structure and bonding The ketone carbon is often described as sp2 hybridized, a description that covers both its electronic and molecular structure. Ketones are trigonal planar around the ketonic carbon, with C–C–O and C–C–C bond angles of approximately 120°. Ketones differ from aldehydes in that the carbonyl group (C=O) is bonded to two carbons within a carbon skeleton. In aldehydes, the carbonyl is bonded to one carbon and one hydrogen, and is located at the end of a carbon chain. Ketones are also distinct from other carbonyl-containing functional groups, such as carboxylic acids, esters and amides. The carbonyl group is polar because the electronegativity of the oxygen is greater than that of carbon. Thus, ketones are nucleophilic at oxygen and electrophilic at carbon. Because the carbonyl group interacts with water by hydrogen bonding, ketones are typically more soluble in water than the related methylene compounds. Ketones are hydrogen-bond acceptors. Ketones are not usually hydrogen-bond donors and cannot hydrogen-bond to themselves. Because of their inability to serve as both hydrogen-bond donors and acceptors, ketones tend not to "self-associate" and are more volatile than alcohols and carboxylic acids of comparable molecular weights. These factors relate to the pervasiveness of ketones in perfumery and as solvents. Classes of ketones Ketones are classified on the basis of their substituents.
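Before turning to these classes, note that the structural definition above (a carbonyl carbon bonded to two other carbons, versus an aldehyde's carbonyl bearing a hydrogen) can be written as a substructure query. A minimal sketch in Python using the open-source RDKit cheminformatics toolkit, assuming it is installed; the SMARTS patterns and example molecules are illustrative:

```python
from rdkit import Chem

# SMARTS for a ketone: a trigonal carbonyl carbon bonded to two carbons.
KETONE = Chem.MolFromSmarts("[#6][CX3](=O)[#6]")
# An aldehyde's carbonyl carbon carries a hydrogen instead.
ALDEHYDE = Chem.MolFromSmarts("[CX3H1](=O)[#6]")

for name, smiles in [("acetone", "CC(C)=O"),
                     ("acetophenone", "CC(=O)c1ccccc1"),
                     ("acetaldehyde", "CC=O"),
                     ("acetic acid", "CC(O)=O")]:
    mol = Chem.MolFromSmiles(smiles)
    kind = ("ketone" if mol.HasSubstructMatch(KETONE)
            else "aldehyde" if mol.HasSubstructMatch(ALDEHYDE)
            else "neither")
    print(name, "->", kind)
# acetone and acetophenone match the ketone pattern; acetaldehyde matches
# the aldehyde pattern; acetic acid matches neither, since its carbonyl
# carbon's second neighbour is the hydroxyl oxygen, not a carbon.
```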
One broad classification subdivides ketones into symmetrical and unsymmetrical derivatives, depending on the equivalency of the two organic substituents attached to the carbonyl center. Acetone and benzophenone ((C6H5)2CO) are symmetrical ketones. Acetophenone is an unsymmetrical ketone. Diketones Many kinds of diketones are known, some with unusual properties. The simplest is diacetyl (butane-2,3-dione), once used as butter flavoring in popcorn. Acetylacetone (pentane-2,4-dione) is virtually a misnomer because this species exists mainly as the monoenol. Its enolate is a common ligand in coordination chemistry. Unsaturated ketones Ketones containing alkene and alkyne units are often called unsaturated ketones. A widely used member of this class of compounds is methyl vinyl ketone, CH3C(O)CH=CH2, an α,β-unsaturated carbonyl compound. Cyclic ketones Many ketones are cyclic. The simplest class has the formula (CH2)nCO, where n varies from 2 for cyclopropanone ((CH2)2CO) to the tens. Larger derivatives exist. Cyclohexanone ((CH2)5CO), a symmetrical cyclic ketone, is an important intermediate in the production of nylon. Isophorone, derived from acetone, is an unsaturated, asymmetrical ketone that is the precursor to other polymers. Muscone, 3-methylpentadecanone, is an animal pheromone. Another cyclic ketone is cyclobutanone, with the formula (CH2)3CO. Characterization An aldehyde differs from a ketone in that it has a hydrogen atom attached to its carbonyl group, making aldehydes easier to oxidize. Ketones do not have a hydrogen atom bonded to the carbonyl group, and are therefore more resistant to oxidation. They are oxidized only by powerful oxidizing agents which have the ability to cleave carbon–carbon bonds. Spectroscopy Ketones (and aldehydes) absorb strongly in the infrared spectrum near 1750 cm−1, a band assigned to νC=O (the "carbonyl stretching frequency"). The energy of the peak is lower for aryl and unsaturated ketones. Whereas 1H NMR spectroscopy is generally not useful for establishing the presence of a ketone, 13C NMR spectra exhibit signals somewhat downfield of 200 ppm depending on structure. Such signals are typically weak due to the absence of nuclear Overhauser effects. Since aldehydes resonate at similar chemical shifts, multiple resonance experiments are employed to definitively distinguish aldehydes and ketones. Qualitative organic tests Ketones give positive results in Brady's test, the reaction with 2,4-dinitrophenylhydrazine to give the corresponding hydrazone. Ketones may be distinguished from aldehydes by giving a negative result with Tollens' reagent or with Fehling's solution. Methyl ketones give positive results in the iodoform test. Ketones also give a violet coloration when treated with m-dinitrobenzene in the presence of dilute sodium hydroxide. Synthesis Many methods exist for the preparation of ketones on an industrial scale and in academic laboratories. Ketones are also produced in various ways by organisms; see the section on biochemistry below. In industry, the most important method probably involves oxidation of hydrocarbons, often with air. For example, a billion kilograms of cyclohexanone are produced annually by aerobic oxidation of cyclohexane. Acetone is prepared by air-oxidation of cumene. For specialized or small-scale organic synthetic applications, ketones are often prepared by oxidation of secondary alcohols: R2CH(OH) + [O] → R2C=O + H2O. Typical strong oxidants (the source of "O" in the above reaction) include potassium permanganate or a Cr(VI) compound.
Milder conditions make use of the Dess–Martin periodinane or the Moffatt–Swern methods. Many other methods have been developed; examples include:
- Geminal halide hydrolysis.
- Hydration of alkynes. Such processes occur via enols and require the presence of an acid and mercury(II) sulfate (HgSO4). Subsequent enol–keto tautomerization gives a ketone. This reaction always produces a ketone, even with a terminal alkyne, the only exception being the hydration of acetylene, which produces acetaldehyde.
- From Weinreb amides using stoichiometric organometallic reagents.
- Friedel–Crafts acylation, the related Houben–Hoesch reaction, and the Fries rearrangement, which give aryl ketones.
- Ozonolysis, and related dihydroxylation/oxidative sequences, which cleave alkenes to give aldehydes or ketones, depending on the alkene substitution pattern.
- From peroxides (Kornblum–DeLaMare rearrangement).
- Cyclization of dicarboxylic acids (Ruzicka cyclization).
- Hydrolysis of salts of secondary nitro compounds (Nef reaction).
- Alkylation of thioesters with organozinc compounds (Fukuyama coupling).
- Alkylation of acid chlorides with organocadmium or organocopper compounds.
- The Dakin–West reaction, an efficient method for the preparation of certain methyl ketones from carboxylic acids.
- Reaction of Grignard reagents with nitriles, followed by hydrolysis.
- Decarboxylation of carboxylic anhydrides.
- Reductive dehalogenation of haloketones.
- Ketonic decarboxylation, in which symmetrical ketones are prepared from carboxylic acids.
- Hydrolysis of unsaturated secondary amides, β-keto acid esters, or β-diketones (the acetoacetic ester synthesis).
- Acid-catalysed rearrangement of 1,2-diols, or Criegee oxidation of the same.
Reactions Keto-enol tautomerization Ketones that have at least one alpha-hydrogen undergo keto-enol tautomerization; the tautomer is an enol. Tautomerization is catalyzed by both acids and bases. Usually, the keto form is more stable than the enol. This equilibrium allows ketones to be prepared via the hydration of alkynes. Acid/base properties of ketones The C–H bonds adjacent to the carbonyl in ketones are more acidic (pKa ≈ 20) than the C–H bonds in alkanes (pKa ≈ 50). This difference reflects resonance stabilization of the enolate ion that is formed upon deprotonation. The relative acidity of the α-hydrogen is important in the enolization reactions of ketones and other carbonyl compounds. The acidity of the α-hydrogen also allows ketones and other carbonyl compounds to react as nucleophiles at that position, with either stoichiometric or catalytic base. Using very strong bases like lithium diisopropylamide (LDA, pKa of conjugate acid ~36) under non-equilibrating conditions (–78 °C, 1.1 equiv LDA in THF, ketone added to base), the less-substituted kinetic enolate is generated selectively, while conditions that allow for equilibration (higher temperature, base added to ketone, or the use of weak or insoluble bases such as NaH) provide the more-substituted thermodynamic enolate. Ketones are also weak bases, undergoing protonation on the carbonyl oxygen in the presence of Brønsted acids. Ketonium ions (i.e., protonated ketones) are strong acids, with pKa values estimated to be somewhere between –5 and –7.
Although acids encountered in organic chemistry are seldom strong enough to fully protonate ketones, the formation of equilibrium concentrations of protonated ketones is nevertheless an important step in the mechanisms of many common organic reactions, like the formation of an acetal, for example. Acids as weak as the pyridinium cation (as found in pyridinium tosylate), with a pKa of 5.2, are able to serve as catalysts in this context, despite the highly unfavorable equilibrium constant for protonation (Keq < 10−10). Nucleophilic additions An important set of reactions follows from the susceptibility of the carbonyl carbon toward nucleophilic addition and the tendency of enolates to add to electrophiles. Nucleophilic additions include, in approximate order of their generality:
- With water (hydration) to give geminal diols, which are usually not formed in appreciable (or observable) amounts.
- With an acetylide to give the α-hydroxyalkyne.
- With ammonia or a primary amine to give an imine.
- With a secondary amine to give an enamine.
- With Grignard and organolithium reagents to give, after aqueous workup, a tertiary alcohol.
- With alcohols or alkoxides to give the hemiketal or its conjugate base.
- With a diol to give the ketal. This reaction is employed to protect ketones.
- With sodium amide, resulting in C–C bond cleavage with formation of the amide RCONH2 and the alkane or arene R'H, a reaction called the Haller–Bauer reaction.
Oxidation Ketones are cleaved by strong oxidizing agents and at elevated temperatures. Their oxidation involves carbon–carbon bond cleavage to afford a mixture of carboxylic acids having a lesser number of carbon atoms than the parent ketone. Other reactions
- Electrophilic addition: reaction with an electrophile gives a resonance-stabilized cation.
- With phosphonium ylides in the Wittig reaction to give alkenes.
- With thiols to give the thioacetal.
- With hydrazine or 1,1-disubstituted derivatives of hydrazine to give hydrazones.
- With a metal hydride to give a metal alkoxide salt, hydrolysis of which gives the alcohol, an example of ketone reduction.
- With halogens to form an α-haloketone, a reaction that proceeds via an enol (see Haloform reaction).
- With heavy water to give an α-deuterated ketone.
- Fragmentation in the photochemical Norrish reaction.
- Reaction of 1,4-aminodiketones to give oxazoles by dehydration in the Robinson–Gabriel synthesis.
- In the case of aryl–alkyl ketones, reaction with sulfur and an amine to give amides in the Willgerodt reaction.
- With hydroxylamine to produce oximes.
- With reducing agents to form secondary alcohols.
- With peroxy acids to form esters in the Baeyer–Villiger oxidation.
Biochemistry Ketones do not appear in standard amino acids, nucleic acids, or lipids. The formation of organic compounds in photosynthesis occurs via the ketone ribulose-1,5-bisphosphate. Many sugars are ketones, known collectively as ketoses. The best-known ketose is fructose; it mostly exists as a cyclic hemiketal, which masks the ketone functional group. Fatty acid synthesis proceeds via ketones. Acetoacetate is an intermediate in the Krebs cycle, which releases energy from sugars and carbohydrates. In medicine, acetone, acetoacetate, and beta-hydroxybutyrate are collectively called ketone bodies, generated from carbohydrates, fatty acids, and amino acids in most vertebrates, including humans.
Ketone bodies are elevated in the blood (ketosis) after fasting, including a night of sleep; in both blood and urine in starvation; in hypoglycemia, due to causes other than hyperinsulinism; in various inborn errors of metabolism; when intentionally induced via a ketogenic diet; and in ketoacidosis (usually due to diabetes mellitus). Although ketoacidosis is characteristic of decompensated or untreated type 1 diabetes, ketosis or even ketoacidosis can occur in type 2 diabetes in some circumstances as well. Applications Ketones are produced on massive scales in industry as solvents, polymer precursors, and pharmaceuticals. In terms of scale, the most important ketones are acetone, methyl ethyl ketone, and cyclohexanone. They are also common in biochemistry, but less so than in organic chemistry in general. The combustion of hydrocarbons is an uncontrolled oxidation process that gives ketones as well as many other types of compounds. Toxicity Although it is difficult to generalize on the toxicity of such a broad class of compounds, simple ketones are, in general, not highly toxic. This characteristic is one reason for their popularity as solvents. Exceptions to this rule are the unsaturated ketones, such as methyl vinyl ketone, with an LD50 of 7 mg/kg (oral).
Physical sciences
Carbon–oxygen bond
null
16848
https://en.wikipedia.org/wiki/Kurtosis
Kurtosis
In probability theory and statistics, kurtosis (from Greek κυρτός, kyrtos or kurtos, meaning "curved, arching") refers to the degree of "tailedness" in the probability distribution of a real-valued random variable. Similar to skewness, kurtosis provides insight into specific characteristics of a distribution. Various methods exist for quantifying kurtosis in theoretical distributions, and corresponding techniques allow estimation based on sample data from a population. It is important to note that different measures of kurtosis can yield varying interpretations. The standard measure of a distribution's kurtosis, originating with Karl Pearson, is a scaled version of the fourth moment of the distribution. This number is related to the tails of the distribution, not its peak; hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity of deviations (or outliers), and not the configuration of data near the mean. Excess kurtosis, typically compared to a value of 0, characterizes the "tailedness" of a distribution. A univariate normal distribution has an excess kurtosis of 0. Negative excess kurtosis indicates a platykurtic distribution, which doesn't necessarily have a flat top but produces fewer or less extreme outliers than the normal distribution. For instance, the uniform distribution (i.e., one that is constant over a bounded interval and zero elsewhere) is platykurtic. On the other hand, positive excess kurtosis signifies a leptokurtic distribution. The Laplace distribution, for example, has tails that decay more slowly than a Gaussian, resulting in more outliers. To simplify comparison with the normal distribution, excess kurtosis is calculated as Pearson's kurtosis minus 3. Some authors and software packages use "kurtosis" to refer specifically to excess kurtosis, but this article distinguishes between the two for clarity. Alternative measures of kurtosis are: the L-kurtosis, which is a scaled version of the fourth L-moment; and measures based on four population or sample quantiles. These are analogous to the alternative measures of skewness that are not based on ordinary moments. Pearson moments The kurtosis is the fourth standardized moment, defined as Kurt[X] = E[((X − μ)/σ)^4] = μ4 / σ^4, where μ4 is the fourth central moment and σ is the standard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice is κ, which is fine as long as it is clear that it does not refer to a cumulant. Other choices include γ2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis. The kurtosis is bounded below by the squared skewness plus 1: μ4 / σ^4 ≥ (μ3 / σ^3)^2 + 1, where μ3 is the third central moment. The lower bound is realized by the Bernoulli distribution. There is no upper limit to the kurtosis of a general probability distribution, and it may be infinite. A reason why some authors favor the excess kurtosis is that cumulants are extensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, let X1, ..., Xn be independent random variables for which the fourth moment exists, and let Y be the random variable defined by the sum of the Xi. The excess kurtosis of Y is Kurt[Y] − 3 = (1 / (Σj σj^2)^2) Σi σi^4 (Kurt[Xi] − 3), where σi is the standard deviation of Xi. In particular, if all of the Xi have the same variance, then this simplifies to Kurt[Y] − 3 = (1/n^2) Σi (Kurt[Xi] − 3). The reason not to subtract 3 is that the bare moment better generalizes to multivariate distributions, especially when independence is not assumed.
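Before turning to the multivariate case, the extensivity property above is easy to check numerically. A small Monte Carlo sketch with NumPy and SciPy (the sample size and the choice of exponential variables, whose excess kurtosis is 6, are arbitrary), confirming that summing n i.i.d. variables divides the excess kurtosis by n:

```python
import numpy as np
from scipy.stats import kurtosis  # returns excess kurtosis by default

rng = np.random.default_rng(42)
samples = 1_000_000

for n in (1, 2, 4, 8):
    # Sum of n i.i.d. exponential variables (each has excess kurtosis 6).
    y = rng.exponential(size=(samples, n)).sum(axis=1)
    print(f"n={n}: measured {kurtosis(y):.2f}, predicted {6 / n:.2f}")
# Prints approximately: n=1: 6.0, n=2: 3.0, n=4: 1.5, n=8: 0.75,
# matching Kurt[Y] - 3 = (1/n^2) * n * 6 = 6/n from the formula above.
```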
The cokurtosis between pairs of variables is an order-four tensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for any multivariate normal distribution are zero. For two random variables, X and Y, not necessarily independent, the kurtosis of the sum, X + Y, is Kurt[X + Y] = (1/σX+Y^4) (σX^4 Kurt[X] + 4 σX^3 σY Cokurt[X,X,X,Y] + 6 σX^2 σY^2 Cokurt[X,X,Y,Y] + 4 σX σY^3 Cokurt[X,Y,Y,Y] + σY^4 Kurt[Y]), where σX+Y is the standard deviation of X + Y. Note that the fourth-power binomial coefficients (1, 4, 6, 4, 1) appear in the above equation. Interpretation The interpretation of the Pearson measure of kurtosis (or excess kurtosis) was once debated, but it is now well-established. As noted by Westfall in 2014, "...its unambiguous interpretation relates to tail extremity." Specifically, it reflects either the presence of existing outliers (for sample kurtosis) or the tendency to produce outliers (for the kurtosis of a probability distribution). The underlying logic is straightforward: kurtosis represents the average (or expected value) of standardized data raised to the fourth power. Standardized values less than 1 (corresponding to data within one standard deviation of the mean, where the "peak" occurs) contribute minimally to kurtosis. This is because raising a number less than 1 to the fourth power brings it closer to zero. The meaningful contributors to kurtosis are data values outside the peak region, i.e., the outliers. Therefore, kurtosis primarily measures outliers and provides no information about the central "peak". Numerous misconceptions about kurtosis relate to notions of peakedness. One such misconception is that kurtosis measures both the "peakedness" of a distribution and the heaviness of its tail. Other incorrect interpretations include notions like "lack of shoulders" (where the "shoulder" refers vaguely to the area between the peak and the tail, or more specifically, the region about one standard deviation from the mean) or "bimodality". Balanda and MacGillivray argue that the standard definition of kurtosis "poorly captures the kurtosis, peakedness, or tail weight of a distribution." Instead, they propose a vague definition of kurtosis as the location- and scale-free movement of probability mass from the distribution's shoulders into its center and tails. Moors' interpretation In 1986, Moors gave an interpretation of kurtosis. Let Z = (X − μ)/σ, where X is a random variable, μ is the mean and σ is the standard deviation. Now by definition of the kurtosis, κ = E[Z^4], and by the well-known identity E[Z^4] = var(Z^2) + (E[Z^2])^2 = var(Z^2) + 1. The kurtosis can now be seen as a measure of the dispersion of Z^2 around its expectation. Alternatively it can be seen to be a measure of the dispersion of Z around +1 and −1. κ attains its minimal value in a symmetric two-point distribution. In terms of the original variable X, the kurtosis is a measure of the dispersion of X around the two values μ ± σ. High values of κ arise in two circumstances: where the probability mass is concentrated around the mean and the data-generating process produces occasional values far from the mean; and where the probability mass is concentrated in the tails of the distribution. Maximal entropy The entropy of a distribution p(x) is −∫ p(x) ln p(x) dx. For any Σ positive definite, among all probability distributions on R^n with mean μ and covariance Σ, the normal distribution N(μ, Σ) has the largest entropy. Since mean and covariance are the first two moments, it is natural to consider extension to higher moments.
In fact, by the Lagrange multiplier method, for any prescribed first n moments, if there exists some probability distribution of the form p(x) ∝ exp(a1 x + a2 x^2 + ... + an x^n) that has the prescribed moments, then it is the maximal entropy distribution under the given constraints. By serial expansion, if a random variable has probability distribution p(x) = C e^(−x^2/2 + g x^4/4!), where C is a normalization constant, then its kurtosis is 3 + g + o(g) for small g. Excess kurtosis The excess kurtosis is defined as kurtosis minus 3. There are 3 distinct regimes as described below. Mesokurtic Distributions with zero excess kurtosis are called mesokurtic, or mesokurtotic. The most prominent example of a mesokurtic distribution is the normal distribution family, regardless of the values of its parameters. A few other well-known distributions can be mesokurtic, depending on parameter values: for example, the binomial distribution is mesokurtic for p = 1/2 ± √(1/12). Leptokurtic A distribution with positive excess kurtosis is called leptokurtic, or leptokurtotic. "Lepto-" means "slender". In terms of shape, a leptokurtic distribution has fatter tails. Examples of leptokurtic distributions include the Student's t-distribution, Rayleigh distribution, Laplace distribution, exponential distribution, Poisson distribution and the logistic distribution. Such distributions are sometimes termed super-Gaussian. Platykurtic A distribution with negative excess kurtosis is called platykurtic, or platykurtotic. "Platy-" means "broad". In terms of shape, a platykurtic distribution has thinner tails. Examples of platykurtic distributions include the continuous and discrete uniform distributions, and the raised cosine distribution. The most platykurtic distribution of all is the Bernoulli distribution with p = 1/2 (for example the number of times one obtains "heads" when flipping a coin once, a coin toss), for which the excess kurtosis is −2. Graphical examples The Pearson type VII family The effects of kurtosis are illustrated using a parametric family of distributions whose kurtosis can be adjusted while their lower-order moments and cumulants remain constant. Consider the Pearson type VII family, which is a special case of the Pearson type IV family restricted to symmetric densities. The probability density function is given by f(x; a, m) = (Γ(m) / (a √π Γ(m − 1/2))) (1 + (x/a)^2)^(−m), where a is a scale parameter and m is a shape parameter. All densities in this family are symmetric. The kth moment exists provided m > (k + 1)/2. For the kurtosis to exist, we require m > 5/2. Then the mean and skewness exist and are both identically zero. Setting a^2 = 2m − 3 makes the variance equal to unity. Then the only free parameter is m, which controls the fourth moment (and cumulant) and hence the kurtosis. One can reparameterize with m = 5/2 + 3/γ2, where γ2 is the excess kurtosis as defined above. This yields a one-parameter leptokurtic family with zero mean, unit variance, zero skewness, and arbitrary non-negative excess kurtosis. The reparameterized density is g(x; γ2) = f(x; a = √(2 + 6/γ2), m = 5/2 + 3/γ2). In the limit as γ2 → ∞ one obtains the density f(x) = 3 (2 + x^2)^(−5/2), which is shown as the red curve in the images on the right. In the other direction, as γ2 → 0, one obtains the standard normal density as the limiting distribution, shown as the black curve. In the images on the right, the blue curve represents the density with excess kurtosis of 2. The top image shows that leptokurtic densities in this family have a higher peak than the mesokurtic normal density, although this conclusion is only valid for this select family of distributions.
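A short numerical sanity check of this family is straightforward in Python with SciPy; the sketch below evaluates the density given above and verifies that it integrates to 1 with unit variance for a few values of the excess kurtosis (the helper function and chosen values are illustrative):

```python
import numpy as np
from math import gamma, pi, sqrt
from scipy.integrate import quad

def pearson7(x, gamma2):
    """Unit-variance Pearson type VII density with excess kurtosis gamma2."""
    m = 2.5 + 3.0 / gamma2        # shape parameter, m = 5/2 + 3/gamma2
    a = sqrt(2.0 * m - 3.0)       # scale giving unit variance
    norm = gamma(m) / (a * sqrt(pi) * gamma(m - 0.5))
    return norm * (1.0 + (x / a) ** 2) ** (-m)

for g2 in (2.0, 1.0, 0.25):
    total, _ = quad(lambda x: pearson7(x, g2), -np.inf, np.inf)
    var, _ = quad(lambda x: x * x * pearson7(x, g2), -np.inf, np.inf)
    print(f"gamma2={g2}: integral={total:.4f}, variance={var:.4f}")
# Each density integrates to ~1 and has variance ~1, as stated in the text.
```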
The comparatively fatter tails of the leptokurtic densities are illustrated in the second image, which plots the natural logarithm of the Pearson type VII densities: the black curve is the logarithm of the standard normal density, which is a parabola. One can see that the normal density allocates little probability mass to the regions far from the mean ("has thin tails"), compared with the blue curve of the leptokurtic Pearson type VII density with excess kurtosis of 2. Between the blue curve and the black are other Pearson type VII densities with γ2 = 1, 1/2, 1/4, 1/8, and 1/16. The red curve again shows the upper limit of the Pearson type VII family, with infinite excess kurtosis (which, strictly speaking, means that the fourth moment does not exist). The red curve decreases the slowest as one moves outward from the origin ("has fat tails"). Other well-known distributions Several well-known, unimodal, and symmetric distributions from different parametric families are compared here. Each has a mean and skewness of zero. The parameters have been chosen to result in a variance equal to 1 in each case. The images on the right show curves for the following seven densities, on a linear scale and logarithmic scale:
D: Laplace distribution, also known as the double exponential distribution, red curve (two straight lines in the log-scale plot), excess kurtosis = 3
S: hyperbolic secant distribution, orange curve, excess kurtosis = 2
L: logistic distribution, green curve, excess kurtosis = 1.2
N: normal distribution, black curve (inverted parabola in the log-scale plot), excess kurtosis = 0
C: raised cosine distribution, cyan curve, excess kurtosis = −0.593762...
W: Wigner semicircle distribution, blue curve, excess kurtosis = −1
U: uniform distribution, magenta curve (shown for clarity as a rectangle in both images), excess kurtosis = −1.2.
Note that in these cases the platykurtic densities have bounded support, whereas the densities with positive or zero excess kurtosis are supported on the whole real line. One cannot infer that high or low kurtosis distributions have the characteristics indicated by these examples. There exist platykurtic densities with infinite support, e.g., exponential power distributions with sufficiently large shape parameter b, and there exist leptokurtic densities with finite support, e.g., a distribution that is uniform between −3 and −0.3, between −0.3 and 0.3, and between 0.3 and 3, with the same density in the (−3, −0.3) and (0.3, 3) intervals, but with 20 times more density in the (−0.3, 0.3) interval. Also, there exist platykurtic densities with infinite peakedness, e.g., an equal mixture of the beta distribution with parameters 0.5 and 1 with its reflection about 0.0, and there exist leptokurtic densities that appear flat-topped, e.g., a mixture of a distribution that is uniform between −1 and 1 with a Student's t-distribution having 4.0000001 degrees of freedom, with mixing probabilities 0.999 and 0.001. Sample kurtosis Definitions A natural but biased estimator For a sample of n values, a method of moments estimator of the population excess kurtosis can be defined as

g2 = m4 / m2^2 − 3,

where m4 is the fourth sample moment about the mean, m2 is the second sample moment about the mean (that is, the sample variance), xi is the ith value, and x̄ is the sample mean, with mk = (1/n) Σ (xi − x̄)^k. This formula has the simpler representation

g2 = (1/n) Σ zi^4 − 3,

where the zi are the standardized data values using the standard deviation defined using n rather than n − 1 in the denominator. For example, suppose the data values are 0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999.
Then the zi values are −0.239, −0.225, −0.221, −0.234, −0.230, −0.225, −0.239, −0.230, −0.234, −0.225, −0.230, −0.239, −0.230, −0.230, −0.225, −0.230, −0.216, −0.230, −0.225, 4.359, and the zi^4 values are 0.003, 0.003, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.003, 0.003, 360.976. The average of these values is 18.05, and the excess kurtosis is thus 18.05 − 3 = 15.05. This example makes it clear that data near the "middle" or "peak" of the distribution do not contribute to the kurtosis statistic; hence kurtosis does not measure "peakedness". It is simply a measure of the outlier, 999 in this example (a short code sketch reproducing this calculation appears below). Standard unbiased estimator Given a subset of samples from a population, the sample excess kurtosis g2 above is a biased estimator of the population excess kurtosis. An alternative estimator of the population excess kurtosis, which is unbiased in random samples of a normal distribution, is defined as follows:

G2 = k4 / k2^2 = ((n − 1) / ((n − 2)(n − 3))) · ((n + 1) g2 + 6),

where k4 is the unique symmetric unbiased estimator of the fourth cumulant, k2 is the unbiased estimate of the second cumulant (identical to the unbiased estimate of the sample variance), m4 is the fourth sample moment about the mean, m2 is the second sample moment about the mean, xi is the ith value, and x̄ is the sample mean. This adjusted Fisher–Pearson standardized moment coefficient G2 is the version found in Excel and several statistical packages including Minitab, SAS, and SPSS. Unfortunately, in nonnormal samples G2 is itself generally biased. Upper bound An upper bound for the sample excess kurtosis of n (n > 2) real numbers is

g2 ≤ (1/2) · ((n − 3)/(n − 2)) · g1^2 + n/2 − 3,

where g1 is the corresponding sample skewness. Variance under normality The variance of the sample kurtosis of a sample of size n from the normal distribution is

var(g2) = 24n(n − 1)^2 / ((n − 3)(n − 2)(n + 3)(n + 5)).

Stated differently, under the assumption that the underlying random variable X is normally distributed, it can be shown that √n · g2 converges in distribution to N(0, 24). Applications The sample kurtosis is a useful measure of whether there is a problem with outliers in a data set. Larger kurtosis indicates a more serious outlier problem, and may lead the researcher to choose alternative statistical methods. D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality. For non-normal samples, the variance of the sample variance depends on the kurtosis; for details, please see variance. Pearson's definition of kurtosis is used as an indicator of intermittency in turbulence. It is also used in magnetic resonance imaging to quantify non-Gaussian diffusion. A concrete example is the following lemma by He, Zhang, and Zhang: assume a random variable X has expectation μ, variance σ^2 and kurtosis κ, and assume we sample many independent copies. Then, with a number of samples that grows in proportion to κ log(1/δ), at least one of them will lie above the expectation with probability at least 1 − δ. In other words: if the kurtosis is large, we might see many values either all below or all above the mean. Kurtosis convergence Applying band-pass filters to digital images, kurtosis values tend to be uniform, independent of the range of the filter. This behavior, termed kurtosis convergence, can be used to detect image splicing in forensic analysis. Seismic signal analysis Kurtosis can be used in geophysics to distinguish different types of seismic signals. It is particularly effective in differentiating seismic signals generated by human footsteps from other signals. This is useful in security and surveillance systems that rely on seismic detection.
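As referenced in the worked example above, both the biased and the bias-corrected estimators are simple to compute. The sketch below (Python with NumPy, assumed purely for illustration) reproduces the 999-outlier example:

```python
import numpy as np

x = np.array([0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999], dtype=float)
n = len(x)

# Biased (method of moments) estimator g2 = m4 / m2^2 - 3.
# np.std divides by n, matching the definition in the text.
z = (x - x.mean()) / x.std()
g2 = np.mean(z**4) - 3          # ~15.05, dominated entirely by the outlier 999

# Bias-corrected estimator G2 = ((n-1)/((n-2)(n-3))) * ((n+1)*g2 + 6),
# the version reported by Excel, Minitab, SAS and SPSS.
G2 = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)

print(round(g2, 2), round(G2, 2))
```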
Weather prediction In meteorology, kurtosis is used to analyze weather data distributions. It helps predict extreme weather events by assessing the probability of outlier values in historical data, which is valuable for long-term climate studies and short-term weather forecasting. Other measures A different measure of "kurtosis" is provided by using L-moments instead of the ordinary moments.
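For instance, the L-kurtosis is the ratio τ4 = λ4/λ2 of the fourth and second L-moments, which can be estimated from the sorted sample via probability-weighted moments. The following sketch shows one standard way to compute it (plain NumPy; the function name and the simulation are illustrative assumptions, not part of the article):

```python
import numpy as np

def l_kurtosis(data):
    """Sample L-kurtosis tau4 = lambda4 / lambda2, computed from
    probability-weighted moments b0..b3 of the sorted sample."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)  # ranks 1..n
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    lam2 = 2 * b1 - b0
    lam4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return lam4 / lam2

rng = np.random.default_rng(0)
print(l_kurtosis(rng.normal(size=100_000)))   # ~0.123 for the normal distribution
print(l_kurtosis(rng.laplace(size=100_000)))  # larger (~0.236) for heavier tails
```

Because L-moments are linear in the order statistics, the L-kurtosis is far less sensitive to a single extreme outlier than the fourth-moment statistic.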
Mathematics
Probability
null
16933
https://en.wikipedia.org/wiki/Kainite
Kainite
Kainite (KMg(SO4)Cl·3H2O) is an evaporite mineral in the class of "Sulfates (selenates, etc.) with additional anions, with H2O" according to the Nickel–Strunz classification. It is a hydrated potassium-magnesium sulfate-chloride, naturally occurring in irregular granular masses or as crystalline coatings in cavities or fissures. This mineral is dull and soft, and is colored white, yellowish, grey, reddish, or blue to violet. Its name is derived from Greek [kainos] ("(hitherto) unknown"), as it was the first mineral discovered that contained both sulfate and chloride as anions. Kainite forms monoclinic crystals. Properties Kainite has a bitter taste and is soluble in water. On recrystallization picromerite is deposited from the solution. Genesis and occurrence Kainite was discovered in the Stassfurt salt mines in today's Saxony-Anhalt, Germany, in 1865 by the mine official Schöne and was first described by Carl Friedrich Jacob Zincken. Kainite is a typical secondary mineral that forms through metamorphosis in marine deposits of potassium carbonate, and is also occasionally formed through resublimation from volcanic vapours. It is often accompanied by anhydrite, carnallite, halite, and kieserite. Kainite is only found in comparatively few places, among them in salt mines in central and northern Germany, Bad Ischl (Austria), at Pasquasia in Sicily, in Whitby (UK), and in the Carlsbad Potash District in New Mexico, in volcanic deposits in Kamchatka and in Iceland, and in salt lakes in western China. It has also been identified in Gusev Crater on Mars. It can also be produced from bittern remaining after removal of table salt from seawater. Uses Kainite is used as a source of potassium and magnesium compounds, as a fertilizer, and as gritting salt.
Physical sciences
Sedimentary rocks
Earth science
16938
https://en.wikipedia.org/wiki/Kaolinite
Kaolinite
Kaolinite (also called kaolin) is a clay mineral, with the chemical composition Al2Si2O5(OH)4. It is a layered silicate mineral, with one tetrahedral sheet of silica (SiO4 tetrahedra) linked through oxygen atoms to one octahedral sheet of alumina (AlO6 octahedra). Kaolinite is a soft, earthy, usually white, mineral (dioctahedral phyllosilicate clay), produced by the chemical weathering of aluminium silicate minerals like feldspar. It has a low shrink–swell capacity and a low cation-exchange capacity (1–15 meq/100 g). Rocks that are rich in kaolinite, and halloysite, are known as kaolin or china clay. In many parts of the world kaolin is colored pink-orange-red by iron oxide, giving it a distinct rust hue. Lower concentrations of iron oxide yield the white, yellow, or light orange colors of kaolin. Alternating lighter and darker layers are sometimes found, as at Providence Canyon State Park in Georgia, United States. Kaolin is an important raw material in many industries and applications. Commercial grades of kaolin are supplied and transported as powder, lumps, semi-dried noodle or slurry. Global production of kaolin in 2021 was estimated to be 45 million tonnes, with a total market value of US $4.24 billion. Names The English name kaolin was borrowed in 1727 from François Xavier d'Entrecolles's 1712 French reports on the manufacture of Jingdezhen porcelain. D'Entrecolles was transcribing the Chinese term now romanized as gaoling in pinyin, taken from the name of the village of Gaoling ("High Ridge") near Ehu in Fuliang County, now part of Jiangxi Province's Jingdezhen Prefecture. The area around the village had become the main source of Jingdezhen's kaolin over the course of the Qing dynasty. The mineralogical suffix -ite was later added to generalize the name to cover nearly identical minerals from other locations. Kaolinite is also occasionally discussed under the archaic names lithomarge and lithomarga, from Latin lithomarga, a combination of Greek líthos ("stone") and Latin marga ("marl"). In more proper modern use, lithomarge now refers specifically to a compacted and massive form of kaolin. Chemistry Notation The chemical formula for kaolinite as written in mineralogy is Al2Si2O5(OH)4; however, in ceramics applications the same formula is typically written in terms of oxides, thus giving Al2O3·2SiO2·2H2O. Structure Compared with other clay minerals, kaolinite is chemically and structurally simple. It is described as a 1:1 or TO clay mineral because its crystals consist of stacked TO layers. Each TO layer consists of a tetrahedral (T) sheet composed of silicon and oxygen ions bonded to an octahedral (O) sheet composed of oxygen, aluminium, and hydroxyl ions. The T sheet is so called because each silicon ion is surrounded by four oxygen ions forming a tetrahedron. The O sheet is so called because each aluminium ion is surrounded by six oxygen or hydroxyl ions arranged at the corners of an octahedron. The two sheets in each layer are strongly bonded together via shared oxygen ions, while layers are bonded via hydrogen bonding between oxygen on the outer face of the T sheet of one layer and hydroxyl on the outer face of the O sheet of the next layer. A kaolinite layer has no net electrical charge and so there are no large cations (such as calcium, sodium, or potassium) between layers as with most other clay minerals. This accounts for kaolinite's relatively low ion exchange capacity. The close hydrogen bonding between layers also hinders water molecules from infiltrating between layers, accounting for kaolinite's nonswelling character.
When moistened, the tiny platelike crystals of kaolinite acquire a layer of water molecules that cause crystals to adhere to each other and give kaolin clay its cohesiveness. The bonds are weak enough to allow the plates to slip past each other when the clay is being molded, but strong enough to hold the plates in place and allow the molded clay to retain its shape. When the clay is dried, most of the water molecules are removed, and the plates hydrogen bond directly to each other, so that the dried clay is rigid but still fragile. If the clay is moistened again, it will once more become plastic. Structural transformations Kaolinite group clays undergo a series of phase transformations upon thermal treatment in air at atmospheric pressure. Milling High-energy milling of kaolin results in the formation of a mechanochemically amorphized phase similar to metakaolin, although the properties of this solid are quite different. The high-energy milling process is highly inefficient and consumes a large amount of energy. Drying Below 100 °C, exposure to low humidity air will result in the slow evaporation of any liquid water in the kaolin. At low moisture content the mass can be described as leather dry, and at near 0% moisture it is referred to as bone dry. Above 100 °C any remaining free water is lost. Above around 400 °C hydroxyl ions (OH−) are lost from the kaolinite crystal structure in the form of water: the material can then no longer be plasticised by absorbing water. This is irreversible, as are subsequent transformations; this is referred to as calcination. Metakaolin Endothermic dehydration of kaolinite begins at 550–600 °C, producing disordered metakaolin, but continuous hydroxyl loss is observed at still higher temperatures. Although historically there was much disagreement concerning the nature of the metakaolin phase, extensive research has led to a general consensus that metakaolin is not a simple mixture of amorphous silica (SiO2) and alumina (Al2O3), but rather a complex amorphous structure that retains some longer-range order (but not strictly crystalline) due to stacking of its hexagonal layers. Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O Spinel Further heating to 925–950 °C converts metakaolin to an aluminium-silicon spinel which is sometimes also referred to as a gamma-alumina type structure: 2 Al2Si2O7 -> Si3Al4O12 + SiO2 Platelet mullite Upon calcination above 1050 °C, the spinel phase nucleates and transforms to platelet mullite and highly crystalline cristobalite: 3 Si3Al4O12 -> 2 (3 Al2O3 . 2 SiO2) + 5 SiO2 Needle mullite Finally, at 1400 °C the "needle" form of mullite appears, offering substantial increases in structural strength and heat resistance. This is a structural but not chemical transformation. See stoneware for more information on this form.
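The dehydroxylation equation above fixes the theoretical mass loss on conversion to metakaolin. A quick check of the arithmetic (Python is used here purely for illustration; the molar masses are standard reference values):

```python
# Theoretical mass loss for Al2Si2O5(OH)4 -> Al2Si2O7 + 2 H2O
AL, SI, O, H = 26.982, 28.086, 15.999, 1.008   # g/mol

kaolinite = 2 * AL + 2 * SI + 9 * O + 4 * H    # Al2Si2O5(OH)4, ~258.2 g/mol
water = 2 * H + O                              # H2O, ~18.0 g/mol

loss = 2 * water / kaolinite
print(f"mass loss on dehydroxylation: {loss:.1%}")  # ~14%
```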
Occurrence Kaolinite is one of the most common minerals; it is mined, as kaolin, in Australia, Brazil, Bulgaria, China, Czech Republic, France, Germany, India, Iran, Malaysia, South Africa, South Korea, Spain, Tanzania, Thailand, United Kingdom, United States and Vietnam. Mantles of kaolinite are common in Western and Northern Europe. The ages of these mantles are Mesozoic to Early Cenozoic. Kaolinite clay occurs in abundance in soils that have formed from the chemical weathering of rocks in hot, moist climates; for example in tropical rainforest areas. Comparing soils along a gradient towards progressively cooler or drier climates, the proportion of kaolinite decreases, while the proportion of other clay minerals such as illite (in cooler climates) or smectite (in drier climates) increases. Such climatically related differences in clay mineral content are often used to infer changes in climates in the geological past, where ancient soils have been buried and preserved. In the Institut National pour l'Étude Agronomique au Congo Belge (INEAC) classification system, soils in which the clay fraction is predominantly kaolinite are called kaolisol (from kaolin and soil). In the United States, the main kaolin deposits are found in central Georgia, on a stretch of the Atlantic Seaboard fall line between Augusta and Macon. This area of thirteen counties is called the "white gold" belt; Sandersville is known as the "Kaolin Capital of the World" due to its abundance of kaolin. In the late 1800s, an active kaolin surface-mining industry existed in the extreme southeast corner of Pennsylvania, near the towns of Landenberg and Kaolin, and in what is present-day White Clay Creek Preserve. The product was brought by train to Newark, Delaware, on the Newark-Pomeroy line, along which can still be seen many open-pit clay mines. The deposits were formed between the late Cretaceous and early Paleogene, about 100 to 45 million years ago, in sediments derived from weathered igneous and metamorphic rocks. Kaolin production in the United States during 2011 was 5.5 million tons. During the Paleocene–Eocene Thermal Maximum, sediments deposited in the Espluga Freda area of Spain were enriched with kaolinite from a detrital source due to denudation. Synthesis and genesis Difficulties are encountered when trying to explain kaolinite formation under atmospheric conditions by extrapolation of thermodynamic data from the more successful high-temperature syntheses. La Iglesia and Van Oosterwijk-Gastuche (1978) thought that the conditions under which kaolinite will nucleate can be deduced from stability diagrams, based as they are on dissolution data. Because of a lack of convincing results in their own experiments, La Iglesia and Van Oosterwijk-Gastuche (1978) had to conclude, however, that there were other, still unknown, factors involved in the low-temperature nucleation of kaolinite. Because of the observed very slow crystallization rates of kaolinite from solution at room temperature, Fripiat and Herbillon (1971) postulated the existence of high activation energies in the low-temperature nucleation of kaolinite. At high temperatures, equilibrium thermodynamic models appear to be satisfactory for the description of kaolinite dissolution and nucleation, because the thermal energy suffices to overcome the energy barriers involved in the nucleation process. The importance of syntheses at ambient temperature and atmospheric pressure towards the understanding of the mechanism involved in the nucleation of clay minerals lies in overcoming these energy barriers. As indicated by Caillère and Hénin (1960), the processes involved will have to be studied in well-defined experiments, because it is virtually impossible to isolate the factors involved by mere deduction from complex natural physico-chemical systems such as the soil environment. Fripiat and Herbillon (1971), in a review on the formation of kaolinite, raised the fundamental question of how a disordered material (i.e., the amorphous fraction of tropical soils) could ever be transformed into a corresponding ordered structure.
This transformation seems to take place in soils without major changes in the environment, in a relatively short period of time, and at ambient temperature (and pressure). Low-temperature synthesis of clay minerals (with kaolinite as an example) has several aspects. In the first place the silicic acid to be supplied to the growing crystal must be in a monomeric form, i.e., silica should be present in very dilute solution (Caillère et al., 1957; Caillère and Hénin, 1960; Wey and Siffert, 1962; Millot, 1970). In order to prevent the formation of amorphous silica gels precipitating from supersaturated solutions without reacting with the aluminium or magnesium cations to form crystalline silicates, the silicic acid must be present in concentrations below the maximum solubility of amorphous silica. The principle behind this prerequisite can be found in structural chemistry: "Since the polysilicate ions are not of uniform size, they cannot arrange themselves along with the metal ions into a regular crystal lattice." (Iler, 1955, p. 182) The second aspect of the low-temperature synthesis of kaolinite is that the aluminium cations must be hexacoordinated with respect to oxygen (Caillère and Hénin, 1947; Caillère et al., 1953; Hénin and Robichet, 1955). Gastuche et al. (1962) and Caillère and Hénin (1962) have concluded that kaolinite can only ever be formed when the aluminium hydroxide is in the form of gibbsite. Otherwise, the precipitate formed will be a "mixed alumino-silicic gel" (as Millot, 1970, p. 343 put it). If it were the only requirement, large amounts of kaolinite could be harvested simply by adding gibbsite powder to a silica solution. Undoubtedly a marked degree of adsorption of the silica in solution by the gibbsite surfaces will take place, but, as stated before, mere adsorption does not create the layer lattice typical of kaolinite crystals. The third aspect is that these two initial components must be incorporated into one mixed crystal with a layer structure. From the following equation (as given by Gastuche and DeKimpe, 1962) for kaolinite formation 2Al(OH)3 + 2H4SiO4 -> Si2O5 . Al2(OH)4 + 5H2O it can be seen that five molecules of water must be removed from the reaction for every molecule of kaolinite formed. Field evidence illustrating the importance of the removal of water from the kaolinite reaction has been supplied by Gastuche and DeKimpe (1962). While studying soil formation on a basaltic rock in Kivu (Zaïre), they noted how the occurrence of kaolinite depended on the drainage of the area involved. A clear distinction was found between areas with good drainage (i.e., areas with a marked difference between wet and dry seasons) and those areas with poor drainage (i.e., perennially swampy areas). Kaolinite was only found in the areas with distinct seasonal alternations between wet and dry. The possible significance of alternating wet and dry conditions on the transition of allophane into kaolinite has been stressed by Tamura and Jackson (1953). The role of alternations between wetting and drying on the formation of kaolinite has also been noted by Moore (1964). Laboratory syntheses Syntheses of kaolinite at high temperatures are relatively well known. There are for example the syntheses of Van Nieuwenberg and Pieters (1929); Noll (1934); Noll (1936); Norton (1939); Roy and Osborn (1954); Roy (1961); Hawkins and Roy (1962); Tomura et al. (1985); Satokawa et al. (1994) and Huertas et al. (1999). Relatively few low-temperature syntheses have become known (cf.
Brindley and DeKimpe (1961); DeKimpe (1969); Bogatyrev et al. (1997)). Laboratory syntheses of kaolinite at room temperature and atmospheric pressure have been described by DeKimpe et al. (1961). From those tests the role of periodicity becomes convincingly clear. DeKimpe et al. (1961) had used daily additions of alumina and silica (the latter in the form of ethyl silicate) during at least two months. In addition, adjustments of the pH took place every day by way of adding either hydrochloric acid or sodium hydroxide. Such daily additions of Si and Al to the solution, in combination with the daily titrations with hydrochloric acid or sodium hydroxide during at least 60 days, will have introduced the necessary element of periodicity. Only now can the actual role of what has been described as the "aging" (Alterung) of amorphous alumino-silicates (as for example Harder, 1978 had noted) be fully understood. Time as such brings about no change in a closed system at equilibrium, but a series of alternations of periodically changing conditions (by definition, taking place in an open system) will bring about the low-temperature formation of more and more of the stable phase kaolinite instead of (ill-defined) amorphous alumino-silicates. Applications Main In 2009, up to 70% of kaolin was used in the production of paper. Following reduced demand from the paper industry, resulting from both competing minerals and the effect of digital media, in 2016 the market share was reported to be: paper, 36%; ceramics, 31%; paint, 7% and other, 26%. According to the USGS, in 2021 the global production of kaolin was estimated to be around 45 million tonnes. Paper applications require high-brightness, low-abrasion and delaminated kaolins. For paper coatings it is used to enhance the gloss, brilliance, smoothness and receptivity to inks; it can account for 25% of the mass of the paper. As a paper filler it is used as a pulp extender, and to increase opacity; it can account for 15% of the mass. In whiteware ceramic bodies, kaolin can constitute up to 50% of the raw materials. In unfired bodies it contributes to the green strength, plasticity and rheological properties, such as the casting rate. During firing it reacts with other body components to form the crystal and glass phases. With suitable firing schedules it is key to the formation of mullite. The most valued grades have low contents of chromophoric oxides such that the fired material has high whiteness. In glazes it is primarily used as a rheology control agent, but also contributes some green strength. In both glazes and frits it contributes some SiO2 as a glass network former, and Al2O3 as both a network former and modifier. Other industrial As a raw material for the production of an insulation material called Kaowool (a form of mineral wool). An additive to some paints to extend the titanium dioxide (TiO2) white pigment and modify gloss levels. An additive to modify the properties of rubber upon vulcanization. An additive to adhesives to modify rheology. As adsorbents in water and wastewater treatment. In its altered metakaolin form, as a pozzolan; when added to a concrete mix, metakaolin accelerates the hydration of Portland cement and takes part in the pozzolanic reaction with the portlandite formed in the hydration of the main cement minerals (e.g. alite). Metakaolin is also a base component for geopolymer compounds.
Medical To soothe an upset stomach, similar to the way parrots (and later, humans) in South America originally used it (more recently, industrially produced). Kaolin-based preparations are used for treatment of diarrhea. An ingredient in 'pre-work' skin protection and barrier creams. To induce and accelerate blood clotting: in April 2008 the US Naval Medical Research Institute announced the successful use of a kaolinite-derived aluminosilicate infusion in traditional gauze, which is still the hemostat of choice for all branches of the US military (see kaolin clotting time and QuikClot). As a mild abrasive in toothpaste. Cosmetics As a filler in cosmetics. For facial masks or soap. For spa body treatments, such as body wraps, cocoons, or spot treatments. Archaeology As an indicator in radiological dating, since kaolinite can contain very small traces of uranium and thorium. Geophagy Humans sometimes eat kaolin for pleasure or to suppress hunger, a practice known as geophagy. In Africa, kaolin used for such purposes is known as kalaba (in Gabon and Cameroon), calaba, and calabachop (in Equatorial Guinea). Consumption is greater among women, especially during pregnancy, and its use is sometimes said by women of the region to be a habit analogous to cigarette smoking among men. The practice has also been observed within a small population of African-American women in the Southern United States, especially Georgia, likely brought with the traditions of the aforementioned Africans via slavery. There, the kaolin is called white dirt, chalk or white clay. Geotechnical engineering Research results show that kaolinite used in geotechnical engineering can be replaced by safer illite, especially if its presence is less than 10.8% of the total rock mass. Small-scale uses As a light-diffusing material in white incandescent light bulbs. In organic farming as a spray applied to crops to deter insect damage, and in the case of apples, to prevent sun scald. As whitewash in traditional stone masonry homes in Nepal. As a filler in Edison Diamond Discs. Production output Global production of kaolin has been estimated on a per-country basis, most recently for 2012. Typical properties Selected typical properties of various ceramic grade kaolins have been tabulated. Safety Kaolin is generally recognized as safe, but may cause mild irritation of the skin or mucous membranes. Kaolin products may also contain traces of crystalline silica, a known carcinogen if inhaled. In the US, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for kaolin exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure (time-weighted average) over an 8-hour workday.
Physical sciences
Mineralogy
null
16948
https://en.wikipedia.org/wiki/Ketamine
Ketamine
Ketamine is a dissociative anesthetic used medically for induction and maintenance of anesthesia. It is also used as a treatment for depression and in pain management. Ketamine is an NMDA receptor antagonist, which accounts for most of its psychoactive effects. At anesthetic doses, ketamine induces a state of dissociative anesthesia, a trance-like state providing pain relief, sedation, and amnesia. Its distinguishing features as an anesthetic are preserved breathing and airway reflexes, stimulated heart function with increased blood pressure, and moderate bronchodilation. At lower, sub-anesthetic doses, it is a promising agent for treatment of pain and treatment-resistant depression. As with many antidepressants, the results of a single administration wane with time. Ketamine is used as a recreational drug for its hallucinogenic and dissociative effects. When used recreationally, it is found both in crystalline powder and liquid form, and is often referred to by users as "Special K" or simply "K". The long-term effects of repeated use are largely unknown and are an area of active investigation. Liver and urinary toxicity have been reported among regular users of high doses of ketamine for recreational purposes. Ketamine was first synthesized in 1962, derived from phencyclidine in pursuit of a safer anesthetic with fewer hallucinogenic effects. It was approved for use in the United States in 1970. It has been regularly used in veterinary medicine and was extensively used for surgical anesthesia in the Vietnam War. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses Anesthesia The use of ketamine in anesthesia reflects its characteristics. It is a drug of choice for short-term procedures when muscle relaxation is not required. The effect of ketamine on the respiratory and circulatory systems is different from that of other anesthetics. It suppresses breathing much less than most other available anesthetics. When used at anesthetic doses, ketamine usually stimulates rather than depresses the circulatory system. Protective airway reflexes are preserved, and it is sometimes possible to administer ketamine anesthesia without protective measures to the airways. Psychotomimetic effects limit the acceptance of ketamine; however, lamotrigine and nimodipine decrease these effects, which can also be counteracted by benzodiazepine or propofol administration. Ketofol is a combination of ketamine and propofol. Ketamine is frequently used in severely injured people and appears to be safe in this group. It has been widely used for emergency surgery in field conditions in war zones, for example, during the Vietnam War. A 2011 clinical practice guideline supports the use of ketamine as a sedative in emergency medicine, including during physically painful procedures. It is the drug of choice for people in traumatic shock who are at risk of hypotension. Ketamine is unlikely to lower blood pressure (a drop that would be dangerous for people with severe head injury); in fact, it can raise blood pressure, often making it useful in treating such injuries. Ketamine is an option in children as the sole anesthetic for minor procedures or as an induction agent followed by neuromuscular blocker and tracheal intubation. In particular, children with cyanotic heart disease and neuromuscular disorders are good candidates for ketamine anesthesia.
Due to the bronchodilating properties of ketamine, it can be used for anesthesia in people with asthma, chronic obstructive airway disease, and severe reactive airway disease including active bronchospasm. Pain Ketamine infusions are used for acute pain treatment in emergency departments and in the perioperative period for individuals with refractory pain. The doses are lower than those used for anesthesia, usually referred to as sub-anesthetic doses. Adjunctive to morphine or on its own, ketamine reduces morphine use, pain level, nausea, and vomiting after surgery. Ketamine is likely to be most beneficial for surgical patients when severe post-operative pain is expected, and for opioid-tolerant patients. Ketamine is especially useful in the pre-hospital setting due to its effectiveness and low risk of respiratory depression. Ketamine has similar efficacy to opioids in a hospital emergency department setting for the management of acute pain and the control of procedural pain. It may also prevent opioid-induced hyperalgesia and postanesthetic shivering. For chronic pain, ketamine is used as an intravenous analgesic, mainly if the pain is neuropathic. It has the added benefit of counteracting spinal sensitization or wind-up phenomena experienced with chronic pain. In multiple clinical trials, ketamine infusions delivered short-term pain relief in neuropathic pain diagnoses, pain after a traumatic spine injury, fibromyalgia, and complex regional pain syndrome (CRPS). However, the 2018 consensus guidelines on chronic pain concluded that, overall, there is only weak evidence in favor of ketamine use in spinal injury pain, moderate evidence in favor of ketamine for CRPS, and weak or no evidence for ketamine in mixed neuropathic pain, fibromyalgia, and cancer pain. In particular, only for CRPS is there evidence of medium- to longer-term pain relief. Depression Ketamine is a rapid-acting antidepressant, but its effect is transient. Intravenous ketamine infusion in treatment-resistant depression may result in improved mood within 4 hours, reaching a peak at 24 hours. A single dose of intravenous ketamine has been shown to result in a response rate greater than 60% as early as 4.5 hours after the dose (with a sustained effect after 24 hours) and greater than 40% after 7 days. Although only a few pilot studies have sought to determine the optimal dose, increasing evidence suggests that a 0.5 mg/kg dose injected over 40 minutes gives an optimal outcome. The antidepressant effect of ketamine is diminished at 7 days, and most people relapse within 10 days. However, for a significant minority, the improvement may last 30 days or more. One of the main challenges with ketamine treatment can be the length of time that the antidepressant effects last after finishing a course of treatment. A possible option may be maintenance therapy with ketamine, which typically runs from twice a week to once every two weeks. Ketamine may decrease suicidal thoughts for up to three days after the injection. An enantiomer of ketamine, esketamine, sold commercially as Spravato, was approved as an antidepressant by the European Medicines Agency in 2019. Esketamine was approved as a nasal spray for treatment-resistant depression in the United States and elsewhere in 2019 (see Esketamine and Depression). The Canadian Network for Mood and Anxiety Treatments (CANMAT) recommends esketamine as a third-line treatment for depression.
A Cochrane review of randomized controlled trials in adults with unipolar major depressive disorder found that, when compared with placebo, people treated with either ketamine or esketamine experienced reduction or remission of symptoms lasting 1 to 7 days. There were 18.7% (4.1 to 40.4%) more people reporting some benefit and 9.6% (0.2 to 39.4%) more who achieved remission within 24 hours of ketamine treatment. Among people receiving esketamine, 12.1% (2.5 to 24.4%) encountered some relief at 24 hours, and 10.3% (4.5 to 18.2%) had few or no symptoms. These effects did not persist beyond one week, although a higher dropout rate in some studies means that the benefit duration remains unclear. Ketamine may partially improve depressive symptoms among people with bipolar depression at 24 hours after treatment, but not three or more days. Potentially, ten more people with bipolar depression per 1000 may experience brief improvement, but not the cessation of symptoms, one day following treatment. These estimates are based on limited available research. In February 2022, the US Food and Drug Administration issued an alert to healthcare professionals concerning compounded nasal spray products containing ketamine intended to treat depression. Seizures Ketamine is used to treat status epilepticus that has not responded to standard treatments, but only case studies and no randomized controlled trials support its use. Asthma Ketamine has been suggested as a possible therapy for children with severe acute asthma who do not respond to standard treatment. This is due to its bronchodilator effects. A 2012 Cochrane review found there were minimal adverse effects reported, but the limited studies showed no significant benefit. Contraindications Some major contraindications for ketamine are: severe cardiovascular disease such as unstable angina or poorly controlled hypertension; increased intracranial or intraocular pressure (however, these remain controversial, with recent studies suggesting otherwise); poorly controlled psychosis; severe liver disease such as cirrhosis; pregnancy; active substance use disorder (for serial ketamine injections); and age less than 3 months. Adverse effects At anesthetic doses, 10–20% of adults and 1–2% of children experience adverse psychiatric reactions that occur during emergence from anesthesia, ranging from dreams and dysphoria to hallucinations and emergence delirium. Psychotomimetic effects are decreased by adding lamotrigine or nimodipine and can be counteracted by pretreatment with a benzodiazepine or propofol. Ketamine anesthesia commonly causes tonic-clonic movements (greater than 10% of people) and rarely hypertonia. Vomiting can be expected in 5–15% of the patients; pretreatment with propofol mitigates it as well. Laryngospasm occurs only rarely with ketamine. Ketamine generally stimulates breathing; however, in the first 2–3 minutes of a high-dose rapid intravenous injection, it may cause a transient respiratory depression. At lower sub-anesthetic doses, psychiatric side effects are prominent. Most people feel strange, spacey, woozy, or a sense of floating, or have visual distortions or numbness. Also very frequent (20–50%) are difficulty speaking, confusion, euphoria, drowsiness, and difficulty concentrating. The symptoms of psychosis such as going into a hole, disappearing, feeling as if melting, experiencing colors, and hallucinations are described by 6–10% of people.
Dizziness, blurred vision, dry mouth, hypertension, nausea, increased or decreased body temperature, or feeling flushed are the common (>10%) non-psychiatric side effects. All these adverse effects are most pronounced by the end of the injection, dramatically reduced 40 minutes afterward, and completely disappear within 4 hours after the injection. Urinary and liver toxicity Urinary toxicity occurs primarily in people who use large amounts of ketamine routinely, with 20–30% of frequent users having bladder complaints. It includes a range of disorders from cystitis to hydronephrosis to kidney failure. The typical symptoms of ketamine-induced cystitis are frequent urination, dysuria, and urinary urgency sometimes accompanied by pain during urination and blood in urine. The damage to the bladder wall has similarities to both interstitial and eosinophilic cystitis. The wall is thickened and the functional bladder capacity is as low as 10–150 mL. Studies indicate that ketamine-induced cystitis is caused by ketamine and its metabolites directly interacting with urothelium, resulting in damage of the epithelial cells of the bladder lining and increased permeability of the urothelial barrier which results in clinical symptoms. Management of ketamine-induced cystitis involves ketamine cessation as the first step. This is followed by NSAIDs and anticholinergics and, if the response is insufficient, by tramadol. The second line treatments are epithelium-protective agents such as oral pentosan polysulfate or intravesical (intra-bladder) instillation of hyaluronic acid. Intravesical botulinum toxin is also useful. Liver toxicity of ketamine involves higher doses and repeated administration. In a group of chronic high-dose ketamine users, the frequency of liver injury was reported to be about 10%. There are case reports of increased liver enzymes involving ketamine treatment of chronic pain. Chronic ketamine abuse has also been associated with biliary colic, cachexia, gastrointestinal diseases, hepatobiliary disorder, and acute kidney injury. Near-death experience Most people who were able to remember their dreams during ketamine anesthesia report near-death experiences (NDEs) when the broadest possible definition of an NDE is used. Ketamine can reproduce features that commonly have been associated with NDEs. A 2019 large-scale study found that written reports of ketamine experiences had a high degree of similarity to written reports of NDEs in comparison to other written reports of drug experiences. Dependence and tolerance Although the incidence of ketamine dependence is unknown, some people who regularly use ketamine develop ketamine dependence. Animal experiments also confirm the risk of misuse. Additionally, the rapid onset of effects following insufflation may increase potential use as a recreational drug. The short duration of effects promotes bingeing. Ketamine tolerance rapidly develops, even with repeated medical use, prompting the use of higher doses. Some daily users reported withdrawal symptoms, primarily anxiety, shaking, sweating, and palpitations, following the attempts to stop. Cognitive deficits as well as increased dissociation and delusion symptoms were observed in frequent recreational users of ketamine. Interactions Ketamine potentiates the sedative effects of propofol and midazolam. Naltrexone potentiates psychotomimetic effects of a low dose of ketamine, while lamotrigine and nimodipine decrease them. 
Clonidine reduces the increases in salivation, heart rate, and blood pressure that occur during ketamine anesthesia and decreases the incidence of nightmares. Clinical observations suggest that benzodiazepines may diminish the antidepressant effects of ketamine. It appears most conventional antidepressants can be safely combined with ketamine. Pharmacology Pharmacodynamics Mechanism of action Ketamine is a mixture of equal amounts of two enantiomers: esketamine and arketamine. Esketamine is a far more potent NMDA receptor pore blocker than arketamine. Pore blocking of the NMDA receptor is responsible for the anesthetic, analgesic, and psychotomimetic effects of ketamine. Blocking of the NMDA receptor results in analgesia by preventing central sensitization in dorsal horn neurons; in other words, ketamine's actions interfere with pain transmission in the spinal cord. The mechanism of action of ketamine in alleviating depression is not well understood and is an area of active investigation. Due to the hypothesis that NMDA receptor antagonism underlies the antidepressant effects of ketamine, esketamine was developed as an antidepressant. However, multiple other NMDA receptor antagonists, including memantine, lanicemine, rislenemdaz, rapastinel, and 4-chlorokynurenine, have thus far failed to demonstrate significant effectiveness for depression. Furthermore, animal research indicates that arketamine, the enantiomer with a weaker NMDA receptor antagonism, as well as (2R,6R)-hydroxynorketamine, the metabolite with negligible affinity for the NMDA receptor but potent alpha-7 nicotinic receptor antagonist activity, may have antidepressant action. This furthers the argument that NMDA receptor antagonism may not be primarily responsible for the antidepressant effects of ketamine. Acute inhibition of the lateral habenula, a part of the brain responsible for inhibiting the mesolimbic reward pathway and referred to as the "anti-reward center", is another possible mechanism for ketamine's antidepressant effects. Possible biochemical mechanisms of ketamine's antidepressant action include direct action on the NMDA receptor and downstream effects on regulators such as BDNF and mTOR. It is not clear whether ketamine alone is sufficient for antidepressant action or its metabolites are also important; the active metabolite of ketamine, hydroxynorketamine, which does not significantly interact with the NMDA receptor but nonetheless indirectly activates AMPA receptors, may also or alternatively be involved in the rapid-onset antidepressant effects of ketamine. In NMDA receptor antagonism, acute blockade of NMDA receptors in the brain results in an increase in the release of glutamate, which leads to an activation of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPA receptors), which in turn modulate a variety of downstream signaling pathways to influence neurotransmission in the limbic system and mediate antidepressant effects. Such downstream actions of the activation of AMPA receptors include upregulation of brain-derived neurotrophic factor (BDNF) and activation of its signaling receptor tropomyosin receptor kinase B (TrkB), activation of the mammalian target of rapamycin (mTOR) pathway, deactivation of glycogen synthase kinase 3 (GSK-3), and inhibition of the phosphorylation of the eukaryotic elongation factor 2 (eEF2) kinase. Molecular targets Ketamine principally acts as a pore blocker of the NMDA receptor, an ionotropic glutamate receptor.
The S-(+) and R-(–) stereoisomers of ketamine bind to the dizocilpine site of the NMDA receptor with different affinities, the former showing approximately 3- to 4-fold greater affinity for the receptor than the latter. As a result, the S isomer is a more potent anesthetic and analgesic than its R counterpart. Ketamine may interact with and inhibit the NMDAR via another allosteric site on the receptor. With a couple of exceptions, ketamine's actions at other receptors are far weaker than its antagonism of the NMDA receptor. Although ketamine is a very weak ligand of the monoamine transporters (Ki > 60 μM), it has been suggested that it may interact with allosteric sites on the monoamine transporters to produce monoamine reuptake inhibition. However, no functional inhibition (IC50) of the human monoamine transporters has been observed with ketamine or its metabolites at concentrations of up to 10,000 nM. Moreover, animal studies and at least three human case reports have found no interaction between ketamine and the monoamine oxidase inhibitor (MAOI) tranylcypromine, which is of importance as the combination of a monoamine reuptake inhibitor with an MAOI can produce severe toxicity such as serotonin syndrome or hypertensive crisis. Collectively, these findings shed doubt on the involvement of monoamine reuptake inhibition in the effects of ketamine in humans. Ketamine has been found to increase dopaminergic neurotransmission in the brain, but instead of being due to dopamine reuptake inhibition, this may be via indirect/downstream mechanisms, namely through antagonism of the NMDA receptor. Whether ketamine is an agonist of D2 receptors is controversial. Early research by the Philip Seeman group found ketamine to be a D2 partial agonist with a potency similar to that of its NMDA receptor antagonism. However, later studies by different researchers found ketamine's affinity for the regular human and rat D2 receptors to be >10 μM. Moreover, whereas D2 receptor agonists such as bromocriptine can rapidly and powerfully suppress prolactin secretion, subanesthetic doses of ketamine have not been found to do this in humans and in fact have been found to dose-dependently increase prolactin levels. Imaging studies have shown mixed results on inhibition of striatal [11C] raclopride binding by ketamine in humans, with some studies finding a significant decrease and others finding no such effect. However, changes in [11C] raclopride binding may be due to changes in dopamine concentrations induced by ketamine rather than binding of ketamine to the D2 receptor. Relationships between levels and effects Dissociation and psychotomimetic effects are reported in people treated with ketamine at plasma concentrations of approximately 100 to 250 ng/mL (0.42–1.1 μM). The typical intravenous antidepressant dosage of ketamine used to treat depression is low and results in maximal plasma concentrations of 70 to 200 ng/mL (0.29–0.84 μM). At similar plasma concentrations (70 to 160 ng/mL; 0.29–0.67 μM) it also shows analgesic effects. In 1–5 minutes after inducing anesthesia by rapid intravenous injection of ketamine, its plasma concentration reaches as high as 60–110 μM. When the anesthesia was maintained using nitrous oxide together with continuous injection of ketamine, the ketamine concentration stabilized at approximately 9.3 μM.
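The ng/mL and μM figures quoted in this section interconvert through ketamine's molar mass (about 237.7 g/mol for C13H16ClNO). A small sketch of the arithmetic (Python is used purely for illustration):

```python
KETAMINE_MOLAR_MASS = 237.73  # g/mol for C13H16ClNO

def ng_per_ml_to_um(conc_ng_ml):
    # ng/mL equals ug/L; dividing by g/mol gives umol/L (uM).
    return conc_ng_ml / KETAMINE_MOLAR_MASS

for c in (100, 250, 2600):
    print(c, "ng/mL ->", round(ng_per_ml_to_um(c), 2), "uM")
# 100 ng/mL -> 0.42 uM, 250 -> 1.05 uM, 2600 -> 10.94 uM (~11), matching the text
```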
In an experiment with purely ketamine anesthesia, people began to awaken once the plasma level of ketamine decreased to about 2,600 ng/mL (11 μM) and became oriented in place and time when the level was down to 1,000 ng/mL (4 μM). In a single-case study, the concentration of ketamine in cerebrospinal fluid, a proxy for the brain concentration, during anesthesia varied between 2.8 and 6.5 μM and was approximately 40% lower than in plasma. Pharmacokinetics Ketamine can be absorbed by many different routes due to both its water and lipid solubility. Intravenous ketamine bioavailability is 100% by definition, intramuscular injection bioavailability is slightly lower at 93%, and epidural bioavailability is 77%. Subcutaneous bioavailability has never been measured but is presumed to be high. Among the less invasive routes, the intranasal route has the highest bioavailability (45–50%) and the oral route the lowest (16–20%). Sublingual and rectal bioavailabilities are intermediate at approximately 25–50%. After absorption, ketamine is rapidly distributed into the brain and other tissues. The plasma protein binding of ketamine is variable at 23–47%. In the body, ketamine undergoes extensive metabolism. It is biotransformed by CYP3A4 and CYP2B6 isoenzymes into norketamine, which, in turn, is converted by CYP2A6 and CYP2B6 into hydroxynorketamine and dehydronorketamine. The low oral bioavailability of ketamine is due to the first-pass effect and, possibly, ketamine intestinal metabolism by CYP3A4. As a result, norketamine plasma levels are several-fold higher than ketamine following oral administration, and norketamine may play a role in the anesthetic and analgesic action of oral ketamine. This also explains why oral ketamine levels are independent of CYP2B6 activity, unlike subcutaneous ketamine levels. After an intravenous injection of tritium-labelled ketamine, 91% of the radioactivity is recovered from urine and 3% from feces. The medication is excreted mostly in the form of metabolites, with only 2% remaining unchanged. Chemistry Structure In chemical structure, ketamine is an arylcyclohexylamine derivative. Ketamine is a chiral compound. The more active enantiomer, esketamine (S-ketamine), is also available for medical use under the brand name Ketanest S, while the less active enantiomer, arketamine (R-ketamine), has never been marketed as an enantiopure drug for clinical use. While S-ketamine is more effective as an analgesic and anesthetic through NMDA receptor antagonism, R-ketamine produces longer-lasting effects as an antidepressant. The optical rotation of a given enantiomer of ketamine can vary between its salts and free base form. The free base form of (S)‑ketamine exhibits dextrorotation and is therefore labelled (S)‑(+)‑ketamine. However, its hydrochloride salt shows levorotation and is thus labelled (S)‑(−)‑ketamine hydrochloride. Detection Ketamine may be quantitated in blood or plasma to confirm a diagnosis of poisoning in hospitalized people, provide evidence in an impaired driving arrest, or assist in a medicolegal death investigation. Blood or plasma ketamine concentrations are usually in a range of 0.5–5.0 mg/L in persons receiving the drug therapeutically (during general anesthesia), 1–2 mg/L in those arrested for impaired driving, and 3–20 mg/L in victims of acute fatal overdosage.
Urine is often the preferred specimen for routine drug use monitoring purposes. The presence of norketamine, a pharmacologically active metabolite, is useful for confirmation of ketamine ingestion. History Ketamine was first synthesized in 1962 by Calvin L. Stevens, a professor of chemistry at Wayne State University and a Parke-Davis consultant. It was known by the developmental code name CI-581. After promising preclinical research in animals, ketamine was tested in human prisoners in 1964. These investigations demonstrated that ketamine's short duration of action and reduced behavioral toxicity made it a favorable choice over phencyclidine (PCP) as an anesthetic. The researchers wanted to call the state of ketamine anesthesia "dreaming", but Parke-Davis did not approve of the name. Hearing about this problem and the "disconnected" appearance of treated people, Mrs. Edward F. Domino, the wife of one of the pharmacologists working on ketamine, suggested "dissociative anesthesia". Following FDA approval in 1970, ketamine anesthesia was first given to American soldiers during the Vietnam War. The discovery of the antidepressive action of ketamine in 2000 has been described as the single most important advance in the treatment of depression in more than 50 years. It has sparked interest in NMDA receptor antagonists for depression, and has shifted the direction of antidepressant research and development. Society and culture Legal status While ketamine is marketed legally in many countries worldwide, it is also a controlled substance in many countries. In Australia, ketamine is listed as a Schedule 8 controlled drug under the Poisons Standard (October 2015). In Canada, ketamine has been classified as a Schedule I narcotic since 2005. In December 2013, the government of India, in response to rising recreational use and the use of ketamine as a date rape drug, added it to Schedule X of the Drug and Cosmetics Act, requiring a special license for sale and maintenance of records of all sales for two years. In the United Kingdom, it was made a Class B drug on 12 February 2014. The increase in recreational use prompted ketamine to be placed in Schedule III of the United States Controlled Substances Act in August 1999. Recreational use At sub-anesthetic doses, ketamine produces a dissociative state, characterised by a sense of detachment from one's physical body and the external world that is known as depersonalization and derealization. At sufficiently high doses, users may experience what is called the "K-hole", a state of dissociation with visual and auditory hallucinations. John C. Lilly, Marcia Moore, D. M. Turner, and David Woodard (among others) have written extensively about their own entheogenic and psychonautic experiences with ketamine. Turner died prematurely due to drowning during presumed unsupervised ketamine use. In 2006, the Russian edition of Adam Parfrey's Apocalypse Culture was banned and destroyed by authorities owing to its inclusion of an essay by Woodard about the entheogenic use of, and psychonautic experiences with, ketamine. Recreational ketamine use has been implicated in deaths globally, with more than 90 deaths in England and Wales in the years of 2005–2013. They include accidental poisonings, drownings, traffic accidents, and suicides. The majority of deaths were among young people.
Actor Matthew Perry was found dead in his hot tub in October 2023; several months later, his apparent drowning was revealed to have been caused by a ketamine overdose, and while other factors were present, the acute effects of ketamine were ruled the primary cause of death. Due to its ability to cause confusion and amnesia, ketamine has been used for date rape. Research Ketamine is under investigation for its potential in treating treatment-resistant depression. Ketamine is a known psychoplastogen, a compound capable of promoting rapid and sustained neuroplasticity. Ketamine has shown anthelmintic activity in rats, with an effect comparable to ivermectin and albendazole at extremely high concentrations. Veterinary uses In veterinary anesthesia, ketamine is often used for its anesthetic and analgesic effects on cats, dogs, rabbits, rats, and other small animals. It is frequently used for induction and anesthetic maintenance in horses. It is an important part of the "rodent cocktail", a mixture of drugs used for anesthetising rodents. Veterinarians often use ketamine with sedative drugs to produce balanced anesthesia and analgesia, and as a constant-rate infusion to help prevent pain wind-up. Ketamine is also used to manage pain in large animals. It is the primary intravenous anesthetic agent used in equine surgery, often in conjunction with detomidine and thiopental, or sometimes guaifenesin. Ketamine appears not to produce sedation or anesthesia in snails; instead, it appears to have an excitatory effect.
https://en.wikipedia.org/wiki/Kernite
Kernite
Kernite, also known as rasorite, is a hydrated sodium borate hydroxide mineral with the formula Na2B4O6(OH)2·3H2O. It is a colorless to white mineral crystallizing in the monoclinic crystal system, typically occurring as prismatic to acicular crystals or granular masses. It is relatively soft, with a Mohs hardness of 2.5 to 3, and light, with a specific gravity of 1.91. It exhibits perfect cleavage and a brittle fracture. Kernite is soluble in cold water and alters to tincalconite when it dehydrates. It undergoes a non-reversible alteration to metakernite when heated to above 100 °C. Occurrence and history The mineral occurs in sedimentary evaporite deposits in arid regions. Kernite was discovered in 1926 in eastern Kern County, in Southern California, and was later named after the county. The location was the US Borax Mine at Boron in the western Mojave Desert. The type material is stored at Harvard University, Cambridge, Massachusetts, and the National Museum of Natural History, Washington, D.C. The Kern County mine was the only known source of the mineral for a period of time. More recently, kernite has been mined in Argentina and Turkey. The largest documented single crystal of kernite measured 2.44 × 0.9 × 0.9 m and weighed about 3.8 tonnes. Uses Kernite is used to produce borax, which can be used in a variety of soaps.
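The quoted crystal weight is consistent with the mineral's specific gravity. A quick back-of-the-envelope check in Python, assuming the crystal was a full rectangular block (which overestimates slightly):

```python
# Sanity-check the ~3.8 tonne figure for the largest kernite crystal,
# assuming a rectangular block and the specific gravity quoted above.
length, width, height = 2.44, 0.9, 0.9   # metres
specific_gravity = 1.91                   # relative to water (1000 kg/m^3)

volume_m3 = length * width * height               # ~1.98 m^3
mass_kg = volume_m3 * specific_gravity * 1000.0   # ~3775 kg

print(f"{mass_kg / 1000:.1f} tonnes")  # ~3.8 tonnes, matching the reported weight
```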
https://en.wikipedia.org/wiki/Kerosene
Kerosene
Kerosene, or paraffin, is a combustible hydrocarbon liquid which is derived from petroleum. It is widely used as a fuel in aviation as well as households. Its name derives from the Greek κηρός (kērós), meaning "wax", and was registered as a trademark by Nova Scotia geologist and inventor Abraham Gesner in 1854 before evolving into a generic trademark. It is sometimes spelled kerosine in scientific and industrial usage. Kerosene is widely used to power jet engines of aircraft (jet fuel), as well as some rocket engines in a highly refined form called RP-1. It is also commonly used as a cooking and lighting fuel, and for fire toys such as poi. In parts of Asia, kerosene is sometimes used as fuel for small outboard motors or even motorcycles. World total kerosene consumption for all purposes is equivalent to about 5,500,000 barrels per day as of July 2023. The term "kerosene" is common in much of Argentina, Australia, Canada, India, New Zealand, Nigeria, and the United States, while the term "paraffin" (or a closely related variant) is used in Chile, East Africa, South Africa, Norway, and the United Kingdom. The term "lamp oil", or the equivalent in the local languages, is common in the majority of Asia and the Southeastern United States, although in Appalachia it is also commonly referred to as "coal oil". Confusingly, the name "paraffin" is also used to refer to a number of distinct petroleum byproducts other than kerosene. For instance, liquid paraffin (called mineral oil in the US) is a more viscous and highly refined product which is used as a laxative. Paraffin wax is a waxy solid extracted from petroleum. To prevent confusion between kerosene and the much more flammable and volatile gasoline (petrol), some jurisdictions regulate markings or colourings for containers used to store or dispense kerosene. For example, in the United States, Pennsylvania requires that portable containers used at retail service stations for kerosene be colored blue, as opposed to red (for gasoline) or yellow (for diesel). The World Health Organization considers kerosene to be a polluting fuel and recommends that "governments and practitioners immediately stop promoting its household use". Kerosene smoke contains high levels of harmful particulate matter, and household use of kerosene is associated with higher risks of cancer, respiratory infections, asthma, tuberculosis, cataracts, and adverse pregnancy outcomes. Properties and grades Kerosene is a low-viscosity, clear liquid formed from hydrocarbons obtained from the fractional distillation of petroleum between 150 and 275 °C, resulting in a mixture with a density of 0.78–0.81 g/cm3. It is miscible with petroleum solvents but immiscible with water. It is composed of hydrocarbon molecules that typically contain between 6 and 20 carbon atoms per molecule, predominantly 9 to 16 carbon atoms. Regardless of crude oil source or processing history, kerosene's major components are branched- and straight-chain alkanes (hydrocarbon chains) and naphthenes (cycloalkanes), which normally account for at least 70% by volume. Aromatic hydrocarbons such as alkylbenzenes (single ring) and alkylnaphthalenes (double ring) do not normally exceed 25% by volume of kerosene streams. Olefins are usually not present at more than 5% by volume. The heat of combustion of kerosene is similar to that of diesel fuel; its lower heating value is 43.1 MJ/kg (around 18,500 Btu/lb), and its higher heating value is 46.2 MJ/kg. 
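The unit conversion in the last sentence can be checked directly, since 1 Btu/lb ≈ 2.326 kJ/kg. A small Python sketch:

```python
# Check the quoted heating value against the Btu/lb figure.
LHV_MJ_PER_KG = 43.1
KJ_PER_KG_PER_BTU_PER_LB = 2.326  # conversion factor: 1 Btu/lb in kJ/kg

btu_per_lb = LHV_MJ_PER_KG * 1000 / KJ_PER_KG_PER_BTU_PER_LB
print(f"{btu_per_lb:.0f} Btu/lb")  # ~18,530 Btu/lb, i.e. "around 18,500 Btu/lb"
```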
The ASTM recognizes two grades of kerosene: 1-K (less than 0.04% sulfur by weight) and 2-K (up to 0.3% sulfur by weight). Grade 1-K kerosene burns cleaner, with fewer deposits, fewer toxins, and less frequent maintenance than 2-K, and is the preferred grade for indoor heaters and stoves. In the United Kingdom, two grades of heating oil are defined. BS 2869 Class C1 is the lightest grade, used for lanterns, camping stoves, and wick heaters, and mixed with petrol in some vintage combustion engines as a substitute for tractor vaporizing oil. BS 2869 Class C2 is a heavier distillate, which is used as domestic heating oil. Premium kerosene is usually sold in 5- or 20-litre containers from hardware, camping and garden stores, and is often dyed purple. Standard kerosene is usually dispensed in bulk by a tanker and is undyed. National and international standards define the properties of several grades of kerosene used for jet fuel. Flash point and freezing point properties are of particular interest for operation and safety; the standards also define additives for control of static electricity and other purposes. Melting, freeze and flash points Kerosene is liquid around room temperature. The flash point of kerosene is between 37 and 65 °C (99 and 149 °F), and its autoignition temperature is 220 °C (428 °F). The freezing point of kerosene depends on grade, with commercial aviation fuel standardized at −47 °C (−53 °F). Grade 1-K kerosene freezes around −40 °C (−40 °F, 233 K). History The process of distilling crude oil/petroleum into kerosene, as well as other hydrocarbon compounds, was first written about in the ninth century by the Persian scholar Rāzi (or Rhazes). In his Kitab al-Asrar (Book of Secrets), the physician and chemist Razi described two methods for the production of kerosene, termed naft abyad (نفط ابيض, "white naphtha"), using an apparatus called an alembic. One method used clay as an absorbent, while the other used chemicals such as ammonium chloride (sal ammoniac). The distillation process was repeated until most of the volatile hydrocarbon fractions had been removed and the final product was perfectly clear and safe to burn. Kerosene was also produced during the same period from oil shale and bitumen by heating the rock to extract the oil, which was then distilled. During the Chinese Ming dynasty, the Chinese made use of kerosene by extracting and purifying petroleum and then converting it into lamp fuel. The Chinese had made use of petroleum for lighting lamps and heating homes as early as 1500 BC. Illuminating oil from coal and oil shale Although "coal oil" was well known to industrial chemists at least as early as the 1700s as a byproduct of making coal gas and coal tar, it burned with a smoky flame that prevented its use for indoor illumination. In cities, much indoor illumination was provided by piped-in coal gas, but outside the cities, and for spot lighting within the cities, the lucrative market for fueling indoor lamps was supplied by whale oil, specifically that from sperm whales, which burned brighter and cleaner. Canadian geologist Abraham Pineo Gesner claimed that in 1846 he had given a public demonstration in Charlottetown, Prince Edward Island, of a new process he had discovered. He heated coal in a retort and distilled from it a clear, thin fluid that he showed made an excellent lamp fuel. He coined the name "kerosene" for his fuel, a contraction of keroselaion, meaning wax-oil. The cost of extracting kerosene from coal was high. 
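The Celsius/Fahrenheit pairs quoted above follow from the standard conversion F = C × 9/5 + 32; a quick check in Python:

```python
def c_to_f(c: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

for c in (37, 65, 220, -47, -40):
    print(f"{c} °C = {c_to_f(c):.0f} °F")
# -40 °C is the point where the two scales coincide (-40 °F),
# and -47 °C comes out near -53 °F.
```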
Gesner recalled from his extensive knowledge of New Brunswick's geology a naturally occurring asphaltum called albertite. He was blocked from using it by the New Brunswick coal conglomerate because they had coal extraction rights for the province, and he lost a court case when their experts claimed albertite was a form of coal. In 1854, Gesner moved to Newtown Creek, Long Island, New York. There, he secured backing from a group of businessmen. They formed the North American Gas Light Company, to which he assigned his patents. Despite clear priority of discovery, Gesner did not obtain his first kerosene patent until 1854, two years after James Young's United States patent. Gesner's method of purifying the distillation products appears to have been superior to Young's, resulting in a cleaner and better-smelling fuel. Manufacture of kerosene under the Gesner patents began in New York in 1854 and later in Boston—being distilled from bituminous coal and oil shale. Gesner registered the word "Kerosene" as a trademark in 1854, and for several years, only the North American Gas Light Company and the Downer Company (to which Gesner had granted the right) were allowed to call their lamp oil "Kerosene" in the United States. In 1848, Scottish chemist James Young experimented with oil discovered seeping in a coal mine as a source of lubricating oil and illuminating fuel. When the seep became exhausted, he experimented with the dry distillation of coal, especially the resinous "boghead coal" (torbanite). He extracted a number of useful liquids from it, one of which he named paraffine oil because at low temperatures, it congealed into a substance that resembled paraffin wax. Young took out a patent on his process and the resulting products in 1850, and built the first truly commercial oil-works in the world at Bathgate in 1851, using oil extracted from locally mined torbanite, shale, and bituminous coal. In 1852, he took out a United States patent for the same invention. These patents were subsequently upheld in both countries in a series of lawsuits, and other producers were obliged to pay him royalties. Kerosene from petroleum In 1851, Samuel Martin Kier began selling lamp oil to local miners, under the name "Carbon Oil". He distilled this from crude oil by a process of his own invention. He also invented a new lamp to burn his product. He has been dubbed the Grandfather of the American Oil Industry by historians. Kier's salt wells began to be fouled with petroleum in the 1840s. At first, Kier simply dumped the oil into the nearby Pennsylvania Main Line Canal as useless waste, but later he began experimenting with several distillates of the crude oil, along with a chemist from eastern Pennsylvania. Ignacy Łukasiewicz, a Polish pharmacist residing in Lviv, and his partner had been experimenting with different distillation techniques, trying to improve on Gesner's kerosene process, but using oil from a local petroleum seep. Many people knew of his work, but paid little attention to it. On the night of 31 July 1853, doctors at the local hospital needed to perform an emergency operation, virtually impossible by candlelight. They therefore sent a messenger for Łukasiewicz and his new lamps. The lamp burned so brightly and cleanly that the hospital officials ordered several lamps plus a large supply of fuel. Łukasiewicz realized the potential of his work and quit the pharmacy to find a business partner, and then traveled to Vienna to register his technique with the government. 
Łukasiewicz moved to the Gorlice region of Poland in 1854, and sank several wells across southern Poland over the following decade, setting up a refinery near Jasło in 1859. Edwin Drake's 1859 petroleum strike at the Drake Well in western Pennsylvania caused a great deal of public excitement and investment in drilling new wells, not only in Pennsylvania, but also in Canada, where petroleum had been discovered at Oil Springs, Ontario, in 1858, and in southern Poland, where Ignacy Łukasiewicz had been distilling lamp oil from petroleum seeps since 1852. The increased supply of petroleum allowed oil refiners to entirely side-step the oil-from-coal patents of both Young and Gesner, and produce illuminating oil from petroleum without paying royalties to anyone. As a result, the illuminating oil industry in the United States completely switched over to petroleum in the 1860s. The petroleum-based illuminating oil was widely sold as Kerosene, and the trade name soon lost its proprietary status and became the lower-case generic product "kerosene". Because Gesner's original Kerosene had also been known as "coal oil", generic kerosene from petroleum was commonly called "coal oil" in some parts of the United States well into the 20th century. In the United Kingdom, manufacturing oil from coal (or oil shale) continued into the early 20th century, although it was increasingly overshadowed by petroleum oils. As kerosene production increased, whaling declined. The American whaling fleet, which had been steadily growing for 50 years, reached its all-time peak of 199 ships in 1858. By 1860, just two years later, the fleet had dropped to 167 ships. The Civil War cut into American whaling temporarily, but only 105 whaling ships returned to sea in 1866, the first full year of peace, and that number dwindled until only 39 American ships set out to hunt whales in 1876. Kerosene, made first from coal and oil shale, then from petroleum, had largely taken over whaling's lucrative market in lamp oil. Electric lighting started displacing kerosene as an illuminant in the late 19th century, especially in urban areas. However, kerosene remained the predominant commercial end-use for petroleum refined in the United States until 1909, when it was exceeded by motor fuels. The rise of the gasoline-powered automobile in the early 20th century created a demand for the lighter hydrocarbon fractions, and refiners invented methods to increase their output of gasoline while decreasing their output of kerosene. In addition, some of the heavier hydrocarbons that previously went into kerosene were incorporated into diesel fuel. Kerosene kept some market share by being increasingly used in stoves and portable heaters. Kerosene from carbon dioxide and water In July 2022, a pilot project by ETH Zurich used solar power to produce kerosene from carbon dioxide and water. The product can be used in existing aviation applications, and "can also be blended with fossil-derived kerosene." Production Kerosene is produced by fractional distillation of crude oil in an oil refinery. It condenses at a temperature intermediate between diesel fuel, which is less volatile, and naphtha and gasoline, which are more volatile. Kerosene made up 8.5 percent by volume of petroleum refinery output in the United States in 2021, of which nearly all was kerosene-type jet fuel (8.4 percent). Applications As fuel Heating and lighting The fuel, also known as heating oil in the UK and Ireland, remains widely used in kerosene lamps and lanterns in the developing world. 
Although it replaced whale oil, the 1873 edition of Elements of Chemistry said, "The vapor of this substance [kerosene] mixed with air is as explosive as gunpowder." This statement may have been due to the common practice of adulterating kerosene with cheaper but more volatile hydrocarbon mixtures, such as naphtha. Kerosene was a significant fire risk; in 1880, nearly two of every five New York City fires were caused by defective kerosene lamps. In less-developed countries kerosene is an important source of energy for cooking and lighting. It is used as a cooking fuel in portable stoves for backpackers. As a heating fuel, it is often used in portable stoves, and is sold in some filling stations. It is sometimes used as a heat source during power failures. Kerosene is widely used in Japan and Chile as a home heating fuel for portable and installed kerosene heaters. In Chile and Japan, kerosene can be readily bought at any filling station or be delivered to homes in some cases. In the United Kingdom and Ireland, kerosene is often used as a heating fuel in areas not connected to a gas pipeline network. It is used less for cooking, with LPG being preferred because it is easier to light. Kerosene is often the fuel of choice for range cookers such as the Rayburn. Additives such as RangeKlene can be put into kerosene to ensure that it burns cleaner and produces less soot when used in range cookers. The Amish, who generally abstain from the use of electricity, rely on kerosene for lighting at night. More ubiquitous in the late 19th and early 20th centuries, kerosene space heaters were often built into kitchen ranges, and kept many farm and fishing families warm and dry through the winter. At one time, citrus growers used a smudge pot fueled by kerosene to create a pall of thick smoke over a grove in an effort to prevent freezing temperatures from damaging crops. "Salamanders" are kerosene space heaters used on construction sites to dry out building materials and to warm workers. Before the days of electrically lighted road barriers, highway construction zones were marked at night by kerosene-fired, pot-bellied torches. Most of these uses of kerosene created thick black smoke because of the low temperature of combustion. A notable exception, developed in the late 19th century, is the use of a gas mantle mounted above the wick on a kerosene lamp. Looking like a delicate woven bag above the woven cotton wick, the mantle is a residue of mineral materials (mostly thorium dioxide), heated to incandescence by the flame from the wick. The thorium and cerium oxide combination produces both a whiter light and a greater fraction of the energy in the form of visible light than a black body at the same temperature would. These types of lamps are still in use today in areas of the world without electricity, because they give a much better light than a simple wick-type lamp does. Recently, a multipurpose lantern that doubles as a cook stove has been introduced in India for areas with no electricity. Cooking In countries such as Nigeria, kerosene is the main fuel used for cooking, especially by the poor, and kerosene stoves have replaced traditional wood-based cooking appliances. As such, increases in the price of kerosene can have major political and environmental consequences. The Indian government subsidizes the fuel to keep the price very low, at around 15 U.S. cents per liter as of February 2007, as keeping the price low discourages the clearing of forests for cooking fuel. 
In Nigeria, an attempt by the government to remove a fuel subsidy that includes kerosene met with strong opposition. Kerosene is used as a fuel in portable stoves, especially in Primus stoves, invented in 1892. Portable kerosene stoves are reliable and durable in everyday use, and perform especially well under adverse conditions. In outdoor activities and mountaineering, a decisive advantage of pressurized kerosene stoves over gas cartridge stoves is their particularly high thermal output and their ability to operate at very low ambient temperatures in winter or at high altitude. Wick stoves such as Perfection's, and wickless stoves such as the Boss, continue to be used by the Amish, for off-grid living, and in natural disasters when no power is available. Engines In the early to mid-20th century, kerosene or tractor vaporizing oil was used as a cheap fuel for tractors and hit-and-miss engines. A petrol-paraffin engine would start on gasoline, then switch over to kerosene once the engine had warmed up. On some engines, a heat valve on the manifold would route the exhaust gases around the intake pipe, heating the kerosene to the point where it was vaporized and could be ignited by an electric spark. In Europe following the Second World War, automobiles were similarly modified to run on kerosene rather than gasoline, which would have had to be imported and was heavily taxed. Besides the additional piping and the switch between fuels, the head gasket was replaced by a much thicker one to diminish the compression ratio (making the engine less powerful and less efficient, but able to run on kerosene). The necessary equipment was sold under the trademark "Econom". During the fuel crisis of the 1970s, Saab-Valmet developed and series-produced the Saab 99 Petro, which ran on kerosene, turpentine or gasoline. The project, codenamed "Project Lapponia", was headed by Simo Vuorio, and towards the end of the 1970s a working prototype was produced based on the Saab 99 GL. The car was designed to run on two fuels. Gasoline was used for cold starts and when extra power was needed, but normally it ran on kerosene or turpentine. The idea was that the gasoline could be made from peat using the Fischer–Tropsch process. Between 1980 and 1984, 3,756 Saab 99 Petros and 2,385 Talbot Horizons (a version of the Chrysler Horizon that integrated many Saab components) were made. One reason to manufacture kerosene-fueled cars was that in Finland kerosene was less heavily taxed than gasoline. Kerosene is used to fuel smaller-horsepower outboard motors built by Yamaha, Suzuki, and Tohatsu. Primarily used on small fishing craft, these are dual-fuel engines that start on gasoline and then transition to kerosene once the engine reaches optimum operating temperature. Multi-fuel Evinrude and Mercury Racing engines also burn kerosene, as well as jet fuel. Today, kerosene is mainly used in fuel for jet engines in several grades. One highly refined form of the fuel is known as RP-1, and is often burned with liquid oxygen as rocket fuel. These fuel-grade kerosenes meet specifications for smoke points and freeze points. The combustion reaction can be approximated as follows, with the molecular formula C12H26 (dodecane): 2 C12H26(l) + 37 O2(g) → 24 CO2(g) + 26 H2O(g); ΔH° = −15,026 kJ (i.e., −7513 kJ per mole of dodecane) In the initial phase of liftoff, the Saturn V launch vehicle was powered by the reaction of liquid oxygen with RP-1. 
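That enthalpy figure is consistent with the heating value quoted earlier for kerosene. A short Python check, treating kerosene as pure dodecane (an approximation):

```python
# Energy released per kilogram, approximating kerosene as n-dodecane (C12H26).
MOLAR_MASS_C12H26 = 12 * 12.011 + 26 * 1.008   # ~170.3 g/mol
DELTA_H_COMBUSTION_KJ_PER_MOL = 7513            # heat released, water as vapour

mj_per_kg = DELTA_H_COMBUSTION_KJ_PER_MOL / MOLAR_MASS_C12H26  # kJ/g == MJ/kg
print(f"{mj_per_kg:.1f} MJ/kg")  # ~44.1 MJ/kg, close to kerosene's 43.1 MJ/kg LHV
```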
For the five 6.4-meganewton sea-level-thrust F-1 rocket engines of the Saturn V burning together, the reaction generated roughly 1.62 × 10¹¹ watts (162 gigawatts), or about 217 million horsepower (a quick unit check appears at the end of this section). Kerosene is sometimes used as an additive in diesel fuel to prevent gelling or waxing in cold temperatures. Ultra-low-sulfur kerosene is a custom-blended fuel used by the New York City Transit Authority to power its bus fleet. The transit agency started using this fuel in 2004, prior to the widespread adoption of ultra-low-sulfur diesel, which has since become the standard. In 2008, the suppliers of the custom fuel failed to tender for a renewal of the transit agency's contract, leading to a negotiated contract at a significantly increased cost. JP-8 (for "Jet Propellant 8"), a kerosene-based fuel, is used by the United States military as a replacement in diesel-fueled vehicles and for powering aircraft. JP-8 is also used by the U.S. military and its NATO allies as a fuel for heaters, stoves, tanks, and as a replacement for diesel fuel in the engines of nearly all tactical ground vehicles and electrical generators. Chemical processes Aliphatic kerosene is a type of kerosene with a low aromatic hydrocarbon content. The aromatic content of crude oil varies greatly from oil field to oil field; however, by solvent extraction it is possible to separate aromatic hydrocarbons from aliphatic (alkane) hydrocarbons. A common method is solvent extraction with methanol, DMSO or sulfolane. Aromatic kerosene is a grade of kerosene with a large concentration of aromatic hydrocarbons; an example is Exxon's Solvesso 150. Kerosene is commonly used in metal extraction as the diluent; for example, in copper extraction with LIX-84 it can be used in mixer-settlers. Kerosene is used as a diluent in the PUREX extraction process, but it is increasingly being supplanted by dodecane and other artificial hydrocarbons such as TPH (hydrogenated propylene trimer). Traditionally, the UK plants at Sellafield used aromatic kerosene to reduce the radiolysis of TBP, while the French nuclear industry tended to use diluents with very little aromatic content. The French nuclear reprocessing plants typically use TPH as their diluent. In recent times, Mark Foreman at Chalmers has shown that aliphatic kerosene can be replaced in solvent extraction with HVO100, a second-generation biodiesel made by Neste. In X-ray crystallography, kerosene can be used to store crystals. When a hydrated crystal is left in air, dehydration may occur slowly. This makes the color of the crystal become dull. Kerosene can keep air away from the crystal. It can also be used to prevent air from re-dissolving in a boiled liquid, and to store alkali metals such as potassium, sodium, and rubidium (with the exception of lithium, which is less dense than kerosene, causing it to float). In entertainment Kerosene is often used in the entertainment industry for fire performances, such as fire breathing, fire juggling or poi, and fire dancing. Because of its low flame temperature when burnt in free air, the risk is lower should the performer come in contact with the flame. Kerosene is generally not recommended as fuel for indoor fire dancing, as it produces an unpleasant (to some) odor, which becomes poisonous in sufficient concentration. Ethanol has sometimes been used instead, but the flames it produces look less impressive, and its lower flash point poses a high risk. 
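The Saturn V power figure quoted above converts cleanly between units (1 hp ≈ 745.7 W); a one-line verification in Python:

```python
# Verify that 1.62e11 W is about 217 million horsepower (1 hp ~= 745.7 W).
watts = 1.62e11
print(f"{watts / 745.7 / 1e6:.0f} million hp")  # ~217 million horsepower
```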
In industry As a petroleum product miscible with many industrial liquids, kerosene can be used both as a solvent, able to remove other petroleum products such as chain grease, and as a lubricant, with less risk of combustion than gasoline. It can also be used as a cooling agent in metal production and treatment (oxygen-free conditions). In the petroleum industry, kerosene is often used as a synthetic hydrocarbon for corrosion experiments to simulate crude oil in field conditions. Solvent Kerosene can be used as an adhesive remover on hard-to-remove mucilage or adhesive left by stickers on a glass surface (such as in the show windows of stores). It can be used to remove candle wax that has dripped onto a glass surface; it is recommended that the excess wax be scraped off prior to applying kerosene via a soaked cloth or tissue paper. It can be used to clean bicycle and motorcycle chains of old lubricant before relubrication. It can also be used to thin oil-based paint used in fine art. Some artists even use it to clean their brushes; however, it leaves the bristles greasy to the touch. Others It has seen use for mosquito control in water tanks in Australia, where a temporary thin floating layer on the water keeps mosquitoes out until the defective tank is repaired. Toxicity The World Health Organization considers kerosene to be a polluting fuel and recommends that "governments and practitioners immediately stop promoting its household use". Kerosene smoke contains high levels of harmful particulate matter, and household use of kerosene is associated with higher risks of cancer, respiratory infections, asthma, tuberculosis, cataracts, and adverse pregnancy outcomes. Ingestion of kerosene is harmful. Kerosene is sometimes recommended as a folk remedy for killing head lice, but health agencies warn against this, as it can cause burns and serious illness. A kerosene shampoo can even be fatal if fumes are inhaled. People can be exposed to kerosene in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The US National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 100 mg/m3 over an 8-hour workday.
https://en.wikipedia.org/wiki/Kohlrabi
Kohlrabi
Kohlrabi (scientific name Brassica oleracea Gongylodes Group), also called German turnip or turnip cabbage, is a biennial vegetable, a low, stout cultivar of wild cabbage. It is a cultivar of the same species as cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, Savoy cabbage, and gai lan. It can be eaten raw or cooked. Edible preparations are made with both the stem and the leaves. Despite its common names, it is not the same species as the turnip, although both are in the genus Brassica. Etymology The name comes from the German Kohl ("cabbage") plus Rübe ~ Rabi (Swiss German variant) ("turnip"), because the swollen stem resembles the latter. Its Group name Gongylodes (or, lowercase and italicized, gongylodes or gongyloides as a variety name) means "roundish" in Greek, from γογγύλος (gongylos, 'round'). History The first European written record is by the botanist Mattioli in 1554, who wrote that it had "come lately into Italy". By the end of the 16th century, kohlrabi had spread to northern Europe and was being grown in Austria, Germany, England, Italy, Spain, Tripoli and parts of the eastern Mediterranean. Description Kohlrabi has been created by artificial selection for lateral meristem growth (a swollen, nearly spherical shape); its origin in nature is the same as that of cabbage, broccoli, cauliflower, kale, collard greens, and Brussels sprouts: they are all bred from, and are the same species as, the wild cabbage plant (Brassica oleracea). The taste and texture of kohlrabi are similar to those of a broccoli stem or cabbage heart, but milder and sweeter, with a higher ratio of flesh to skin. The young stem in particular can be as crisp and juicy as an apple, although much less sweet. Except for the Gigante cultivar, spring-grown kohlrabi that grow much beyond their usual harvest size tend to be woody, as do oversized full-grown kohlrabi; the Gigante cultivar can achieve great size while remaining of good eating quality. The plant matures in 55–60 days after sowing and has good standing ability for up to 30 days after maturity. It grows well in hydroponic systems, producing a large edible bulk without clogging the nutrient troughs. There are several varieties commonly available, including 'White Vienna', 'Purple Vienna', 'Grand Duke', 'Gigante' (also known as "Superschmelz"), 'Purple Danube', and 'White Danube'. Colouration of the purple types is superficial: the edible parts are all pale yellow. The leafy greens can also be eaten. One commonly used variety grows without a swollen stem, having just leaves and a very thin stem, and is called Haakh. Haakh and Monj are popular Kashmiri dishes made using this vegetable. In the second year, the plant will bloom and develop seeds. Nutrition Raw kohlrabi is 91% water, 6% carbohydrates, 2% protein, and contains negligible fat. In a reference amount, raw kohlrabi supplies 27 calories, and is a rich source (20% or more of the Daily Value, DV) of vitamin C (65% DV) and a moderate source (10–19% DV) of copper and potassium, with no other micronutrients in significant amounts. Preparation and use Kohlrabi stems (the enlarged vegetal part) are surrounded by two distinct fibrous layers that do not soften appreciably when cooked. These layers are generally peeled away prior to cooking or serving raw, with the result that the stems often provide a smaller amount of food than one might assume from their intact appearance. 
Although all parts of kohlrabi are edible, the bulbous stem is most frequently used, typically raw in salads or slaws. It has a texture similar to that of a broccoli stem, but with a flavor that is sweeter and less vegetal. It is also crunchier and crisper than a raw broccoli stem. Kohlrabi leaves are edible and can be used similarly to collard greens and kale, but they take longer to cook. Kohlrabi is an important part of Kashmiri cuisine, where it is called Mŏnji. It is one of the most commonly cooked vegetables, along with collard greens (haakh). It is prepared with its leaves and served with a light soup and eaten with rice. In Cyprus, it is popularly sprinkled with salt and lemon and served as an appetizer. Kohlrabi is a common ingredient in Vietnamese cuisine, where it appears in dishes such as nem rán, stir-fries, and canh. Raw kohlrabi is usually sliced thinly for nộm or nước chấm. Some varieties are grown as feed for cattle.
https://en.wikipedia.org/wiki/Tettigoniidae
Tettigoniidae
Insects in the family Tettigoniidae are commonly called katydids (especially in North America) or bush crickets. They have previously been known as "long-horned grasshoppers". More than 8,000 species are known. Part of the suborder Ensifera, the Tettigoniidae are the only extant (living) family in the superfamily Tettigonioidea. Many species are nocturnal in habit, have strident mating calls, and may exhibit mimicry or camouflage, commonly with shapes and colours similar to leaves. Etymology The family name Tettigoniidae is derived from the genus Tettigonia, of which the great green bush cricket is the type species; it was first described by Carl Linnaeus in 1758. In Latin tettigonia means a kind of small cicada or leafhopper; it is from the Greek τεττιγόνιον tettigonion, the diminutive of the imitative (onomatopoeic) τέττιξ, tettix, cicada. All of these names, such as tettix with its repeated sounds, are onomatopoeic, imitating the stridulation of these insects. The common name katydid is also onomatopoeic and comes from the particularly loud, three-pulsed song, often rendered "ka-ty-did", of the nominate subspecies of the North American Pterophylla camellifolia, belonging to the subfamily Pseudophyllinae, which are known as "true katydids". Description and life cycle Description Tettigoniids vary greatly in size. The smaller species typically live in drier or more stressful habitats, which may lead to their small size. Small size is associated with greater agility, faster development, and lower nutritional needs. Tettigoniids are tree-living insects that are most commonly heard at night during summer and early fall. Tettigoniids may be distinguished from grasshoppers by the length of their filamentous antennae, which may exceed their own body length, while grasshoppers' antennae are always relatively short and thickened. Life cycle Eggs are typically oval and may be attached in rows to plants. Where the eggs are deposited relates to the way the ovipositor is formed. It consists of up to three pairs of appendages formed to transmit the egg, to make a place for it, and to place it properly. Tettigoniids have either sickle-shaped ovipositors, which typically lay eggs in dead or living plant matter, or uniform long ovipositors, which lay eggs in grass stems. When tettigoniids hatch, the nymphs often look like small, wingless versions of the adults, but in some species the nymphs look nothing like the adult, instead mimicking other species such as ants, spiders and assassin bugs, or flowers, to prevent predation. The nymphs remain in a mimic state only until they are large enough to escape predation. Once they complete their last molt (after about five successful molts), they are prepared to mate. Distribution Tettigoniids are found on every continent except Antarctica. The vast majority of katydid species live in the tropical regions of the world. For example, the Amazon basin tropical forests are home to over 2,000 species. However, katydids are also found in cool, dry temperate regions, with about 255 species in North America. Classification The Tettigoniidae are a large family and have been divided into a number of subfamilies. The Copiphorinae were previously considered a subfamily, but are now placed as the tribe Copiphorini in the subfamily Conocephalinae. The genus Acridoxena is now placed in the tribe Acridoxenini of the Mecopodinae (previously its own subfamily, Acridoxeninae). 
Extinct taxa The Orthoptera Species File lists:
†Pseudotettigoniinae (North America, Europe)
†Rammeinae (Europe)
†Tettigoidinae (Australia)
Genera incertae sedis:
†Locustites Heer, 1849: 3 spp.
†Locustophanes Handlirsch, 1939: †L. rhipidophorus Handlirsch, 1939
†Prophasgonura Piton, 1940: †P. lineatocollis Piton, 1940
†Protempusa Piton, 1940: †P. incerta Piton, 1940
†Prototettix Giebel, 1856: †P. lithanthraca (Goldenberg, 1854)
The genus †Triassophyllum is extinct and may be placed here or in the Archaeorthoptera. Ecology The diet of most tettigoniids includes leaves, flowers, bark, and seeds, but many species are exclusively predatory, feeding on other insects, snails, or even small vertebrates such as snakes and lizards. Some are also considered pests by commercial crop growers and are sprayed to limit growth, but population densities are usually low, so a large economic impact is rare. Tettigoniids are serious insect pests of karuka (Pandanus julianettii). The species Segestes gracilis and Segestidea montana eat the leaves and can sometimes kill trees. Growers will stuff leaves and grass between the leaves of the crown to keep insects out. By observing the head and mouthparts, where differences can be seen in relation to function, it is possible to determine what type of food a tettigoniid consumes. Large tettigoniids can inflict a painful bite or pinch if handled, but seldom break the skin. Some species of bush crickets are consumed by people, such as the nsenene (Ruspolia differens) in Uganda and neighbouring areas. Communication The males of tettigoniids have sound-producing organs located on the hind angles of their front wings. In some species, females are also capable of stridulation, and chirp in response to the shrill of the males. The males use this sound for courtship, which occurs late in the summer. The sound is produced by rubbing two parts of their bodies together, called stridulation. In many cases this is done with the wings, but not exclusively. One body part bears a file or comb with ridges; the other has the plectrum, which runs over the ridges to produce a vibration. For tettigoniids, the fore wings are used to sing. Tettigoniids produce continuous songs known as trills. The size of the insect, the spacing of the ridges, and the width of the scraper all influence what sound is made. Many species stridulate at a tempo which is governed by ambient temperature, so that the number of chirps in a defined period of time can produce a fairly accurate temperature reading. For American katydids, the formula is generally given as the number of chirps in 15 seconds plus 37 to give the temperature in degrees Fahrenheit, as illustrated in the sketch below. Predation Some tettigoniids have spines on different parts of their bodies that work in different ways. The Listroscelinae have limb spines on the ventral surfaces of their bodies. These work to confine their prey in a temporary cage above their mouthparts. The spines are articulated and comparatively flexible, but relatively blunt; because of this, they are used to cage, not penetrate, the prey's body. Spines on the tibiae and the femora are usually sharper and non-articulated, designed more for penetration, or to help in defense. This usually works with their diurnal roosting posture to maximize defense and prevent predators from going for their head. Defense mechanisms When tettigoniids go to rest during the day, they enter a diurnal roosting posture to maximize their cryptic qualities. 
This position fools predators into thinking the katydid is either dead or just a leaf on the plant. Various tettigoniids have bright coloration and black apical spots on the inner surfaces of the tegmina, and brightly colored hind wings. By flicking their wings open when disturbed, they use the coloration to fool predators into thinking the spots are eyes. This, in combination with their coloration mimicking leaves, allows them to blend in with their surroundings, but also makes predators unsure which side is the front and which side is the back. Reproductive behavior The males provide a nuptial gift for the females in the form of a spermatophylax, a body attached to the male's spermatophore and consumed by the female, to distract her from eating the male's spermatophore and thereby increase his paternity. Polygamy The Tettigoniidae have polygamous relationships. The first male to mate is guaranteed an extremely high confidence of paternity when a second male couples at the termination of female sexual refractoriness. The nutrients that the offspring ultimately receive will increase their fitness. The second male to mate with the female at the termination of her refractory period is usually cuckolded. Competition The polygamous relationships of the Tettigoniidae lead to high levels of male-male competition. Male competition is caused by the decreased availability of males able to supply nutritious spermatophylaxes to the females. Females produce more eggs on a high-quality diet; thus, the female looks for healthier males with a more nutritious spermatophylax. Females use the sound created by the male to judge his fitness. The louder and more fluent the trill, the higher the fitness of the male. Stress response In species which produce larger food gifts, the female often seeks out the males to copulate. This, however, is a cost to females, as they risk predation while searching for males. Also, a cost-benefit tradeoff exists in the size of the spermatophore which male tettigoniids produce. When males possess a large spermatophore, they benefit by being more highly selected for by females, but they are only able to mate one to two times during their lifetimes. Conversely, male Tettigoniidae with smaller spermatophores have the benefit of being able to mate two to three times per night, but have lower chances of being selected by females. Even in times of nutritional stress, male Tettigoniidae continue to invest nutrients in their spermatophores. In some species the cost of creating the spermatophore is low, but even where it is not, it is still not beneficial to reduce the quality of the spermatophore, as this would lead to lower reproductive selection and success. In some Tettigoniidae species, the spermatophylax that the female receives as a food gift from the male during copulation increases the reproductive output of the mating attempt; in other cases, however, the female receives few, if any, benefits. The reproductive behavior of bush crickets has been studied in great depth. Studies found that the tuberous bush cricket (Platycleis affinis) has the largest testes in proportion to body mass of any animal recorded. They account for 14% of the insect's body mass and are thought to enable a fast remating rate.
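As noted in the communication section above, the chirp rate of American katydids tracks ambient temperature. A minimal Python sketch of that rule of thumb (the function name is mine; the formula is the one quoted above):

```python
def katydid_temperature_f(chirps_in_15_seconds: int) -> int:
    """Estimate air temperature in degrees Fahrenheit from katydid chirps,
    using the rule of thumb quoted above: chirps in 15 seconds plus 37."""
    return chirps_in_15_seconds + 37

# Example: 35 chirps counted in 15 seconds suggests roughly 72 °F.
print(katydid_temperature_f(35))
```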
https://en.wikipedia.org/wiki/Heteropteryx
Heteropteryx
Heteropteryx is a monotypic genus of stick insects containing Heteropteryx dilatata as the only described species, and gives its name to the family Heteropterygidae. The species is known as the jungle nymph, Malaysian stick insect, Malaysian wood nymph, Malayan jungle nymph, or Malayan wood nymph, and because of its size it is commonly kept in zoological institutions and the private terrariums of insect lovers. It originates from the Malay Archipelago and is nocturnal. Description The females are much larger and wider than the males and, weighing 30 to 65 g, are among the heaviest phasmids and extant insects. In addition to the typically lime-green females, there are also yellow and, even more rarely, red-brown females. Their two pairs of wings are both shortened. At rest, the green forewings, formed as tegmina, cover the somewhat shorter, strikingly pink-colored membranous hind wings (alae); however, the females are incapable of flight. The head, body and legs are thorny. The flattened body bears a number of spines, in particular along the body edges, including the abdomen and the legs, and especially along the hind legs. At the end of the abdomen there is a secondary ovipositor for laying the eggs in the ground. It surrounds the actual ovipositor and is formed ventrally from the eighth sternite, here named the subgenital plate or operculum, and dorsally from the eleventh tergum, which is referred to here as the supraanal plate or epiproct. The much smaller males are slender and considerably shorter than the females. They have spines all over their body and legs like the females, and are usually a mottled brown colour. The hind wings cover the entire abdomen. The narrow, but only slightly shorter, forewings are formed as tegmina and have a light front edge, which gives the animals, with closed wings, the typical lateral stripes over the mesonotum and half of the abdomen. The fully developed hind wings are reddish and marked with a brown net pattern. Distribution area and lifestyle Heteropteryx dilatata comes from the Malay Archipelago. There it has been found on the Malay Peninsula, in Thailand and Singapore, as well as on Sumatra and in Sarawak on Borneo. It is unclear whether the animals documented on Madagascar belong to an indigenous population. Both sexes are capable of defensive stridulation when in danger, jerking the colored hind wings open again and again. In addition, the animals threaten, similarly to representatives of the closely related genus Haaniella, with raised abdomen and with the hind legs stretched out and splayed toward the attacker. Upon contact, the legs snap together as a scissor-like weapon. When touched, the tibiae of the hind legs are quickly struck against the femora, which creates an effective defense through their spines, in particular those on the tibiae. Reproduction It is a common misconception that Heteropteryx dilatata holds the record for the largest egg laid by an insect. The heaviest insect eggs known, at 250 to 300 mg, are laid by the closely related Haaniella echinata. The females of Asceles malaccae lay eggs that are longer, but much narrower. The eggs of Heteropteryx dilatata weigh about 70 mg. The females lay them individually in the ground with their ovipositor. After about 7 to 14 months the nymphs hatch. 
These are able to change their lighter daytime color to a darker one at night, and up to the fourth larval stage they form sleeping communities in which the insects clump or chain onto one another on the food plants. The nymphs are generally beige in color when they hatch. While the color of the males becomes a little darker with each moult, the females change from beige to green after the third moult. About a year after hatching, the moult to imago takes place; this is the fifth moult in males and the sixth in females. The imagines then live for about 6 to 24 months. As with many other phasmid species, gynandromorphs also occasionally occur in Heteropteryx dilatata. These often develop as half-sided (bilateral) gynandromorphs. Taxonomy Heteropteryx dilatata is the only described representative of the genus Heteropteryx, established by George Robert Gray in 1835, and was described in 1798 by John Parkinson as Phasma dilatatum. The holotype is a female deposited in the collection of the Macleay Museum of the University of Sydney. All other species described in the genus Heteropteryx, such as Heteropteryx dehaanii, Heteropteryx echinata, Heteropteryx erringtoniae, Heteropteryx grayii, Heteropteryx muelleri, Heteropteryx rosenbergii and Heteropteryx scabra, have been assigned to Haaniella, or have turned out to be synonyms of Heteropteryx dilatata, like Heteropteryx castelnaudi, Heteropteryx hopei and Heteropteryx rollandi. The generic name Leocrates, introduced by Carl Stål in 1875 for Leocrates graciosa and used for Leocrates glaber and Leocrates mecheli by Josef Redtenbacher in 1906, is synonymous with Heteropteryx. The two species described by Redtenbacher have been valid species of the genus Haaniella again since 2016. In their investigations based on genetic analysis to clarify the phylogeny of the Heteropterygidae, Sarah Bank et al. showed that the representatives of the Heteropterygini form a common clade, but that the genus Heteropteryx is phylogenetically nested among several lineages of species currently listed in Haaniella. It could also be shown that, in addition to the Malayan Heteropteryx dilatata, there is another species from the Thai Phang Nga Province, more precisely from the Khao Lak–Lam Ru National Park. In captivity The terrarium stock of the species was established in 1974 by C.C. Chua from the Cameron Highlands in Pahang, near the border with Perak, and the species was imported several times from Perak to Europe by various traders in the 1980s. Other stocks have been introduced from this region in the recent past and are kept under the name of their origin. A stock from the Tapah Hills (also in Perak, near Pahang) and, since 2015, another collected by Yoko Matsumura in Kuala Boh, Pahang, have also been bred. A breeding stock imported from Phuket in 1998, in which the females have black coxae, has been lost. It is considered likely that this stock corresponds to the undescribed species identified by Bank et al. in 2021, as the two sites are only about one hundred kilometers apart and the specimens examined by molecular genetics also have black coxae. The size of the terrarium must be adapted to the number of animals; for a pair, the terrarium should have a base of at least 40 × 40 cm. The feed branches with leaves can be placed in a narrow-necked vase so that they stay fresh longer. Among other leaves, those of bramble species such as blackberry and raspberry are eaten, but also oak, hazel and ivy. Warm temperatures and high humidity are required for keeping them; the latter is achieved by spraying the forage plants with water. 
In order to enable the females to lay their eggs, the ground should be covered with a layer of substrate several centimeters thick. Alternatively, an egg-laying vessel filled with substrate can be offered. Heteropteryx dilatata can live up to two years of age in captivity.
https://en.wikipedia.org/wiki/Contact%20mechanics
Contact mechanics
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as normal stress) and frictional stresses acting tangentially between the surfaces (shear stress). Normal contact mechanics or frictionless contact mechanics focuses on normal stresses caused by applied normal forces and by the adhesion present on surfaces in close contact, even if they are clean and dry. Frictional contact mechanics emphasizes the effect of friction forces. Contact mechanics is part of mechanical engineering. The physical and mathematical formulation of the subject is built upon the mechanics of materials and continuum mechanics and focuses on computations involving elastic, viscoelastic, and plastic bodies in static or dynamic contact. Contact mechanics provides necessary information for the safe and energy-efficient design of technical systems and for the study of tribology, contact stiffness, electrical contact resistance and indentation hardness. Principles of contact mechanics are applied in fields such as locomotive wheel-rail contact, coupling devices, braking systems, tires, bearings, combustion engines, mechanical linkages, gasket seals, metalworking, metal forming, ultrasonic welding, electrical contacts, and many others. Current challenges faced in the field include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1881 with the publication of the paper "Über die Berührung fester elastischer Körper" ("On the contact of elastic solids") by Heinrich Hertz. Hertz was attempting to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the localized stresses that develop as two curved surfaces come into contact and deform slightly under the imposed loads. This deformation is dependent on the modulus of elasticity of the materials in contact. Hertzian theory gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the moduli of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load-bearing capability and fatigue life in bearings, gears, and any other bodies where two surfaces are in contact. History Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics. For example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Kenneth L. Johnson, Kevin Kendall, and Alan D. Roberts found a similar solution for the case of adhesive contact. This theory was rejected by Boris Derjaguin and co-workers, who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the Derjaguin–Muller–Toporov (DMT) model (after Derjaguin, M. V. Muller and Yu. P. 
Toporov), and the Johnson et al. model came to be known as the Johnson–Kendall–Roberts (JKR) model for adhesive elastic contact. This rejection proved to be instrumental in the development of the David Tabor and later Daniel Maugis parameters that quantify which contact model (of the JKR and DMT models) represents adhesive contact better for specific materials. Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Frank Philip Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact. Through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces. The contributions of J. F. Archard (1957) must also be mentioned in discussion of pioneering works in this field. Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force. Further important insights along these lines were provided by John A. Greenwood and J. B. P. Williamson (1966), A. W. Bush (1975), and Bo N. J. Persson (2002). The main findings of these works were that the true contact surface in rough materials is generally proportional to the normal force, while the parameters of individual micro-contacts (pressure and size of the micro-contact) are only weakly dependent upon the load. Classical solutions for non-adhesive elastic contact The theory of contact between elastic bodies can be used to find contact areas and indentation depths for simple geometries. Some commonly used solutions are listed below. The theory used to compute these solutions is discussed later in the article. Solutions for a multitude of other technically relevant shapes, e.g. the truncated cone, the worn sphere, rough profiles, hollow cylinders, etc., can be found in the literature. Contact between a sphere and a half-space An elastic sphere of radius $R$ indents an elastic half-space to a total deformation (indentation depth) $d$, causing a contact area of radius $a = \sqrt{Rd}$. The applied force $F$ is related to the displacement $d$ by $F = \tfrac{4}{3} E^* R^{1/2} d^{3/2}$, where $\frac{1}{E^*} = \frac{1-\nu_1^2}{E_1} + \frac{1-\nu_2^2}{E_2}$, and $E_1$, $E_2$ are the elastic moduli and $\nu_1$, $\nu_2$ the Poisson's ratios associated with each body. The distribution of normal pressure in the contact area as a function of the distance $r$ from the center of the circle is $p(r) = p_0\left(1 - \frac{r^2}{a^2}\right)^{1/2}$, where $p_0$ is the maximum contact pressure, given by $p_0 = \frac{3F}{2\pi a^2} = \frac{1}{\pi}\left(\frac{6 F E^{*2}}{R^2}\right)^{1/3}$. The radius of the contact circle is related to the applied load $F$ by the equation $a^3 = \frac{3FR}{4E^*}$. The total deformation is related to the maximum contact pressure by $d = \frac{a^2}{R} = \left(\frac{9F^2}{16 E^{*2} R}\right)^{1/3}$. The maximum shear stress occurs in the interior, at a depth of roughly half the contact radius below the surface. Contact between two spheres For contact between two spheres of radii $R_1$ and $R_2$, the area of contact is a circle of radius $a$. The equations are the same as for a sphere in contact with a half-space, except that the effective radius $R$ is defined as $\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2}$. Contact between two crossed cylinders of equal radius This is equivalent to contact between a sphere of radius $R$ and a plane. 
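The sphere-on-half-space relations above chain together naturally. The following Python sketch evaluates them for an illustrative case, a steel sphere pressed into a flat steel body; the material numbers and load are assumptions for the example, not part of the original text:

```python
import math

# Hertzian sphere-on-half-space contact, using the formulas above.
E1 = E2 = 210e9      # Young's moduli, Pa (assumed, typical steel)
nu1 = nu2 = 0.30     # Poisson's ratios (assumed)
R = 0.010            # sphere radius, m (10 mm)
F = 100.0            # normal load, N

# Effective contact modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

a = (3 * F * R / (4 * E_star)) ** (1 / 3)    # contact radius, a^3 = 3FR/(4E*)
d = a**2 / R                                  # indentation depth, d = a^2/R
p0 = 3 * F / (2 * math.pi * a**2)             # maximum contact pressure

print(f"a  = {a * 1e6:.0f} µm")    # ~187 µm
print(f"d  = {d * 1e6:.2f} µm")    # ~3.5 µm
print(f"p0 = {p0 / 1e9:.2f} GPa")  # ~1.37 GPa
```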
Contact between a rigid cylinder with flat end and an elastic half-space

If a rigid cylinder of radius $R$ is pressed into an elastic half-space, it creates a pressure distribution described by

$$p(r) = p_0\left(1 - \frac{r^2}{R^2}\right)^{-1/2},$$

where $R$ is the radius of the cylinder and

$$p_0 = \frac{1}{\pi} E^* \frac{d}{R}.$$

The relationship between the indentation depth and the normal force is given by

$$F = 2 R E^* d.$$

Contact between a rigid conical indenter and an elastic half-space

In the case of indentation of an elastic half-space using a rigid conical indenter, the depth of the contact region $\epsilon$ and contact radius $a$ are related by

$$\epsilon = a\tan\theta,$$

with $\theta$ defined as the angle between the plane and the side surface of the cone. The total indentation depth is given by $d = \frac{\pi}{2}\epsilon$. The total force is

$$F = \frac{2E^*}{\pi}\,\frac{d^2}{\tan\theta}.$$

The pressure distribution is given by

$$p(r) = \frac{E^*\tan\theta}{2}\,\cosh^{-1}\left(\frac{a}{r}\right).$$

The stress has a logarithmic singularity at the tip of the cone.

Contact between two cylinders with parallel axes

In contact between two cylinders with parallel axes, the force is linearly proportional to the length of the cylinders $L$ and to the indentation depth $d$:

$$F \approx \frac{\pi}{4} E^* L d.$$

The radii of curvature are entirely absent from this relationship. The contact half-width $b$ is described through the usual relationship $b = \sqrt{Rd}$, with the effective radius $R$ defined as in contact between two spheres. The maximum pressure is equal to

$$p_0 = \left(\frac{E^* F}{\pi L R}\right)^{1/2}.$$

Bearing contact

The contact in the case of bearings is often a contact between a convex surface (male cylinder or sphere) and a concave surface (female cylinder or sphere: bore or hemispherical cup).

Method of dimensionality reduction

Some contact problems can be solved with the method of dimensionality reduction (MDR). In this method, the initial three-dimensional system is replaced with a contact of a body with a linear elastic or viscoelastic foundation (see fig.). The properties of one-dimensional systems coincide exactly with those of the original three-dimensional system, if the form of the bodies is modified and the elements of the foundation are defined according to the rules of the MDR. MDR is based on the solution to axisymmetric contact problems first obtained by Ludwig Föppl (1941) and Gerhard Schubert (1942). However, for exact analytical results, it is required that the contact problem is axisymmetric and the contacts are compact.

Hertzian theory of non-adhesive elastic contact

The classical theory of contact focused primarily on non-adhesive contact, where no tension force is allowed to occur within the contact area, i.e., contacting bodies can be separated without adhesion forces. Several analytical and numerical approaches have been used to solve contact problems that satisfy the no-adhesion condition. Complex forces and moments are transmitted between the bodies where they touch, so problems in contact mechanics can become quite sophisticated. In addition, the contact stresses are usually a nonlinear function of the deformation. To simplify the solution procedure, a frame of reference is usually defined in which the objects (possibly in motion relative to one another) are static. They interact through surface tractions (or pressures/stresses) at their interface. As an example, consider two objects which meet at some surface in the $(x,y)$-plane, with the $z$-axis assumed normal to the surface. One of the bodies will experience a normally-directed pressure distribution $p = p(x,y)$ and in-plane surface traction distributions $q_x = q_x(x,y)$ and $q_y = q_y(x,y)$ over the region $S$. In terms of a Newtonian force balance, the forces

$$F_x = \int_S q_x\, dA, \qquad F_y = \int_S q_y\, dA, \qquad F_z = \int_S p\, dA$$

must be equal and opposite to the forces established in the other body. The moments corresponding to these forces

$$M_x = \int_S y\,p\, dA, \qquad M_y = -\int_S x\,p\, dA, \qquad M_z = \int_S \left(x\,q_y - y\,q_x\right) dA$$

are also required to cancel between bodies so that they are kinematically immobile.
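Before turning to the assumptions of Hertzian theory, a small sketch compares two of the indenter solutions listed above: the flat-ended punch (linear in depth) and the cone (quadratic in depth). The contact modulus and geometry values are illustrative assumptions only:

```python
import math

E_STAR = 115e9   # effective contact modulus E*, Pa (illustrative)

def punch_force(d, R):
    """Rigid flat-ended cylindrical punch of radius R: F = 2 R E* d."""
    return 2 * R * E_STAR * d

def cone_force(d, theta_deg):
    """Rigid cone, theta = angle between the plane and the cone's side
    surface: F = (2 E* / pi) d^2 / tan(theta)."""
    return (2 * E_STAR / math.pi) * d**2 / math.tan(math.radians(theta_deg))

for d in (0.5e-6, 1e-6, 2e-6):   # indentation depths, metres
    print(f"d = {d*1e6:.1f} um: punch {punch_force(d, R=1e-3):8.1f} N, "
          f"cone {cone_force(d, theta_deg=20):8.4f} N")
```

The contrast in load scaling (F proportional to d for the punch, d squared for the cone) is exactly the difference exploited in instrumented indentation testing.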
Assumptions in Hertzian theory

The following assumptions are made in determining the solutions of Hertzian contact problems:

The strains are small and within the elastic limit.
The surfaces are continuous and non-conforming (implying that the area of contact is much smaller than the characteristic dimensions of the contacting bodies).
Each body can be considered an elastic half-space.
The surfaces are frictionless.

Additional complications arise when some or all these assumptions are violated, and such contact problems are usually called non-Hertzian.

Analytical solution techniques

Analytical solution methods for non-adhesive contact problems can be classified into two types based on the geometry of the area of contact. A conforming contact is one in which the two bodies touch at multiple points before any deformation takes place (i.e., they just "fit together"). A non-conforming contact is one in which the shapes of the bodies are dissimilar enough that, under zero load, they only touch at a point (or possibly along a line). In the non-conforming case, the contact area is small compared to the sizes of the objects and the stresses are highly concentrated in this area. Such a contact is called concentrated, otherwise it is called diversified. A common approach in linear elasticity is to superpose a number of solutions each of which corresponds to a point load acting over the area of contact. For example, in the case of loading of a half-plane, the Flamant solution is often used as a starting point and then generalized to various shapes of the area of contact. The force and moment balances between the two bodies in contact act as additional constraints to the solution.

Point contact on a (2D) half-plane

A starting point for solving contact problems is to understand the effect of a "point-load" applied to an isotropic, homogeneous, and linear elastic half-plane, shown in the figure to the right. The problem may be either plane stress or plane strain. This is a boundary value problem of linear elasticity subject to the traction boundary conditions

$$\sigma_{xz}(x, 0) = 0, \qquad \sigma_{zz}(x, 0) = -P\,\delta(x),$$

where $\delta(x)$ is the Dirac delta function. The boundary conditions state that there are no shear stresses on the surface and a singular normal force $P$ is applied at (0, 0). Applying these conditions to the governing equations of elasticity produces the Flamant result

$$\sigma_{rr} = -\frac{2P}{\pi}\frac{\cos\theta}{r}, \qquad \sigma_{\theta\theta} = 0, \qquad \sigma_{r\theta} = 0$$

for a point $(r, \theta)$ in the half-plane. The circle shown in the figure indicates a surface on which the maximum shear stress is constant. From this stress field, the strain components and thus the displacements of all material points may be determined.

Line contact on a (2D) half-plane

Normal loading over a region

Suppose, rather than a point load $P$, a distributed load $p(x)$ is applied to the surface instead, over the range $a < x < b$. The principle of linear superposition can be applied to determine the resulting stress field as the solution to the integral equations:

$$\sigma_{xx} = -\frac{2z}{\pi}\int_a^b \frac{p(s)\,(x-s)^2\, ds}{\left[(x-s)^2 + z^2\right]^2}, \qquad \sigma_{zz} = -\frac{2z^3}{\pi}\int_a^b \frac{p(s)\, ds}{\left[(x-s)^2 + z^2\right]^2}, \qquad \sigma_{xz} = -\frac{2z^2}{\pi}\int_a^b \frac{p(s)\,(x-s)\, ds}{\left[(x-s)^2 + z^2\right]^2}.$$

Shear loading over a region

The same principle applies for loading on the surface in the plane of the surface. These kinds of tractions would tend to arise as a result of friction. The solution is similar to the above (for both singular loads $Q$ and distributed loads $q(x)$) but altered slightly:

$$\sigma_{xx} = -\frac{2}{\pi}\int_a^b \frac{q(s)\,(x-s)^3\, ds}{\left[(x-s)^2 + z^2\right]^2}, \qquad \sigma_{zz} = -\frac{2z^2}{\pi}\int_a^b \frac{q(s)\,(x-s)\, ds}{\left[(x-s)^2 + z^2\right]^2}, \qquad \sigma_{xz} = -\frac{2z}{\pi}\int_a^b \frac{q(s)\,(x-s)^2\, ds}{\left[(x-s)^2 + z^2\right]^2}.$$

These results may themselves be superposed onto those given above for normal loading to deal with more complex loads.
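The superposition idea is easy to test numerically. The sketch below integrates the point-load kernel for the subsurface normal stress over a uniform pressure strip; directly beneath a wide strip at shallow depth, the result should approach the applied pressure itself. The pressure and geometry values are made-up, non-dimensional examples:

```python
import math
from scipy.integrate import quad

def sigma_zz_strip(x, z, p0=1.0, a=-1.0, b=1.0):
    """Normal stress sigma_zz at a point (x, z), z > 0 below the surface,
    for a uniform pressure p0 on the strip a < s < b of an elastic
    half-plane, by superposing the point-load kernel:
    d(sigma_zz) = -(2 z^3 / pi) p(s) ds / ((x - s)^2 + z^2)^2."""
    kernel = lambda s: z**3 / ((x - s)**2 + z**2)**2
    val, _ = quad(kernel, a, b)
    return -2.0 * p0 / math.pi * val

# Under the centre of the strip: near the surface sigma_zz -> -p0,
# and the magnitude decays with depth (Saint-Venant behaviour).
for z in (0.05, 1.0, 5.0):
    print(f"z = {z:5.2f}: sigma_zz = {sigma_zz_strip(0.0, z):+.4f}")
```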
Point contact on a (3D) half-space

Analogously to the Flamant solution for the 2D half-plane, fundamental solutions are known for the linearly elastic 3D half-space as well. These were found by Boussinesq for a concentrated normal load and by Cerruti for a tangential load. See the section on this in Linear elasticity.

Numerical solution techniques

Distinctions between conforming and non-conforming contact do not have to be made when numerical solution schemes are employed to solve contact problems. These methods do not rely on further assumptions within the solution process, since they are based solely on the general formulation of the underlying equations. Besides the standard equations describing the deformation and motion of bodies, two additional inequalities can be formulated. The first simply restricts the motion and deformation of the bodies by the assumption that no penetration can occur. Hence the gap $g$ between two bodies can only be positive or zero,

$$g \geq 0,$$

where $g = 0$ denotes contact. The second assumption in contact mechanics is related to the fact that no tension force is allowed to occur within the contact area (contacting bodies can be lifted up without adhesion forces). This leads to an inequality which the stresses have to obey at the contact interface. It is formulated for the normal stress $\sigma_n$:

$$\sigma_n \leq 0.$$

At locations where there is contact between the surfaces the gap is zero, i.e. $g = 0$, and there the normal stress is different from zero, indeed $\sigma_n < 0$. At locations where the surfaces are not in contact the normal stress is identical to zero, $\sigma_n = 0$, while the gap is positive, $g > 0$. This type of complementarity formulation can be expressed in the so-called Kuhn–Tucker form, viz.

$$g \geq 0, \qquad \sigma_n \leq 0, \qquad \sigma_n\, g = 0.$$

These conditions are valid in a general way. The mathematical formulation of the gap depends upon the kinematics of the underlying theory of the solid (e.g., linear or nonlinear solid in two- or three dimensions, beam or shell model). By restating the normal stress in terms of the contact pressure $p = -\sigma_n$, the Kuhn–Tucker problem can be restated in standard complementarity form, i.e.

$$g \geq 0, \qquad p \geq 0, \qquad p\, g = 0.$$

In the linear elastic case the gap can be formulated as

$$g = g_0 + h + u,$$

where $g_0$ is the rigid body separation, $h$ is the geometry/topography of the contact (cylinder and roughness) and $u$ is the elastic deformation/deflection. If the contacting bodies are approximated as linear elastic half-spaces, the Boussinesq–Cerruti integral equation solution can be applied to express the deformation $u$ as a function of the contact pressure $p$, i.e.

$$u(x) = \int K(x - s)\, p(s)\, ds,$$

where

$$K(x - s) = -\frac{2}{\pi E^*}\ln|x - s|$$

for line loading of an elastic half-space and

$$K(x - s,\, y - t) = \frac{1}{\pi E^*}\frac{1}{\sqrt{(x - s)^2 + (y - t)^2}}$$

for point loading of an elastic half-space. After discretization the linear elastic contact mechanics problem can be stated in standard Linear Complementarity Problem (LCP) form,

$$\mathbf{g} = \mathbf{g}_0 + \mathbf{h} + \mathbf{C}\,\mathbf{p}, \qquad \mathbf{g} \geq 0, \quad \mathbf{p} \geq 0, \quad \mathbf{p}^{\mathsf{T}}\mathbf{g} = 0,$$

where $\mathbf{C}$ is a matrix whose elements are so-called influence coefficients relating the contact pressure and the deformation. The strict LCP formulation of the contact mechanics problem presented above allows for direct application of well-established numerical solution techniques such as Lemke's pivoting algorithm. The Lemke algorithm has the advantage that it finds the numerically exact solution within a finite number of iterations. The MATLAB implementation presented by Almqvist et al. is one example that can be employed to solve the problem numerically. In addition, an example code for an LCP solution of a 2D linear elastic contact mechanics problem has also been made public at the MATLAB file exchange by Almqvist et al.
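The complementarity system above can be attacked with any standard LCP method. Lemke's algorithm is the one named in the text; as a simpler illustrative stand-in, the sketch below uses a projected Gauss–Seidel iteration on a small made-up influence matrix and gap vector (arbitrary example data, not from the article):

```python
import numpy as np

def projected_gauss_seidel(C, g0, iters=2000):
    """Solve the LCP: g = g0 + C p,  g >= 0,  p >= 0,  p . g = 0,
    by projected Gauss-Seidel (assumes C is symmetric positive definite)."""
    n = len(g0)
    p = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # gap at node i excluding its own pressure contribution
            r = g0[i] + C[i] @ p - C[i, i] * p[i]
            p[i] = max(0.0, -r / C[i, i])   # project onto p >= 0
    return p, g0 + C @ p

# Tiny example: 5 nodes, diagonally dominant influence matrix,
# parabolic initial gap (negative entries indicate overlap to resolve)
C = 0.1 * np.ones((5, 5)) + np.eye(5)
g0 = np.array([0.5, -0.2, -0.5, -0.2, 0.5])
p, g = projected_gauss_seidel(C, g0)
print("pressures:", np.round(p, 3))   # positive only where overlap existed
print("gaps     :", np.round(g, 3))   # zero wherever pressure is positive
```

At convergence each node satisfies the Kuhn–Tucker conditions: either the gap is zero and the pressure positive, or the pressure is zero and the gap positive.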
Contact between rough surfaces

When two bodies with rough surfaces are pressed against each other, the true contact area formed between the two bodies, $A$, is much smaller than the apparent or nominal contact area $A_0$. The mechanics of contacting rough surfaces are discussed in terms of normal contact mechanics and static frictional interactions. Natural and engineering surfaces typically exhibit roughness features, known as asperities, across a broad range of length scales down to the molecular level, with surface structures exhibiting self-affinity, also known as surface fractality. It is recognized that the self-affine structure of surfaces is the origin of the linear scaling of true contact area with applied pressure. Assuming a model of shearing welded contacts in tribological interactions, this ubiquitously observed linearity between contact area and pressure can also be considered the origin of the linearity of the relationship between static friction and applied normal force.

In contact between a "random rough" surface and an elastic half-space, the true contact area is related to the normal force $F$ by

$$A = \frac{\kappa}{E^* h'}\,F,$$

with $h'$ equal to the root mean square (also known as the quadratic mean) of the surface slope and $\kappa \approx 2$. The median pressure in the true contact surface can be reasonably estimated as half of the effective elastic modulus $E^*$ multiplied by the root mean square of the surface slope $h'$.

An overview of the GW model

Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). They considered the contact between a smooth rigid plane and a nominally flat deformable rough surface covered with round tip asperities of the same radius R. Their theory assumes that the deformation of each asperity is independent of that of its neighbours and is described by the Hertz model. The heights of asperities have a random distribution: the probability that an asperity height lies between $z$ and $z + dz$ is $\phi(z)\,dz$. The authors calculated the number of contact spots $n$, the total contact area $A_r$ and the total load $P$ in the general case. They gave those formulas in two forms: in the basic form and using standardized variables. If one assumes that $N$ asperities cover a rough surface, then the expected number of contacts is

$$n = N\int_d^\infty \phi(z)\, dz.$$

The expected total area of contact can be calculated from the formula

$$A_r = \pi N R\int_d^\infty (z - d)\,\phi(z)\, dz,$$

and the expected total force is given by

$$P = \frac{4}{3}\, N E^* R^{1/2}\int_d^\infty (z - d)^{3/2}\,\phi(z)\, dz,$$

where: R, radius of curvature of the microasperity; z, height of the microasperity measured from the profile line; d, the separation between the smooth plane and the profile line; $E^*$, composite Young's modulus of elasticity, built from the moduli of elasticity $E_1$, $E_2$ and Poisson's ratios $\nu_1$, $\nu_2$ of the surfaces.

Greenwood and Williamson introduced the standardized separation $h = d/\sigma$ and a standardized height distribution $\phi^*(s)$ whose standard deviation is equal to one. Below are presented the formulas in the standardized form:

$$n = \eta A_n F_0(h), \qquad A_r = \pi \eta A_n R \sigma F_1(h), \qquad P = \frac{4}{3}\,\eta A_n E^* R^{1/2} \sigma^{3/2} F_{3/2}(h),$$

where: $d$ is the separation, $A_n$ is the nominal contact area, $\eta$ is the surface density of asperities, $E^*$ is the effective Young's modulus, and

$$F_n(h) = \int_h^\infty (s - h)^n\, \phi^*(s)\, ds.$$

The surface parameters can be determined when the $F_n(h)$ terms are calculated for the given surfaces using the convolution of the surface roughness. Several studies have followed suggested curve fits for $F_n(h)$ assuming a Gaussian surface height distribution, with curve fits presented by Arcoumanis et al. and Jedynak among others. It has been repeatedly observed that engineering surfaces do not demonstrate Gaussian surface height distributions, e.g. Peklenik. Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining the $F_n(h)$ terms for any measured surfaces.
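The standardized GW integrals F_n(h) have no simple closed form for a Gaussian height distribution, but they are straightforward to evaluate numerically. A minimal sketch, assuming a unit-variance Gaussian and purely illustrative surface parameters (asperity density, tip radius, roughness, modulus are made-up values):

```python
import math
import numpy as np
from scipy.integrate import quad

def F(n, h):
    """GW integral F_n(h) = int_h^inf (s - h)^n phi(s) ds for a
    standard (unit-variance) Gaussian height distribution phi."""
    phi = lambda s: math.exp(-0.5 * s * s) / math.sqrt(2 * math.pi)
    val, _ = quad(lambda s: (s - h)**n * phi(s), h, np.inf)
    return val

# Illustrative inputs, SI units: asperity density, tip radius,
# rms roughness, effective modulus, nominal area
eta, R, sigma, E_star, A_n = 1e10, 5e-6, 1e-7, 100e9, 1e-4
h = 1.0   # separation of one standard deviation

n_contacts = eta * A_n * F(0, h)
area = math.pi * eta * A_n * R * sigma * F(1, h)
load = (4 / 3) * eta * A_n * E_star * math.sqrt(R) * sigma**1.5 * F(1.5, h)
print(f"contacts = {n_contacts:.3g}, area = {area:.3g} m^2, load = {load:.3g} N")
```

Sweeping h in such a script reproduces the GW hallmark: the ratio of load to true contact area stays nearly constant as separation changes, consistent with the near-linear area-load relation discussed above.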
Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces, and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.

Recently, exact expressions and accurate approximants to $F_1(h)$ and $F_{3/2}(h)$ were published by Jedynak. The approximants are given by rational formulas fitted to the integrals $F_n(h)$. They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis. The fitted coefficients, and the corresponding maximum relative errors, are tabulated in the original paper, which also contains exact expressions for $F_1(h)$ and $F_{3/2}(h)$ in terms of erfc(z), the complementary error function, and $K_\nu(z)$, the modified Bessel function of the second kind.

For the situation where the asperities on the two surfaces have a Gaussian height distribution and the peaks can be assumed to be spherical, the average contact pressure is sufficient to cause yield when

$$p_{\text{av}} = 1.1\,\sigma_y \approx 0.39\,\sigma_0,$$

where $\sigma_y$ is the uniaxial yield stress and $\sigma_0$ is the indentation hardness. Greenwood and Williamson defined a dimensionless parameter $\Psi$ called the plasticity index that could be used to determine whether contact would be elastic or plastic. The Greenwood-Williamson model requires knowledge of two statistically dependent quantities: the standard deviation of the surface roughness and the curvature of the asperity peaks. An alternative definition of the plasticity index has been given by Mikic. Yield occurs when the pressure is greater than the uniaxial yield stress. Since the yield stress is proportional to the indentation hardness $\sigma_0$, Mikic defined the plasticity index for elastic-plastic contact in terms of the ratio $E^* h'/\sigma_0$. In this definition $\Psi$ represents the micro-roughness in a state of complete plasticity, and only one statistical quantity, the rms slope, is needed, which can be calculated from surface measurements. Below a critical value of $\Psi$, the surface behaves elastically during contact. In both the Greenwood-Williamson and Mikic models the load is assumed to be proportional to the deformed area. Hence, whether the system behaves plastically or elastically is independent of the applied normal force.

An overview of the GT model

The model proposed by John A. Greenwood and John H. Tripp (GT) extended the GW model to contact between two rough surfaces. The GT model is widely used in the field of elastohydrodynamic analysis. The most frequently cited equations given by the GT model are those for the asperity contact area,

$$A_a = \pi^2 (\eta\beta\sigma)^2 A_n F_2(\lambda),$$

and the load carried by asperities,

$$P_a = \frac{16\sqrt{2}}{15}\,\pi\,(\eta\beta\sigma)^2 \sqrt{\frac{\sigma}{\beta}}\, E^* A_n F_{5/2}(\lambda),$$

where: $\eta\beta\sigma$, roughness parameter; $A_n$, nominal contact area; $\lambda$, the Stribeck oil film parameter, first defined by Stribeck as $\lambda = h/\sigma$; $E^*$, effective elastic modulus; $F_2(\lambda)$, $F_{5/2}(\lambda)$, statistical functions introduced to match the assumed Gaussian distribution of asperities. As noted above, Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining these statistical-function terms for any measured surfaces, and demonstrated that Gaussian fits are not accurate for modelling engineered surfaces, whose topography, load carrying capacity and friction change significantly during early running. Exact solutions for $F_2(\lambda)$ and $F_{5/2}(\lambda)$ were first presented by Jedynak. They are expressed as follows.
The exact solutions are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis; they are written in terms of erfc(z), the complementary error function, and $K_\nu(z)$, the modified Bessel function of the second kind. In the same paper one can find a comprehensive review of existing approximants to $F_{5/2}(\lambda)$, together with new proposals that are the most accurate approximants to $F_2(\lambda)$ and $F_{5/2}(\lambda)$ reported in the literature. These are given by rational formulas fitted closely to the integrals, again for the Gaussian distribution of asperities; the fitted coefficients and the corresponding maximum relative errors are tabulated in the paper.

Adhesive contact between elastic bodies

When two solid surfaces are brought into close proximity, they experience attractive van der Waals forces. R. S. Bradley's van der Waals model provides a means of calculating the tensile force between two rigid spheres with perfectly smooth surfaces. The Hertzian model of contact does not consider adhesion possible. However, in the late 1960s, several contradictions were observed when the Hertz theory was compared with experiments involving contact between rubber and glass spheres. It was observed that, though Hertz theory applied at large loads, at low loads the area of contact was larger than that predicted by Hertz theory, the area of contact had a non-zero value even when the load was removed, and there was even strong adhesion if the contacting surfaces were clean and dry. This indicated that adhesive forces were at work. The Johnson-Kendall-Roberts (JKR) model and the Derjaguin-Muller-Toporov (DMT) model were the first to incorporate adhesion into Hertzian contact.

Bradley model of rigid contact

It is commonly assumed that the surface force between two atomic planes at a distance $z$ from each other can be derived from the Lennard-Jones potential. With this assumption the force per unit area is a function $F(z)$ that is positive in compression, where $2\gamma$ is the total surface energy of both surfaces per unit area and $z_0$ is the equilibrium separation of the two atomic planes. The Bradley model applied the Lennard-Jones potential to find the force of adhesion between two rigid spheres. The total force between the spheres is found to be a function of their separation and of the reduced radius $R$, where $R_1$, $R_2$ are the radii of the two spheres and $1/R = 1/R_1 + 1/R_2$. The two spheres separate completely when the pull-off force is achieved at $z = z_0$, at which point

$$F_{\text{pull-off}} = -4\pi\gamma R.$$

JKR model of elastic contact

To incorporate the effect of adhesion in Hertzian contact, Johnson, Kendall, and Roberts formulated the JKR theory of adhesive contact using a balance between the stored elastic energy and the loss in surface energy. The JKR model considers the effect of contact pressure and adhesion only inside the area of contact. The general solution for the pressure distribution in the contact area in the JKR model is

$$p(r) = p_0\left(1 - \frac{r^2}{a^2}\right)^{1/2} + p_0'\left(1 - \frac{r^2}{a^2}\right)^{-1/2}.$$

Note that in the original Hertz theory, the term containing $p_0'$ was neglected on the ground that tension could not be sustained in the contact zone.
For contact between two spheres, $a$ is the radius of the area of contact, $F$ is the applied force, $2\gamma$ is the total surface energy of both surfaces per unit contact area, and $R_i$, $E_i$, $\nu_i$ are the radii, Young's moduli, and Poisson's ratios of the two spheres, with

$$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2}, \qquad \frac{1}{E^*} = \frac{1-\nu_1^2}{E_1} + \frac{1-\nu_2^2}{E_2}.$$

The approach distance between the two spheres is given by

$$d = \frac{a^2}{R} - \left(\frac{4\pi\gamma a}{E^*}\right)^{1/2}.$$

The Hertz equation for the area of contact between two spheres, modified to take into account the surface energy, has the form

$$a^3 = \frac{3R}{4E^*}\left(F + 6\gamma\pi R + \sqrt{12\gamma\pi R F + (6\gamma\pi R)^2}\right).$$

When the surface energy is zero, $\gamma = 0$, the Hertz equation for contact between two spheres is recovered. When the applied load is zero, the contact radius is

$$a^3 = \frac{9\gamma\pi R^2}{E^*}.$$

The tensile load at which the spheres are separated (i.e., $a = 0$) is predicted to be

$$F_c = -3\gamma\pi R.$$

This force is also called the pull-off force. Note that this force is independent of the moduli of the two spheres. However, there is another possible solution for the value of $a$ at this load. This is the critical contact area $a_c$, given by

$$a_c^3 = \frac{9\gamma\pi R^2}{4E^*}.$$

If we define the work of adhesion as

$$\Delta\gamma = \gamma_1 + \gamma_2 - \gamma_{12},$$

where $\gamma_1$, $\gamma_2$ are the adhesive energies of the two surfaces and $\gamma_{12}$ is an interaction term, we can write the JKR contact radius as

$$a^3 = \frac{3R}{4E^*}\left(F + 3\Delta\gamma\pi R + \sqrt{6\Delta\gamma\pi R F + (3\Delta\gamma\pi R)^2}\right).$$

The tensile load at separation is

$$F_c = -\frac{3}{2}\,\Delta\gamma\pi R,$$

and the critical contact radius is given by

$$a_c^3 = \frac{9\Delta\gamma\pi R^2}{8E^*}.$$

The critical depth of penetration follows by substituting $a_c$ into the expression for the approach distance.

DMT model of elastic contact

The Derjaguin–Muller–Toporov (DMT) model is an alternative model for adhesive contact which assumes that the contact profile remains the same as in Hertzian contact but with additional attractive interactions outside the area of contact. The radius of contact between two spheres from DMT theory is

$$a^3 = \frac{3R}{4E^*}\left(F + 4\gamma\pi R\right),$$

and the pull-off force is

$$F_c = -4\gamma\pi R.$$

When the pull-off force is achieved the contact area becomes zero and there is no singularity in the contact stresses at the edge of the contact area. In terms of the work of adhesion $\Delta\gamma$,

$$a^3 = \frac{3R}{4E^*}\left(F + 2\Delta\gamma\pi R\right), \qquad F_c = -2\Delta\gamma\pi R.$$

Tabor parameter

In 1977, Tabor showed that the apparent contradiction between the JKR and DMT theories could be resolved by noting that the two theories are the extreme limits of a single theory parametrized by the Tabor parameter

$$\mu = \left(\frac{R\,(\Delta\gamma)^2}{E^{*2} z_0^3}\right)^{1/3},$$

where $z_0$ is the equilibrium separation between the two surfaces in contact. The JKR theory applies to large, compliant spheres for which $\mu$ is large. The DMT theory applies for small, stiff spheres with small values of $\mu$. Subsequently, Derjaguin and his collaborators, by applying Bradley's surface force law to an elastic half-space, confirmed that as the Tabor parameter increases, the pull-off force falls from the Bradley value $2\pi R\,\Delta\gamma$ to the JKR value $\frac{3}{2}\pi R\,\Delta\gamma$. More detailed calculations were later done by Greenwood, revealing the S-shaped load/approach curve which explains the jumping-on effect. A more efficient method of doing the calculations and additional results were given by Feng.
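A short sketch comparing the two adhesion models numerically, written in the work-of-adhesion form above; the sphere radius, modulus, and adhesion energy are illustrative assumptions, not values from the text:

```python
import math

def jkr_radius(F, R, E_star, w):
    """JKR contact radius for two spheres: effective radius R, contact
    modulus E_star, work of adhesion w, applied load F."""
    term = 3 * math.pi * w * R
    a3 = (3 * R / (4 * E_star)) * (F + term
                                   + math.sqrt(6 * math.pi * w * R * F + term**2))
    return a3 ** (1 / 3)

R, E_star, w = 1e-3, 1e9, 0.05   # 1 mm effective radius, soft solid, w = 50 mJ/m^2
print(f"JKR a at zero load : {jkr_radius(0.0, R, E_star, w):.3e} m")
print(f"JKR pull-off force : {-1.5 * math.pi * w * R:.3e} N  (moduli-independent)")
print(f"DMT pull-off force : {-2.0 * math.pi * w * R:.3e} N")
```

The two pull-off predictions bracket the Maugis–Dugdale transition discussed next, with the Tabor parameter deciding which limit a given contact approaches.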
Maugis–Dugdale model of elastic contact

Further improvement to the Tabor idea was provided by Maugis, who represented the surface force in terms of a Dugdale cohesive zone approximation such that the work of adhesion is given by

$$\Delta\gamma = \sigma_0\, h_0,$$

where $\sigma_0$ is the maximum force (per unit area) predicted by the Lennard-Jones potential and $h_0$ is the maximum separation obtained by matching the areas under the Dugdale and Lennard-Jones curves (see adjacent figure). This means that the attractive traction is constant and equal to $\sigma_0$ wherever the surface separation is less than $h_0$; there is no further penetration in compression. Perfect contact occurs in an area of radius $a$, and adhesive forces of magnitude $\sigma_0$ extend to an area of radius $c > a$. In the region $a < r < c$, the two surfaces are separated by a distance $h(r)$ with $h(a) = 0$ and $h(c) = h_0$. The ratio $m$ is defined as

$$m = \frac{c}{a}.$$

In the Maugis–Dugdale theory, the surface traction distribution is divided into two parts: one due to the Hertz contact pressure and the other from the Dugdale adhesive stress. Hertz contact is assumed in the region $r < a$. The Hertz pressure contributes its surface traction, the associated Hertz contact force, and a penetration due to elastic compression, together with a vertical displacement and a separation between the two surfaces at $r = c$. The adhesive Dugdale stress contributes its own surface traction distribution, a total adhesive force, a compression due to Dugdale adhesion, and a gap at $r = c$. The net traction on the contact area is the sum of the two contributions, and the net contact force is the sum of the Hertz and adhesive forces. When $c$ shrinks to $a$, the adhesive traction drops to zero. Non-dimensionalized values of the contact radii, penetration, and force are introduced at this stage. In addition, Maugis proposed a parameter $\lambda$ which is equivalent to the Tabor parameter $\mu$; in its definition the step cohesive stress $\sigma_0$ is set equal to the theoretical stress of the Lennard-Jones potential,

$$\sigma_{th} = \frac{16\,\Delta\gamma}{9\sqrt{3}\, z_0}.$$

Zheng and Yu suggested another value for the step cohesive stress to match the Lennard-Jones potential. With these definitions the net contact force and the elastic compression may be expressed in non-dimensional form, and the equation for the cohesive gap between the two bodies can be solved to obtain the gap for various values of $\lambda$ and of the applied load. For large values of $\lambda$ the JKR model is obtained; for small values of $\lambda$ the DMT model is retrieved.

Carpick–Ogletree–Salmeron (COS) model

The Maugis–Dugdale model can only be solved iteratively if the value of $\lambda$ is not known a priori. The Carpick–Ogletree–Salmeron (COS) approximate solution (after Robert Carpick, D. Frank Ogletree and Miquel Salmeron) simplifies the process by using the following relation to determine the contact radius $a$:

$$a = a_0(\beta)\left(\frac{\beta + \sqrt{1 - F/F_c(\beta)}}{1 + \beta}\right)^{2/3},$$

where $a_0$ is the contact radius at zero load, $F_c$ is the pull-off force, and $\beta$ is a transition parameter that is related to $\lambda$. The case $\beta = 1$ corresponds exactly to JKR theory, while $\beta = 0$ corresponds to DMT theory. For intermediate cases the COS model corresponds closely to the Maugis–Dugdale solution for $0.1 < \lambda < 5$.

Influence of contact shape

Even in the presence of perfectly smooth surfaces, geometry can come into play in the form of the macroscopic shape of the contacting region. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, detachment occurs not instantaneously; instead, detachment fronts start at pointed corners and travel inwards, until the final configuration is reached, which for macroscopically isotropic shapes is almost circular. The main parameter determining the adhesive strength of flat contacts turns out to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film.
Physical sciences
Solid mechanics
Physics
4301763
https://en.wikipedia.org/wiki/Grain%20growth
Grain growth
In materials science, grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. The behavior of grain growth is analogous to coarsening behavior, which implies that grain growth and coarsening may be dominated by the same physical mechanism.

Importance of grain growth

The practical performance of polycrystalline materials is strongly affected by their internal microstructure, which is largely determined by grain growth behavior. For example, most materials exhibit the Hall–Petch effect at room temperature and so display a higher yield stress when the grain size is reduced (assuming abnormal grain growth has not taken place). At high temperatures the opposite is true, since the open, disordered nature of grain boundaries means that vacancies can diffuse more rapidly down boundaries, leading to more rapid Coble creep. Since boundaries are regions of high energy, they make excellent sites for the nucleation of precipitates and other second phases, e.g. Mg–Si–Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question, this may have positive or negative effects.

Rules of grain growth

Grain growth has long been studied primarily by the examination of sectioned, polished and etched samples under the optical microscope. Although such methods enabled the collection of a great deal of empirical evidence, particularly with regard to factors such as temperature or composition, the lack of crystallographic information limited the development of an understanding of the fundamental physics. Nevertheless, the following became well-established features of grain growth:

Grain growth occurs by the movement of grain boundaries and also by coalescence (i.e. like water droplets)
Grain growth involves competition between ordered coalescence and the movement of grain boundaries
Boundary movement may be discontinuous and the direction of motion may change suddenly during abnormal grain growth.
One grain may grow into another grain whilst being consumed from the other side
The rate of consumption often increases when the grain is nearly consumed
A curved boundary typically migrates towards its centre of curvature

Classical driving force

The boundary between one grain and its neighbour (grain boundary) is a defect in the crystal structure and so it is associated with a certain amount of energy. As a result, there is a thermodynamic driving force for the total area of boundary to be reduced. If the grain size increases, accompanied by a reduction in the actual number of grains per volume, then the total area of grain boundary will be reduced. In the classic theory, the local velocity of a grain boundary at any point is proportional to the local curvature of the grain boundary, i.e.

$$v = M\gamma\kappa,$$

where $v$ is the velocity of the grain boundary, $M$ is the grain boundary mobility (generally dependent on the orientation of the two grains), $\gamma$ is the grain boundary energy and $\kappa$ is the sum of the two principal surface curvatures. For example, the shrinkage velocity of a spherical grain embedded inside another grain is

$$v = \frac{2M\gamma}{R},$$

where $R$ is the radius of the sphere. This driving pressure is very similar in nature to the Laplace pressure that occurs in foams.
In comparison to phase transformations, the energy available to drive grain growth is very low, so it tends to occur at much slower rates and is easily slowed by the presence of second-phase particles or solute atoms in the structure. Recently, in contrast to the classic linear relation between grain boundary velocity and curvature, grain boundary velocity and curvature have been observed to be uncorrelated in Ni polycrystals; this conflicting result has been theoretically interpreted by a general model of grain boundary (GB) migration in the previous literature. According to the general GB migration model, the classical linear relation can only be used in a special case.

A general theory of grain growth

Development of theoretical models describing grain growth is an active field of research. Many models have been proposed for grain growth, but no theory has yet been put forth that has been independently validated to apply across the full range of conditions, and many questions remain open. By no means is the following a comprehensive review. One recent theory of grain growth posits that normal grain growth only occurs in polycrystalline systems whose grain boundaries have undergone roughening transitions, while abnormal and/or stagnant grain growth can only occur in polycrystalline systems with non-zero grain boundary (GB) step free energy. Other models explaining grain coarsening assert that disconnections are responsible for the motion of grain boundaries, and provide limited experimental evidence suggesting that they govern grain boundary migration and grain growth behavior. Other models have indicated that triple junctions play an important role in determining the grain growth behavior in many systems.

Ideal grain growth

Ideal grain growth is a special case of normal grain growth where boundary motion is driven only by local curvature of the grain boundary. It results in the reduction of the total amount of grain boundary surface area, i.e. the total energy of the system. Additional contributions to the driving force by e.g. elastic strains or temperature gradients are neglected. If it holds that the rate of growth is proportional to the driving force and that the driving force is proportional to the total amount of grain boundary energy, then it can be shown that the time t required to reach a given grain size is approximated by the equation

$$d^2 = d_0^2 + kt,$$

where $d_0$ is the initial grain size, $d$ is the final grain size and $k$ is a temperature-dependent constant given by an exponential law:

$$k = k_0 \exp\left(-\frac{Q}{RT}\right),$$

where $k_0$ is a constant, $T$ is the absolute temperature and $Q$ is the activation energy for boundary mobility. Theoretically, the activation energy for boundary mobility should equal that for self-diffusion, but this is often found not to be the case. In general these equations are found to hold for ultra-high purity materials but rapidly fail when even tiny concentrations of solute are introduced.
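The parabolic law above is simple to evaluate. The sketch below computes grain size after an anneal for a few temperatures; the pre-exponential factor and activation energy are illustrative, not values for any specific material:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def grain_size(d0, t, T, k0, Q):
    """Parabolic grain growth law d^2 = d0^2 + k t, with the Arrhenius
    rate constant k = k0 * exp(-Q / (R T))."""
    k = k0 * math.exp(-Q / (R_GAS * T))
    return math.sqrt(d0**2 + k * t)

d0 = 10e-6            # 10 um initial grain size
k0, Q = 1e-4, 2.5e5   # illustrative pre-exponential (m^2/s) and Q (J/mol)
for T in (900, 1100, 1300):   # anneal temperatures, kelvin
    d = grain_size(d0, t=3600.0, T=T, k0=k0, Q=Q)
    print(f"T = {T} K: grain size after 1 h = {d*1e6:.1f} um")
```

The strong Arrhenius sensitivity is the practical point: a few hundred kelvin can change the growth constant by many orders of magnitude, which is why grain growth is mainly a high-temperature concern.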
Self-similarity

A long-standing topic in grain growth is the evolution of the grain size distribution. Inspired by the work of Lifshitz and Slyozov on Ostwald ripening, Hillert suggested that in a normal grain growth process the size distribution function must converge to a self-similar solution, i.e. it becomes invariant when the grain size is scaled with a characteristic length of the system that is proportional to the average grain size. Several simulation studies, however, have shown that the size distribution deviates from Hillert's self-similar solution. Hence a search for a new possible self-similar solution was initiated, which indeed led to a new class of self-similar distribution functions. Large-scale phase field simulations have shown that self-similar behavior is indeed possible within the new distribution functions. It was shown that the origin of the deviation from Hillert's distribution is the geometry of grains, especially when they are shrinking.

Normal vs abnormal

In common with recovery and recrystallisation, growth phenomena can be separated into continuous and discontinuous mechanisms. In the former, the microstructure evolves from state A to B (in this case the grains get larger) in a uniform manner. In the latter, the changes occur heterogeneously and specific transformed and untransformed regions may be identified. Abnormal or discontinuous grain growth is characterised by a subset of grains growing at a high rate and at the expense of their neighbours, and tends to result in a microstructure dominated by a few very large grains. In order for this to occur, the subset of grains must possess some advantage over their competitors, such as a high grain boundary energy, locally high grain boundary mobility, favourable texture or lower local second-phase particle density.

Factors hindering growth

If there are additional factors preventing boundary movement, such as Zener pinning by particles, then the grain size may be restricted to a much lower value than might otherwise be expected. This is an important industrial mechanism in preventing the softening of materials at high temperature.

Inhibition

Certain materials, especially refractories which are processed at high temperatures, end up with excessively large grain size and poor mechanical properties at room temperature. To mitigate this problem in a common sintering procedure, a variety of dopants are often used to inhibit grain growth.
Physical sciences
Crystallography
Physics
4302166
https://en.wikipedia.org/wiki/Explosively%20formed%20penetrator
Explosively formed penetrator
An explosively formed penetrator (EFP), also known as an explosively formed projectile, a self-forging warhead, or a self-forging fragment, is a special type of shaped charge designed to penetrate armor effectively, from a much greater standoff range than standard shaped charges, which are more limited by standoff distance. As the name suggests, the effect of the explosive charge is to deform a metal plate into a slug or rod shape and accelerate it toward a target. They were first developed as oil well perforators by American oil companies in the 1930s, and were deployed as weapons in World War II. Difference from conventional shaped charges A conventional shaped charge generally has a conical metal liner that is forced by an explosive blast into a hypervelocity jet of superplastic metal able to penetrate thick armor and knock out vehicles. A disadvantage of this arrangement is that the jet of metal loses effectiveness the further it travels, as it breaks up into disconnected particles that drift out of alignment. An EFP operates on the same principle, but its liner is designed to form a distinct projectile that will maintain its shape, permitting it to penetrate armor at greater distance. The dish-shaped liner of an EFP can generate a number of distinct projectile forms, depending on the shape of the plate and how the explosive is detonated. An EFP's penetration is more strongly affected by the density of its liner metal compared to a conventional shaped charge. At 16.654 g/cm3, tantalum is preferable in delivery systems that have limitations in size, like the SADARM, which is delivered by a howitzer. For other weapon systems without practical limitations on warhead diameter, a less expensive copper liner (8.960 g/cm3) of double the diameter can be used instead. An EFP with a tantalum liner can typically penetrate steel armor of a thickness equal to its diameter – or half that amount with a copper liner instead. By contrast, a conventional shaped charge can penetrate armor up to six times its diameter in thickness, depending on its design and liner material. Some sophisticated EFP warheads have multiple detonators that can be fired in different arrangements causing different types of waveform in the explosive, resulting in either a long-rod penetrator, an aerodynamic slug projectile, or multiple high-velocity fragments. A less sophisticated approach for changing the formation of an EFP is the use of wire mesh in front of the liner, which causes the liner to fragment into multiple penetrators. In addition to single-penetrator EFPs (also called single EFPs or SEFPs), there are EFP warheads whose liners are designed to produce more than one penetrator; these are known as multiple EFPs, or MEFPs. The liner of an MEFP generally comprises a number of dimples that intersect each other at sharp angles. Upon detonation, the liner fragments along these intersections to form up to dozens of small, generally spheroidal projectiles, producing an effect similar to that of a shotgun. The pattern of impacts on a target can be finely controlled based on the design of the liner and the manner in which the explosive charge is detonated. A nuclear-driven MEFP was apparently proposed by a member of the JASON group in 1966 for terminal ballistic missile defense. A related device was the proposed nuclear pulse propulsion unit for Project Orion. 
Extensive research is ongoing in the regime between jetting charges and EFPs, which combines the advantages of both types, resulting in very long stretched-rod EFPs with improved penetration capability for short-to-medium distances (because of their lack of aerostability). EFPs have been adopted as warheads in a number of weapon systems, including the CBU-97 and BLU-108 air bombs (with the Skeet submunition), the M303 Special Operations Forces demolition kit, the M2/M4 Selectable Lightweight Attack Munition (SLAM), the SADARM submunition, the SMArt 155 top-attack artillery round, the Low Cost Autonomous Attack System, the TOW-2B anti-tank missile, and the NASM-SR anti-ship missile.

Use in improvised explosive devices

EFPs have been used in improvised explosive devices against armoured cars, for example in the 1989 assassination of German banker Alfred Herrhausen (attributed to the Red Army Faction) and by Hezbollah in the 1990s. They saw widespread use in IEDs by insurgents in Iraq against coalition vehicles. The charges are generally cylindrical, fabricated from commonly available metal pipe, with the forward end closed by a concave copper or steel disk-shaped liner to create a shaped charge. Explosive is loaded behind the metal liner to fill the pipe. Upon detonation, the explosive projects the liner to form a projectile. The effects of traditional explosions, such as blast forces and metal fragments, seldom disable armored vehicles, but the explosively formed solid copper penetrator is quite lethal, even to the new generation of mine-resistant vehicles (which are made to withstand an anti-tank mine) and to many tanks. Often mounted on crash barriers at window level, they are placed along roadsides at choke points where vehicles must slow down, such as intersections and junctions. This gives the operator time to judge the moment to fire, when the vehicle is moving more slowly. Detonation is controlled by cable, radio control, TV or IR remote controls, or remote arming with a passive infrared sensor, or via a pair of ordinary cell phones. EFPs can be deployed singly, in pairs, or in arrays, depending on the tactical situation.

Non-circular explosively formed penetrators

Non-circular explosively formed penetrators can be formed based on modifications to the liner construction. For instance, U.S. patents 6606951 and 4649828 are non-circular in design. US6606951B1 is designed to launch multiple asymmetric explosively forged penetrators horizontally through 360 degrees. US4649828A is designed to form several clothespin-shaped EFPs, increasing hit probability. In addition, a simplified EFP (SIM-EFP) can be made using a rectangular liner, similar to a linear shaped charge or modified platter charge. This design can be further modified to be similar to US4649828A, with multiple cut and bent steel bars lined side by side instead of a single liner. In Northern Ireland, similar devices developed by dissident Republican groups for intended use against the police have been discovered. The weapon was first used there in March 2014, when a PSNI Land Rover was targeted as it travelled along the Falls Road in west Belfast. A police car was destroyed by an EFP detonated by a command wire in Strabane, Co Tyrone on 18 November 2022.

Asteroid impactor

The spacecraft Hayabusa2 carried a small carry-on impactor. It was dropped off Hayabusa2 onto an asteroid and detonated. The explosion created a copper explosively formed penetrator, which hit the asteroid with a velocity of 2 km/s.
The crater created by the impact was a target for further observations by the onboard instruments. The shaped charge consisted of 4.5 kg of plasticized HMX and a 2.5 kg copper liner.
Technology
Explosive weapons
null
4303245
https://en.wikipedia.org/wiki/Fractus%20cloud
Fractus cloud
Fractus clouds, also called fractostratus or fractocumulus, are small, ragged cloud fragments that are usually found under an ambient cloud base. They form or have broken off from a larger cloud, and are generally sheared by strong winds, giving them a jagged, shredded appearance. Fractus have irregular patterns, appearing much like torn pieces of cotton candy. They change constantly, often forming and dissipating rapidly. They do not have clearly defined bases. Sometimes they are persistent and form very near the surface. Common kinds include cumulus fractus and stratus fractus.

Forms

Fractus are accessory clouds, named for the type of cloud from which they were sheared. The two principal forms are cumulus fractus (formerly, fractocumulus) and stratus fractus (formerly, fractostratus). Fractus clouds may develop into cumulus if the ground heats enough to start convection. Stratus fractus is distinguishable from cumulus fractus by its smaller vertical extent, darker color, and by the greater dispersion of its particles. Cumulus fractus clouds look like ragged cumulus clouds. They may originate from dissipated cumulus clouds, appearing in this case as white ragged clouds located at significant distances from each other. Cumulus fractus in particular form on the leading and trailing edges of summer storms in warm and humid conditions. Observing fractus gives an indication of wind movements under the parent cloud. Masses of multiple fractus clouds, located under a main cloud, are called pannus or scud clouds. Fractonimbus are a form of stratus fractus, developing under precipitation clouds due to turbulent air movement. They are dark-gray and ragged in appearance. Fractonimbus exist only under precipitation clouds (such as nimbostratus, altostratus or cumulonimbus), and don't produce precipitation themselves. Fractonimbus may eventually merge completely with overlying nimbostratus clouds. Stratus silvagenitus clouds also commonly form as scattered stratus fractus silvagenitus. These clouds can be seen as the occasional ragged tuft of cloud seeming to rise from forests, commonly just after rain or during other periods of high humidity.

Significance in thunderstorms

In rainstorms, scud often form in the updraft area where the air has been cooled by precipitation from the downdraft, so condensation occurs below the ambient cloud deck. If scud are rising and moving towards the main updraft, sometimes marked by a rain-free base (RFB) or wall cloud, then the thunderstorm is still developing. In addition to forming in inflow, fractus also form in outflow. Scud are very common on the leading edge of a thunderstorm where warm, moist air is lifted by the gust front. Scud are usually found under shelf clouds.
Physical sciences
Clouds
Earth science
4303674
https://en.wikipedia.org/wiki/Deposition%20%28phase%20transition%29
Deposition (phase transition)
Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation, and hence deposition is sometimes called desublimation.

Applications

Examples

One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense; because the temperature is already below the freezing point, the water vapour changes directly into a solid. Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.

Industrial applications

There is an industrial coating process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber; the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid.
Physical sciences
Phase transitions
Physics
5728727
https://en.wikipedia.org/wiki/Nereididae
Nereididae
Nereididae (formerly spelled Nereidae) are a family of polychaete worms. It contains about 500 – mostly marine – species grouped into 42 genera. They may be commonly called ragworms or clam worms.

Characteristics

The prostomium of Nereididae bears a pair of palps that are differentiated into two units. The proximal unit is much larger than the distal unit. Parapodia are mostly biramous (only the first two pairs are uniramous). The peristomium is fused with the first body segment and usually bears two pairs of tentacular cirri; the first body segment carries 1-2 pairs of tentacular cirri without aciculae. Compound setae are present. Notopodia are distinct (rarely reduced), usually with more flattened lobes; notosetae are compound falcigers and/or spinigers (rarely, notosetae are absent). They have two prostomial antennae (absent in Micronereis). Their pharynx, when everted, clearly consists of two portions, with a pair of strong jaws on the distal portion and usually with conical teeth on one or more areas of both portions. Most genera have no gills (if present, they are usually branched and arise on mid-anterior segments of the body). The larval body consists of four segments.

Jaw material

Ragworms' teeth are made of a very tough, yet lightweight material. Unlike bone and tooth enamel, this is not mineralised with calcium, but is formed by a histidine-rich protein with bound zinc ions. Research on this material could lead to applications in engineering.

Systematics

Nereididae are currently considered a monophyletic taxon. Their closest neighbours in the polychaete phylogenetic tree are Chrysopetalidae and Hesionidae (the superfamily Nereidoidea). Nereididae are divided into 42 genera, but the relationships between them are as yet unclear. The family traditionally contains three subfamilies - Namanereidinae, Gymnonereidinae and Nereidinae.

Genera

Subfamily Gymnonereidinae Banse, 1977 Australonereis Hartman, 1954 Ceratocephale Malmgren, 1867 Dendronereides Southern, 1921 Gymnonereis Horst, 1919 Kinberginereis Pettibone, 1971 Leptonereis Kinberg, 1865 Micronereides Day, 1963 Olganereis Hartmann-Schröder, 1977 Rullierinereis Pettibone, 1971 Sinonereis Wu & Sun, 1979 Stenoninereis Wesenberg-Lund, 1958 Tambalagamia Pillai, 1961 Tylonereis Fauvel, 1911 Tylorrhynchus Grube, 1866 Typhlonereis Hansen, 1879 Websterinereis Pettibone, 1971 Subfamily Namanereidinae Hartman, 1959 Namalycastis Hartman, 1959 Namanereis Chamberlin, 1919 Subfamily Nereidinae Blainville, 1818 Alitta Kinberg, 1865 Ceratonereis Kinberg, 1865 Cheilonereis Benham, 1916 Composetia Hartmann-Schröder, 1985 Eunereis Malmgren, 1865 Hediste Malmgren, 1867 Imajimainereis de León-González & Solís-Weiss, 2000 Laeonereis Hartman, 1945 Leonnates Kinberg, 1865 Micronereis Claparède, 1863 Neanthes Kinberg, 1865 Nectoneanthes Imajima, 1972 Nereis Linnaeus, 1758 Nicon Kinberg, 1865 Paraleonnates Chlebovitsch & Wu, 1962 Parasetia Villalobos-Guerrero, Conde-Vela & Sato, 2022 Perinereis Kinberg, 1865 Platynereis Kinberg, 1865 Potamonereis Villalobos-Guerrero, Conde-Vela & Sato, 2022 Pseudonereis Kinberg, 1865 Simplisetia Hartmann-Schröder, 1985 Solomononereis Gibbs, 1971 Unanereis Day, 1962 Wuinereis Khlebovich, 1996 Subfamily Nereididae incertae sedis: Kainonereis Chamberlin, 1919 Lycastonereis Nageswara Rao, 1981

Ecology

Ragworms are predominantly marine organisms that may occasionally swim upstream to rivers and even climb to land (for example Lycastopsis catarractarum).
They are commonly found in all water depths, foraging in seaweeds, hiding under rocks or burrowing in sand or mud. Ragworms are mainly omnivorous but many are active carnivores. Nereids breed only once before dying (semelparity), and most of them morph into a distinct form to breed (epitoky). Ragworms are important food sources for a number of shore birds. Human use Ragworms such as Hediste diversicolor are commonly used as bait in sea angling. They are a popular bait for all types of wrasse and pollock. They are also used as fish feed in aquaculture. Ragworms, such as Tylorrhynchus heterochetus, are considered a delicacy in Vietnam where they are used in the dish chả rươi. In rice-growing areas of China, these worms are called 禾虫 (Mandarin: hé chóng, Cantonese: woh4 chuhng4). They are harvested from the rice fields and are often cooked with eggs.
Biology and health sciences
Lophotrochozoa
Animals
5731588
https://en.wikipedia.org/wiki/UBV%20photometric%20system
UBV photometric system
The UBV photometric system (from Ultraviolet, Blue, Visual), also called the Johnson system (or Johnson-Morgan system), is a photometric system usually employed for classifying stars according to their colors. It was the first standardized photometric system. The apparent magnitudes of stars in the system are often used to determine the color indices B−V and U−B, the differences between the B and V magnitudes and the U and B magnitudes respectively. The system is defined using a set of color optical filters in combination with an RCA 1P21 photomultiplier tube. The choice of colors on the blue end of the spectrum was assisted by the bias that photographic film has for those colors. It was introduced in the 1950s by American astronomers Harold Lester Johnson and William Wilson Morgan. Telescopes at McDonald Observatory were used to define the system. The filters that Johnson and Morgan used were Corning 9863 for U and Corning 3384 for V. The B filter used a combination of Corning 5030 and Schott GG 13. The filters are selected so that the mean wavelengths of the response functions (at which magnitudes are measured) are 364 nm for U, 442 nm for B, and 540 nm for V. Zero points were calibrated in the B−V (B minus V) and U−B (U minus B) color indices by selecting A0 main sequence stars which are not affected by interstellar reddening. These stars correspond to a mean effective temperature (Teff) of between 9727 and 9790 kelvin, the latter value applying to stars of class A0V. The system has a key drawback: the short-wavelength cutoff of the U filter is set mainly by the terrestrial atmosphere rather than by the filter itself, so it (and the observed magnitudes) varies chiefly with altitude and atmospheric water content (humidity plus condensation into clouds). However, many measurements have been made in this system, including thousands of the bright stars.

Extensions

The Johnson-Kron-Cousins UBVRI photometric system is a common extension of Johnson's original system that provides redder passbands.
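Because the color indices defined above are simple magnitude differences, and magnitudes are logarithmic in flux, an index maps directly to a flux ratio between bands. A minimal sketch with made-up magnitudes (not measurements from the text):

```python
def color_indices(U, B, V):
    """Return the (U-B, B-V) color indices from apparent magnitudes."""
    return U - B, B - V

# Magnitudes are logarithmic: m1 - m2 = -2.5 * log10(f1 / f2), so a
# color index B-V corresponds to a flux ratio f_V/f_B = 10**(0.4*(B-V)).
U, B, V = 5.71, 5.65, 5.70   # hypothetical example star
ub, bv = color_indices(U, B, V)
print(f"U-B = {ub:+.2f}, B-V = {bv:+.2f}")
print(f"flux ratio f_V/f_B = {10 ** (0.4 * bv):.3f}")
```

A negative B−V (the star is brighter in B than in V) indicates a hot, blue star; cooler, redder stars have increasingly positive B−V.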
Physical sciences
Basics
Astronomy
5732230
https://en.wikipedia.org/wiki/Prolibytherium
Prolibytherium
Prolibytherium is an extinct genus of prolibytheriid artiodactyl ungulate native to Middle Miocene North Africa and Pakistan, from around 16.9 to 15.97 million years ago. Fossils of Prolibytherium were found in the Marada Formation of Libya, the Vihowa Formation of Pakistan, and the Moghara Formation of Egypt.

Description

In life, the creature would have superficially resembled an okapi or a deer. Unlike these, however, Prolibytherium displayed dramatic sexual dimorphism, in that the male had a set of large, leaf-shaped ossicones, while the female had a set of slender, horn-like ossicones. The taxonomic status of Prolibytherium remains in flux. At one time, it was described as a relative of Sivatherium (as a precursor to "Libytherium maurusium" (S. maurusium)). Later, it would be regarded variously as a palaeomerycid, a climacoceratid, or a basal member of Giraffoidea. With the discovery and study of a female skull in 2010, Prolibytherium was tentatively confirmed as a climacoceratid. A recent study published in 2022 found it to be part of a separate family, Prolibytheriidae.
Biology and health sciences
Giraffidae
Animals
5732414
https://en.wikipedia.org/wiki/Sivatherium
Sivatherium
Sivatherium ("Shiva's beast", from Shiva and therium, Latinized form of Ancient Greek θηρίον - thēríon) is an extinct genus of giraffid that ranged throughout Africa and Eurasia. The species Sivatherium giganteum is, by weight, one of the largest giraffids known, and also one of the largest ruminants of all time. Sivatherium originated during the Late Miocene (around 7 million years ago) in Africa and survived through to the late Early Pleistocene (Calabrian) until around 1 million years ago. Description Sivatherium resembled the modern okapi, but was far larger, and more heavily built, being about tall at the shoulder, in total height with a weight up to . A newer estimate has come up with an estimated body mass of about or . This would make Sivatherium one of the largest known ruminants, rivalling the modern giraffe and the largest bovines. This weight estimate is thought to be an underestimate, as it does not take into account the large horns possessed by males of the species. Sivatherium had a wide, antler-like pair of ossicones on its head, and a second pair of ossicones above its eyes. Its shoulders were very powerful to support the neck muscles required to lift the heavy skull. Sivatherium was initially misidentified as an archaic link between modern ruminants and the now obsolete, polyphyletic "pachyderms" (elephants, rhinoceroses, horses and tapirs). The confusion arose in part due to its graviportal (robust) morphology, which was unlike anything else studied at that time. Diet A dental wear analysis of S. hendeyi from the Early Pliocene of South Africa found that the teeth were brachyodont, but had a higher hypsodonty than a giraffe, and that it was best classified as a mixed feeder, being able to both graze and browse. Analysis of dental microwear and mesowear paired with δ13C and δ18O measurements of S. maurusium from Ahl al Oughlam in western Morocco show it predominantly fed on C3 vegetation. Relationship with humans Remains of Sivatherium from Olduvai Gorge in Tanzania, dating to around 1.35 million years ago have been found associated with stone tools and bearing cut marks, indicating butchery by archaic humans, likely Homo erectus. Historically, it has been suggested that figurines from Sumeria and ancient rock paintings in the Sahara and Central West India represent Sivatherium. However, these claims are not substantiated by fossil evidence (which suggest that the genus was extinct long before the emergence of modern humans), and the depictions likely represent other animals.
Biology and health sciences
Giraffidae
Animals
5732433
https://en.wikipedia.org/wiki/Curved%20mirror
Curved mirror
A curved mirror is a mirror with a curved reflecting surface. The surface may be either convex (bulging outward) or concave (recessed inward). Most curved mirrors have surfaces that are shaped like part of a sphere, but other shapes are sometimes used in optical devices. The most common non-spherical type is the parabolic reflector, found in optical devices such as reflecting telescopes that need to image distant objects, since spherical mirror systems, like spherical lenses, suffer from spherical aberration. Distorting mirrors are used for entertainment. They have convex and concave regions that produce deliberately distorted images. They also provide highly magnified or highly diminished (smaller) images when the object is placed at certain distances. Convex mirrors A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards and therefore are not used to focus light. Such mirrors always form a virtual image, since the focal point (F) and the centre of curvature (2F) are both imaginary points "inside" the mirror that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror. A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex mirror, since the normal to the surface differs at each spot on the mirror. Uses The passenger-side mirror on a car is typically a convex mirror. In some countries, these are labeled with the safety warning "Objects in mirror are closer than they appear", to warn the driver of the convex mirror's distorting effects on distance perception. Convex mirrors are preferred in vehicles because they give an upright (not inverted), though diminished (smaller), image and because they provide a wider field of view, as they are curved outwards. These mirrors are often found in the hallways of various buildings (commonly known as "hallway safety mirrors"), including hospitals, hotels, schools, stores, and apartment buildings. They are usually mounted on a wall or ceiling where hallways intersect or make sharp turns. They allow people to see any obstruction in the next hallway or around the next turn. They are also used on roads, driveways, and alleys to provide safety for road users where there is a lack of visibility, especially at curves and turns. Convex mirrors are used in some automated teller machines as a simple and handy security feature, allowing users to see what is happening behind them. Similar devices are sold to be attached to ordinary computer monitors. Convex mirrors make everything seem smaller but cover a larger area of surveillance. Round convex mirrors called Oeil de Sorcière (French for "sorcerer's eye") were a popular luxury item from the 15th century onwards, shown in many depictions of interiors from that time. With 15th-century technology, it was easier to make a regular curved mirror (from blown glass) than a perfectly flat one. They were also known as "bankers' eyes" because their wide field of vision was useful for security. Famous examples in art include the Arnolfini Portrait by Jan van Eyck and the left wing of the Werl Altarpiece by Robert Campin.
Image The image on a convex mirror is always virtual (rays do not actually pass through the image; their extensions do, as in a regular mirror), diminished (smaller), and upright (not inverted). As the object gets closer to the mirror, the image gets larger, until it is approximately the size of the object when the object touches the mirror. As the object moves away, the image diminishes in size and gets gradually closer to the focus, until it is reduced to a point at the focus when the object is at an infinite distance. These features make convex mirrors very useful: since everything appears smaller in the mirror, they cover a wider field of view than a normal plane mirror, making them useful for viewing cars behind a driver's car on a road, watching a wider area for surveillance, and so on. Concave mirrors A concave mirror, or converging mirror, has a reflecting surface that is recessed inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror. These mirrors are called "converging mirrors" because they tend to collect light that falls on them, refocusing parallel incoming rays toward a focus. This is because the light is reflected at different angles at different spots on the mirror, as the normal to the mirror surface differs at each spot. Uses Concave mirrors are used in reflecting telescopes. They are also used to provide a magnified image of the face for applying make-up or shaving. In illumination applications, concave mirrors are used to gather light from a small source and direct it outward in a beam, as in torches, headlamps and spotlights, or to collect light from a large area and focus it into a small spot, as in concentrated solar power. Concave mirrors are used to form optical cavities, which are important in laser construction. Some dental mirrors use a concave surface to provide a magnified image. The mirror landing aid system of modern aircraft carriers also uses a concave mirror. Image The image formed by a concave mirror depends on the object's distance from the mirror: an object beyond the centre of curvature forms a real, inverted, diminished image; an object between the centre of curvature and the focal point forms a real, inverted, magnified image; and an object closer to the mirror than the focal point forms a virtual, upright, magnified image. Mirror shape Most curved mirrors have a spherical profile. These are the simplest to make, and it is the best shape for general-purpose use. Spherical mirrors, however, suffer from spherical aberration—parallel rays reflected from such mirrors do not focus to a single point. For parallel rays, such as those coming from a very distant object, a parabolic reflector can do a better job. Such a mirror can focus incoming parallel rays to a much smaller spot than a spherical mirror can. A toroidal reflector is a form of parabolic reflector which has a different focal distance depending on the angle of the mirror. Analysis Mirror equation, magnification, and focal length The Gaussian mirror equation, also known as the mirror and lens equation, relates the object distance d_o and the image distance d_i to the focal length f: 1/d_o + 1/d_i = 1/f. The sign convention used here is that the focal length f is positive for concave mirrors and negative for convex ones, and d_o and d_i are positive when the object and image are in front of the mirror, respectively (that is, they are positive when the object or image is real). For convex mirrors, if one moves the 1/d_o term to the right side of the equation to solve for 1/d_i, the result is always a negative number, meaning that the image distance is negative—the image is virtual, located "behind" the mirror. This is consistent with the behavior described above.
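As a quick worked example (the numbers here are illustrative, not from the original text): for a convex mirror with f = -10 cm and an object at d_o = 30 cm, the equation gives 1/d_i = 1/f - 1/d_o = -1/10 - 1/30 = -2/15 cm^-1, so d_i = -7.5 cm. The negative sign means the image is virtual, 7.5 cm "behind" the mirror, and the magnification m = -d_i/d_o = +0.25 means it is upright and a quarter the size of the object, exactly the behavior described above.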
For concave mirrors, whether the image is virtual or real depends on how large the object distance is compared to the focal length. If the 1/f term is larger than the 1/d_o term, then 1/d_i is positive and the image is real. Otherwise, 1/d_i is negative and the image is virtual. Again, this matches the behavior described above. The magnification of a mirror is defined as the height of the image divided by the height of the object: m = h_i/h_o = -d_i/d_o. By convention, if the resulting magnification is positive, the image is upright. If the magnification is negative, the image is inverted (upside down). Ray tracing The image location and size can also be found by graphical ray tracing, as illustrated in the figures above. A ray drawn from the top of the object to the mirror surface vertex (where the optical axis meets the mirror) will form an angle with the optical axis. The reflected ray has the same angle to the axis, but on the opposite side (see Specular reflection). A second ray can be drawn from the top of the object, parallel to the optical axis. This ray is reflected by the mirror and passes through its focal point. The point at which these two rays meet is the image point corresponding to the top of the object. Its distance from the optical axis defines the height of the image, and its location along the axis is the image location. The mirror equation and magnification equation can be derived geometrically by considering these two rays. A ray that goes from the top of the object through the focal point can be considered instead. Such a ray reflects parallel to the optical axis and also passes through the image point corresponding to the top of the object. Ray transfer matrix of spherical mirrors The mathematical treatment is done under the paraxial approximation, meaning that to a first approximation a spherical mirror behaves like a parabolic reflector. The ray transfer matrix of a concave spherical mirror with radius of curvature R has rows (1, 0) and (-2/R, 1); its lower-left element C equals -1/f, where f = R/2 is the focal length of the mirror. In the usual geometric derivation, two of the steps sum the angles of a triangle and compare them to π radians (or 180°), while another applies a Maclaurin series expansion up to order 1. The derivations of the ray matrices of a convex spherical mirror and a thin lens are very similar.
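To make these relations concrete, the following short Python sketch (illustrative only; the function names are my own and not from any standard optics library) implements the Gaussian mirror equation, the magnification, and the ray-transfer matrix of a spherical mirror under the sign convention used above:

# Minimal sketch: Gaussian mirror equation and the paraxial ray-transfer
# matrix of a spherical mirror. Sign convention, as in the text:
# f > 0 for concave mirrors, f < 0 for convex;
# d_o and d_i are positive when the object/image is real (in front of the mirror).

def image_distance(f, d_o):
    """Solve 1/d_o + 1/d_i = 1/f for d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """m = -d_i / d_o; positive means upright, negative means inverted."""
    return -d_i / d_o

def mirror_matrix(R):
    """Ray-transfer matrix of a spherical mirror with radius of curvature R."""
    return [[1.0, 0.0], [-2.0 / R, 1.0]]  # lower-left element C = -2/R = -1/f

# Concave mirror, f = 10 (R = 20), object at d_o = 30:
f, d_o = 10.0, 30.0
d_i = image_distance(f, d_o)   # 15.0 -> positive: a real image in front of the mirror
m = magnification(d_o, d_i)    # -0.5 -> inverted and half the object's height

# Convex mirror (f < 0): d_i is always negative, i.e. a virtual image "behind" the mirror.
d_i_convex = image_distance(-10.0, 30.0)   # -7.5

# The matrix element C agrees with -1/f for f = R/2:
R = 2.0 * f
assert abs(mirror_matrix(R)[1][0] - (-1.0 / f)) < 1e-12

print(d_i, m, d_i_convex)   # 15.0 -0.5 -7.5

Running it for a concave mirror with f = 10 and an object at d_o = 30 gives d_i = 15 (a real image) with m = -0.5 (inverted and diminished), while the same object in front of a convex mirror with f = -10 gives d_i = -7.5, the virtual image described earlier.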
Physical sciences
Optics
Physics
3158838
https://en.wikipedia.org/wiki/Asparagaceae
Asparagaceae
Asparagaceae (), known as the asparagus family, is a family of flowering plants, placed in the order Asparagales of the monocots. The family name is based on the edible garden asparagus, Asparagus officinalis. This family includes common garden plants as well as common houseplants. The garden plants include asparagus, yucca, bluebell, and hosta, and the houseplants include snake plant, corn cane, spider plant, and plumosus fern. The Asparagaceae is a morphologically heterogeneous family, with the included species varying widely in appearance and growth form. It has a cosmopolitan distribution, with genera and species of the family native to every continent except Antarctica. Taxonomy Early taxonomy The plant family Asparagaceae was first named, described, and published in Genera Plantarum in 1789 by the French botanist Antoine Laurent de Jussieu, who is particularly noted for his work in developing the concept of plant families. From its first introduction until the 21st century, the Asparagaceae was a monotypic family containing only the single genus, Asparagus, after which the family was named. Asparagaceae under the APG II system In 2003, the publication of the APG II plant classification system radically expanded the Asparagaceae to include the genera and species previously contained in seven plant families. The APG II system provided two options for the circumscription of the family: Asparagaceae sensu lato (meaning "in the wider sense"), the broader circumscription documented in APG II; or Asparagaceae sensu stricto (meaning "in the strict sense"), consisting of only Asparagus and Hemiphylacus. For those opting to use Asparagaceae sensu lato, the paper outlining the APG II system recommended placing the previously recognised families in parentheses after Asparagaceae. The paper also recommended grouping the families Anemarrhenaceae, Anthericaceae, Behniaceae and Herreriaceae with the Agavaceae, noting that in 2000 the Convallariaceae, Dracaenaceae, Eriospermaceae and Nolinaceae had been grouped together in the Ruscaceae. Asparagaceae under the APG III system In 2009, botanists proposed a major revision of the Asparagales order that included a vast expansion of three constituent plant families (the Amaryllidaceae, Asparagaceae and Xanthorrhoeaceae) to include a large number of genera from former plant families, by placing them into subfamilies nested within these three families. Under the APG III system, the Asparagaceae contain seven subfamilies, and unlike in the APG II system, Asparagaceae is circumscribed only in the broad sense (sensu lato); the Asparagaceae subfamily Asparagoideae is, however, roughly equivalent to Asparagaceae sensu stricto under the APG II system. Whilst the subfamilies are broadly equivalent to the previous subdivision by families under the APG II system, genera previously included in one formerly recognised family may have moved to another subfamily under the APG III system, or even been placed in another family outside of the Asparagaceae. Genera As of November 2024, the Asparagaceae includes about 119 genera, which together contain approximately 3,170 accepted species, although the number of accepted genera and their constituent species varies by authority and changes over time.
Obsolete genera or species formerly included in the Asparagaceae Calibanus is a former genus that was placed in the Asparagaceae (subfamily Nolinoideae) when the APG III system was introduced. Both members of the genus have since been transferred to the genus Beaucarnea (also in subfamily Nolinoideae) after molecular phylogenetic research demonstrated a strong phylogenetic relationship with species of Beaucarnea. Sansevieria was a long-recognised genus belonging to the subfamily Nolinoideae, but on the basis of molecular phylogenetic studies, the species formerly included in the genus have been transferred to the genus Dracaena (also in subfamily Nolinoideae).
Biology and health sciences
Asparagales
Plants
3159140
https://en.wikipedia.org/wiki/Pigeon%20post
Pigeon post
Pigeon post is the use of homing pigeons to carry messages. Pigeons are effective as messengers due to their natural homing abilities. The pigeons are transported to a destination in cages, where messages are attached to them; each pigeon then naturally flies back to its home, where the recipient can read the message. They have been used in many places around the world. Pigeons have also been used to great effect in military situations, and are in this case referred to as war pigeons. Early history As a method of communication, pigeon post is likely as old as the ancient Persians, from whom the art of training the birds probably came. The Romans used pigeon messengers to aid their military over 2,000 years ago. Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks conveyed the names of the victors at the Olympic Games to their various cities by this means. Naval chaplain Henry Teonge (c. 1620–1690) describes in his diary a regular pigeon postal service used by merchants between İskenderun and Aleppo in the Levant. The Mughals also used messenger pigeons. Before the telegraph, this method of communication was used extensively in the financial field. The Dutch government established a civil and military system in Java and Sumatra early in the 19th century, the birds being obtained from Baghdad. In 1851, the German-born Paul Julius Reuter opened an office in the City of London which transmitted stock market quotations between London and Paris via the new Calais to Dover cable. Reuter had previously used pigeons to fly stock prices between Aachen and Brussels, a service that operated for a year until a gap in the telegraph link was closed. Details of the employment of pigeons during the siege of Paris in 1870–71 led to a revival in the training of pigeons for military purposes. Numerous societies were established for keeping pigeons of this class in all important European countries, and, in time, various governments established systems of communication for military purposes by pigeon post. After pigeon post between military fortresses had been thoroughly tested, attention was turned to its use for naval purposes, to send messages to ships in nearby waters. It was also used by news agencies and private individuals at various times. Governments in several countries established lofts of their own. Laws were passed making the destruction of such pigeons a serious offense; premiums to stimulate efficiency were offered to private societies, and rewards were given for the destruction of birds of prey. Before the advent of radio, pigeons were used by newspapers to report yacht races, and some yachts were actually fitted with lofts. During the establishment of formal pigeon post services, the registration of all birds was introduced. At the same time, in order to hinder the efficiency of the systems of foreign countries, difficulties were placed in the way of the importation of their birds for training, and in a few cases falcons were specially trained to interrupt the service during war, the Germans having set the example by employing hawks against the Paris pigeons in 1870–71. No satisfactory method of protecting the weaker birds seems to have been developed, though the Chinese formerly provided their pigeons with whistles and bells to scare away birds of prey. As radio telegraphy and telephony were developed, the use of pigeons became limited to fortress warfare by the 1910s.
Although the British Admiralty had attained a very high standard of efficiency, it discontinued its pigeon service in the early 20th century. In contrast, large numbers of birds were still kept by France, Germany and Russia at the outbreak of the First World War. In modern times, a rafting photographer still uses pigeons as a sneakernet to transport digital photos on flash media from the camera to the tour operator. Paris The pigeon post that was in operation while Paris was besieged during the Franco-Prussian War of 1870–1871 is probably the most famous. Barely six weeks after the outbreak of hostilities, the Emperor Napoleon III and the French Army of Châlons surrendered at Sedan on 2 September 1870. There were two immediate consequences: the fall of the Second Empire and the swift Prussian advance on Paris. As had been expected, the normal channels of communication into and out of Paris were interrupted during the four-and-a-half months of the siege, and, indeed, it was not until the middle of February 1871 that the Prussians relaxed their control of the postal and telegraph services. With the encirclement of the city on 18 September, the last overhead telegraph wires were cut on the morning of 19 September, and the secret telegraph cable in the bed of the Seine was located and cut on 27 September. Although a number of postmen succeeded in passing through the Prussian lines in the earliest days of the siege, others were captured and shot, and there is no proof of any post reaching Paris from the outside, certainly after October, apart from private letters carried by unofficial individuals. For assured communication into Paris, the only successful method was the time-honoured carrier pigeon, and thousands of messages, official and private, were thus taken into the besieged city. During the course of the siege, pigeons were regularly taken out of Paris by balloon. Initially, one of the pigeons carried by a balloon was released as soon as the balloon landed, so that Paris could be apprised of its safe passage over the Prussian lines. Soon a regular service was in operation, based first at Tours and later at Poitiers. The pigeons were taken to their base after their arrival from Paris, and when they had preened themselves, been fed and rested, they were ready for the return journey. Tours lies some 200 km (about 125 miles) from Paris and Poitiers some 300 km (about 185 miles); to reduce the flight distance, the pigeons were taken by train as far forward towards Paris as was safe from Prussian intervention. Before release, they were loaded with their despatches. The first despatch was dated 27 September and reached Paris on 1 October, but it was only from 16 October, when an official control was introduced, that a complete record was kept. The pigeons carried two kinds of despatch, official and private, both of which are described in detail below. The service was put into operation for the transmission of information from the Delegation to Paris and was opened to the public in early November. Private despatches were sent only when an official despatch was being sent, since the latter had absolute priority. However, the introduction of the Dagron microfilms eased any problems there might have been in claims for transport, since their volumetric requirements were very small. For example, one tube sent during January contained 21 microfilms, of which 6 were official despatches and 15 were private, while a later tube contained 16 private despatches and 2 official ones.
In order to improve the chances of the despatches successfully reaching Paris, the same despatch was sent by several pigeons, one official despatch being repeated 35 times, and the later private despatches being repeated 22 times on average. The records show that from 7 January to the end, 61 tubes were sent off, containing 246 official and 671 private despatches. The practice was to send off the despatches not only by pigeons of the same release but also by those of successive releases, until Paris signalled the arrival of those despatches. When a pigeon reached its particular loft in Paris, its arrival was announced by a bell in the trap in the loft. Immediately, a watchman relieved it of its tube, which was taken to the Central Telegraph Office, where the contents were carefully unpacked and placed between two thin sheets of glass. The photographs are said to have been projected by magic lantern onto a screen, where the enlargement could be easily read and written down by a team of clerks. This would certainly be true for the microfilms, but the earlier despatches, on photographic paper, were read through microscopes. The transcribed messages were written out on forms (telegraph forms for private messages, with or without the special annotation "pigeon") and so delivered. The interval between sending a private message and its receipt by the addressee depended on many factors: the density of telegraphic traffic to and from the sender's town, the time taken to register the message, to pass it to the printers where it was assembled with its 3,000 companions into a single page, and then to assemble the pages into nines, twelves or sixteens. During the four months of the siege, 150,000 official and 1 million private communications were carried into Paris by this method. The service was formally terminated on 1 February 1871; in fact, the last pigeons were released on 1 and 3 February. The pigeons that were still alive were now official property and were sold at the Dépôt du Mobilier de l'État. Their value as racing pigeons was reflected by the average price of only 1 franc 50 centimes, but two pigeons, reported to have made three journeys, were purchased by an enthusiast for 26 francs. The success of the pigeon post, both for official and for private messages, did not pass unnoticed by the military forces of the European powers, and in the years that followed the Franco-Prussian War, pigeon sections were established in their armies. The advent of wireless communication greatly reduced the use of pigeons, although in certain particular applications they remained the only method of communication. But never again were pigeons called upon to perform such a tremendous public service as that which they had maintained during the siege of Paris. Canada Major-General Donald Roderick Cameron, then Commandant of the Royal Military College of Canada in Kingston, Ontario, recommended an international pigeon service for marine search and rescue and military service in a paper entitled "Messenger Pigeons, a National Question". Sir Charles Hibbert Tupper, then Minister of Marine and Fisheries, supported the pigeon policy. Colonel Goldie, Assistant Adjutant General, Major Waldron of the Royal Artillery, and Captain Dopping-Hepenstal of the Royal Engineers carried the plan through. The pigeon post between look-out stations at lighthouses on islands and the mainland at the citadel in Halifax, Nova Scotia provided a messenger service from 1891 until it was discontinued in 1895.
The pigeon post suffered heavy mortality among its pigeons, as many were lost during operations. The flight from the Citadel in Halifax, Nova Scotia to Sable Island, for example, was difficult for the pigeons to complete. Catalina Island From 1894 to 1898, pigeons carried mail from Avalon across the Santa Barbara Channel to Los Angeles. Two pigeon fanciers, the brothers Otto J. and O. F. Zahn, reached an agreement with Western Union under which it would not build a telegraph line to the isolated island so long as the pigeons did not compete with it on the mainland. Fifty birds were trained, carrying three copies of each message because of the danger of hunters and predators. They made the 48-mile passage in about one hour, bringing letters, news clippings from the Los Angeles Times, and emergency summonses for doctors. In three seasons of operation only two letters failed to come through, but at $0.50 to $1.00 per message the service was not profitable, and in 1898 the Zahn brothers ended the post. Great Barrier Island (New Zealand) Before the pigeon post service was established, the only regular connection between the community on Great Barrier Island (90 kilometres northeast of Auckland) and the mainland was a weekly coastal steamer. The island's isolation was highlighted when the ship SS Wairarapa was wrecked off its coast in 1894, with the loss of 121 lives, and the news took several days to reach the mainland. The pigeon post service between the island and Auckland began in 1897. Soon there were two rival pigeongram companies, both of which issued distinctive and attractive stamps. The stamps have been eagerly collected for their novelty value, and some have become extremely rare. Initially, the service operated only from Great Barrier Island to Auckland, the reverse route being considered uneconomic. On the island, pigeongram agencies were established at Port Fitzroy, Okupu, and Whangaparara. Birds were sent over to the island on the weekly steamer and flew back to Auckland with up to five messages per bird, written on lightweight writing stock and attached to their legs. Great Barrier Island's pigeongram service ended when the first telegraph cable was laid between the island and the mainland in 1908. India The Orissa police in India established regular pigeon posts at Cuttack, Chatrapur, Kendrapara, Sambalpur and Denkanal, and these pigeons rose to the occasion in times of emergency and natural calamity. During the centenary celebrations of the Indian postal service in 1954, the Orissa police pigeons demonstrated their capacity by conveying the message of inauguration from the President of India to the Prime Minister. The last of the pigeon post services in the world (the one in Cuttack, India) was closed in 2008, although about 150 pigeons continue to be maintained for ceremonial purposes in Cuttack and at the Police Training College in Angul.
Technology
Media and communication: Basics
null
3160379
https://en.wikipedia.org/wiki/Organ%20printing
Organ printing
Organ printing utilizes techniques similar to conventional 3D printing, in which a computer model is fed into a printer that lays down successive layers of plastic or wax until a 3D object is produced. In the case of organ printing, the material used by the printer is a biocompatible plastic. The biocompatible plastic forms a scaffold that acts as the skeleton for the organ being printed. As the plastic is laid down, it is also seeded with human cells from the patient's organ that is being printed for. After printing, the organ is transferred to an incubation chamber to give the cells time to grow. After a sufficient amount of time, the organ is implanted into the patient. For many researchers, the ultimate goal of organ printing is to create organs that can be fully integrated into the human body. Successful organ printing has the potential to impact several fields, notably artificial organs, organ transplants, pharmaceutical research, and the training of physicians and surgeons. History The field of organ printing stemmed from research in the area of stereolithography, the basis for the practice of 3D printing, which was invented in 1984. In this early era of 3D printing, it was not possible to create lasting objects because the material used for the printing process was not durable. 3D printing was instead used as a way to model potential end products that would eventually be made from different materials using more traditional techniques. In the beginning of the 1990s, nanocomposites were developed that allowed 3D printed objects to be more durable, permitting 3D printed objects to be used for more than just models. It was around this time that those in the medical field began considering 3D printing as an avenue for generating artificial organs. By the late 1990s, medical researchers were searching for biocompatible materials that could be used in 3D printing. The concept of bioprinting was first demonstrated in 1988, when a researcher used a modified HP inkjet printer to deposit cells using cytoscribing technology. Progress continued in 1999, when the first artificial organ made using bioprinting was printed by a team of scientists led by Dr. Anthony Atala at the Wake Forest Institute for Regenerative Medicine. The scientists at Wake Forest printed an artificial scaffold for a human bladder and then seeded the scaffold with cells from their patient. Using this method, they were able to grow a functioning organ, and ten years after implantation the patient had no serious complications. After the bladder at Wake Forest, strides were taken towards printing other organs. In 2002, a miniature, fully functional kidney was printed. In 2003, Dr. Thomas Boland from Clemson University patented the use of inkjet printing for cells. This process utilized a modified spotting system for the deposition of cells into organized 3D matrices placed on a substrate. This printer allowed for extensive research into bioprinting and suitable biomaterials. Since these initial findings, the 3D printing of biological structures has been further developed to encompass the production of tissue and organ structures, as opposed to cell matrices. Additionally, more techniques for printing, such as extrusion bioprinting, have been researched and subsequently introduced as a means of production. In 2004, the field of bioprinting was drastically changed by yet another new bioprinter.
This new printer was able to use live human cells without having to build an artificial scaffold first. In 2009, Organovo used this novel technology to create the first commercially available bioprinter. Soon after, Organovo's bioprinter was used to develop a biodegradable blood vessel, the first of its kind made without a cell scaffold. In the 2010s and beyond, further research has gone into producing other organs, such as the liver and heart valves, and tissues, such as a blood-borne network, via 3D printing. In 2019, scientists in Israel made a major breakthrough when they printed a rabbit-sized heart with a network of blood vessels that were capable of contracting like natural blood vessels. The printed heart had the correct anatomical structure and function compared to real hearts. This breakthrough represented a real possibility of printing fully functioning human organs. Indeed, scientists at the Warsaw Foundation for Research and Development of Science in Poland have been working on creating a fully artificial pancreas using bioprinting technology, and have so far been able to develop a functioning prototype. This is a growing field and much research is still being conducted. 3D printing techniques 3D printing for the manufacture of artificial organs has been a major topic of study in biological engineering. As the rapid manufacturing techniques entailed by 3D printing become increasingly efficient, their applicability to artificial organ synthesis has grown more evident. Some of the primary benefits of 3D printing lie in its capability of mass-producing scaffold structures, as well as the high degree of anatomical precision in scaffold products. This allows for the creation of constructs that more effectively resemble the microstructure of a natural organ or tissue. Organ printing using 3D printing can be conducted using a variety of techniques, each of which confers specific advantages that can be suited to particular types of organ production. Sacrificial writing into functional tissue (SWIFT) Sacrificial writing into functional tissue (SWIFT) is a method of organ printing in which living cells are packed tightly to mimic the density that occurs in the human body. While packing, tunnels are carved to mimic blood vessels, and oxygen and essential nutrients are delivered via these tunnels. This technique combines earlier methods that only packed cells or only created vasculature, and is an improvement that brings researchers closer to creating functional artificial organs. Stereolithographic (SLA) 3D bioprinting This method of organ printing uses spatially controlled light or laser to create a 2D pattern by selective photopolymerization in the bio-ink reservoir. A 3D structure can then be built up in layers using successive 2D patterns. Afterwards, the remaining bio-ink is removed from the final product. SLA bioprinting allows for the creation of complex shapes and internal structures. The feature resolution of this method is extremely high, and its main disadvantage is the scarcity of biocompatible resins. Drop-based bioprinting (Inkjet) Drop-based bioprinting creates cellular constructs using droplets of a designated material, which is often combined with a cell line. Cells themselves can also be deposited in this manner, with or without polymer.
When printing polymer scaffolds using these methods, each drop begins to polymerize upon contact with the substrate surface, and the drops merge into a larger structure as the droplets coalesce. Polymerization can happen through a variety of mechanisms, depending on the polymer used. For instance, alginate polymerization is initiated by calcium ions in the substrate, which diffuse into the liquified bio-ink and allow a solid gel to form. Drop-based bioprinting is commonly used because of its efficient speed, although this may make it less appropriate for more complicated organ structures. Extrusion bioprinting Extrusion bioprinting involves the continuous deposition of a particular printing material and cell line from an extruder, a type of mobile print head. This tends to be a more controlled and gentler process for material or cell deposition, and it permits greater cell densities to be used in the construction of 3D tissue or organ structures. However, such benefits are offset by the slower printing speeds this technique entails. Extrusion bioprinting is frequently coupled with UV light, which photopolymerizes the printed material to create a more stable, integrated construct. Fused deposition modeling Fused deposition modeling (FDM) is more common and inexpensive compared to selective laser sintering. This printer uses a printhead similar in structure to that of an inkjet printer; however, ink is not used. Plastic beads are heated at high temperature and released from the printhead as it moves, building the object in thin layers. A variety of plastics can be used with FDM printers, and most of the parts printed by FDM are composed of the same thermoplastics used in traditional injection molding or machining techniques; these parts therefore have comparable durability, mechanical properties, and stability. Precise control allows a consistent release amount and a specific deposition location for each layer contributing to the shape. As the heated plastic is deposited from the printhead, it fuses or bonds to the layers below. As each layer cools, it hardens, gradually building up the intended solid shape as more layers are added to the structure. Selective laser sintering Selective laser sintering (SLS) uses powdered material as the substrate for printing new objects. SLS can be used to create metal, plastic, and ceramic objects. This technique uses a computer-controlled laser as the power source to sinter powdered material. The laser traces a cross-section of the shape of the desired object in the powder, fusing it together into a solid form. A new layer of powder is then laid down and the process repeats, building each layer with every new application of powder, one by one, to form the entirety of the object. One of the advantages of SLS printing is that it requires very little additional tooling (i.e. sanding) once the object is printed. Recent advances in organ printing using SLS include 3D constructs of craniofacial implants as well as scaffolds for cardiac tissue engineering. Printing materials Printing materials must meet a broad spectrum of criteria, one of the foremost being biocompatibility. The resulting scaffolds formed by 3D printed materials should be physically and chemically appropriate for cell proliferation.
Biodegradability is another important factor, ensuring that the artificially formed structure can be broken down upon successful transplantation and replaced by a completely natural cellular structure. Due to the nature of 3D printing, the materials used must be customizable and adaptable, suited to a wide array of cell types and structural conformations. Natural polymers Materials for 3D printing usually consist of alginate or fibrin polymers that have been integrated with cellular adhesion molecules, which support the physical attachment of cells. Such polymers are specifically designed to maintain structural stability and to be receptive to cellular integration. The term bio-ink has been used as a broad classification for materials that are compatible with 3D bioprinting. Hydrogel alginates have emerged as some of the most commonly used materials in organ printing research, as they are highly customizable and can be fine-tuned to simulate certain mechanical and biological properties characteristic of natural tissue. The ability of hydrogels to be tailored to specific needs allows them to be used as adaptable scaffold materials, suited to a variety of tissue or organ structures and physiological conditions. A major challenge in the use of alginate is its stability and slow degradation, which makes it difficult for the artificial gel scaffolding to be broken down and replaced by the implanted cells' own extracellular matrix. Alginate hydrogel that is suitable for extrusion printing is also often less structurally and mechanically sound; however, this issue can be mitigated by the incorporation of other biopolymers, such as nanocellulose, to provide greater stability. The properties of the alginate or mixed-polymer bio-ink are tunable and can be altered for different applications and types of organs. Other natural polymers that have been used for tissue and 3D organ printing include chitosan, hydroxyapatite (HA), collagen, and gelatin. Gelatin is a thermosensitive polymer with excellent water solubility, biodegradability, and biocompatibility, as well as low immunologic rejection. These qualities are advantageous and result in high acceptance of the 3D bioprinted organ when implanted in vivo. Synthetic polymers Synthetic polymers are man-made through chemical reactions of monomers. Their mechanical properties are favorable in that their molecular weights can be regulated from low to high according to differing requirements. However, their lack of functional groups and structural complexity has limited their use in organ printing. Current synthetic polymers with excellent 3D printability and in vivo tissue compatibility include polyethylene glycol (PEG), poly(lactic-co-glycolic acid) (PLGA), and polyurethane (PU). PEG is a biocompatible, nonimmunogenic synthetic polyether with tunable mechanical properties for use in 3D bioprinting. Though PEG has been utilized in various 3D printing applications, its lack of cell-adhesive domains has limited further use in organ printing. PLGA, a synthetic copolymer, is well tolerated by living organisms, including animals, humans, plants, and microorganisms. PLGA is used in conjunction with other polymers to create different material systems, including PLGA-gelatin and PLGA-collagen, all of which enhance the mechanical properties of the material, are biocompatible when placed in vivo, and have tunable biodegradability. PLGA has most often been used in printed constructs for bone, liver, and other large-organ regeneration efforts.
Lastly, PU is unique in that it can be classified into two groups: biodegradable and non-biodegradable. It has been used in the field of bioprinting due to its excellent mechanical and bioinert properties. One application of PU would be inanimate artificial hearts; however, this polymer cannot be printed using existing 3D bioprinters. A new elastomeric PU was created, composed of PEG and polycaprolactone (PCL) monomers. This new material exhibits excellent biocompatibility, biodegradability, bioprintability, and biostability for use in complex bioartificial organ printing and manufacturing. Because it can support the construction of dense vascular and neural networks, this material could be applied to the printing of complex organs such as the brain, heart, lung, and kidney. Natural-synthetic hybrid polymers Natural-synthetic hybrid polymers are based on the synergistic effect between synthetic and biopolymeric constituents. Gelatin methacryloyl (GelMA) has become a popular biomaterial in the field of bioprinting. GelMA has shown viable potential as a bio-ink material due to its suitable biocompatibility and readily tunable physicochemical properties. Hyaluronic acid (HA)-PEG is another natural-synthetic hybrid polymer that has proven very successful in bioprinting applications. HA combined with synthetic polymers helps to obtain more stable structures with high cell viability and limited loss of mechanical properties after printing. A recent application of HA-PEG in bioprinting is the creation of an artificial liver. Lastly, a series of biodegradable polyurethane (PU)-gelatin hybrid polymers with tunable mechanical properties and efficient degradation rates have been employed in organ printing. This hybrid has the ability to print complicated structures such as a nose-shaped construct. All of the polymers described above have the potential to be manufactured into implantable, bioartificial organs for purposes including, but not limited to, customized organ restoration, drug screening, and metabolic model analysis. Cell sources The creation of a complete organ often requires the incorporation of a variety of different cell types, arranged in distinct and patterned ways. One advantage of 3D-printed organs over traditional transplants is the potential to use cells derived from the patient to make the new organ. This significantly decreases the likelihood of transplant rejection and may remove the need for immunosuppressive drugs after transplant, which would reduce the health risks of transplants. However, since it may not always be possible to collect all the needed cell types, it may be necessary to collect adult stem cells or induce pluripotency in collected tissue. This involves resource-intensive cell growth and differentiation and comes with its own set of potential health risks, since cell proliferation in a printed organ occurs outside the body and requires external application of growth factors. However, the ability of some tissues to self-organize into differentiated structures may provide a way to simultaneously construct the tissues and form distinct cell populations, improving the efficacy and functionality of organ printing. Types of printers and processes The types of printers used for organ printing include inkjet printers, multi-nozzle printers, hybrid printers, electrospinning setups, and drop-on-demand printers. These printers are used in the methods described previously. Each requires different materials and has its own advantages and limitations.
Applications Organ donation Currently, the sole treatment for organ failure is to await a transplant from a living or recently deceased donor. In the United States alone, there are over 100,000 patients on the organ transplant list waiting for donor organs to become available. Patients on the donor list can wait days, weeks, months, or even years for a suitable organ. The average wait times for some common organ transplants are as follows: four months for a heart or lung, eleven months for a liver, two years for a pancreas, and five years for a kidney. This is a significant increase from the 1990s, when a patient could wait as little as five weeks for a heart. These extensive wait times are due to a shortage of organs as well as the requirement of finding an organ that is suitable for the recipient. An organ is deemed suitable for a patient based on blood type, comparable body size between donor and recipient, the severity of the patient's medical condition, the length of time the patient has been waiting for an organ, patient availability (e.g. whether the patient can be contacted, or whether the patient has an infection), the proximity of the patient to the donor, and the viability time of the donor organ. In the United States, 20 people die every day waiting for organs. 3D organ printing has the potential to remove both of these issues: if organs could be printed as soon as there is need, there would be no shortage, and seeding printed organs with a patient's own cells would eliminate the need to screen donor organs for compatibility. Physician and surgical training Surgical use of 3D printing has evolved from printing surgical instrumentation to the development of patient-specific technologies for total joint replacements, dental implants, and hearing aids. In the field of organ printing, there are applications for both patients and surgeons. For instance, printed organs have been used to model structure and injury, to better understand the anatomy, and to discuss a treatment regimen with patients. For these cases, the functionality of the organ is not required; the model serves as a proof of concept. Such model organs provide a path towards improving surgical techniques, training inexperienced surgeons, and moving towards patient-specific treatments. Pharmaceutical research 3D organ printing technology permits the fabrication of highly complex constructs with great reproducibility, in a fast and cost-effective manner. 3D printing has been used in pharmaceutical research and fabrication, providing a transformative system allowing precise control of droplet size and dose, personalized medicine, and the production of complex drug-release profiles. This technology enables implantable drug delivery devices, in which the drug is injected into the 3D printed organ and is released once in vivo. Organ printing has also been used as a transformative tool for in vitro testing: the printed organ can be utilized in drug discovery and in dosage research on drug-release factors. Organ-on-a-chip Organ printing technology can also be combined with microfluidic technology to develop organs-on-chips. These organs-on-chips have the potential to be used for disease models, aiding drug discovery, and performing high-throughput assays. Organs-on-chips work by providing a 3D model that imitates the natural extracellular matrix, allowing them to display realistic responses to drugs.
Thus far, research has focused on developing liver-on-a-chip and heart-on-a-chip models, but there is potential to develop an entire body-on-a-chip model by integrating multiple 3D printed organ models. The heart-on-a-chip model has already been used to investigate how several drugs with heart rate-based negative side effects, such as the chemotherapeutic drug doxorubicin, could affect people on an individual basis. A newer body-on-a-chip platform includes liver, heart, lung, and kidney-on-a-chip modules; the organs-on-chips are separately printed or constructed and then integrated. Using this platform, drug toxicity studies can be performed in high throughput, lowering the cost and increasing the efficiency of the drug-discovery pipeline. Legal and safety 3D-printing techniques have been used in a variety of industries with the overall goal of fabricating a product. Organ printing, in contrast, is a novel industry that utilizes biological components to develop therapeutic applications for organ transplants. Due to the increased interest in this field, regulation and ethical considerations urgently need to be established. Specifically, there can be legal complications in the translation of this treatment method from pre-clinical to clinical use. Regulation The current American regulation for organ matching is centered on the national registry of organ donors, established after the National Organ Transplant Act was passed in 1984. This act was set in place to ensure equal and honest distribution, although it has proven insufficient given the large demand for organ transplants. Organ printing could help diminish the imbalance between supply and demand by printing patient-specific organ replacements, but this is unfeasible without regulation. The Food and Drug Administration (FDA) is responsible for the regulation of biologics, devices, and drugs in the United States. Due to the complexity of this therapeutic approach, the position of organ printing on this spectrum has not been discerned. Studies have characterized printed organs as multi-functional combination products, meaning they fall between the biologics and device sectors of the FDA; this leads to more extensive processes for review and approval. In 2016, the FDA issued draft guidance on Technical Considerations for Additive Manufactured Devices, and it is currently evaluating new submissions for 3D printed devices. However, the technology itself is not advanced enough for the FDA to regulate it directly as a mainstream therapy. Currently, the 3D printers, rather than the finished products, are the main focus of safety and efficacy evaluations, in order to standardize the technology for personalized treatment approaches. From a global perspective, only the medical device regulatory agencies of South Korea and Japan have provided guidelines applicable to 3D bioprinting. There are also concerns regarding intellectual property and ownership. These can have a large impact on more consequential matters such as piracy, quality control in manufacturing, and unauthorized use on the black market. These considerations focus more on the materials and fabrication processes, and are more extensively explained in the legal aspects subsection of 3D printing. Ethical considerations From an ethical standpoint, there are concerns with respect to the availability of organ printing technologies, the sources of cells, and public expectations.
Although this approach may be less expensive than traditional surgical transplantation, there is skepticism with regard to the social availability of these 3D printed organs. Contemporary research has found a risk of social stratification, whereby the wealthier population would have access to this therapy while the general population remains on the organ registry. The cell sources mentioned previously must also be considered. Organ printing can decrease or eliminate animal studies and trials, but it also raises questions about the ethical implications of autologous and allogeneic sources. More specifically, studies have begun to examine future risks for humans undergoing experimental testing. Generally, this application can give rise to social, cultural, and religious tensions, making worldwide integration and regulation more difficult. Overall, the ethical considerations of organ printing are similar to those of bioprinting in general, but are extrapolated from tissue to organ. Altogether, organ printing has short- and long-term legal and ethical consequences that need to be considered before mainstream production can be feasible. Impact Organ printing for medical applications is still in the developmental stages, so the long-term impacts of organ printing have yet to be determined. Researchers hope that organ printing could decrease the organ transplant shortage. There is currently a shortage of available organs, including livers, kidneys, and lungs. The lengthy wait time to receive life-saving organs is one of the leading causes of death in the United States, and nearly one third of deaths each year in the United States could potentially be delayed or prevented with organ transplants. Currently, the only organ that has been 3D bioprinted and successfully transplanted into a human is a bladder, which was formed from the host's own bladder tissue. Researchers have proposed that a potential positive impact of 3D printed organs is the ability to customize organs for the recipient: developments enabling an organ recipient's own cells to be used to synthesize organs decrease the risk of organ rejection. The ability to print organs has also decreased the demand for animal testing. Animal testing is used to determine the safety of products ranging from makeup to medical devices. Cosmetic companies are already using smaller tissue models to test new products on skin, and the ability to 3D print skin reduces the need for animal trials in makeup testing. In addition, the ability to print models of human organs to test the safety and efficacy of new drugs further reduces the need for animal trials. Researchers at Harvard University determined that drug safety can be accurately tested on smaller tissue models of lungs. The company Organovo, which designed one of the initial commercial bioprinters in 2009, has demonstrated that biodegradable 3D tissue models can be used to research and develop new drugs, including those to treat cancer. An additional impact of organ printing is the ability to rapidly create tissue models, thereby increasing productivity. Challenges One of the challenges of 3D printing organs is recreating the vasculature required to keep the organs alive. Designing a correct vasculature is necessary for the transport of nutrients, oxygen, and waste. Blood vessels, especially capillaries, are difficult to fabricate because of their small diameters.
Progress has been made in this area at Rice University, where researchers designed a 3D printer to make vessels in biocompatible hydrogels and designed a model of lungs that can oxygenate blood. However, this technique is accompanied by the challenge of replicating the other minute details of organs: it is difficult to replicate the entangled networks of airways, blood vessels, and bile ducts, and the complex geometry of organs. The challenges faced in the organ printing field extend beyond the research and development of techniques for multivascularization and difficult geometries. Before organ printing can become widely available, a sustainable source of cells must be found and large-scale manufacturing processes must be developed. Additional challenges include designing clinical trials to test the long-term viability and biocompatibility of synthetic organs. While many developments have been made in the field of organ printing, more research must be conducted.
Technology
Biotechnology
null
3160714
https://en.wikipedia.org/wiki/Teratornithidae
Teratornithidae
Teratornithidae is an extinct family of very large birds of prey that lived in North and South America from the Late Oligocene to the Late Pleistocene. They include some of the largest known flying birds. Its members are known as teratorns. Taxonomy Teratornithidae are related to New World vultures (Cathartidae, syn. Vulturidae). So far, at least seven species in six genera have been identified: Teratornis Teratornis merriami. This is by far the best-known species. Over a hundred specimens have been found, mostly from the La Brea Tar Pits. It was about a third bigger than extant condors, and it became extinct at the end of the Pleistocene, some 10,000 years ago. Teratornis woodburnensis. The first species to be found north of the La Brea Tar Pits, this partial specimen was discovered at Legion Park, Woodburn, Oregon. It is known from a humerus, parts of the cranium, beak, sternum, and vertebrae, which indicate a very large wingspan. The find dates to the Late Pleistocene, between 11,000 and 12,000 years ago, in a stratum which is filled with the bones of mastodons, sloths, and condors, and which shows evidence of human habitation. Aiolornis incredibilis, previously known as Teratornis incredibilis. This species is fairly poorly known; finds from Nevada and California include several wing bones and part of the beak. They show remarkable similarity to merriami but are uniformly about 40% larger, implying a correspondingly greater mass and wingspan for incredibilis. The finds are dated from the Pliocene to the late Pleistocene, a considerable chronological spread, and thus it is uncertain whether they actually represent the same species. Cathartornis gracilis. This species is known only from a couple of leg bones found at La Brea Ranch. Compared to T. merriami, the remains are slightly shorter and clearly more slender, indicating a more gracile build. Argentavis magnificens. A partial skeleton of this enormous teratorn was found in La Pampa, Argentina. It is one of the largest flying birds known to have existed, likely exceeded in wingspan only by Pelagornis sandersi, discovered in 1983. Fossil remains of this species have been dated to the Late Miocene, about 6 to 8 million years ago, and it is one of the few teratorn finds in South America. The initial discovery included portions of the skull, an incomplete humerus, and several other wing bones. Even conservative estimates of its wingspan are very large, and its weight would have been correspondingly great. Another form, "Teratornis" olsoni, was described from the Pleistocene of Cuba, but its affinities are not completely resolved; it might not be a teratorn at all, and it has also been placed in its own genus, Oscaravis. There are also undescribed fossils from southwestern Ecuador. Taubatornis campbelli is the earliest known teratorn species, from the Late Oligocene or Early Miocene of the Tremembé Formation, Taubaté Basin, Brazil. Classification Teratornithidae has only been included in a single phylogenetic analysis, published by Steven Emslie in 1988. The analysis was conducted using cranial characters of various taxa within the order Ciconiiformes, with a specific focus on Vulturidae (Cathartidae). This analysis included Teratornis merriami as a representative of Teratornithidae and found the group to fall just outside Vulturidae.
Description and ecology Despite their size, there is little doubt that even the largest teratorns could fly. Visible marks of the attachments of contour feathers can be seen on Argentavis wing bones. This defies some earlier theories that extant condors, swans, and bustards represent the size limit for flying birds. The wing loading of Argentavis was relatively low for its size, a bit more than a turkey's, and if there were any significant wind present, the bird could probably get airborne merely by spreading its wings, just like modern albatrosses. South America during the Miocene probably featured strong and steady westerly winds, as the Andes were still forming and not yet very high. T. merriami was small enough (relatively speaking) to take off with a simple jump and a few flaps. The finger bones are mostly fused as in all birds, but the former index finger has partially evolved into a wide shelf, at least in T. merriami and, as condors have a similar adaptation, probably in other species too. Wing length estimates vary considerably, but more likely than not were at the upper end of the range, because this bone structure bears the load of the massive primaries. Studies on condor flight suggest that even the largest teratorns were capable of flight in normal conditions, as modern large soaring birds rarely flap their wings regardless of terrain. Traditionally, teratorns have been described as large scavengers, very much like oversized condors, owing to their considerable similarity to condors. However, the long beaks and wide gapes of teratorns are more like the beaks of eagles and other actively predatory birds than those of vultures. Most likely teratorns swallowed their prey whole; Argentavis could technically swallow up to hare-sized animals in a single piece. Although they undoubtedly engaged in opportunistic scavenging, they seem to have been active predators most of the time. Teratorns had relatively longer and stouter legs than Old World vultures; thus it seems possible that teratorns would stalk their prey on the ground (much like extant caracaras), and take off only to fly to another feeding ground or their nests; Cathartornis in particular seems well-adapted for such a lifestyle. Argentavis may have been an exception, as its sheer bulk would have made it a less effective hunter, but better adapted to taking over other predators' kills. As teratorns were not habitual scavengers, they most likely had completely feathered heads, unlike vultures. Nonetheless, the skull features of teratorns share several crucial similarities with those of specialized scavenging raptors. Many Old World vultures possess large bills similar to those of teratorns, and a longer bill is in fact an anatomical feature that points toward a scavenging rather than a predatory lifestyle, as it allows the bird to probe deeper into large carcases, larger than those fed upon by actively hunting raptors. Other anatomical features, such as the relatively small and sideward-facing orbits and the lower skull, are also consistent with a scavenging lifestyle. More sideward-facing eyes give scavenging raptors a wider field of vision, which is advantageous in spotting carcases. In contrast, predatory raptors usually have proportionally larger and more forward-facing orbits, as depth perception is more important for a predatory lifestyle. As in other large birds, a clutch probably had only one or two eggs; the young would be cared for for more than half a year, and take several years to reach maturity, probably up to 12 years in Argentavis.
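The flight argument above turns on wing loading, the bird's weight divided by its wing area. Because the article's measurements were lost in extraction, the sketch below uses illustrative values in the range of published estimates for Argentavis; the mass and wing-area numbers are assumptions for demonstration, not figures from this article.

```python
# Illustrative wing-loading calculation for a large soaring bird.
# Mass and wing area are hypothetical placeholders in the range of
# published Argentavis estimates, not figures from this article.

G = 9.81  # gravitational acceleration, m/s^2

def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Return wing loading in newtons per square metre (N/m^2)."""
    return mass_kg * G / wing_area_m2

# Assumed values: ~70 kg mass and ~8 m^2 wing area.
print(f"{wing_loading(70.0, 8.0):.0f} N/m^2")  # ~86 N/m^2
# The article describes this as only a bit more than a turkey's wing
# loading - low for such a heavy bird, hence the soaring argument.
```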
Biology and health sciences
Prehistoric birds
Animals
3161666
https://en.wikipedia.org/wiki/TW%20Hydrae
TW Hydrae
TW Hydrae is a T Tauri star approximately 196 light-years away in the constellation of Hydra (the Sea Serpent). TW Hydrae is about 80% of the mass of the Sun, but is only about 5–10 million years old. The star appears to be accreting from a protoplanetary disk of dust and gas, oriented face-on to Earth, which has been resolved in images from the ALMA observatory. TW Hydrae is accompanied by about twenty other low-mass stars with similar ages and spatial motions, comprising the "TW Hydrae association" or TWA, one of the closest regions of recent "fossil" star formation to the Sun. Stellar characteristics TW Hydrae is a pre-main-sequence star that is approximately 80% of the mass and 111% of the radius of the Sun. It has a temperature of 4000 K and is about 8 million years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star's luminosity is 28% (0.28x) that of the Sun, equivalent to that of a main-sequence star of spectral type ~K2; however, its actual spectral class is K6. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 11.27. It is too dim to be seen with the naked eye. Planetary system The star is known to host one likely exoplanet, TW Hydrae b. Protoplanetary disk Previously disproven protoplanet In December 2007, a team led by Johny Setiawan of the Max Planck Institute for Astronomy in Heidelberg, Germany announced the discovery of a planet orbiting TW Hydrae, dubbed "TW Hydrae b", with a minimum mass around 1.2 Jupiter masses, a period of 3.56 days, and an orbital radius of 0.04 astronomical units (inside the inner rim of the protoplanetary disk). Assuming it orbits in the same plane as the outer part of the dust disk (inclination 7±1°), it would have a true mass of 9.8±3.3 Jupiter masses. However, if the inclination is similar to that of the inner part of the dust disk (4.3±1.0°), the mass would be 16 Jupiter masses, making it a brown dwarf. Since the star itself is so young, it was presumed to be the youngest extrasolar planet yet discovered, and essentially still in formation. In 2008 a team of Spanish researchers concluded that the planet does not exist: the radial velocity variations were not consistent when observed at different wavelengths, which would not occur if their origin were an orbiting planet. Instead, the data were better modelled by starspots on TW Hydrae's surface passing in and out of view as the star rotates: "Results support the spot scenario rather than the presence of a hot Jupiter around TW Hya". Similar wavelength-dependent radial velocity variations, also caused by starspots, have been detected on other T Tauri stars. New study of more distant planet In 2016, ALMA found evidence that a possible Neptune-like planet was forming in its disk, at a distance of around 22 AU. Outflow of an embedded protoplanet In 2024, observations with ALMA detected sulfur monoxide emission representing an outflow from an embedded protoplanet. The position of the brightest emission coincides with a planet-carved dust gap at 42 au. This gap had previously been associated with the formation of a super-Earth; from modelling of the outflow velocity, the researchers estimated a mass of about 4 Earth masses. The mass accretion rate of this embedded protoplanet is constrained to between 3×10−7 and 10−5 per year. Detection of methanol In 2016, methanol, one of the building blocks for life, was detected in the star's protoplanetary disk.
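The quoted true-mass figures follow from the standard radial-velocity relation: Doppler measurements only yield the minimum mass m sin i, so the true mass is m_true = m_min / sin i for an assumed orbital inclination i. A minimal sketch checking the article's arithmetic, using only numbers from the text above:

```python
import math

def true_mass(m_min_jup: float, inclination_deg: float) -> float:
    """Convert a radial-velocity minimum mass (m sin i) to a true mass,
    assuming the orbit is inclined at the given angle to the sky plane."""
    return m_min_jup / math.sin(math.radians(inclination_deg))

m_min = 1.2  # Jupiter masses, from the 2007 announcement

print(f"{true_mass(m_min, 7.0):.1f} M_Jup")   # ~9.8  (outer-disk inclination)
print(f"{true_mass(m_min, 4.3):.1f} M_Jup")   # ~16.0 (inner-disk inclination)
```

The near-face-on geometry is why the two quoted masses differ so much: at small i, sin i is small and the true mass is highly sensitive to the assumed inclination.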
Physical sciences
Notable stars
Astronomy
3164495
https://en.wikipedia.org/wiki/Australopithecus%20bahrelghazali
Australopithecus bahrelghazali
Australopithecus bahrelghazali is an extinct species of australopithecine discovered in 1995 at Koro Toro, Bahr el Gazel, Chad, existing around 3.5 million years ago in the Pliocene. It is the first and only australopithecine known from Central Africa, and demonstrates that this group was widely distributed across Africa as opposed to being restricted to East and southern Africa as previously thought. The validity of A. bahrelghazali has not been widely accepted, in favour of classifying the specimens as A. afarensis, a better-known Pliocene australopithecine from East Africa, because of the anatomical similarity and the fact that A. bahrelghazali is known only from three partial jawbones and an isolated premolar. The specimens inhabited a lakeside grassland environment with sparse tree cover, possibly similar to the modern Okavango Delta, and similarly predominantly ate C4 savanna foods (such as grasses, sedges, storage organs, or rhizomes) and to a lesser degree also C3 forest foods (such as fruits, flowers, pods, or insects). However, the teeth seem ill-equipped to process C4 plants, so its true diet is unclear. Research history In 1995, two specimens were recovered from Koro Toro, Bahr el Gazel, Chad: KT12/H1 or "Abel" (a jawbone preserving the premolars, canines, and the right second incisor) and KT12/H2 (an isolated first upper premolar). They were discovered by the Franco-Chadian Paleoanthropological Mission, and reported by French palaeontologist Michel Brunet, French geographer Alain Beauvilain, French anthropologist Yves Coppens, French palaeontologist Emile Heintz, Chadian geochemist engineer Aladji Hamit Elimi Ali Moutaye, and British palaeoanthropologist David Pilbeam. Based on the wildlife assemblage, the remains were roughly dated to the middle to late Pliocene, 3.5–3 million years ago; consequently, the describers decided to preliminarily assign the remains to Australopithecus afarensis, which inhabited Ethiopia during that time period, pending more detailed anatomical comparisons. In 1996, they allocated it to a new species, A. bahrelghazali, naming it after the region; Bahr el Gazel means "River of the Gazelles" in classical Arabic. They denoted KT12/H1 as the holotype and KT12/H2 as a paratype. Another jawbone was discovered at the K13 site in 1997, and a third at the KT40 site. In 2008, a pelite (a type of sedimentary rock) recovered from the same sediments as Abel was radiometrically dated (using the 10Be/9Be ratio) to have been deposited 3.58 million years ago. However, Beauvilain responded that Abel was not found in situ but at the edge of a shallow gulley, and it is impossible to determine in which stratigraphic unit the specimen (or any other fossil from Koro Toro) was originally deposited, which would be needed to radiometrically date it accurately. Nonetheless, Abel was redated in 2010 using the same methods to about 3.65 million years ago, and Brunet agreed with an age of roughly 3.5 million years ago. A. bahrelghazali was the first australopithecine recovered from Central Africa, and disproved the earlier notion that australopithecines were restricted to east of the eastern branch of the East African Rift, which formed in the Late Miocene. Koro Toro is situated about from the Rift Valley, and the remains suggest australopithecines were widely distributed in grassland and woodland zones across the continent. The lack of other Central and West African australopithecines may be due to sampling bias, as similarly aged fossil-bearing sediments are more or less unknown beyond East Africa.
The ancestors of A. bahrelghazali may have left East Africa via the Central African Shear Zone. In 2014, the first australopithecine in the western branch of the East African Rift was reported in Ishango, Democratic Republic of the Congo. At present, the classification of Australopithecus and Paranthropus species is in disarray. Australopithecus is considered a grade taxon, whose members are united by their similar physiology rather than by how close they are to each other in the hominin family tree. In an attempt to resolve this, in 2003, Spanish writer Camilo José Cela Conde and evolutionary biologist Francisco J. Ayala proposed splitting off the genus "Praeanthropus" and including A. bahrelghazali alongside Sahelanthropus (the only other fossil ape known from Chad), A. anamensis, A. afarensis, and A. garhi. The validity of A. bahrelghazali has not been widely accepted given how few remains there are and how similar they are to those of A. afarensis. Anatomy The teeth of KT12/H1 are quite similar to those of A. afarensis, with large, incisor-like canines and bicuspid premolars (as opposed to molar-like premolars). Unlike in A. afarensis, the alveolar part of the jawbone, where the tooth sockets are, is almost vertical as opposed to oblique; the jaw possesses a poorly developed superior transverse torus and a moderate inferior torus (two ridges on the midline of the jaw on the tongue side); and the enamel on the chewing surface of the premolars is thin. Brunet and colleagues had listed the presence of three distinct tooth roots as a distinguishing characteristic, but the third premolar of the A. afarensis LH-24 specimen from Middle Awash, Ethiopia, was described in 2000 as having the same feature, which shows that premolar anatomy was highly variable in A. afarensis. The mandibular symphysis (at the midline of the jaw) of KT40 especially, as well as that of KT12/H1, has the same dimensions as the symphysis of A. afarensis, though it is relatively thick compared to its height. Palaeoecology Carbon isotope analysis indicates a diet of predominantly C4 savanna foods, such as grasses, sedges, underground storage organs (USOs), or rhizomes. There is a smaller C3 portion which may have comprised more typical ape food items such as fruits, flowers, pods, or insects. This indicates that, like contemporary and later australopiths, A. bahrelghazali was capable of exploiting whatever food was abundant in its environment, whereas most primates (including savanna chimps) avoid C4 foods. However, despite 55–80% of its δ13C deriving from C4 sources, similar to Paranthropus boisei and the modern gelada (and considerably more than in any tested A. afarensis population), A. bahrelghazali lacks the specialisations for such a diet. Because the teeth are not hypsodont, it could not have chewed large quantities of grass, and because the enamel is so thin, the teeth would not have been able to withstand the abrasive dirt particles on USOs. In regard to C4 sources, chimps and bonobos (which have even thinner enamel) consume plant medullas as a fallback food and sedges as an important energy and protein source; however, a sedge-based diet likely could not have sustained A. bahrelghazali. During the Pliocene, around the expanded Lake Chad (or "Lake Mega-Chad"), insect trace fossils indicate this was a well-vegetated region, and the abundance of rhizomes may suggest a seasonal climate with wet and dry seasons.
Koro Toro has yielded several large mammals, including several antelopes, of which some were endemic, the elephant Loxodonta exoptata, the white rhinoceros Ceratotherium praecox, the pig Kolpochoerus afarensis, a Hipparion horse, a Sivatherium, and a giraffe. Some of these are also known from Pliocene East African sites, implying that animals could freely migrate between the east and west of the Great African Rift. Among bovids, the K13 site features an abundance of Reduncinae, Alcelaphinae, and Antilopinae, whereas Tragelaphini are much rarer, indicating an open environment that was drier than Pliocene East African sites. In total, the area seems to have been predominantly grasslands with some tree cover. In addition, the area featured aquatic creatures, predominantly catfish, and also 10 other kinds of fish, the hippo Hexaprotodon protamphibius, an otter, a Geochelone tortoise, a Trionyx softshell turtle, a false gharial, and an anatid waterbird. These aquatic animals indicate Koro Toro had open-water lakes or streams with swampy grassy margins, connected to the Nilo-Sudan waterways (including the Nile, Chari, Niger, Senegal, Volta, and Gambia Rivers). Koro Toro, during Mega-Chad events (which have been cyclical for the last 7 million years), may have been similar to the modern Okavango Delta.
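The C4 dietary fractions quoted above are conventionally derived from a two-endmember linear mixing model applied to tooth-enamel δ13C values. The sketch below illustrates that standard calculation; the endmember values are typical figures assumed for illustration, not numbers from this article.

```python
# Two-endmember mixing model for estimating the C4 fraction of a diet
# from tooth-enamel delta-13C. The endmember values below are
# illustrative assumptions (typical of the literature), not figures
# taken from this article.

D13C_C3_ENAMEL = -12.0  # per mil, enamel of a pure C3 feeder (assumed)
D13C_C4_ENAMEL = 1.0    # per mil, enamel of a pure C4 feeder (assumed)

def c4_fraction(d13c_sample: float) -> float:
    """Linearly interpolate between the C3 and C4 endmembers."""
    frac = (d13c_sample - D13C_C3_ENAMEL) / (D13C_C4_ENAMEL - D13C_C3_ENAMEL)
    return min(max(frac, 0.0), 1.0)  # clamp to the physical range

# Example: a hypothetical enamel value of -3.5 per mil
print(f"{c4_fraction(-3.5):.0%} C4")  # ~65%, within the 55-80% range cited
```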
Biology and health sciences
Australopithecines
Biology
635484
https://en.wikipedia.org/wiki/Stinger
Stinger
A stinger (or sting) is a sharp organ found in various animals (typically insects and other arthropods) capable of injecting venom, usually by piercing the epidermis of another animal. An insect sting is complicated by its introduction of venom, although not all stings are venomous. Bites, which can introduce saliva as well as additional pathogens and diseases, are often confused with stings, and vice versa. Specific components of venom are believed to give rise to an allergic reaction, which in turn produces skin lesions that may vary from a small itching weal, or slightly elevated area of the skin, to large areas of inflamed skin covered by vesicles and crusted lesions. Stinging insects produce a painful swelling of the skin, the severity of the lesion varying according to the location of the sting, the identity of the insect and the sensitivity of the subject. Many species of bees and wasps have two poison glands, one gland secreting a toxin in which formic acid is one recognized constituent, and the other secreting an alkaline neurotoxin; acting independently, each toxin is rather mild, but when they combine through the sting, the combination has strong irritating properties. In a small number of cases, a second bee or wasp sting causes a severe allergic reaction known as anaphylaxis. While the overwhelming majority of insects withdraw their stingers from their victims, a few insects leave them in the wounds. For example, of the 20,000 species of bees worldwide, only the half-dozen species of honeybees (Apis) are reported to have a barbed stinger that cannot be withdrawn; of wasps, nearly all are reported to have smooth stingers, with the exception of two species, Polybia rejecta and Synoeca surinama. A few non-insect arthropods, such as scorpions, also sting. Arthropods Among arthropods, a sting or stinger is a sharp organ, often connected with a venom gland and adapted to inflict a wound by piercing, as with the caudal sting of a scorpion. Stings are usually located at the rear of the animal. Animals with stings include bees, wasps (including hornets), some ants like fire ants, and scorpions, as well as a single beetle species (Onychocerus albitarsis) that can deliver a venomous sting from its antennae, whose terminal segments have evolved to resemble a scorpion's tail. In all stinging Hymenoptera the sting is a modified ovipositor. Unlike most other stings, honey bee workers' stings are strongly barbed and lodge in the flesh of mammals upon use, tearing free from the honey bee's body, killing the bee within minutes. The sting has its own ganglion, and it continues to saw into the target's flesh and release venom for several minutes. This trait is of obvious disadvantage to the individual but protects the hive from attacks by large animals; aside from the effects of the venom, the remnant also marks the stung animal with honey bee alarm pheromone. The barbs of a honey bee's sting are only suicidal if the skin is elastic, as is characteristic of vertebrates such as birds and mammals; honey bees can sting other insects repeatedly without dying. The sting of nearly all other bees and other sting-bearing organisms is not barbed and can be used to sting repeatedly. The description of barbed or unbarbed is not precise: there are barbs on the stings of yellowjacket wasps and the Mexican honey wasp, but the barbs are so small that the wasp can sometimes withdraw its sting apparatus from the victim's skin.
The stings of some wasps, such as those of Polistes versicolor, contain relatively large amounts of 5-hydroxytryptamine (5-HT) in their venom. The 5-HT in these venoms has been found to play at least two roles: one as a pain-producing agent and the other in the distribution and penetration of the paralyzing components to vulnerable sites in the victim. This helps in the rapid immobilization of the animal or of the body parts receiving the venom. Spiders only bite, although some tarantulas have barbed bristles called urticating hairs. Certain caterpillars also have urticating hairs. Centipedes likewise possess a venomous bite rather than a sting, inflicted with a highly modified first pair of legs called forcipules. Stingrays, platypuses and jellyfish Organs that perform similar functions in non-arthropods are often referred to as "stings". These organs include the modified dermal denticle of the stingray, the venomous spurs on the hind legs of the male platypus, and the cnidocyte tentacles of the jellyfish.
Biology and health sciences
Integumentary system
Biology
635489
https://en.wikipedia.org/wiki/Olfactory%20system
Olfactory system
The olfactory system is the sensory system used for the sense of smell (olfaction). Olfaction is one of the special senses directly associated with specific organs. Most mammals and reptiles have a main olfactory system and an accessory olfactory system. The main olfactory system detects airborne substances, while the accessory system senses fluid-phase stimuli. The senses of smell and taste (gustatory system) are often referred to together as the chemosensory system, because they both give the brain information about the chemical composition of objects through a process called transduction. Structure Peripheral The peripheral olfactory system consists mainly of the nostrils, ethmoid bone, nasal cavity, and the olfactory epithelium (layers of thin tissue covered in mucus that line the nasal cavity). The primary components of the layers of epithelial tissue are the mucous membranes, olfactory glands, olfactory neurons, and nerve fibers of the olfactory nerves. Odor molecules can enter the peripheral pathway and reach the nasal cavity either through the nostrils when inhaling (olfaction) or through the throat when the tongue pushes air to the back of the nasal cavity while chewing or swallowing (retro-nasal olfaction). Inside the nasal cavity, mucus lining the walls of the cavity dissolves odor molecules. Mucus also covers the olfactory epithelium, which contains mucous membranes that produce and store mucus, and olfactory glands that secrete metabolic enzymes found in the mucus. Transduction Olfactory sensory neurons in the epithelium detect odor molecules dissolved in the mucus and transmit information about the odor to the brain in a process called sensory transduction. Olfactory neurons have cilia (tiny hairs) containing olfactory receptors that bind to odor molecules, causing an electrical response that spreads through the sensory neuron to the olfactory nerve fibers at the back of the nasal cavity. Olfactory nerves and fibers transmit information about odors from the peripheral olfactory system to the central olfactory system of the brain, which is separated from the epithelium by the cribriform plate of the ethmoid bone. Olfactory nerve fibers, which originate in the epithelium, pass through the cribriform plate, connecting the epithelium to the brain's limbic system at the olfactory bulbs. Central The main olfactory bulb transmits pulses to both mitral and tufted cells, which help determine odor concentration based on the time at which certain neuron clusters fire (called a 'timing code'). These cells also note differences between highly similar odors and use that data to aid in later recognition. The two cell types differ: mitral cells have low firing rates and are easily inhibited by neighboring cells, while tufted cells have high firing rates and are more difficult to inhibit. How the bulbar neural circuit transforms odor inputs to the bulb into the bulbar responses that are sent to the olfactory cortex can be partly understood by a mathematical model. The uncus houses the olfactory cortex, which includes the piriform cortex (posterior orbitofrontal cortex), amygdala, olfactory tubercle, and parahippocampal gyrus. The olfactory tubercle connects to numerous areas of the amygdala, thalamus, hypothalamus, hippocampus, brain stem, retina, auditory cortex, and olfactory system. In total it has 27 inputs and 20 outputs.
An oversimplification of its role is to state that it: checks to ensure odor signals arose from actual odors rather than villi irritation; regulates motor behavior (primarily social and stereotypical) brought on by odors; integrates auditory and olfactory sensory info to complete the aforementioned tasks; and plays a role in transmitting positive signals to reward sensors (and is thus involved in addiction). The amygdala (in olfaction) processes pheromone, allomone, and kairomone (same-species, cross-species, and cross-species where the emitter is harmed and the sensor is benefited, respectively) signals. Due to cerebrum evolution this processing is secondary and therefore largely goes unnoticed in human interactions. Allomones include flower scents, natural herbicides, and natural toxic plant chemicals. The info for these processes comes from the vomeronasal organ indirectly via the olfactory bulb. The main olfactory bulb's pulses in the amygdala are used to pair odors to names and to recognize odor-to-odor differences. The bed nuclei of the stria terminalis (BNST) act as the information pathway between the amygdala and hypothalamus, as well as between the hypothalamus and pituitary gland. BNST abnormalities often lead to sexual confusion and immaturity. The BNST also connect to the septal area, rewarding sexual behavior. Mitral pulses to the hypothalamus promote or discourage feeding, whereas accessory olfactory bulb pulses regulate reproductive and odor-related-reflex processes. The hippocampus (although minimally connected to the main olfactory bulb) receives almost all of its olfactory information via the amygdala (either directly or via the BNST). The hippocampus forms new memories and reinforces existing ones. Similarly, the parahippocampus encodes, recognizes and contextualizes scenes. The parahippocampal gyrus houses the topographical map for olfaction. The orbitofrontal cortex (OFC) is heavily correlated with the cingulate gyrus and septal area to act out positive/negative reinforcement. The OFC encodes the expectation of reward or punishment in response to stimuli, and represents emotion and reward in decision making. The anterior olfactory nucleus distributes reciprocal signals between the olfactory bulb and piriform cortex. The anterior olfactory nucleus is the memory hub for smell. When different odor objects or components are mixed, humans and other mammals sniffing the mixture (presented by, e.g., a sniff bottle) are often unable to identify the components in the mixture even though they can recognize each individual component presented alone. This is largely because each odor sensory neuron can be excited by multiple odor components. It has been proposed that, in an olfactory environment typically composed of multiple odor components (e.g., the odor of a dog entering a kitchen that contains a background coffee odor), feedback from the olfactory cortex to the olfactory bulb suppresses the pre-existing odor background (e.g., coffee) via olfactory adaptation, so that the newly arrived foreground odor (e.g., dog) can be singled out from the mixture for recognition. Clinical significance Loss of smell is known as anosmia. Anosmia can occur on both sides or on a single side. Olfactory problems can be divided into different types based on their malfunction. The olfactory dysfunction can be total (anosmia), incomplete (partial anosmia, hyposmia, or microsmia), distorted (dysosmia), or can be characterized by spontaneous sensations like phantosmia.
An inability to recognize odors despite a normally functioning olfactory system is termed olfactory agnosia. Hyperosmia is a rare condition typified by an abnormally heightened sense of smell. Like vision and hearing problems, olfactory problems can be bilateral or unilateral, meaning that if a person has anosmia on the right side of the nose but not the left, it is a unilateral right anosmia; if it is on both sides of the nose, it is called bilateral anosmia or total anosmia. Destruction of the olfactory bulb, tract, and primary cortex (Brodmann area 34) results in anosmia on the same side as the destruction. Also, an irritative lesion of the uncus results in olfactory hallucinations. Damage to the olfactory system can occur by traumatic brain injury, cancer, infection, inhalation of toxic fumes, or neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease. These conditions can cause anosmia. In contrast, recent findings suggest that the molecular aspects of olfactory dysfunction can be recognized as a hallmark of amyloidogenesis-related diseases, and there may even be a causal link through the disruption of multivalent metal ion transport and storage. Doctors can detect damage to the olfactory system by presenting the patient with odors via a scratch-and-sniff card or by having the patient close their eyes and try to identify commonly available odors like coffee or peppermint candy. Doctors must exclude other diseases that inhibit or eliminate the sense of smell, such as chronic colds or sinusitis, before making the diagnosis that there is permanent damage to the olfactory system. Prevalence of olfactory dysfunction in the general US population was assessed by questionnaire and examination in a national health survey in 2012–2014. Among over a thousand persons aged 40 years and older, 12.0% reported a problem with smell in the past 12 months and 12.4% had olfactory dysfunction on examination. Prevalence rose from 4.2% at age 40–49 to 39.4% at 80 years and older, and was higher in men than women, in blacks and Mexican Americans than in whites, and in less-educated than in more-educated people. Of concern for safety, 20% of persons aged 70 and older were unable to identify smoke, and 31% were unable to identify natural gas. Causes of olfactory dysfunction Olfaction is a vital sense, and its dysfunction may lead to a reduced quality of life, an inability to detect hazardous odors, decreased pleasure in eating, and poor mental health. The common causes of olfactory dysfunction include advanced age, viral infections, exposure to toxic chemicals, head trauma, and neurodegenerative diseases. Age Age is the strongest factor in olfactory decline in healthy adults, having an even greater impact than cigarette smoking. Age-related changes in smell function often go unnoticed, and smell ability is rarely tested clinically, unlike hearing and vision. 2% of people under 65 years of age have chronic smelling problems. This increases greatly among people between the ages of 65 and 80, about half of whom experience significant problems smelling; for adults over 80, the figure rises to almost 75%. The basis for age-related changes in smell function includes closure of the cribriform plate and cumulative damage to the olfactory receptors from repeated viral and other insults throughout life. Viral infections The most common causes of permanent hyposmia and anosmia are upper respiratory infections.
Such dysfunctions show no change over time and can sometimes reflect damage not only to the olfactory epithelium, but also to the central olfactory structures as a result of viral invasions into the brain. Among these virus-related disorders are the common cold, hepatitis, influenza and influenza-like illness, as well as herpes. Notably, COVID-19 is associated with olfactory disturbance. Most viral infections are unrecognizable because they are so mild or entirely asymptomatic. There are no known cures for olfactory loss due to viral infections; however, olfactory training is a highly recommended option, as are oral steroids for a short period when discussed with a medical professional. Exposure to toxic chemicals Chronic exposure to some airborne toxins such as herbicides, pesticides, solvents, and heavy metals (cadmium, chromium, nickel, and manganese) can alter the ability to smell. A study conducted in 2023 found that hairdressers who were exposed to formaldehyde, an ingredient found in hair dye, experienced olfactory loss when protective equipment was not used. These agents not only damage the olfactory epithelium, but are also likely to enter the brain via the olfactory mucosa. Head trauma Trauma-related olfactory dysfunction depends on the severity of the trauma and on whether strong acceleration/deceleration of the head occurred. Occipital and side impacts cause more damage to the olfactory system than frontal impacts. However, recent evidence from individuals with traumatic brain injury suggests that smell loss can occur with changes in brain function outside of the olfactory cortex. Neurodegenerative diseases Neurologists have observed that olfactory dysfunction is a cardinal feature of several neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease. Most of these patients are unaware of an olfactory deficit until after testing, in which 85% to 90% of early-stage patients showed decreased activity in central odor-processing structures. Other neurodegenerative diseases that involve olfactory dysfunction include Huntington's disease, multi-infarct dementia, amyotrophic lateral sclerosis, and schizophrenia. These diseases have more moderate effects on the olfactory system than Alzheimer's or Parkinson's disease. Furthermore, progressive supranuclear palsy and parkinsonism are associated with only minor olfactory problems. These findings have led to the suggestion that olfactory testing may help in the diagnosis of several different neurodegenerative diseases. Neurodegenerative diseases with well-established genetic determinants are also associated with olfactory dysfunction. Such dysfunction, for example, is found in patients with familial Parkinson's disease and in those with Down syndrome. Further studies have concluded that the olfactory loss may be associated with intellectual disability, rather than with any Alzheimer's disease-like pathology. Huntington's disease is also associated with problems in odor identification, detection, discrimination, and memory. The problem is prevalent once the phenotypic elements of the disorder appear, although it is unknown how far in advance the olfactory loss precedes the phenotypic expression. History Linda B. Buck and Richard Axel won the 2004 Nobel Prize in Physiology or Medicine for their work on the olfactory system.
Biology and health sciences
Nervous system
null
635490
https://en.wikipedia.org/wiki/Auditory%20system
Auditory system
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs (the ears) and the auditory parts of the sensory system. System overview The outer ear funnels sound vibrations to the eardrum, increasing the sound pressure in the middle frequency range. The middle-ear ossicles further amplify the vibration pressure roughly 20 times. The base of the stapes couples vibrations into the cochlea via the oval window, which vibrates the perilymph liquid (present throughout the inner ear) and causes the round window to bulge out as the oval window bulges in. Vestibular and tympanic ducts are filled with perilymph, and the smaller cochlear duct between them is filled with endolymph, a fluid with a very different ion concentration and voltage. Vestibular duct perilymph vibrations bend the outer hair cells of the organ of Corti (arranged in four rows), causing prestin to be released at the cell tips. This causes the cells to be chemically elongated and shrunk (somatic motor), and hair bundles to shift, which, in turn, electrically affects the basilar membrane's movement (hair-bundle motor). These motors (outer hair cells) amplify the traveling wave amplitudes over 40-fold. The outer hair cells (OHC) are minimally innervated by spiral ganglion in slow (unmyelinated) reciprocal communicative bundles (30+ hairs per nerve fiber); this contrasts with inner hair cells (IHC), which have only afferent innervation (30+ nerve fibers per hair) but are heavily connected. There are three to four times as many OHCs as IHCs. The basilar membrane (BM) is a barrier between the scalae, along the edge of which the IHCs and OHCs sit. Basilar membrane width and stiffness vary to control the frequencies best sensed by the IHC. At the cochlear base the BM is at its narrowest and most stiff (high frequencies), while at the cochlear apex it is at its widest and least stiff (low frequencies). The tectorial membrane (TM) helps facilitate cochlear amplification by stimulating OHC (directly) and IHC (via endolymph vibrations). TM width and stiffness parallel the BM's and similarly aid in frequency differentiation. The superior olivary complex (SOC), in the pons, is the first convergence of the left and right cochlear pulses. SOC has 14 described nuclei; their abbreviations are used here (see Superior olivary complex for their full names). The MSO determines the angle a sound came from by measuring time differences between the left and right inputs. The LSO normalizes sound levels between the ears; it uses the sound intensities to help determine the sound angle. LSO innervates the IHC. VNTB innervate OHC. MNTB inhibit LSO via glycine. LNTB are glycine-immune, used for fast signalling. DPO are high-frequency and tonotopic. DLPO are low-frequency and tonotopic. VLPO have the same function as DPO, but act in a different area. PVO, CPO, RPO, VMPO, ALPO and SPON (inhibited by glycine) are various signalling and inhibiting nuclei. The trapezoid body is where most of the cochlear nucleus (CN) fibers decussate (cross left to right and vice versa); this crossing aids in sound localization. The CN breaks into ventral (VCN) and dorsal (DCN) regions. The VCN has three nuclei. Bushy cells transmit timing info; their shape averages out timing jitter. Stellate (chopper) cells encode sound spectra (peaks and valleys) by spatial neural firing rates based on auditory input strength (rather than frequency). Octopus cells have close to the best temporal precision while firing; they decode the auditory timing code. The DCN has two nuclei, and also receives info from the VCN.
Fusiform cells integrate information to determine spectral cues to locations (for example, whether a sound originated from in front or behind). Cochlear nerve fibers (30,000+) each have a most sensitive frequency and respond over a wide range of levels. Simplified, nerve fibers' signals are transported by bushy cells to the binaural areas in the olivary complex, while signal peaks and valleys are noted by stellate cells, and signal timing is extracted by octopus cells. The lateral lemniscus has three nuclei: dorsal nuclei respond best to bilateral input and have complexity-tuned responses; intermediate nuclei have broad tuning responses; and ventral nuclei have broad and moderately complex tuning curves. Ventral nuclei of the lateral lemniscus help the inferior colliculus (IC) decode amplitude-modulated sounds by giving both phasic and tonic responses (short and long notes, respectively). The IC receives many other inputs as well, including visual areas (the pretectal area, which moves the eyes toward a sound, and the superior colliculus, which handles orientation and behavior toward objects as well as saccadic eye movements), the pons (superior cerebellar peduncle: a thalamus-to-cerebellum connection, allowing a sound to be heard and a behavioral response learned), the spinal cord (periaqueductal grey: hearing a sound and instinctively moving), and the thalamus. These connections are what implicate the IC in the 'startle response' and ocular reflexes. Beyond multisensory integration, the IC responds to specific amplitude-modulation frequencies, allowing for the detection of pitch. The IC also determines time differences in binaural hearing. The medial geniculate nucleus divides into: ventral (relay and relay-inhibitory cells: frequency, intensity, and binaural info topographically relayed), dorsal (broad and complex tuned nuclei: connection to somatosensory info), and medial (broad, complex, and narrow tuned nuclei: relay intensity and sound duration). The auditory cortex (AC) brings sound into awareness/perception. AC identifies sounds (sound-name recognition) and also identifies the sound's origin location. AC is a topographical frequency map with bundles reacting to different harmonies, timing and pitch. Right-hand-side AC is more sensitive to tonality, while left-hand-side AC is more sensitive to minute sequential differences in sound. Rostromedial and ventrolateral prefrontal cortices are involved in activation during tonal space and storing short-term memories, respectively. The Heschl's gyrus/transverse temporal gyrus includes Wernicke's area and functionality; it is heavily involved in emotion-sound, emotion-facial-expression, and sound-memory processes. The entorhinal cortex is the part of the 'hippocampus system' that aids and stores visual and auditory memories. The supramarginal gyrus (SMG) aids in language comprehension and is responsible for compassionate responses. The SMG links sounds to words with the angular gyrus and aids in word choice. The SMG integrates tactile, visual, and auditory info. Structure Outer ear The folds of cartilage surrounding the ear canal are called the auricle. Sound waves are reflected and attenuated when they hit the auricle, and these changes provide additional information that will help the brain determine the sound direction. The sound waves enter the auditory canal, a deceptively simple tube. The ear canal amplifies sounds that are between 3 and 12 kHz. The tympanic membrane, at the far end of the ear canal, marks the beginning of the middle ear. Middle ear Sound waves travel through the ear canal and hit the tympanic membrane, or eardrum.
This wave information travels across the air-filled middle ear cavity via a series of delicate bones: the malleus (hammer), incus (anvil) and stapes (stirrup). These ossicles act as a lever, converting the lower-pressure eardrum sound vibrations into higher-pressure sound vibrations at another, smaller membrane called the oval window or vestibular window. The manubrium (handle) of the malleus articulates with the tympanic membrane, while the footplate (base) of the stapes articulates with the oval window. Higher pressure is necessary at the oval window than at the tympanic membrane because the inner ear beyond the oval window contains liquid rather than air. The stapedius reflex of the middle ear muscles helps protect the inner ear from damage by reducing the transmission of sound energy when the stapedius muscle is activated in response to sound. The middle ear still contains the sound information in wave form; it is converted to nerve impulses in the cochlea. Inner ear The inner ear consists of the cochlea and several non-auditory structures. The cochlea has three fluid-filled sections (i.e. the scala media, scala tympani and scala vestibuli), and supports a fluid wave driven by pressure across the basilar membrane separating two of the sections. Strikingly, one section, called the cochlear duct or scala media, contains endolymph. The organ of Corti is located in this duct on the basilar membrane, and transforms mechanical waves to electric signals in neurons. The other two sections are known as the scala tympani and the scala vestibuli. These are located within the bony labyrinth, which is filled with fluid called perilymph, similar in composition to cerebrospinal fluid. The chemical difference between the endolymph and perilymph fluids is important for the function of the inner ear due to electrical potential differences between potassium and calcium ions. The plan view of the human cochlea (typical of all mammals and most vertebrates) shows where specific frequencies occur along its length. The frequency is an approximately exponential function of the length of the cochlea within the organ of Corti. In some species, such as bats and dolphins, the relationship is expanded in specific areas to support their active sonar capability. Organ of Corti The organ of Corti forms a ribbon of sensory epithelium which runs lengthwise down the cochlea's entire scala media. Its hair cells transform the fluid waves into nerve signals. The journey of countless nerves begins with this first step; from here, further processing leads to a panoply of auditory reactions and sensations. Hair cell Hair cells are columnar cells, each with a "hair bundle" of 100–200 specialized stereocilia at the top, for which they are named. There are two types of hair cells specific to the auditory system: inner and outer hair cells. Inner hair cells are the mechanoreceptors for hearing: they transduce the vibration of sound into electrical activity in nerve fibers, which is transmitted to the brain. Outer hair cells are a motor structure. Sound energy causes changes in the shape of these cells, which serves to amplify sound vibrations in a frequency-specific manner. Lightly resting atop the longest cilia of the inner hair cells is the tectorial membrane, which moves back and forth with each cycle of sound, tilting the cilia, which is what elicits the hair cells' electrical responses. Inner hair cells, like the photoreceptor cells of the eye, show a graded response, instead of the spikes typical of other neurons.
These graded potentials are not bound by the "all or none" properties of an action potential. At this point, one may ask how such a wiggle of a hair bundle triggers a difference in membrane potential. The current model is that cilia are attached to one another by "tip links", structures which link the tips of one cilium to another. When stretched and compressed, the tip links may open an ion channel and produce the receptor potential in the hair cell. Recently it has been shown that cadherin-23 (CDH23) and protocadherin-15 (PCDH15) are the adhesion molecules associated with these tip links. It is thought that a calcium-driven motor causes a shortening of these links to regenerate tension. This regeneration of tension allows for apprehension of prolonged auditory stimulation. Neurons Afferent neurons innervate cochlear inner hair cells, at synapses where the neurotransmitter glutamate communicates signals from the hair cells to the dendrites of the primary auditory neurons. There are far fewer inner hair cells in the cochlea than afferent nerve fibers: many auditory nerve fibers innervate each hair cell. The neural dendrites belong to neurons of the auditory nerve, which in turn joins the vestibular nerve to form the vestibulocochlear nerve, or cranial nerve number VIII. The region of the basilar membrane supplying the inputs to a particular afferent nerve fibre can be considered to be its receptive field. Efferent projections from the brain to the cochlea also play a role in the perception of sound, although this is not well understood. Efferent synapses occur on outer hair cells and on afferent (towards the brain) dendrites under inner hair cells. Neuronal structure Cochlear nucleus The cochlear nucleus is the first site of the neuronal processing of the newly converted "digital" data from the inner ear (see also binaural fusion). In mammals, this region is anatomically and physiologically split into two regions, the dorsal cochlear nucleus (DCN), and ventral cochlear nucleus (VCN). The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN). Trapezoid body The trapezoid body is a bundle of decussating fibers in the ventral pons that carry information used for binaural computations in the brainstem. Some of these axons come from the cochlear nucleus and cross over to the other side before traveling on to the superior olivary nucleus. This is believed to help with localization of sound. Superior olivary complex The superior olivary complex is located in the pons, and receives projections predominantly from the ventral cochlear nucleus, although the dorsal cochlear nucleus projects there as well, via the ventral acoustic stria. Within the superior olivary complex lie the lateral superior olive (LSO) and the medial superior olive (MSO). The former is important in detecting interaural level differences while the latter is important in distinguishing interaural time differences. Lateral lemniscus The lateral lemniscus is a tract of axons in the brainstem that carries information about sound from the cochlear nucleus to various brainstem nuclei and ultimately the contralateral inferior colliculus of the midbrain. Inferior colliculi The inferior colliculi (IC) are located just below the visual processing centers known as the superior colliculi.
The central nucleus of the IC is a nearly obligatory relay in the ascending auditory system, and most likely acts to integrate information (specifically regarding sound source localization from the superior olivary complex and dorsal cochlear nucleus) before sending it to the thalamus and cortex. The inferior colliculus also receives descending inputs from the auditory cortex and auditory thalamus (or medial geniculate nucleus). Medial geniculate nucleus The medial geniculate nucleus is part of the thalamic relay system. Primary auditory cortex The primary auditory cortex is the first region of cerebral cortex to receive auditory input. Perception of sound is associated with the left posterior superior temporal gyrus (STG). The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. We know from research in nonhuman primates that the primary auditory cortex can probably be divided further into functionally differentiable subregions. The neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies and have selective responses to harmonic pitches. Neurons integrating information from the two ears have receptive fields covering a particular region of auditory space. The primary auditory cortex is surrounded by secondary auditory cortex, and interconnects with it. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise. The auditory ventral and dorsal streams From the primary auditory cortex emerge two separate pathways: the auditory ventral stream and auditory dorsal stream. The auditory ventral stream includes the anterior superior temporal gyrus, anterior superior temporal sulcus, middle temporal gyrus and temporal pole. Neurons in these areas are responsible for sound recognition, and extraction of meaning from sentences. The auditory dorsal stream includes the posterior superior temporal gyrus and sulcus, inferior parietal lobule and intra-parietal sulcus. Both pathways project in humans to the inferior frontal gyrus. The most established role of the auditory dorsal stream in primates is sound localization. In humans, the auditory dorsal stream in the left hemisphere is also responsible for speech repetition and articulation, phonological long-term encoding of word names, and verbal working memory. Clinical significance Proper function of the auditory system is required to be able to sense, process, and understand sound from the surroundings. Difficulty in sensing, processing and understanding sound input has the potential to adversely impact an individual's ability to communicate, learn and effectively complete routine tasks on a daily basis. In children, early diagnosis and treatment of impaired auditory system function is an important factor in ensuring that key social, academic and speech/language developmental milestones are met.
Impairment of the auditory system can involve any of the following: auditory processing disorder, hyperacusis, diplacusis, tinnitus, and endaural phenomena; newborn hearing is screened with the auditory brainstem response (ABR) audiometry test.
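The frequency-to-place relationship described above, in which frequency varies approximately exponentially with position along the cochlea, is commonly modelled by the Greenwood function. The article does not name this model, so the sketch below, using the commonly cited human parameter values, is an illustration rather than a statement of the article's method.

```python
import math

def greenwood_frequency(x: float) -> float:
    """Greenwood's frequency-position function for the human cochlea.

    x is the fractional distance from the apex (0.0) to the base (1.0).
    The parameters are the commonly cited human fits (A ~ 165.4 Hz,
    a ~ 2.1, k ~ 0.88); treat them as textbook assumptions.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

print(f"apex: {greenwood_frequency(0.0):8.1f} Hz")  # ~20 Hz (low frequencies)
print(f"mid:  {greenwood_frequency(0.5):8.1f} Hz")  # ~1.7 kHz
print(f"base: {greenwood_frequency(1.0):8.1f} Hz")  # ~20.7 kHz (high frequencies)
```

Note the orientation matches the text: the stiff, narrow base responds to high frequencies and the wide, compliant apex to low ones.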
Biology and health sciences
Nervous system
null
636035
https://en.wikipedia.org/wiki/Convection%20oven
Convection oven
A convection oven (also known as a fan-assisted oven, turbo broiler or simply a fan oven or turbo) is an oven that has fans to circulate air around food to create an evenly heated environment. In an oven without a fan, natural convection circulates hot air unevenly, so that it will be cooler at the bottom and hotter at the top than in the middle. Fan ovens cook food faster, and are also used in non-food, industrial applications. Small countertop convection ovens for household use are often marketed as air fryers. When cooking using a fan-assisted oven, the temperature is usually set lower than for a non-fan oven, often by , to avoid overcooking the outside of the food. Principle of operation Convection ovens distribute heat evenly around the food, removing the blanket of cooler air that surrounds food when it is first placed in an oven and allowing food to cook more evenly in less time and at a lower temperature than in a conventional oven. History The first oven with a fan to circulate air was invented in 1914, but it was never launched commercially. The first convection oven in wide use was the Maxson Whirlwind Oven, introduced in 1945; convection ovens have been in wide use ever since. In 2006, Groupe SEB introduced the world's first air fryer, under the Actifry brand of convection ovens, in the French market. In 2010, Philips introduced the Airfryer brand of convection oven at the IFA Berlin consumer electronics fair. By 2018, the term "air fryer" was starting to be used generically. In the United States, convection ovens experienced a surge in popularity in the late 2010s and early 2020s, with a reported 36% of U.S. households having one in 2020 and an estimated 60% of U.S. households having one in 2023. Food manufacturers have responded by adding air-frying instructions to a number of products, and pre-air-fried products have also come to market. In the UK, air fryers have surged in popularity since the early 2020s, with a 2024 study claiming that 1 in 5 Britons surveyed said that air fryers are their most commonly used cooking device. Design A convection oven has a fan with a heating element around it. A small fan circulates the air in the cooking chamber. One effect of the fan is to reduce the thickness of the stationary thermal boundary layer of cooler air that naturally forms around the food. The boundary layer acts as an insulator and slows the rate at which heat is transferred to the food. By moving the cool air away from the food (convecting it), the layer is thinned and cooking is faster. To prevent overcooking before the middle is cooked, the temperature is usually reduced by about below the setting used for a non-fan oven. In a non-fan oven the temperature varies significantly in different places; a fan distributes hot air evenly for a uniform temperature. Convection ovens may include additional radiant heat sources at the top and bottom of the oven, which provide immediate heat without the warmup time of a (natural or fan-assisted) convection oven. Effectiveness A convection oven allows a reduction in cooking temperature compared to a conventional oven. This comparison will vary, depending on factors including, for example, how much food is being cooked at once or whether airflow is being restricted, for example by an oversized baking tray. This difference in cooking temperature is offset as the circulating air transfers heat more quickly than still air of the same temperature.
Because the circulating air transfers heat more quickly than still air, the set temperature must be lowered to compensate, so that the same amount of heat is transferred in the same time. Variants Another form of convection oven has hot air directed at a high flow rate from above and below food that passes through the oven on a conveyor belt; it is called an impingement oven. This cooks, for example, breaded products such as chicken nuggets or breaded chicken portions faster than a fan oven, and yields a crisp surface texture. Impinged air also prevents "shadowing", which occurs with infrared radiant heat sources. Impingement ovens can achieve a much higher heat transfer than a conventional oven. Fully enclosed models can also use dual magnetrons, as used by microwave ovens. The most notable manufacturer of this type of oven is TurboChef. The differences between an impingement oven with magnetrons and a convection microwave oven are claimed to be cost, power consumption, and speed. Impingement ovens are designed to be used in restaurants, where speed is essential and power consumption and cost are less of a concern. There are also convection microwave ovens, which combine a convection oven with a microwave oven to cook food with the speed of a microwave oven and the browning ability of a convection oven. A combi steamer is an oven that combines convection functionality with superheated steam to cook foods even faster and retain more nutrients and moisture. Air fryer An air fryer is a small countertop convection oven that is said to simulate deep frying without submerging the food in oil. A fan circulates hot air at a high speed, producing a crisp layer via browning reactions such as the Maillard reaction. Some product reviewers find that regular convection ovens or convection toaster ovens produce better results; others say that air frying is essentially the same as convection baking, while still others praise the devices for cooking faster, being easier to clean, and making it easier to produce crispy results than full-size convection ovens. The original Philips Airfryer used radiant heat from a heating element just above the food and convection heat from a strong air stream flowing upwards through the open bottom of the food chamber, delivering heat from all sides, with a small volume of hot air forced to pass from the heater surface and over the food, with no idle air circulating as in a convection oven. A shaped guide directed the airflow over the bottom of the food. The technique was patented as Rapid Air technology. Traditional frying methods induce the Maillard reaction at temperatures of by completely submerging foods in hot oil, well above the boiling point of water. The air fryer works by circulating air at up to to apply sufficient heat to food coated with a thin layer of oil, causing the reaction. Most air fryers have temperature and timer adjustments that allow precise cooking. Food is typically cooked in a basket that sits on a drip tray. For best results, the basket must be periodically agitated, either manually or by the fryer mechanism. Convection ovens and air fryers are similar in the way they cook food, but air fryers are smaller and give off less heat to the room. There are several types of household air fryer: Paddle In this type, a paddle moves through the heating chamber to stir the food so all sides cook evenly. This is more convenient for the user because other types of air fryers require manual stirring throughout to ensure that all sides are fully cooked.
Cylindrical basket A cylindrical basket model is a small, single-function air fryer that includes a drawer with a removable basket. A fan circulates hot air from the top, and the air reaches the food through holes in the basket. It accommodates a relatively small quantity of food. Because of its compact size, it preheats faster than other types of air fryers. Countertop convection oven Countertop convection ovens come with an air-frying feature that works the same way as basket-type air fryers. They usually have multiple trays or racks, so several things can be cooked at the same time, and they hold more food on average. They are more versatile than single-function types because they offer multiple modes, such as baking, rotisserie, grilling, frying, broiling, and toasting. Halogen This type of air fryer cooks food with a halogen radiant heat source from above. The heat is spread evenly with a fan, as in other types of air fryers. This type is usually a large glass bowl with a hinged lid. Oil-less turkey fryer This is a large, barrel-shaped air fryer used to cook whole turkeys and other large pieces of meat. It circulates air around the drum to cook the meat evenly. Industrial convection ovens Industrial convection ovens can be very large. Hot air ovens are convection ovens used to sterilize medical equipment.
Technology
Household appliances
null
636094
https://en.wikipedia.org/wiki/Irreversible%20process
Irreversible process
In science, a process that is not reversible is called irreversible. This concept arises frequently in thermodynamics. All complex natural processes are irreversible, although a phase transition at the coexistence temperature (e.g. melting of ice cubes in water) is well approximated as reversible. In thermodynamics, a change in the thermodynamic state of a system and all of its surroundings cannot be precisely restored to its initial state by infinitesimal changes in some property of the system without expenditure of energy. A system that undergoes an irreversible process may still be capable of returning to its initial state. Because entropy is a state function, the change in entropy of the system is the same whether the process is reversible or irreversible. However, the impossibility occurs in restoring the environment to its own initial conditions. An irreversible process increases the total entropy of the system and its surroundings. The second law of thermodynamics can be used to determine whether a hypothetical process is reversible or not. Intuitively, a process is reversible if there is no dissipation. For example, Joule expansion is irreversible because initially the system is not uniform. Initially, there is part of the system with gas in it, and part of the system with no gas. For dissipation to occur, there needs to be such a non uniformity. This is just the same as if in a system one section of the gas was hot, and the other cold. Then dissipation would occur; the temperature distribution would become uniform with no work being done, and this would be irreversible because you couldn't add or remove heat or change the volume to return the system to its initial state. Thus, if the system is always uniform, then the process is reversible, meaning that you can return the system to its original state by either adding or removing heat, doing work on the system, or letting the system do work. As another example, to approximate the expansion in an internal combustion engine as reversible, we would be assuming that the temperature and pressure uniformly change throughout the volume after the spark. Obviously, this is not true and there is a flame front and sometimes even engine knocking. One of the reasons that Diesel engines are able to attain higher efficiency is that the combustion is much more uniform, so less energy is lost to dissipation and the process is closer to reversible. The phenomenon of irreversibility results from the fact that if a thermodynamic system, which is any system of sufficient complexity, of interacting molecules is brought from one thermodynamic state to another, the configuration or arrangement of the atoms and molecules in the system will change in a way that is not easily predictable. Some "transformation energy" will be used as the molecules of the "working body" do work on each other when they change from one state to another. During this transformation, there will be some heat energy loss or dissipation due to intermolecular friction and collisions. This energy will not be recoverable if the process is reversed. Many biological processes that were once thought to be reversible have been found to actually be a pairing of two irreversible processes. Whereas a single enzyme was once believed to catalyze both the forward and reverse chemical changes, research has found that two separate enzymes of similar structure are typically needed to perform what results in a pair of thermodynamically irreversible processes. 
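The hot-and-cold gas example given earlier can be made quantitative with a standard textbook calculation. If a small quantity of heat $\delta Q$ flows from the hot section at temperature $T_h$ to the cold section at $T_c$, the total entropy change of the gas is

$$\Delta S = \frac{\delta Q}{T_c} - \frac{\delta Q}{T_h} = \delta Q \, \frac{T_h - T_c}{T_h T_c} > 0 \qquad (T_h > T_c),$$

which is strictly positive for any finite temperature difference, so the equalization of temperature generates entropy and is irreversible.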
Absolute versus statistical reversibility Thermodynamics defines the statistical behaviour of large numbers of entities, whose exact behavior is given by more specific laws. While the fundamental theoretical laws of physics are all time-reversible, experimentally the probability of real reversibility is low, and the former state of system and surroundings is recovered only to a certain extent (see: uncertainty principle). The reversibility of thermodynamics must be statistical in nature; that is, it must be merely highly unlikely, but not impossible, that a system's entropy will decrease. In other words, time reversibility is fulfilled if the process happens the same way when time flows in reverse, or when the order of states in the process is reversed (the last state becomes the first and vice versa). History The German physicist Rudolf Clausius, in the 1850s, was the first to mathematically quantify the discovery of irreversibility in nature through his introduction of the concept of entropy. In his 1854 memoir "On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat," Clausius stated, in essence, that it is impossible for a system to transfer heat from a cooler body to a hotter body. For example, a cup of hot coffee placed in an area at room temperature will transfer heat to its surroundings and thereby cool down, with the temperature of the room slightly increasing. However, that same initial cup of coffee will never absorb heat from its surroundings, causing it to grow even hotter while the temperature of the room decreases. Therefore, the process of the coffee cooling down is irreversible unless extra energy is added to the system. However, a paradox arose when attempting to reconcile microanalysis of a system with observations of its macrostate. Many processes are mathematically reversible in their microstate when analyzed using classical Newtonian mechanics. This paradox clearly taints microscopic explanations of the macroscopic tendency towards equilibrium, such as James Clerk Maxwell's 1860 argument that molecular collisions entail an equalization of temperatures of mixed gases. From 1872 to 1875, Ludwig Boltzmann reinforced the statistical explanation of this paradox in the form of Boltzmann's entropy formula, stating that an increase in the number of possible microstates a system might occupy increases the entropy of the system, making it less likely that the system will return to an earlier state. His formulas quantified the analysis done by William Thomson, 1st Baron Kelvin, who had argued along the same lines. Another explanation of irreversible systems was presented by the French mathematician Henri Poincaré. In 1890, he published his first explanation of nonlinear dynamics, also called chaos theory. Applying chaos theory to the second law of thermodynamics, the paradox of irreversibility can be explained by the errors associated with scaling from microstates to macrostates and the degrees of freedom used when making experimental observations. Sensitivity to initial conditions of the system and its environment at the microstate level compounds into irreversible behaviour within the observable, physical realm. Examples of irreversible processes In the physical realm, many irreversible processes are present, to which the inability to achieve 100% efficiency in energy transfer can be attributed. The following is a list of spontaneous events which contribute to the irreversibility of processes.
Ageing (this claim is disputed, as ageing has been demonstrated to be reversed in mice; NAD+ and telomerase have also been demonstrated to reverse ageing)
Death
Time
Heat transfer through a finite temperature difference
Friction
Plastic deformation
Flow of electric current through a resistance
Magnetization or polarization with a hysteresis
Unrestrained expansion of fluids
Spontaneous chemical reactions
Spontaneous mixing of matter of varying composition/states
A Joule expansion is an example from classical thermodynamics, as it is easy to work out the resulting increase in entropy (a short worked calculation is given at the end of this article). It occurs where a volume of gas is kept in one side of a thermally isolated container (via a small partition), with the other side of the container being evacuated; the partition between the two parts of the container is then opened, and the gas fills the whole container. The internal energy of the gas remains the same, while the volume increases. The original state cannot be recovered by simply compressing the gas to its original volume, since the internal energy will be increased by this compression. The original state can only be recovered by then cooling the re-compressed system, thereby irreversibly heating the environment. This analysis applies only if the first expansion is "free" (a Joule expansion), i.e. there is no atmospheric pressure outside the cylinder and no weight is lifted. Complex systems The difference between reversible and irreversible events has particular explanatory value in complex systems (such as living organisms, or ecosystems). According to the biologists Humberto Maturana and Francisco Varela, living organisms are characterized by autopoiesis, which enables their continued existence. More primitive forms of self-organizing systems have been described by the physicist and chemist Ilya Prigogine. In the context of complex systems, events which lead to the end of certain self-organising processes, like death, extinction of a species or the collapse of a meteorological system, can be considered as irreversible. Even if a clone with the same organizational principle (e.g. identical DNA structure) could be developed, this would not mean that the former distinct system comes back into being. Events to which the self-organizing capacities of organisms, species or other complex systems can adapt, like minor injuries or changes in the physical environment, are reversible. However, adaptation depends on the import of negentropy into the organism, thereby increasing irreversible processes in its environment. Ecological principles, like those of sustainability and the precautionary principle, can be defined with reference to the concept of reversibility.
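As promised above, a short worked calculation for the Joule expansion. For $n$ moles of ideal gas expanding freely from volume $V_i$ to $V_f$, the internal energy and temperature are unchanged, so the entropy change can be evaluated along a reversible isothermal path between the same end states:

$$\Delta S = nR \ln\frac{V_f}{V_i}.$$

For one mole doubling its volume, $\Delta S = R \ln 2 \approx 5.76\ \mathrm{J/K}$; since no heat left the surroundings, this is also the entropy increase of the universe, confirming that the process is irreversible.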
Physical sciences
Thermodynamics
Physics
636219
https://en.wikipedia.org/wiki/Pressure%20vessel
Pressure vessel
A pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure. Construction methods and materials may be chosen to suit the pressure application, and will depend on the size of the vessel, the contents, working pressure, mass constraints, and the number of items required. Pressure vessels can be dangerous, and fatal accidents have occurred in the history of their development and operation. Consequently, pressure vessel design, manufacture, and operation are regulated by engineering authorities backed by legislation. For these reasons, the definition of a pressure vessel varies from country to country. The design involves parameters such as maximum safe operating pressure and temperature, safety factor, corrosion allowance and minimum design temperature (for brittle fracture). Construction is tested using nondestructive testing, such as ultrasonic testing, radiography, and pressure tests. Hydrostatic pressure tests usually use water, but pneumatic tests use air or another gas. Hydrostatic testing is preferred, because it is a safer method, as much less energy is released if a fracture occurs during the test (water does not greatly increase its volume when rapid depressurisation occurs, unlike gases, which expand explosively). Mass or batch production products will often have a representative sample tested to destruction in controlled conditions for quality assurance. Pressure relief devices may be fitted if the overall safety of the system is sufficiently enhanced. In most countries, vessels over a certain size and pressure must be built to a formal code. In the United States that code is the ASME Boiler and Pressure Vessel Code (BPVC). In Europe the code is the Pressure Equipment Directive. These vessels also require an authorised inspector to sign off on every new vessel constructed, and each vessel has a nameplate with pertinent information about the vessel, such as maximum allowable working pressure, maximum temperature, minimum design metal temperature, what company manufactured it, the date, its registration number (through the National Board), and the American Society of Mechanical Engineers' official stamp for pressure vessels (U-stamp). The nameplate makes the vessel traceable and officially an ASME Code vessel. A special application is pressure vessels for human occupancy, for which more stringent safety rules apply. Definition and scope The ASME definition of a pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure. The Australian and New Zealand standard "AS/NZS 1200:2000 Pressure equipment" defines a pressure vessel as a vessel subject to internal or external pressure, including connected components and accessories up to the connection to external piping. This article may include information on pressure vessels in the broad sense, and is not restricted to any single definition. Components A pressure vessel comprises a shell, and usually one or more other components needed to pressurise, retain the pressure, depressurise, and provide access for maintenance and inspection. There may be other components and equipment provided to facilitate the intended use, and some of these may be considered parts of the pressure vessel: shell penetrations and their closures, and the viewports and airlocks on a pressure vessel for human occupancy, affect the integrity and strength of the shell and are therefore part of the structure retaining the pressure.
Pressure gauges and safety devices like pressure relief valves may also be deemed part of the pressure vessel. There may also be structural components permanently attached to the vessel for lifting, moving, or mounting it, like a foot ring, skids, handles, lugs, or mounting brackets. Types Classified by contents, pressure vessels include:
Pressurised water storage systems, used in tall buildings and marine environments to maintain water pressure
Dissolved gas storage
Fired pressure vessels
Liquefied gas (vapour over liquid) storage
Permanent gas storage
Supercritical fluid storage
Pressure vessels may also be classified by whether the internal pressure is above or below the external pressure, by construction method, and by construction material. Uses Pressure vessels are used in a variety of applications in both industry and the private sector. They appear in these sectors as industrial compressed air receivers, boilers and domestic hot water storage tanks. Other examples of pressure vessels are diving cylinders, recompression chambers, distillation towers, pressure reactors, autoclaves, and many other vessels in mining operations, oil refineries and petrochemical plants, nuclear reactor vessels, submarine and space ship habitats, atmospheric diving suits, pneumatic reservoirs, hydraulic reservoirs under pressure, rail vehicle air brake reservoirs, road vehicle air brake reservoirs, and storage vessels for high pressure permanent gases and liquefied gases such as ammonia, chlorine, and LPG (propane, butane). A pressure vessel may also support structural loads. The outer skin of an airliner's passenger cabin carries both the structural and maneuvering loads of the aircraft and the cabin pressurization loads. The pressure hull of a submarine likewise carries the hull structural and maneuvering loads. Design Working pressure The working pressure, i.e. the pressure difference between the interior of the pressure vessel and the surroundings, is the primary characteristic considered for design and construction. The concepts of high pressure and low pressure are somewhat flexible, and may be defined differently depending on context. There is also the matter of whether the internal pressure is greater or less than the external pressure, and its magnitude relative to normal atmospheric pressure. A vessel with internal pressure lower than atmospheric may also be called a hypobaric vessel or a vacuum vessel. A pressure vessel with high internal pressure can easily be made to be structurally stable, and will usually fail in tension, but failure due to excessive external pressure is usually by buckling instability and collapse. Shape Pressure vessels can theoretically be almost any shape, but shapes made of sections of spheres, cylinders, ellipsoids of revolution, and cones with circular sections are usually employed, though some other surfaces of revolution are also inherently stable. A common design is a cylinder with end caps called heads. Head shapes are frequently either hemispherical or dished (torispherical). More complicated shapes have historically been much harder to analyze for safe operation and are usually far more difficult to construct. Theoretically, a spherical pressure vessel has approximately twice the strength of a cylindrical pressure vessel with the same wall thickness, and is the ideal shape to hold internal pressure. However, a spherical shape is difficult to manufacture, and therefore more expensive, so most pressure vessels are cylindrical with 2:1 semi-elliptical heads or end caps on each end. Smaller pressure vessels are assembled from a pipe and two covers.
For cylindrical vessels with a diameter up to 600 mm (NPS of 24 in), it is possible to use seamless pipe for the shell, thus avoiding many inspection and testing issues, mainly the nondestructive examination by radiography of the long seam, if required. A disadvantage of these vessels is that greater diameters are more expensive, so for a given volume and pressure the most economic shape balances a smaller diameter against a greater length, with 2:1 semi-elliptical domed end caps. Scaling No matter what shape it takes, the minimum mass of a pressure vessel scales with the pressure and volume it contains and is inversely proportional to the strength-to-weight ratio of the construction material (minimum mass decreases as strength increases). Scaling of stress in walls of vessel Pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. The normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. Therefore, pressure vessels are designed to have a thickness proportional to the radius of the tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container. Because (for a given pressure) the thickness of the walls scales with the radius of the tank, the mass of a tank (which scales as the length times radius times thickness of the wall for a cylindrical tank) scales with the volume of the gas held (which scales as length times radius squared). The exact formula varies with the tank shape but depends on the density $\rho$ and maximum allowable stress $\sigma$ of the material, in addition to the pressure $P$ and volume $V$ of the vessel. (See below for the exact equations for the stress in the walls.) Spherical vessel For a sphere, the minimum mass of a pressure vessel is

$$M = \frac{3}{2}\,\frac{P V \rho}{\sigma},$$

where $M$ is mass (kg), $P$ is the pressure difference from ambient (the gauge pressure) (Pa), $V$ is volume (m³), $\rho$ is the density of the pressure vessel material (kg/m³), and $\sigma$ is the maximum working stress that the material can tolerate (Pa). Other shapes besides a sphere have constants larger than 3/2 (infinite cylinders take 2), although some tanks, such as non-spherical wound composite tanks, can approach this. Cylindrical vessel with hemispherical ends This is sometimes called a "bullet" for its shape, although in geometric terms it is a capsule. For a cylinder with hemispherical ends,

$$M = 2\pi R^2 (R + W)\,\frac{P \rho}{\sigma},$$

where $R$ is the radius (m) and $W$ is the width of the middle cylinder only, so the overall width is $W + 2R$ (m). Cylindrical vessel with semi-elliptical ends In a vessel with an aspect ratio of middle cylinder width to radius of 2:1,

$$M = 6\pi R^3\,\frac{P \rho}{\sigma}.$$

Gas storage capacity In looking at the first equation, the factor $PV$, in SI units, is in units of (pressurization) energy. For a stored gas, $PV$ is proportional to the mass of gas at a given temperature; by the ideal gas law $PV = nRT$, the spherical minimum mass can be written as

$$M = \frac{3}{2}\,\frac{nRT\rho}{\sigma}.$$

The other factors are constant for a given vessel shape and material. So we can see that there is no theoretical "efficiency of scale" in terms of the ratio of pressure vessel mass to pressurization energy, or of pressure vessel mass to stored gas mass. For storing gases, "tankage efficiency" is independent of pressure, at least for the same temperature. So, for example, a typical design for a minimum-mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for the best possible ratio of strength to density, and very cold helium, since for a given $PV$ more gas mass is stored at lower temperature. (The spherical relation is illustrated in the sketch below.)
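A minimal Python sketch of the spherical minimum-mass relation above. The material figures are rough illustrative assumptions (a yield-limited steel and a carbon-fibre laminate), not design data, and the function name is hypothetical.

def min_sphere_mass(p_gauge, volume, density, max_stress):
    # Minimum wall mass of a thin-walled sphere: M = (3/2) * P * V * rho / sigma.
    return 1.5 * p_gauge * volume * density / max_stress

if __name__ == "__main__":
    P = 20e6   # 200 bar gauge pressure, in Pa (assumed example value)
    V = 0.05   # 50 litres, in m^3
    materials = {
        "steel":        {"rho": 7850.0, "sigma": 250e6},  # assumed properties
        "carbon fibre": {"rho": 1600.0, "sigma": 800e6},  # assumed properties
    }
    for name, m in materials.items():
        mass = min_sphere_mass(P, V, m["rho"], m["sigma"])
        print(f"{name}: about {mass:.1f} kg of wall material")

With these assumed numbers the steel sphere needs roughly 47 kg of wall material while the carbon-fibre one needs about 3 kg, illustrating the inverse dependence of minimum mass on the strength-to-weight ratio.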
Stress in thin-walled pressure vessels Stress in a thin-walled pressure vessel in the shape of a sphere is

$$\sigma_\theta = \sigma_{\mathrm{long}} = \frac{p r}{2 t},$$

where $\sigma_\theta$ is the hoop stress, or stress in the circumferential direction, $\sigma_{\mathrm{long}}$ is the stress in the longitudinal direction, $p$ is the internal gauge pressure, $r$ is the inner radius of the sphere, and $t$ is the thickness of the sphere wall. A vessel can be considered "thin-walled" if the diameter is at least 10 times (sometimes cited as 20 times) greater than the wall thickness. Stress in a thin-walled pressure vessel in the shape of a cylinder is

$$\sigma_\theta = \frac{p r}{t}, \qquad \sigma_{\mathrm{long}} = \frac{p r}{2 t},$$

where $\sigma_\theta$ is the hoop stress, or stress in the circumferential direction, $\sigma_{\mathrm{long}}$ is the stress in the longitudinal direction, $p$ is the internal gauge pressure, $r$ is the inner radius of the cylinder, and $t$ is the thickness of the cylinder wall. Almost all pressure vessel design standards contain variations of these two formulas with additional empirical terms to account for the variation of stresses across the thickness, quality control of welds, and in-service corrosion allowances. All the formulae mentioned above assume a uniform distribution of membrane stresses across the thickness of the shell, but in reality that is not the case. Deeper analysis is given by Lamé's theorem, which gives the distribution of stress in the walls of a thick-walled cylinder of a homogeneous and isotropic material. The formulae of pressure vessel design standards are an extension of Lamé's theorem obtained by putting some limit on the ratio of inner radius to thickness. For example, the ASME Boiler and Pressure Vessel Code (BPVC) (UG-27) formulas are: Spherical shells (thickness has to be less than 0.356 times the inner radius):

$$\sigma_\theta = \sigma_{\mathrm{long}} = \frac{P(r + 0.2t)}{2 t E};$$

Cylindrical shells (thickness has to be less than 0.5 times the inner radius):

$$\sigma_\theta = \frac{P(r + 0.6t)}{t E}, \qquad \sigma_{\mathrm{long}} = \frac{P(r - 0.4t)}{2 t E},$$

where $E$ is the joint efficiency and all other variables are as stated above. The factor of safety is often included in these formulas as well; in the case of the ASME BPVC this term is included in the material stress value when solving for pressure or thickness.
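A minimal Python sketch applying the thin-wall formulas above to a candidate design. The numbers are illustrative assumptions, and the 10:1 radius-to-thickness check mirrors the applicability rule quoted above.

def cylinder_stresses(p, r, t):
    # Thin-wall cylinder: hoop stress p*r/t, longitudinal stress p*r/(2t).
    return p * r / t, p * r / (2.0 * t)

def sphere_stress(p, r, t):
    # Thin-wall sphere: membrane stress p*r/(2t) in every direction.
    return p * r / (2.0 * t)

if __name__ == "__main__":
    p = 1.5e6   # 15 bar gauge, in Pa (assumed example value)
    r = 0.5     # inner radius, m
    t = 0.008   # wall thickness, m
    assert r / t >= 10, "thin-wall formulas not applicable"
    hoop, longitudinal = cylinder_stresses(p, r, t)
    print(f"cylinder: hoop {hoop / 1e6:.0f} MPa, "
          f"longitudinal {longitudinal / 1e6:.0f} MPa")
    print(f"sphere: membrane {sphere_stress(p, r, t) / 1e6:.0f} MPa")

The output (about 94 MPa hoop versus 47 MPa longitudinal, and 47 MPa for the sphere) shows why the hoop direction governs cylinder design and why a sphere of the same wall thickness is roughly twice as strong.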
Shell penetrations Also sometimes called hull penetrations, depending on context, shell penetrations are intentional breaks in the structural integrity of the shell, and are usually significant local stress-raisers, so they must be accounted for in the design so that they do not become failure points. It is usually necessary to reinforce the shell in the immediate vicinity of such penetrations. Shell penetrations provide a variety of functions, including passage of the contents from the outside to the inside and back out, and, in special applications, transmission of electricity, light, and other services through the shell. The simplest case is gas cylinders, which need only a neck penetration threaded to fit a valve, while a submarine or spacecraft may have a large number of penetrations for a large number of functions. Penetration thread The screw thread used for high pressure vessel shell penetrations is subject to high loads and must not leak. High pressure cylinders are produced with conical (tapered) threads and parallel threads. Two sizes of tapered thread have dominated full-metal cylinders in industrial use across the usual range of volumes. For smaller fittings, taper thread standard 17E is used, with a 12% taper right-hand thread of standard Whitworth 55° form, a pitch of 14 threads per inch (5.5 threads per cm), and a specified pitch diameter at the top thread of the cylinder. These connections are sealed using thread tape and torqued to the specified ranges, which differ for steel and aluminium cylinders. For larger fittings, taper thread standard 25E is used; to screw in the valve, a higher torque is typically necessary. Until around 1950, hemp was used as a sealant. Later, a thin sheet of lead, pressed into a hat shape that closely fitted the external threads and had a hole on top, was used. The fitter would squeeze the soft lead shim to conform better with the grooves and ridges of the fitting before screwing it into the hole. The lead would deform to form a thin layer between the internal and external thread, thereby filling the gaps to create the seal. Since 2005, PTFE tape has been used to avoid using lead. A tapered thread provides simple assembly, but requires high torque for connecting, leads to high radial forces in the vessel neck, and has a limited number of times it can be used before it is excessively deformed. This can be extended somewhat by always returning the same fitting to the same hole and avoiding over-tightening. Cylinders built for higher working pressures, all diving cylinders, and all composite cylinders use parallel threads. Parallel threads for cylinder necks and similar penetrations of pressure vessels are made to several standards:
M25x2 ISO parallel thread, sealed by an O-ring and torqued to different specified values on steel and on aluminium cylinders;
M18x1.5 parallel thread, sealed by an O-ring, with separate torque specifications for steel and for aluminium cylinders;
3/4"x14 BSP parallel thread, which has a 55° Whitworth thread form and a pitch of 14 threads per inch (1.814 mm);
3/4"x14 NGS (NPSM) parallel thread, sealed by an O-ring and torque-limited on aluminium cylinders, which has a 60° thread form and a pitch of 14 threads per inch (5.5 threads per cm);
3/4"x16 UNF, sealed by an O-ring and torque-limited on aluminium cylinders;
7/8"x14 UNF, sealed by an O-ring.
The 3/4" NGS and 3/4" BSP threads are very similar, having the same pitch and pitch diameters that differ only slightly, but they are not compatible, as the thread forms are different. All parallel-thread valves are sealed using an elastomer O-ring at the top of the neck thread, which seals in a chamfer or step in the cylinder neck and against the flange of the valve. Pressure vessel closures Pressure vessel closures are pressure-retaining structures designed to provide quick access to pipelines, pressure vessels, pig traps, filters and filtration systems. Typically, pressure vessel closures allow access by maintenance personnel. A commonly used maintenance access hole shape is elliptical, which allows the closure to be passed through the opening and rotated into the working position, where it is held in place by a bar on the outside, secured by a central bolt. The internal pressure prevents it from being inadvertently opened under load. Placing the closure on the high-pressure side of the opening uses the pressure difference to lock the closure when at service pressure. Where this is impracticable, a safety interlock may be mandated.
An airlock is a room or compartment which permits passage between environments of differing atmospheric pressure or composition, while minimizing the change of pressure or composition between the differing environments. It consists of a chamber with two airtight doors or hatches arranged in series, which are not opened simultaneously. Airlocks can be small, or large enough for one or more people to pass through, in which case they may take the form of an antechamber. An airlock may also be used underwater to allow passage between the air environment in a pressure vessel, such as a submarine or diving bell, and the water environment outside. In such cases the airlock can contain air or water. This is called a floodable airlock or underwater airlock, and is used to prevent water from entering a submersible vessel or underwater habitat. A similar arrangement is used on spacecraft to facilitate extravehicular activity. Construction materials Many pressure vessels are made of steel. To manufacture a cylindrical or spherical pressure vessel, rolled and possibly forged parts would have to be welded together. Some mechanical properties of steel, achieved by rolling or forging, could be adversely affected by welding, unless special precautions are taken. In addition to adequate mechanical strength, current standards dictate the use of steel with a high impact resistance, especially for vessels used in low temperatures. In applications where carbon steel would suffer corrosion, special corrosion-resistant material should also be used. Some pressure vessels are made of composite materials, such as filament-wound composite using carbon fibre held in place with a polymer. Due to the very high tensile strength of carbon fibre, these vessels can be very light, but are much more difficult to manufacture. The composite material may be wound around a metal liner, forming a composite overwrapped pressure vessel. Other very common materials include polymers such as PET in carbonated beverage containers and copper in plumbing. Pressure vessels may be lined with various metals, ceramics, or polymers to prevent leaking and protect the structure of the vessel from the contained medium. This liner may also carry a significant portion of the pressure load. Pressure vessels may also be constructed from concrete (PCV) or other materials which are weak in tension. Cabling, wrapped around the vessel or within the wall or the vessel itself, provides the necessary tension to resist the internal pressure. A "leakproof steel thin membrane" lines the internal wall of the vessel. Such vessels can be assembled from modular pieces and so have "no inherent size limitations". There is also a high order of redundancy thanks to the large number of individual cables resisting the internal pressure. The very small vessels used to make liquid-butane-fueled cigarette lighters are subjected to about 2 bar pressure, depending on ambient temperature. These vessels are often oval (1 x 2 cm ... 1.3 x 2.5 cm) in cross-section but sometimes circular. The oval versions generally include one or two internal tension struts, which appear to be baffles but also provide additional cylinder strength.
Manufacturing processes Riveted Before gas and electric welding of reliable quality became widespread, the standard method of construction for boilers, compressed air receivers and other pressure vessels of iron or steel was riveting: sheets were rolled and forged into shape, then riveted together, often using butt straps along the joints, and caulked along the riveted seams by deforming the edges of the overlap with a blunt chisel to create a continuous line of high contact pressure along the joint. Hot riveting caused the rivets to contract on cooling, forming a tighter joint. Welded Large and low-pressure vessels are commonly manufactured from formed plates welded together. Weld quality is critical to safety in pressure vessels for human occupancy. Seamless The typical circular-cylindrical high pressure gas cylinders for permanent gases (those that do not liquefy at storage pressure, like air, oxygen, nitrogen, hydrogen, argon, helium) have been manufactured by hot forging, by pressing and rolling, to get a seamless vessel of consistent material characteristics and minimised stress concentrations. In Europe, cylinders for use in industry, skilled crafts, diving and medicine were built to a common standardized working pressure (WP) until about 1950; from about 1975, the standard working pressure was raised. Firemen need slim, lightweight cylinders to move in confined spaces, and since about 1995 cylinders for higher working pressures were used for this purpose (at first in pure steel). A demand for reduced weight led to different generations of composite (fibre and matrix, over a liner) cylinders that are more vulnerable to impact damage. Composite cylinders for breathing gas are usually built for high working pressures. Manufacturing methods for seamless metal pressure vessels are commonly used for relatively small-diameter cylinders where large numbers will be produced, as the machinery and tooling require large capital outlay. The methods are well suited to high pressure gas transport and storage applications, and provide consistently high quality products. Backward extrusion Backward extrusion is a process by which the material is forced to flow back along the mandrel between the mandrel and die. Cold extrusion (aluminium): Seamless aluminium cylinders may be manufactured by cold backward extrusion of aluminium billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press-forming the shoulder and neck. Hot extrusion (steel): In the hot extrusion process a billet of steel is cut to size, induction heated to the correct temperature for the alloy, descaled and placed in the die. The metal is backward extruded by forcing the mandrel into it, causing it to flow through the annular gap until a deep cup is formed. This cup is further drawn to reduce the diameter and wall thickness, and the bottom is formed. After inspection and trimming of the open end, the cylinder is hot spun to close the end and form the neck. Drawn Seamless cylinders may also be cold drawn from steel plate discs to a cylindrical cup form, in two to four stages, depending on the final ratio of diameter to cylinder length. After forming the base and side walls, the top of the cylinder is trimmed to length, heated and hot spun to form the shoulder and close the neck. The spinning process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness.
Spun from seamless tube A seamless steel cylinder can also be formed from seamless tube by hot spinning a closure at both ends. The base is first closed completely and trimmed to form a smooth internal surface before the shoulder and neck are formed. Regardless of the method used to form the cylinder, it will be machined to finish the neck and cut the neck threads, heat treated, cleaned, and surface finished, stamp marked, tested, and inspected for quality assurance. Composite Composite pressure vessels are generally laid up from filament-wound rovings in a thermosetting polymer matrix. The mandrel may be removable after cure, or may remain a part of the finished product, often providing a more reliable gas- or liquid-tight liner, or better chemical resistance to the intended contents than the resin matrix. Metallic inserts may be provided for attaching threaded accessories, such as valves and pipes. Development of composite vessels To classify the different structural principles of gas storage cylinders, four types are defined:
Type 1 – Full metal: the cylinder is made entirely from metal.
Type 2 – Hoop wrap: a metal cylinder, reinforced by a belt-like hoop wrap of fibre-reinforced resin.
Type 3 – Fully wrapped, over metal liner: diagonally wrapped fibres form the load-bearing shell on the cylindrical section and at the bottom and shoulder around the metal neck; the metal liner is thin and provides the gas-tight barrier.
Type 4 – Fully wrapped, over non-metal liner: a lightweight thermoplastic liner provides the gas-tight barrier and the mandrel to wrap fibres and resin matrix around; only the neck, which carries the neck thread and its anchor to the liner, is made of metal.
Type 2 and 3 cylinders have been in production since around 1995. Type 4 cylinders have been commercially available since at least 2016. Winding angle of composite vessels Wound infinite cylindrical shapes optimally take a winding angle of 54.7 degrees (arctan √2) to the cylindrical axis, as this balances the fibres against a hoop stress that is twice the longitudinal stress. Hoop-wound fibre reinforcement is wound at an angle of nearly 90° to the cylinder axis. Safety Overpressure relief As the pressure vessel is designed to a specified pressure, there is typically a safety valve or relief valve to ensure that this pressure is not exceeded in operation. There may be a rupture disc fitted to the vessel or the cylinder valve, or a fusible plug to protect against overheating. Leak before burst Leak before burst describes a pressure vessel designed such that a crack in the vessel will grow through the wall, allowing the contained fluid to escape and reducing the pressure, prior to growing so large as to cause catastrophic fracture at the operating pressure. Many pressure vessel standards, including the ASME Boiler and Pressure Vessel Code and the AIAA metallic pressure vessel standard, either require pressure vessel designs to be leak before burst, or require pressure vessels to meet more stringent requirements for fatigue and fracture if they are not shown to be leak before burst. Testing and inspection Hydrostatic test (filled with water) pressure is usually 1.5 times the working pressure, but the DOT test pressure for scuba cylinders is 5/3 (about 1.67) times the working pressure. Operation standards Pressure vessels are designed to operate safely at a specific pressure and temperature, technically referred to as the "design pressure" and "design temperature". A vessel that is inadequately designed to handle a high pressure constitutes a very significant safety hazard.
Because of that, the design and certification of pressure vessels is governed by design codes such as the ASME Boiler and Pressure Vessel Code in North America, the Pressure Equipment Directive of the EU (PED), the Japanese Industrial Standard (JIS), CSA B51 in Canada, Australian Standards in Australia, and other international standards like Lloyd's, Germanischer Lloyd, Det Norske Veritas, Société Générale de Surveillance (SGS S.A.), Lloyd's Register Energy Nederland (formerly known as Stoomwezen), etc. Note that where the pressure-volume product is part of a safety standard, any incompressible liquid in the vessel can be excluded, as it does not contribute to the potential energy stored in the vessel, so only the volume of the compressible part such as gas is used. List of standards
EN 13445: The current European Standard, harmonized with the Pressure Equipment Directive (originally "97/23/EC", since 2014 "2014/68/EU"). Extensively used in Europe.
ASME Boiler and Pressure Vessel Code Section VIII: Rules for Construction of Pressure Vessels.
BS 5500: Former British Standard, replaced in the UK by BS EN 13445 but retained under the name PD 5500 for the design and construction of export equipment.
AD Merkblätter: German standard, harmonized with the Pressure Equipment Directive.
EN 286 (Parts 1 to 4): European standard for simple pressure vessels (air tanks), harmonized with Council Directive 87/404/EEC.
BS 4994: Specification for design and construction of vessels and tanks in reinforced plastics.
ASME PVHO: US standard for pressure vessels for human occupancy.
CODAP: French code for construction of unfired pressure vessels.
AS/NZS 1200: Australian and New Zealand standard for the requirements of pressure equipment, including pressure vessels, boilers and pressure piping.
AS 1210: Australian standard for the design and construction of pressure vessels.
AS/NZS 3788: Australian and New Zealand standard for the inspection of pressure vessels.
API 510.
ISO 11439: Compressed natural gas (CNG) cylinders.
IS 2825:1969 (reaffirmed 1977): Indian Standard code for unfired pressure vessels.
FRP tanks and vessels.
AIAA S-080-1998: AIAA Standard for Space Systems – Metallic Pressure Vessels, Pressurized Structures, and Pressure Components.
AIAA S-081A-2006: AIAA Standard for Space Systems – Composite Overwrapped Pressure Vessels (COPVs).
ECSS-E-ST-32-02C Rev.1: Space engineering – Structural design and verification of pressurized hardware.
B51-09: Canadian boiler, pressure vessel, and pressure piping code.
HSE guidelines for pressure systems.
Stoomwezen: Former pressure vessel code in the Netherlands, also known as RToD: Regels voor Toestellen onder Druk (Dutch Rules for Pressure Vessels).
SANS 10019:2021, South African National Standard: Transportable pressure receptacles for compressed, dissolved and liquefied gases - Basic design, manufacture, use and maintenance.
SANS 1825:2010 Edition 3, South African National Standard: Gas cylinder test stations - General requirements for periodic inspection and testing of transportable refillable gas pressure receptacles. ISBN 978-0-626-23561-1
History The earliest documented design of pressure vessels was described in 1495 by Leonardo da Vinci in the Codex Madrid I, in which containers of pressurized air were theorized to lift heavy weights underwater. However, vessels resembling those used today did not come about until the 1800s, when steam was generated in boilers, helping to spur the Industrial Revolution.
However, with poor material quality and manufacturing techniques, along with improper knowledge of design, operation and maintenance, there was a large number of damaging and often deadly explosions associated with these boilers and pressure vessels, with a death occurring on a nearly daily basis in the United States. Local provinces and states in the US began enacting rules for constructing these vessels after some particularly devastating vessel failures occurred, killing dozens of people at a time, which made it difficult for manufacturers to keep up with the varied rules from one location to another. The first pressure vessel code was developed starting in 1911 and released in 1914, starting the ASME Boiler and Pressure Vessel Code (BPVC). In an early effort to design a tank capable of withstanding very high pressures, a tank was developed in 1919 that was spirally wound with two layers of high-tensile-strength steel wire to prevent sidewall rupture, with the end caps longitudinally reinforced by lengthwise high-tensile rods. The need for high pressure and temperature vessels for petroleum refineries and chemical plants gave rise to vessels joined by welding instead of rivets (which were unsuitable for the pressures and temperatures required), and in the 1920s and 1930s the BPVC included welding as an acceptable means of construction; welding is the main means of joining metal vessels today. There have been many advancements in the field of pressure vessel engineering, such as advanced non-destructive examination, phased-array ultrasonic testing and radiography, new material grades with increased corrosion resistance and stronger materials, new ways to join materials such as explosion welding and friction stir welding, and advanced theories and means of more accurately assessing the stresses encountered in vessels, such as the use of Finite Element Analysis, allowing vessels to be built safer and more efficiently. Pressure vessels in the USA require BPVC stamping, but the BPVC is not just a domestic code; many other countries have adopted the BPVC as their official code. There are, however, other official codes in some countries, such as Japan, Australia, Canada, Britain, and other countries in the European Union. Nearly all recognize the inherent potential hazards of pressure vessels and the need for standards and codes regulating their design and construction. Alternatives Depending on the application and local circumstances, alternatives to pressure vessels exist, such as natural gas storage facilities and gas holders. Examples can be seen in domestic water collection systems, where the following may be used: Gravity-controlled systems, which typically consist of an unpressurized water tank at an elevation higher than the point of use. Pressure at the point of use is the result of the hydrostatic pressure caused by the elevation difference; gravity systems produce about 0.43 psi per foot (9.8 kPa per metre) of water head, whereas a municipal water supply or pumped water is typically at considerably higher pressure. Inline pump controllers or pressure-sensitive pumps. In nuclear reactors, pressure vessels are primarily used to keep the coolant (water) liquid at high temperatures to increase Carnot efficiency. Other coolants can be kept at high temperatures with much less pressure, explaining the interest in molten salt reactors, lead-cooled fast reactors and gas-cooled reactors. However, the benefits of not needing a pressure vessel, or needing one at lower pressure, are in part offset by drawbacks unique to each alternative approach.
Technology
Containers
null
636225
https://en.wikipedia.org/wiki/Thrust%20reversal
Thrust reversal
Thrust reversal, also called reverse thrust, is the temporary diversion of an aircraft engine's thrust for it to act against the forward travel of the aircraft, providing deceleration. Thrust reverser systems are featured on many jet aircraft to help slow down just after touch-down, reducing wear on the brakes and enabling shorter landing distances. Such devices affect the aircraft significantly and are considered important for safe operations by airlines. There have been accidents involving thrust reversal systems, including fatal ones. Reverse thrust is also available on many propeller-driven aircraft through reversing the controllable-pitch propellers to a negative angle. The equivalent concept for a ship is called astern propulsion. Principle and uses A landing roll consists of touchdown, bringing the aircraft to taxi speed, and eventually to a complete stop. However, most commercial jet engines continue to produce thrust in the forward direction, even when idle, acting against the deceleration of the aircraft. The brakes of the landing gear of most modern aircraft are sufficient in normal circumstances to stop the aircraft by themselves, but for safety purposes, and to reduce the stress on the brakes, another deceleration method can be beneficial. In scenarios involving bad weather, where factors like snow or rain on the runway reduce the effectiveness of the brakes, and in emergencies like rejected takeoffs, this need is more pronounced. A simple and effective method is to reverse the direction of the exhaust stream of the jet engine and use the power of the engine itself to decelerate. Ideally, the reversed exhaust stream would be directed straight forward. However, for aerodynamic reasons, this is not possible, and a 135° angle is taken, resulting in less effectiveness than would otherwise be possible. Thrust reversal can also be used in flight to reduce airspeed, though this is not common with modern aircraft. There are three common types of thrust reversing systems used on jet engines: the target, clam-shell, and cold stream systems. Some propeller-driven aircraft equipped with variable-pitch propellers can reverse thrust by changing the pitch of their propeller blades. Most commercial jetliners have such devices, and it also has applications in military aviation. Types of systems Small aircraft typically do not have thrust reversal systems, except in specialized applications. On the other hand, large aircraft (those weighing more than 12,500 lb) almost always have the ability to reverse thrust. Reciprocating engine, turboprop and jet aircraft can all be designed to include thrust reversal systems. Propeller-driven aircraft Propeller-driven aircraft generate reverse thrust by changing the angle of their controllable-pitch propellers so that the propellers direct their thrust forward. This reverse thrust feature became available with the development of controllable-pitch propellers, which change the angle of the propeller blades to make efficient use of engine power over a wide range of conditions. Reverse thrust is created when the propeller pitch angle is reduced from fine to negative. This is called the beta position. While piston-engine aircraft tend not to have reverse thrust, turboprop aircraft generally do. Examples include the PAC P-750 XSTOL, Cessna 208 Caravan, and Pilatus PC-6 Porter. One special application of reverse thrust comes in its use on multi-engine seaplanes and flying boats. 
These aircraft, when landing on water, have no conventional braking method and must rely on slaloming and/or reverse thrust, as well as the drag of the water in order to slow or stop. In addition, reverse thrust is often necessary for maneuvering on the water, where it is used to make tight turns or even propel the aircraft in reverse, maneuvers which may prove necessary for leaving a dock or beach. Jet aircraft On aircraft using jet engines, thrust reversal is accomplished by causing the jet blast to flow forward. The engine does not run or rotate in reverse; instead, thrust reversing devices are used to block the blast and redirect it forward. High bypass ratio engines usually reverse thrust by changing the direction of only the fan airflow, since the majority of thrust is generated by this section, as opposed to the core. There are three jet engine thrust reversal systems in common use: External types The target thrust reverser uses a pair of hydraulically operated bucket or clamshell type doors to reverse the hot gas stream. For forward thrust, these doors form the propelling nozzle of the engine. In the original implementation of this system on the Boeing 707, and still common today, two reverser buckets were hinged so when deployed they block the rearward flow of the exhaust and redirect it with a forward component. This type of reverser is visible at the rear of the engine during deployment. Internal types Internal thrust reversers use deflector doors inside the engine shroud to redirect airflow through openings in the side of the nacelle. In turbojet and mixed-flow bypass turbofan engines, one type uses pneumatically operated clamshell deflectors to redirect engine exhaust. The reverser ducts may be fitted with cascade vanes to further redirect the airflow forward. In contrast to the two types used on turbojet and low-bypass turbofan engines, many high-bypass turbofan engines use a cold-stream reverser. This design places the deflector doors in the bypass duct to redirect only the portion of the airflow from the engine's fan section that bypasses the combustion chamber. Engines such as the A320 and A340 versions of the CFM56 direct the airflow forward with a pivoting-door reverser similar to the internal clamshell used in some turbojets. Cascade reversers use a vane cascade that is uncovered by a sleeve around the perimeter of the engine nacelle that slides aft by means of an air motor. During normal operation, the reverse thrust vanes are blocked. On selection, the system folds the doors to block off the cold stream final nozzle and redirect this airflow to the cascade vanes. In cold-stream reversers, the exhaust from the combustion chamber continues to generate forward thrust, making this design less effective. It can also redirect core exhaust flow if equipped with a hot stream spoiler. The cold stream cascade system is known for structural integrity, reliability and versatility, but can be heavy and difficult to integrate into nacelles housing large engines. Operation In most cockpit setups, reverse thrust is set when the thrust levers are on idle by pulling them farther back. Reverse thrust is typically applied immediately after touchdown, often along with spoilers, to improve deceleration early in the landing roll when residual aerodynamic lift and high speed limit the effectiveness of the brakes located on the landing gear. Reverse thrust is always selected manually, either using levers attached to the thrust levers or moving the thrust levers into a reverse thrust 'gate'. 
The early deceleration provided by reverse thrust can reduce landing roll by a quarter or more. Regulations dictate, however, that an aircraft must be able to land on a runway without the use of thrust reversal in order to be certified to land there as part of scheduled airline service. Once the aircraft's speed has slowed, reverse thrust is shut down to prevent the reversed airflow from throwing debris in front of the engine intakes, where it can be ingested, causing foreign object damage. If circumstances require it, reverse thrust can be used all the way to a stop, or even to provide thrust to push the aircraft backward, though aircraft tugs or towbars are more commonly used for that purpose. When reverse thrust is used to push an aircraft back from the gate, the maneuver is called a powerback. Some manufacturers warn against this procedure in icy conditions, as using reverse thrust on snow- or slush-covered ground can cause slush, water, and runway deicers to become airborne and adhere to wing surfaces. If the full power of reverse thrust is not desirable, thrust reverse can be operated with the throttle set at less than full power, even down to idle power, which reduces stress and wear on engine components. Reverse thrust is sometimes selected on idling engines to eliminate residual thrust, in particular in icy or slick conditions, or when the engines' jet blast could cause damage. In-flight operation Some aircraft, notably some Russian and Soviet aircraft, are able to safely use reverse thrust in flight, though the majority of these are propeller-driven. Many commercial aircraft, however, cannot. In-flight use of reverse thrust has several advantages. It allows for rapid deceleration, enabling quick changes of speed. It also prevents the speed build-up normally associated with steep dives, allowing for rapid loss of altitude, which can be especially useful in hostile environments such as combat zones, and when making steep approaches to land. The Douglas DC-8 series of airliners has been certified for in-flight reverse thrust since service entry in 1959. Safe and effective for facilitating quick descents at acceptable speeds, it nonetheless produced significant aircraft buffeting, so actual use was less common on passenger flights and more common on cargo and ferry flights, where passenger comfort is not a concern. The Hawker Siddeley Trident, a 120- to 180-seat airliner, was capable of descending at up to 10,000 ft/min (3,050 m/min) by use of reverse thrust, though this capability was rarely used. The Concorde supersonic airliner could use reverse thrust in the air to increase the rate of descent. Only the inboard engines were used, and the engines were placed in reverse idle only in subsonic flight and below a specified altitude. This substantially increased the rate of descent. The Boeing C-17 Globemaster III is one of the few modern aircraft that uses reverse thrust in flight. The Boeing-manufactured aircraft is capable of in-flight deployment of reverse thrust on all four engines to facilitate steep tactical descents of up to 15,000 ft/min (4,600 m/min) into combat environments (a descent rate of just over 170 mph, or 274 km/h). The Lockheed C-5 Galaxy, introduced in 1969, also has in-flight reverse capability, although on the inboard engines only.
The Saab 37 Viggen (retired in November 2005) also had the ability to use reverse thrust both before landing, to shorten the needed runway, and taxiing after landing, allowing many Swedish roads to double as wartime runways. The Shuttle Training Aircraft, a highly modified Grumman Gulfstream II, used reverse thrust in flight to help simulate Space Shuttle aerodynamics so astronauts could practice landings. A similar technique was employed on a modified Tupolev Tu-154 which simulated the Russian Buran space shuttle. Effectiveness The amount of thrust and power generated are proportional to the speed of the aircraft, making reverse thrust more effective at high speeds. For maximum effectiveness, it should be applied quickly after touchdown. If activated at low speeds, foreign object damage is possible. There is some danger of an aircraft with thrust reversers applied momentarily leaving the ground again due to both the effect of the reverse thrust and the nose-up pitch effect from the spoilers. For aircraft susceptible to such an occurrence, pilots must take care to achieve a firm position on the ground before applying reverse thrust. If applied before the nose-wheel is in contact with the ground, there is a chance of asymmetric deployment causing an uncontrollable yaw towards the side of higher thrust, as steering the aircraft with the nose wheel is the only way to maintain control of the direction of travel in this situation. Reverse thrust mode is used only for a fraction of aircraft operating time but affects it greatly in terms of design, weight, maintenance, performance, and cost. Penalties are significant but necessary since it provides stopping force for added safety margins, directional control during landing rolls, and aids in rejected take-offs and ground operations on contaminated runways where normal braking effectiveness is diminished. Airlines consider thrust reverser systems a vital part of reaching a maximum level of aircraft operating safety. Related accidents and incidents In-flight deployment of reverse thrust has directly contributed to the crashes of several transport-type aircraft: On 4 July 1966 an Air New Zealand Douglas DC-8-52 with the registration ZK-NZB crashed on takeoff on a routine training flight from Auckland International Airport due to reverse thrust applied during a simulated failure of the no. 4 engine on takeoff. The crash killed 2 of the 5 crew on board. On 11 February 1978, Pacific Western Airlines Flight 314, a Boeing 737-200, crashed while executing a rejected landing at Cranbrook Airport. The left thrust reverser had not properly stowed; it deployed during the climbout, causing the aircraft to roll to the left and strike the ground. Out of 44 passengers and 5 crew members, only 6 passengers and a flight attendant survived. On 9 February 1982, Japan Airlines Flight 350 crashed short of the runway at Tokyo Haneda Airport following the intentional deployment of reverse thrust on two of the Douglas DC-8's four engines by the mentally unstable captain, resulting in 24 passenger deaths. On 29 August 1990, a United States Air Force Lockheed C-5 Galaxy crashed shortly after take-off from Ramstein Air Base in Germany. As the aircraft started to climb off the runway, one of the thrust reversers suddenly deployed. This resulted in loss of control of the aircraft and the subsequent crash. Of the 17 people on board, 4 survived the crash. 
On 26 May 1991, Lauda Air Flight 004, a Boeing 767-300ER, suffered an uncommanded deployment of the left engine's thrust reverser, which caused the airliner to go into a rapid dive and break up in mid-air. All 213 passengers and 10 crew were killed.
On 31 October 1996, TAM Linhas Aéreas Flight 402, a Fokker 100, crashed shortly after take-off from Congonhas-São Paulo International Airport, São Paulo, Brazil, striking two apartment buildings and several houses. All 90 passengers and 6 crew members, as well as 3 people on the ground, died in the crash. The crash was attributed to the uncommanded deployment of a faulty thrust reverser on the right engine shortly after take-off.
On 10 February 2004, Kish Air Flight 7170, a Fokker 50, crashed while on approach to Sharjah International Airport. A total of 43 of the 46 passengers and crew on board were killed. Investigators determined that the pilots had prematurely set the propellers to reverse thrust mode, causing them to lose control of the aircraft.
Technology
Aircraft components
https://en.wikipedia.org/wiki/Botnet
Botnet
A botnet is a group of Internet-connected devices, each of which runs one or more bots. Botnets can be used to perform distributed denial-of-service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection. The owner can control the botnet using command and control (C&C) software. The word "botnet" is a portmanteau of the words "robot" and "network". The term is usually used with a negative or malicious connotation.

Overview
A botnet is a logical collection of Internet-connected devices, such as computers, smartphones or Internet of things (IoT) devices, whose security has been breached and control ceded to a third party. Each compromised device, known as a "bot", is created when a device is penetrated by malware (malicious software). The controller of a botnet is able to direct the activities of these compromised computers through communication channels formed by standards-based network protocols, such as IRC and Hypertext Transfer Protocol (HTTP). Botnets are increasingly rented out by cyber criminals as commodities for a variety of purposes, including as booter/stresser services.

Architecture
Botnet architecture has evolved over time in an effort to evade detection and disruption. Traditionally, bot programs are constructed as clients which communicate via existing servers. This allows the bot herder (the controller of the botnet) to perform all control from a remote location, which obfuscates the traffic. Many recent botnets instead rely on existing peer-to-peer networks to communicate. These P2P bot programs perform the same actions as the client–server model, but they do not require a central server to communicate.

Client–server model
The first botnets on the Internet used a client–server model to accomplish their tasks. Typically, these botnets operate through Internet Relay Chat networks, domains, or websites. Infected clients access a predetermined location and await incoming commands from the server. The bot herder sends commands to the server, which relays them to the clients. Clients execute the commands and report their results back to the bot herder. In the case of IRC botnets, infected clients connect to an infected IRC server and join a channel pre-designated for C&C by the bot herder. The bot herder sends commands to the channel via the IRC server. Each client retrieves the commands and executes them. Clients send messages back to the IRC channel with the results of their actions.

Peer-to-peer
In response to efforts to detect and decapitate IRC botnets, bot herders have begun deploying malware on peer-to-peer networks. These bots may use digital signatures so that only someone with access to the private key can control the botnet, as in Gameover ZeuS and the ZeroAccess botnet. Newer botnets fully operate over P2P networks. Rather than communicate with a centralized server, P2P bots act as both a command distribution server and a client which receives commands. This avoids having any single point of failure, which is an issue for centralized botnets. In order to find other infected machines, P2P bots discreetly probe random IP addresses until they identify another infected machine. The contacted bot replies with information such as its software version and its list of known bots. If one bot's software version is older than the other's, they initiate a file transfer to update. This way, each bot grows its list of infected machines and keeps itself updated by periodically communicating with all known bots.
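The peer-list and version-exchange behavior described above can be made concrete with a small, self-contained simulation of the kind researchers use to model how quickly updates propagate through a P2P network. This is a toy model, not any real botnet's code; all class and variable names are invented for illustration, and nothing here touches a network:

```python
import random

class SimNode:
    """Toy model of one P2P node: a software version and a peer list."""
    def __init__(self, node_id: int, version: int):
        self.node_id = node_id
        self.version = version
        self.peers: set[int] = set()

def gossip_round(nodes: dict[int, SimNode]) -> None:
    """One round: each node contacts a random node, merges peer lists,
    and the older of the two adopts the newer version (the 'update')."""
    for node in nodes.values():
        other = nodes[random.choice(list(nodes))]
        if other.node_id == node.node_id:
            continue
        # Exchange peer lists: each learns the other's known nodes.
        node.peers |= other.peers | {other.node_id}
        other.peers |= node.peers | {node.node_id}
        # The lower-version node updates to match the higher one.
        lo, hi = sorted((node, other), key=lambda n: n.version)
        lo.version = hi.version

# 200 simulated nodes; a single node starts with the "new" version 2.
nodes = {i: SimNode(i, version=1) for i in range(200)}
nodes[0].version = 2
rounds = 0
while any(n.version < 2 for n in nodes.values()):
    gossip_round(nodes)
    rounds += 1
print(f"update reached all simulated nodes after {rounds} rounds")
```

Because every node both serves and receives updates, the spread is roughly exponential per round, which is why such networks have no single point of failure to target.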
Core components
A botnet's originator (known as a "bot herder" or "bot master") controls the botnet remotely. This is known as command-and-control (C&C). The program for the operation must communicate via a covert channel to the client on the victim's machine (zombie computer).

Control protocols
IRC is a historically favored means of C&C because of its simple, low-bandwidth communication protocol. A bot herder creates an IRC channel for infected clients to join. Messages sent to the channel are broadcast to all channel members. The bot herder may set the channel's topic to command the botnet. For example, the message :herder!herder@example.com TOPIC #channel DDoS www.victim.com from the bot herder alerts all infected clients belonging to #channel to begin a DDoS attack on the website www.victim.com. An example response :bot1!bot1@compromised.net PRIVMSG #channel I am DDoSing www.victim.com by a bot client alerts the bot herder that it has begun the attack. Some botnets implement custom versions of well-known protocols. The implementation differences can be used for detection of botnets. For example, Mega-D features a slightly modified Simple Mail Transfer Protocol (SMTP) implementation for testing spam capability. Bringing down Mega-D's SMTP server disables the entire pool of bots that rely upon it.

Zombie computer
In computer science, a zombie computer is a computer connected to the Internet that has been compromised by a hacker, computer virus or trojan horse and can be used to perform malicious tasks under remote direction. Botnets of zombie computers are often used to spread e-mail spam and launch distributed denial-of-service (DDoS) attacks. Most owners of zombie computers are unaware that their system is being used in this way; because the owner tends to be unaware, these computers are metaphorically compared to zombies. A coordinated DDoS attack by multiple botnet machines also resembles a zombie horde attack. The process of stealing computing resources as a result of a system being joined to a "botnet" is sometimes referred to as "scrumping". Global law enforcement agencies, including the US Department of Justice and the FBI, dismantled the 911 S5 botnet, responsible for $5.9 billion in theft and various cybercrimes. Chinese national YunHe Wang, charged with operating the botnet, faces up to 65 years in prison. Authorities seized $60 million in assets, including luxury items and properties.

Command and control
Botnet command and control (C&C) protocols have been implemented in a number of ways, from traditional IRC approaches to more sophisticated versions.

Telnet
Telnet botnets use a simple C&C protocol in which bots connect to the main command server that hosts the botnet. Bots are added to the botnet by using a scanning script, which runs on an external server and scans IP ranges for telnet and SSH server default logins. Once a login is found, the scanning server can infect the machine through SSH with malware, which pings the control server.

IRC
IRC networks use simple, low-bandwidth communication methods, making them widely used to host botnets. They tend to be relatively simple in construction and have been used with moderate success for coordinating DDoS attacks and spam campaigns, while being able to continually switch channels to avoid being taken down. However, in some cases, merely blocking certain keywords has proven effective in stopping IRC-based botnets. The RFC 1459 (IRC) standard is popular with botnets.
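Because commands travel in ordinary IRC messages like the TOPIC example above, defenders who sniff IRC traffic can look for command-like topics. The following sketch assumes that framing; the regular expression and keyword list are illustrative guesses, not rules from any real intrusion-detection product:

```python
import re

# Pattern for an RFC 1459 TOPIC message like the example above:
#   :herder!herder@example.com TOPIC #channel DDoS www.victim.com
TOPIC_RE = re.compile(
    r"^:(?P<source>\S+)\s+TOPIC\s+(?P<channel>#\S+)\s+:?(?P<topic>.+)$"
)

# Illustrative keywords a monitor might flag; real rule sets differ.
SUSPICIOUS_KEYWORDS = ("ddos", "flood", "scan", "update", "download")

def flag_suspicious_topic(line: str) -> str | None:
    """Return a human-readable alert if a sniffed IRC line looks like
    a bot-herder command; otherwise return None."""
    match = TOPIC_RE.match(line.strip())
    if not match:
        return None
    topic = match.group("topic").lower()
    if any(word in topic for word in SUSPICIOUS_KEYWORDS):
        return (f"possible C&C command in {match.group('channel')} "
                f"from {match.group('source')}: {match.group('topic')}")
    return None

print(flag_suspicious_topic(
    ":herder!herder@example.com TOPIC #channel DDoS www.victim.com"))
```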
The first known popular botnet controller script, "MaXiTE Bot", used the IRC XDCC protocol for private control commands. One problem with using IRC is that each bot client must know the IRC server, port, and channel to be of any use to the botnet. Anti-malware organizations can detect and shut down these servers and channels, effectively halting the botnet attack. If this happens, clients are still infected, but they typically lie dormant since they have no way of receiving instructions. To mitigate this problem, a botnet can consist of several servers or channels. If one of the servers or channels becomes disabled, the botnet simply switches to another. It is still possible to detect and disrupt additional botnet servers or channels by sniffing IRC traffic. A botnet adversary can even potentially gain knowledge of the control scheme and imitate the bot herder by issuing commands correctly.

P2P
Since most botnets using IRC networks and domains can be taken down with time, hackers have moved to P2P botnets with decentralized C&C to make them more resilient and resistant to termination. Some have also used encryption as a way to secure or lock down the botnet from others; most of the time the encryption used is public-key cryptography, which has presented challenges in both implementing it and breaking it.

Domains
Many large botnets tend to use domains rather than IRC in their construction (see Rustock botnet and Srizbi botnet). They are usually hosted with bulletproof hosting services. This is one of the earliest types of C&C. A zombie computer accesses a specially designed webpage or domain(s) which serves the list of controlling commands. The advantage of using web pages or domains as C&C is that a large botnet can be effectively controlled and maintained with very simple code that can be readily updated. Disadvantages of this method are that it uses a considerable amount of bandwidth at large scale, and domains can be quickly seized by government agencies with little effort. If the domains controlling the botnets are not seized, they are also easy targets to compromise with denial-of-service attacks. Fast-flux DNS can be used to make it difficult to track down the control servers, which may change from day to day. Control servers may also hop from DNS domain to DNS domain, with domain generation algorithms being used to create new DNS names for controller servers (a minimal sketch of the idea follows below). Some botnets use free DNS hosting services such as DynDns.org, No-IP.com, and Afraid.org to point a subdomain towards an IRC server that harbors the bots. While these free DNS services do not themselves host attacks, they provide reference points (often hard-coded into the botnet executable). Removing such services can cripple an entire botnet.
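The domain generation algorithms mentioned above are, at their core, deterministic functions of the date, which is what lets defenders who recover the algorithm pre-compute and block or seize upcoming domains. A minimal sketch of the idea follows; the seed, label length, and TLD list are invented for illustration and correspond to no real malware family:

```python
import hashlib
from datetime import date, timedelta

# Illustrative, simplified DGA: both sides can derive the same daily
# candidate domains from the date, so defenders can pre-register or
# blocklist them ahead of time.
SEED = "example-botnet-family"   # hypothetical family-specific seed
TLDS = (".com", ".net", ".org")  # hypothetical TLD rotation

def domains_for(day: date, count: int = 5) -> list[str]:
    """Derive `count` pseudo-random candidate C&C domains for one day."""
    names = []
    for i in range(count):
        digest = hashlib.sha256(f"{SEED}:{day.isoformat()}:{i}".encode())
        label = digest.hexdigest()[:12]  # 12 hex chars as the domain label
        names.append(label + TLDS[i % len(TLDS)])
    return names

# A defender who has reverse-engineered the algorithm can enumerate the
# next week's candidates for blocklisting or seizure.
today = date.today()
for offset in range(7):
    print(domains_for(today + timedelta(days=offset)))
```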
Others
Callbacks to popular sites such as GitHub, Twitter, Reddit and Instagram, to the XMPP open-source instant messaging protocol, and to Tor hidden services are popular ways of avoiding egress filtering to communicate with a C&C server.

Construction
Traditional
This example illustrates how a botnet is created and used for malicious gain. A hacker purchases or builds a Trojan and/or exploit kit and uses it to start infecting users' computers, whose payload is a malicious application—the bot. The bot instructs the infected PC to connect to a particular command-and-control (C&C) server. (This allows the botmaster to keep logs of how many bots are active and online.) The botmaster may then use the bots to gather keystrokes or use form grabbing to steal online credentials, and may rent out the botnet as a DDoS and/or spam service, or sell the credentials online for a profit. The value of a botnet increases or decreases with the quality and capability of its bots. Newer bots can automatically scan their environment and propagate themselves using vulnerabilities and weak passwords; generally, the more vulnerabilities a bot can scan and propagate through, the more valuable it becomes to the botnet controller community. Computers can be co-opted into a botnet when they execute malicious software. This can be accomplished by luring users into making a drive-by download, exploiting web browser vulnerabilities, or by tricking the user into running a Trojan horse program, which may come from an email attachment. This malware will typically install modules that allow the computer to be commanded and controlled by the botnet's operator. After the software is downloaded, it will call home (send a reconnection packet) to the host computer. When the reconnection is made, depending on how it is written, the Trojan may then delete itself or may remain present to update and maintain the modules.

Others
In some cases, a botnet may be temporarily created by volunteer hacktivists, as with implementations of the Low Orbit Ion Cannon used by 4chan members during Project Chanology in 2008. China's Great Cannon allows the modification of legitimate web browsing traffic at internet backbones into China to create a large ephemeral botnet to attack large targets, such as GitHub in 2015.

Common uses
Distributed denial-of-service attacks are one of the most common uses for botnets, in which multiple systems submit as many requests as possible to a single Internet computer or service, overloading it and preventing it from servicing legitimate requests: the victim's server is bombarded with connection requests from the bots, overloading it. Google fraud czar Shuman Ghosemajumder has said that these types of attacks, causing outages on major websites, will continue to occur regularly due to the use of botnets as a service.
Spyware is software which sends information to its creators about a user's activities – typically passwords, credit card numbers and other information that can be sold on the black market. Compromised machines that are located within a corporate network can be worth more to the bot herder, as they can often gain access to confidential corporate information. Several targeted attacks on large corporations aimed to steal sensitive information, such as the Aurora botnet.
E-mail spam consists of e-mail messages disguised as messages from people, but which are advertising, annoying, or malicious.
Click fraud occurs when the user's computer visits websites without the user's awareness to create false web traffic for personal or commercial gain. Ad fraud is often a consequence of malicious bot activity (according to CHEQ, "Ad Fraud 2019, The Economic Cost of Bad Actors on the Internet"). Commercial purposes of bots include influencers using them to boost their supposed popularity, and online publishers using bots to increase the number of clicks an ad receives, allowing sites to earn more commission from advertisers.
Credential stuffing attacks use botnets to log in to many user accounts with stolen passwords, such as in the attack against General Motors in 2022.
Bitcoin mining has been included as a feature in some of the more recent botnets in order to generate profits for the operator.
Self-spreading functionality, in which bots seek out targeted devices or networks according to instructions pushed over command-and-control (C&C) in order to achieve further infections, has also been spotted in several botnets; some botnets use this function to automate their infections.

Market
The botnet controller community constantly competes over who has the most bots, the highest overall bandwidth, and the most "high-quality" infected machines, like university, corporate, and even government machines. While botnets are often named after the malware that created them, multiple botnets typically use the same malware but are operated by different entities.

Phishing
Botnets can be used for many electronic scams. They can be used to distribute malware, such as viruses, to take control of a regular user's computer and software. By taking control of someone's personal computer, the attacker gains unlimited access to its personal information, including passwords and login information for accounts. This is called phishing. Phishing is the acquisition of login information to a victim's accounts by means of a link the victim clicks on, sent through an email or text. A survey by Verizon found that around two-thirds of electronic "espionage" cases come from phishing.

Countermeasures
The geographic dispersal of botnets means that each recruit must be individually identified, corralled, and repaired, which limits the benefits of filtering. Computer security experts have succeeded in destroying or subverting malware command and control networks by, among other means, seizing servers or getting them cut off from the Internet, denying access to domains that were due to be used by malware to contact its C&C infrastructure, and, in some cases, breaking into the C&C network itself. In response, C&C operators have resorted to techniques such as overlaying their C&C networks on other existing benign infrastructure such as IRC or Tor, using peer-to-peer networking systems that are not dependent on any fixed servers, and using public-key encryption to defeat attempts to break into or spoof the network. Norton AntiBot was aimed at consumers, but most products target enterprises and/or ISPs. Host-based techniques use heuristics to identify bot behavior that has bypassed conventional anti-virus software. Network-based approaches tend to use the techniques described above: shutting down C&C servers, null-routing DNS entries, or completely shutting down IRC servers. BotHunter is software, developed with support from the U.S. Army Research Office, that detects botnet activity within a network by analyzing network traffic and comparing it to patterns characteristic of malicious processes. Researchers at Sandia National Laboratories are analyzing botnets' behavior by simultaneously running one million Linux kernels (a similar scale to a botnet) as virtual machines on a 4,480-node high-performance computer cluster to emulate a very large network, allowing them to watch how botnets work and experiment with ways to stop them. Detecting automated bots becomes more difficult as newer and more sophisticated generations are launched by attackers. For example, an automated attack can deploy a large bot army and apply brute-force methods with highly accurate username and password lists to hack into accounts. The idea is to overwhelm sites with tens of thousands of requests from different IPs all over the world, but with each bot submitting only a single request every 10 minutes or so, which can still result in more than 5 million attempts per day. In these cases, many tools try to leverage volumetric detection, but automated bot attacks now have ways of circumventing its triggers. One technique for detecting these bot attacks is so-called "signature-based systems", in which the software attempts to detect patterns in the request packets. However, attacks are constantly evolving, so this may not be a viable option when patterns cannot be discerned from thousands of requests. There is also the behavioral approach to thwarting bots, which ultimately tries to distinguish bots from humans: by identifying non-human behavior and recognizing known bot behavior, this process can be applied at the user, browser, and network levels.
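A minimal sketch of the volumetric idea just described: count requests per source address over a sliding window, and also watch the aggregate rate, since the low-and-slow pattern above keeps each bot under any per-IP threshold. All thresholds and names here are illustrative assumptions, not values from any real product:

```python
from collections import defaultdict, deque

class RateMonitor:
    """Sliding-window request counter. Flags a single IP that exceeds a
    per-IP threshold, and flags the site-wide total when many sources
    each stay just under it (the low-and-slow pattern described above)."""

    def __init__(self, window_s: float = 600.0,
                 per_ip_limit: int = 20, global_limit: int = 50_000):
        self.window_s = window_s
        self.per_ip_limit = per_ip_limit
        self.global_limit = global_limit
        self.per_ip: dict[str, deque] = defaultdict(deque)
        self.total: deque = deque()

    def _trim(self, q: deque, now: float) -> None:
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()

    def record(self, ip: str, now: float) -> list[str]:
        """Record one request at time `now` (seconds); return any alerts."""
        alerts = []
        q = self.per_ip[ip]
        q.append(now)
        self.total.append(now)
        self._trim(q, now)
        self._trim(self.total, now)
        if len(q) > self.per_ip_limit:
            alerts.append(f"per-IP volume exceeded by {ip}")
        if len(self.total) > self.global_limit:
            alerts.append("aggregate anomaly: possible distributed bot attack")
        return alerts

# One request per bot stays invisible per-IP, but 60,000 distinct
# sources inside one window still trip the aggregate check.
monitor = RateMonitor()
alerts = []
for i in range(60_000):
    ip = f"10.0.{i // 256 % 256}.{i % 256}"
    alerts += monitor.record(ip, now=i * 0.009)
print(alerts[:1])
```

As the surrounding text notes, real attacks rotate addresses and mimic human pacing, so production systems layer behavioral signals on top of simple counters like this.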
One of the most capable software-based countermeasures has been honeypot software, which convinces the malware that a system is vulnerable; the malicious files can then be captured and analyzed using forensic software. On 15 July 2014, the Subcommittee on Crime and Terrorism of the Committee on the Judiciary, United States Senate, held a hearing on the threats posed by botnets and the public and private efforts to disrupt and dismantle them. The rise in vulnerable IoT devices has led to an increase in IoT-based botnet attacks. To address this, a novel network-based anomaly detection method for IoT called N-BaIoT was introduced. It captures network behavior snapshots and employs deep autoencoders to identify abnormal traffic from compromised IoT devices. The method was tested by infecting nine IoT devices with the Mirai and BASHLITE botnets, showing its ability to accurately and promptly detect attacks originating from compromised IoT devices within a botnet. Additionally, systematic comparison of different botnet detection methods is valuable to researchers: it shows how well each method performs relative to the others, allows the methods to be evaluated fairly, and points to ways of improving them.

Historical list of botnets
The first botnet was acknowledged and exposed by EarthLink during a lawsuit with notorious spammer Khan C. Smith in 2001. The botnet had been constructed for the purpose of bulk spam, and accounted for nearly 25% of all spam at the time. Around 2006, to thwart detection, some botnets were scaling back in size. Researchers at the University of California, Santa Barbara took control of a botnet that was six times smaller than expected. In some countries, it is common for users to change their IP address a few times in one day; estimating the size of a botnet by the number of IP addresses, as researchers often do, can therefore lead to inaccurate assessments.
Technology
Computer security
https://en.wikipedia.org/wiki/Bungarus
Bungarus
Bungarus (commonly known as kraits) is a genus of venomous snakes in the family Elapidae. The genus is native to Asia. Often found on the floor of tropical forests in South Asia, Southeast Asia and Southern China, kraits are medium-sized, highly venomous snakes. They are nocturnal, ophiophagous predators which prey primarily on other snakes at night, occasionally taking lizards, amphibians and rodents. Most species bear banded patterns that act as a warning sign to their predators. Despite being considered generally docile and timid, kraits are capable of delivering highly potent neurotoxic venom which is medically significant, with potential lethality to humans. The genus currently holds 18 species and 5 subspecies.

Distribution
Kraits are found in tropical and subtropical South and Southeast Asia and Indochina, ranging in the west from Iran, east through the Indian subcontinent (including Bangladesh, Nepal, Pakistan, and Sri Lanka) and into Southeast Asia (including the island of Borneo, Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Papua New Guinea, the Philippines, Thailand and Vietnam).

Description
Kraits are medium-sized snakes, though unusually large specimens have been observed; the banded krait (B. fasciatus) grows the largest. Most species of kraits are covered in smooth, glossy scales arranged in bold, striped patterns of alternating black and light-colored areas. This may serve as aposematic colouration in their habitat of grassland and scrub jungle. The scales along the dorsal ridge of the back are hexagonal. The head is slender, and the eyes have round pupils. Kraits have pronounced dorsolateral flattening, which makes them triangular in cross section.

Ecology
Kraits are nocturnal and ophiophagous, preying chiefly upon other snakes, including other kraits, although they occasionally hunt small rodents and lizards. They are seldom encountered during the daytime but become highly alert at night. If disturbed, fleeing is usually their first choice; if that fails, they tend to coil up with the head underneath the body for protection. In spite of being generally docile and timid, some species are known to thrash fiercely when caught for relocation. Repeated provocation may result in bites, which are the snakes' last resort. Kraits are oviparous, laying a clutch of 12 to 14 eggs in piles of leaf litter. The female usually stays with the eggs until they hatch.

Venom
Bungarus contains some species that are among the most venomous land snakes in the world, as measured by their median lethal dose in mice. They have a highly potent, neurotoxic venom, which can induce muscle paralysis. Clinically, their venom contains mostly presynaptic neurotoxins, which affect the ability of neuron endings to properly release neurotransmitters to the next neuron. Following envenomation with bungarotoxins, transmitter release is initially blocked (leading to a brief paralysis), followed by a period of massive overexcitation (cramps, tremors, spasms), which finally tapers off to paralysis. These phases of envenomation may or may not be experienced in all parts of the body, and may or may not occur simultaneously. The severity of the bite itself and the actual dosage of venom delivered play a role in the intensity of symptoms. As kraits are mainly nocturnal, encounters with humans are rare during the daytime.
Bites mainly occur after sunset, and are often (initially) painless; thus, a bite may go unnoticed if the victim is sleeping or otherwise does not see or notice the krait, further prolonging envenomation damage within the body. Still, whenever possible, medical treatment should be sought posthaste, as a bite from a krait is considered potentially life-threatening. All venomous snake bites must be taken seriously as an immediate medical emergency. Typically, victims start to notice severe abdominal cramps accompanied by progressive muscular paralysis, frequently beginning with ptosis. As no local symptoms are usually seen, a patient should be carefully observed for tell-tale signs of paralysis (e.g. the onset of bilateral ptosis, diplopia, and dysphagia), and subsequently treated, as quickly as possible, with antivenom. Frequently, little or no pain occurs at the site of a krait bite, which can provide false reassurance to the victim. The major medical difficulty with envenomated patients is the lack of medical resources (especially intubation supplies and mechanical ventilators in rural hospitals) and the potential ineffectiveness of the antivenom. Upon arrival at a healthcare facility, support must be provided until the venom has been metabolised and the victim can breathe unaided, especially if no species-specific antivenom is available. Given that the toxins alter acetylcholine transmission, which causes the paralysis, some patients have been successfully treated with cholinesterase inhibitors such as physostigmine or neostigmine, but success is variable and may be species-dependent as well. If death occurs, it typically takes place about 6–12 hours after the krait bite, but can be significantly delayed. The usual cause of death in that situation is respiratory failure: suffocation by complete paralysis of the diaphragm. Even patients who reach a hospital may subsequently enter a permanent coma (or even suffer brain death from hypoxia), given the potential in some regions for long transport times to medical care. Mortality rates caused by bites from members of this genus vary by species; according to the University of Adelaide Department of Toxicology, bites from the banded krait have a mortality rate of 1–10% in untreated humans, while that of the common krait is 70–80%. As with all other venomous snakes, the time to death and the fatality rate resulting from krait bites depend on numerous factors, such as the venom yield and the health status of the victim. Polyvalent elapid antivenom is effective in neutralizing the venoms of B. candidus and B. flaviceps and rather effective for B. fasciatus; the monovalent B. fasciatus antivenom is also moderately effective.

Species
Nota bene: A binomial authority in parentheses indicates that the species was originally described in a genus other than Bungarus.
Biology and health sciences
Snakes
Animals
https://en.wikipedia.org/wiki/Dromaeosauridae
Dromaeosauridae
Dromaeosauridae () is a family of feathered coelurosaurian theropod dinosaurs. They were generally small to medium-sized feathered carnivores that flourished in the Cretaceous Period. The name Dromaeosauridae means 'running lizards', from Greek dromaios, meaning 'running at full speed' or 'swift', and sauros, meaning 'lizard'. In informal usage, they are often called raptors (after Velociraptor), a term popularized by the film Jurassic Park; several genera include the term "raptor" directly in their name, and popular culture has come to emphasize their bird-like appearance and speculated bird-like behavior. Dromaeosaurid fossils have been found across the globe in North America, Europe, Africa, Asia and South America, with some fossils giving credence to the possibility that they inhabited Australia as well. The earliest body fossils are known from the Early Cretaceous (145–140 million years ago), and the group survived until the end of the Cretaceous (Maastrichtian stage, 66 Ma), existing until the Cretaceous–Paleogene extinction event. The presence of dromaeosaurids as early as the Middle Jurassic has been suggested by the discovery of isolated fossil teeth, though no dromaeosaurid body fossils have been found from this period.

Description
Technical diagnosis
Dromaeosaurids are diagnosed by the following features: short T-shaped frontals that form the rostral boundary of the supratemporal fenestra; a caudolateral overhanging shelf of the squamosal; a lateral process of the quadrate that contacts the quadratojugal; raised, stalked parapophyses on the dorsal vertebrae; a modified pedal digit II; chevrons and prezygapophyses of the caudal vertebrae elongate and spanning several vertebrae; and the presence of a subglenoid fossa on the coracoid.

Size and general build
Dromaeosaurids were small to medium-sized dinosaurs, ranging from small forms such as Velociraptor to much larger ones such as Utahraptor, Dakotaraptor and Achillobator. Large size appears to have evolved at least twice among dromaeosaurids: once among the dromaeosaurines Utahraptor and Achillobator, and again among the unenlagiines (Austroraptor). A possible third lineage of giant dromaeosaurids is represented by isolated teeth found on the Isle of Wight, England. The teeth belong to an animal the size of the dromaeosaurine Utahraptor, but they appear to belong to velociraptorines, judging by their shape. The distinctive dromaeosaurid body plan helped to rekindle theories that dinosaurs may have been active, fast, and closely related to birds. Robert Bakker's illustration for John Ostrom's 1969 monograph, showing the dromaeosaurid Deinonychus in a fast run, is among the most influential paleontological reconstructions in history. The dromaeosaurid body plan includes a relatively large skull, serrated teeth, a narrow snout (an exception being the derived dromaeosaurines), and forward-facing eyes which indicate some degree of binocular vision. Dromaeosaurids, like most other theropods, had a moderately long S-curved neck, and their trunk was relatively short and deep. Like other maniraptorans, they had long arms that could be folded against the body in some species, and relatively large hands with three long fingers (the middle finger being the longest and the first finger being the shortest) ending in large claws. The dromaeosaurid hip structure featured a characteristically large pubic boot projecting beneath the base of the tail. Dromaeosaurid feet bore a large, recurved claw on the second toe.
Their tails were slender, with long, low vertebrae lacking transverse processes and neural spines after the 14th caudal vertebra. Ossified uncinate processes of the ribs have been identified in several dromaeosaurids.

Foot
Like other theropods, dromaeosaurids were bipedal; that is, they walked on their hind legs. However, whereas most theropods walked with three toes contacting the ground, fossilized footprint tracks confirm that many early paravian groups, including the dromaeosaurids, held the second toe off the ground in a hyperextended position, with only the third and fourth toes bearing the weight of the animal. This is called functional didactyly. The enlarged second toe bore an unusually large, curved, falciform (sickle-shaped, alt. drepanoid) claw (held off the ground or 'retracted' when walking), which is thought to have been used in capturing prey and climbing trees (see "Claw function" below). This claw was especially blade-like in the large-bodied predatory eudromaeosaurs. One possible dromaeosaurid species, Balaur bondoc, also possessed a first toe which was highly modified in parallel with the second. Both the first and second toes on each foot of B. bondoc were held retracted and bore enlarged, sickle-shaped claws.

Tail
Dromaeosaurids had long tails. Most of the tail vertebrae bore bony, rod-like extensions (called prezygapophyses), as well as bony tendons in some species. In his study of Deinonychus, Ostrom proposed that these features stiffened the tail so that it could only flex at the base, and the whole tail would then move as a single, rigid lever. However, one well-preserved specimen of Velociraptor mongoliensis (IGM 100/986) has an articulated tail skeleton that is curved horizontally in a long S-shape. This suggests that, in life, the tail could bend from side to side with a substantial degree of flexibility. It has been proposed that the tail was used as a stabilizer or counterweight while running or in the air; in Microraptor, an elongate diamond-shaped fan of feathers is preserved on the end of the tail. This may have been used as an aerodynamic stabilizer and rudder during gliding or powered flight (see "Flying and gliding" below).

Feathers
There is a large body of evidence showing that dromaeosaurids were covered in feathers. Some dromaeosaurid fossils preserve long, pennaceous feathers on the hands and arms (remiges) and tail (rectrices), as well as shorter, down-like feathers covering the body. Other fossils, which do not preserve actual impressions of feathers, still preserve the associated bumps on the forearm bones where long wing feathers would have attached in life. Overall, this feather pattern looks very much like that of Archaeopteryx. The first known dromaeosaurid with definitive evidence of feathers was Sinornithosaurus, reported from China by Xu et al. in 1999. Many other dromaeosaurid fossils have been found with feathers covering their bodies, some with fully developed feathered wings. Microraptor even shows evidence of a second pair of wings on the hind legs. While direct feather impressions are only possible in fine-grained sediments, some fossils found in coarser rocks show evidence of feathers by the presence of quill knobs, the attachment points for wing feathers possessed by some birds. The dromaeosaurids Rahonavis and Velociraptor have both been found with quill knobs, showing that these forms had feathers despite no impressions having been found.
In light of this, it is most likely that even the larger ground-dwelling dromaeosaurids bore feathers, since even flightless birds today retain most of their plumage, and relatively large dromaeosaurids, like Velociraptor, are known to have retained pennaceous feathers. Though some scientists had suggested that the larger dromaeosaurids lost some or all of their insulatory covering, the discovery of feathers in Velociraptor specimens has been cited as evidence that all members of the family retained feathers. More recently, the discovery of Zhenyuanlong established the presence of a full feathered coat in relatively large dromaeosaurids. Additionally, the animal displays proportionally large, aerodynamic wing feathers, as well as a tail-spanning fan, both of which are unexpected traits that may offer an understanding of the integument of large dromaeosaurids. Dakotaraptor is an even larger dromaeosaurid species with evidence of feathers, albeit indirect in the form of quill knobs, though the taxon is considered a chimaera by other researchers, as even the dinosaurian elements with traits supposedly diagnostic of dromaeosaurids may also be referrable to caenagnathids and ornithomimosaurs.

Classification
Relationship with birds
Dromaeosaurids share many features with early birds (clade Avialae or Aves). The precise nature of their relationship to birds has undergone a great deal of study, and hypotheses about that relationship have changed as large amounts of new evidence became available. As late as 2001, Mark Norell and colleagues analyzed a large survey of coelurosaur fossils and produced the tentative result that dromaeosaurids were most closely related to birds, with troodontids as a more distant outgroup; they even suggested that Dromaeosauridae could be paraphyletic relative to Avialae (Norell, M., Clark, J.M., Makovicky, P.J. (2001). "Phylogenetic relationships among coelurosaurian theropods." New Perspectives on the Origin and Evolution of Birds: Proceedings of the International Symposium in Honor of John H. Ostrom, Yale Peabody Museum: 49–67). In 2002, Hwang and colleagues utilized the work of Norell et al., including new characters and better fossil evidence, to determine that birds (avialans) were better thought of as cousins to the dromaeosaurids and troodontids. The consensus of paleontologists is that there is not yet enough evidence to determine whether any dromaeosaurids could fly or glide, or whether they evolved from ancestors that could.

Alternative theories and flightlessness
Dromaeosaurids are so bird-like that they have led some researchers to argue that they would be better classified as birds. First, since they had feathers, dromaeosaurids (along with many other coelurosaurian theropod dinosaurs) are "birds" under traditional definitions of the word "bird", or "Aves", that are based on the possession of feathers. However, other scientists, such as Lawrence Witmer, have argued that calling a theropod like Caudipteryx a bird because it has feathers may stretch the word past any useful meaning. At least two schools of researchers have proposed that dromaeosaurids may actually be descended from flying ancestors. Hypotheses involving a flying ancestor for dromaeosaurids are sometimes called "Birds Came First" (BCF). George Olshevsky is usually credited as the first author of BCF.
In his own work, Gregory S. Paul pointed out numerous features of the dromaeosaurid skeleton that he interpreted as evidence that the entire group had evolved from flying, dinosaurian ancestors, perhaps an animal like Archaeopteryx. In that case, the larger dromaeosaurids were secondarily flightless, like the modern ostrich. In 1988, Paul suggested that dromaeosaurids may actually be more closely related to modern birds than to Archaeopteryx. By 2002, however, Paul placed dromaeosaurids and Archaeopteryx as the closest relatives to one another. In 2002, Hwang et al. found that Microraptor was the most primitive dromaeosaurid. Xu and colleagues in 2003 cited the basal position of Microraptor, along with feather and wing features, as evidence that the ancestral dromaeosaurid could glide. In that case, the larger dromaeosaurids would be secondarily terrestrial, having lost the ability to glide later in their evolutionary history. Also in 2002, Steven Czerkas described Cryptovolans, though it is a probable junior synonym of Microraptor. He reconstructed the fossil inaccurately with only two wings and thus argued that dromaeosaurids were powered fliers, rather than passive gliders; he later issued a revised reconstruction in agreement with that of Microraptor. Other researchers, like Larry Martin, have proposed that dromaeosaurids, along with all maniraptorans, were not dinosaurs at all. Martin asserted for decades that birds were unrelated to maniraptorans, but in 2004 he changed his position, agreeing that the two were close relatives. However, Martin believed that maniraptorans were secondarily flightless birds, and that birds did not evolve from dinosaurs, but rather from non-dinosaurian archosaurs. In 2005, Mayr and Peters described the anatomy of a very well preserved specimen of Archaeopteryx and determined that its anatomy was more like that of non-avian theropods than previously understood. Specifically, they found that Archaeopteryx had a primitive palatine, an unreversed hallux, and a hyper-extendable second toe. Their phylogenetic analysis produced the controversial result that Confuciusornis was closer to Microraptor than to Archaeopteryx, making the Avialae a paraphyletic taxon. They also suggested that the ancestral paravian was able to fly or glide, and that the dromaeosaurids and troodontids were secondarily flightless (or had lost the ability to glide). Corfe and Butler criticized this work on methodological grounds. A challenge to all of these alternative scenarios came when Turner and colleagues in 2007 described a new dromaeosaurid, Mahakala, which they found to be the most basal and most primitive member of the Dromaeosauridae, more primitive than Microraptor. Mahakala had short arms and no ability to glide. Turner et al. also inferred that flight evolved only in the Avialae, and these two points suggested that the ancestral dromaeosaurid could not glide or fly. Based on this cladistic analysis, Mahakala suggests that the ancestral condition for dromaeosaurids is non-volant. However, in 2012, an expanded and revised study incorporating the most recent dromaeosaurid finds recovered the Archaeopteryx-like Xiaotingia as the most primitive member of the clade Dromaeosauridae, which appears to suggest that the earliest members of the clade may have been capable of flight.

Taxonomy
The authorship of the family Dromaeosauridae is credited to William Diller Matthew and Barnum Brown, who erected it as a subfamily (Dromaeosaurinae) of the family Deinodontidae in 1922, containing only the new genus Dromaeosaurus.
The subfamilies of Dromaeosauridae frequently shift in content based on new analyses, but typically consist of the following groups. A number of dromaeosaurids have not been assigned to any particular subfamily, often because they are too poorly preserved to be placed confidently in phylogenetic analysis (see the section Phylogeny below) or are indeterminate, being assigned to different groups depending on the methodology employed in different papers. The most basal known subfamily of dromaeosaurids is Halszkaraptorinae, a group of bizarre creatures with long fingers and necks, a large number of small teeth, and possible semiaquatic habits. Another enigmatic group, Unenlagiinae, is the most poorly supported subfamily of dromaeosaurids, and it is possible that some or all of its members belong outside Dromaeosauridae. The larger, ground-dwelling members like Buitreraptor and Unenlagia show strong flight adaptations, although they were probably too large to 'take off'. One possible member of this group, Rahonavis, is very small, with well-developed wings that show evidence of quill knobs (the attachment points for flight feathers), and it is very likely that it could fly. The next most primitive clade of dromaeosaurids is the Microraptoria. This group includes many of the smallest dromaeosaurids, which show adaptations for living in trees. All known dromaeosaurid skin impressions hail from this group, and all show an extensive covering of feathers and well-developed wings. Like the unenlagiines, some species may have been capable of active flight. The most advanced subgroup of dromaeosaurids, Eudromaeosauria, includes stocky and short-legged genera which were likely ambush hunters. This group includes Velociraptorinae, Dromaeosaurinae, and in some studies a third group, Saurornitholestinae. The subfamily Velociraptorinae has traditionally included Velociraptor, Deinonychus, and Saurornitholestes, and while the discovery of Tsaagan lent support to this grouping, the inclusion of Deinonychus, Saurornitholestes, and a few other genera is still uncertain. The Dromaeosaurinae is usually found to consist of medium- to giant-sized species with generally box-shaped skulls (the other subfamilies generally have narrower snouts). The following classification of the various genera of dromaeosaurids follows the table provided in Holtz, 2011 unless otherwise noted.

Family Dromaeosauridae
  Nuthetes
  Pamparaptor
  Variraptor
  Pyroraptor
  Zhenyuanlong
  Daurlong
  Subfamily Halszkaraptorinae: Halszkaraptor, Mahakala, Hulsanpes, Natovenator
  Subfamily Unenlagiinae: Ornithodesmus, Austroraptor, Rahonavis, Unenlagia, Buitreraptor, Neuquenraptor, Unquillosaurus, Ypupiara, Diuqin
  Subfamily Microraptorinae: Shanag, Tianyuraptor, Graciliraptor, Changyuraptor, Hesperonychus, Microraptor, Sinornithosaurus, Wulong, Zhongjianosaurus
  Node Eudromaeosauria: Deinonychus, Dineobellator, Vectiraptor
    Subfamily Saurornitholestinae: Bambiraptor, Saurornitholestes, Atrociraptor, Acheroraptor?
    Subfamily Velociraptorinae: Luanchuanraptor?, Linheraptor?, Velociraptor, Tsaagan?, Adasaurus?, Shri, Kansaignathus, Kuru
    Subfamily Dromaeosaurinae: Achillobator?, Itemirus?, Dromaeosaurus, Dakotaraptor?, Dromaeosauroides?, Utahraptor?, Yurgovuchia?

Phylogeny
Dromaeosauridae was first defined as a clade by Paul Sereno in 1998, as the most inclusive natural group containing Dromaeosaurus but not Troodon, Ornithomimus or Passer.
The various "subfamilies" have also been re-defined as clades, usually defined as all species closer to the groups namesake than to Dromaeosaurus or any namesakes of other sub-clades (for example, Makovicky defined the clade Unenlagiinae as all dromaeosaurids closer to Unenlagia than to Velociraptor). The Microraptoria is the only dromaeosaurid sub-clade not converted from a subfamily. Senter and colleagues expressly coined the name without the subfamily suffix -inae to avoid perceived issues with erecting a traditional family-group taxon, should the group be found to lie outside dromaeosauridae proper. Sereno offered a revised definition of the sub-group containing Microraptor to ensure that it would fall within Dromaeosauridae, and erected the subfamily Microraptorinae, attributing it to Senter et al., though this usage has only appeared on his online TaxonSearch database and has not been formally published. The extensive cladistic analysis conducted by Turner et al. (2012) further supported the monophyly of Dromaeosauridae. The cladogram below follows a 2015 analysis by DePalma et al. using updated data from the Theropod Working Group. Another cladogram constructed below follows the phylogenetic analysis conducted in 2017 by Cau et al. using the updated data from the Theropod Working Group in their description of Halszkaraptor. Paleobiology Senses Comparisons between the scleral rings of several dromaeosaurids (Microraptor, Sinornithosaurus, and Velociraptor) and modern birds and reptiles indicate that some dromaeosaurids (including Microraptor and Velociraptor) may have been nocturnal predators, while Sinornithosaurus is inferred to be cathemeral (active throughout the day at short intervals). However, the discovery of iridescent plumage in Microraptor has cast doubt on the inference of nocturnality in this genus, as no modern birds that have iridescent plumage are known to be nocturnal. Studies of the olfactory bulbs of dromaeosaurids reveal that they had similar olfactory ratios for their size to other non-avian theropods and modern birds with an acute sense of smell, such as tyrannosaurids and the turkey vulture, probably reflecting the importance of the olfactory sense in the daily activities of dromaeosaurids such as finding food. Feeding Dromaeosaurid feeding was discovered to be typical of coelurosaurian theropods, with a characteristic "puncture and pull" feeding method. Studies of wear patterns on the teeth of dromaeosaurids by Angelica Torices et al. indicate that dromaeosaurid teeth share similar wear patterns to those seen in the Tyrannosauridae and Troodontidae. However, microwear on the teeth indicated that dromaeosaurids likely preferred larger prey items than the troodontids they often shared their environment with. Such dietary differentiations likely allowed them to inhabit the same environment. The same study also indicated that dromaeosaurids such as Dromaeosaurus and Saurornitholestes (two dromaeosaurids analyzed in the study) likely included bone in their diet and were better adapted to handle struggling prey while troodontids, equipped with weaker jaws, preyed on softer animals and prey items such as invertebrates and carrion. Claw function There is currently disagreement about the function of the enlarged "sickle claw" on the second toe. When John Ostrom described it for Deinonychus in 1969, he interpreted the claw as a blade-like slashing weapon, much like the canines of some saber-toothed cats, used with powerful kicks to cut into prey. 
Adams (1987) suggested that the talon was used to disembowel large ceratopsian dinosaurs. The interpretation of the sickle claw as a killing weapon was applied to all dromaeosaurids. However, Manning et al. argued that the claw instead served as a hook, reconstructing the keratinous sheath with an elliptical cross section instead of the previously inferred inverted teardrop shape. In Manning's interpretation, the second toe claw would be used as a climbing aid when subduing bigger prey, and also as a stabbing weapon. Ostrom compared Deinonychus to the ostrich and cassowary. He noted that these bird species can inflict serious injury with the large claw on the second toe. The cassowary bears notably long claws, and Ostrom cited Gilliard (1958) in saying that they can sever an arm or disembowel a man. Kofron (1999 and 2003) studied 241 documented cassowary attacks and found that one human and two dogs had been killed, but found no evidence that cassowaries can disembowel or dismember other animals. Cassowaries use their claws to defend themselves, to attack threatening animals, and in agonistic displays such as the Bowed Threat Display. The seriema also has an enlarged second toe claw, and uses it to tear apart small prey items for swallowing. Phillip Manning and colleagues (2009) attempted to test the function of the sickle claw and of similarly shaped claws on the forelimbs. They analyzed the bio-mechanics of how stresses and strains would be distributed along the claws and into the limbs, using X-ray imaging to create a three-dimensional contour map of a forelimb claw from Velociraptor. For comparison, they analyzed the construction of a claw from a modern predatory bird, the eagle owl. They found that, based on the way stress was conducted along the claw, the claws were ideal for climbing. The scientists found that the sharpened tip of the claw was a puncturing and gripping instrument, while the curved and expanded claw base helped transfer stress loads evenly. The Manning team also compared the curvature of the dromaeosaurid "sickle claw" on the foot with curvature in modern birds and mammals. Previous studies had shown that the amount of curvature in a claw corresponds to the animal's lifestyle: animals with strongly curved claws of a certain shape tend to be climbers, while straighter claws indicate ground-dwelling lifestyles. The sickle claws of the dromaeosaurid Deinonychus have a curvature of 160 degrees, well within the range of climbing animals. The forelimb claws they studied also fell within the climbing range of curvature. Paleontologist Peter Makovicky commented on the Manning team's study, stating that small, primitive dromaeosaurids (such as Microraptor) were likely to have been tree-climbers, but that climbing did not explain why later, gigantic dromaeosaurids such as Achillobator retained highly curved claws when they were too large to have climbed trees. Makovicky speculated that giant dromaeosaurids may have adapted the claw to be used exclusively for latching on to prey. In 2009, Phil Senter published a study on dromaeosaurid toes and showed that their range of motion was compatible with the excavation of tough insect nests. Senter suggested that small dromaeosaurids such as Rahonavis and Buitreraptor were small enough to be partial insectivores, while larger genera such as Deinonychus and Neuquenraptor could have used this ability to catch vertebrate prey residing in insect nests.
However, Senter did not test whether the strong curvature of dromaeosaurid claws was also conducive to such activities. In 2011, Denver Fowler and colleagues suggested a new method by which dromaeosaurids may have taken smaller prey. This model, known as the "raptor prey restraint" (RPR) model of predation, proposes that dromaeosaurids killed their prey in a manner very similar to extant accipitrid birds of prey: by leaping onto their quarry, pinning it under their body weight, and gripping it tightly with the large, sickle-shaped claws. Like accipitrids, the dromaeosaurid would then begin to feed on the animal while it was still alive, until it eventually died from blood loss and organ failure. This proposal is based primarily on comparisons of the morphology and proportions of the feet and legs of dromaeosaurids with those of several groups of extant birds of prey with known predatory behaviors. Fowler found that the feet and legs of dromaeosaurids most closely resemble those of eagles and hawks, especially in having an enlarged second claw and a similar range of grasping motion. The short metatarsus and foot strength, however, would have been more similar to those of owls. The RPR method of predation would be consistent with other aspects of dromaeosaurid anatomy, such as their unusual dentition and arm morphology. The arms, which could exert a lot of force but were likely covered in long feathers, may have been used as flapping stabilizers for balance while atop a struggling prey animal, along with the stiff counterbalancing tail. Dromaeosaurid jaws, thought by Fowler and colleagues to be comparatively weak, would have been useful for eating prey alive but not as useful for the quick, forceful dispatch of prey. These predatory adaptations working together may also have implications for the origin of flapping in paravians. In 2019, Peter Bishop reconstructed the leg skeleton and musculature of Deinonychus by using three-dimensional models of muscles, tendons, and bones. With the addition of mathematical models and equations, Bishop simulated the conditions that would provide maximum force at the tip of the sickle claw, and therefore its most likely function. Among the proposed modes of sickle claw use are: kicking to cut, slash or disembowel prey; gripping onto the flanks of prey; piercing aided by body weight; attacking vital areas of the prey; restraining prey; intra- or interspecific competition; and digging prey out of hideouts. The results obtained by Bishop showed that a crouching posture increased the claw forces; however, these forces remained relatively weak, indicating that the claws were not strong enough to be used in slashing strikes. Rather than being used for slashing, the sickle claws were more likely to be useful at flexed leg angles, such as when restraining prey or stabbing prey at close quarters. These results are consistent with the Fighting Dinosaurs specimen, which preserves a Velociraptor and a Protoceratops locked in combat, with the former gripping onto the other with its claws in a non-extended leg posture. Despite the obtained results, Bishop considered that the capabilities of the sickle claw could have varied among taxa, given that among dromaeosaurids, Adasaurus had an unusually small sickle claw that retained the characteristic ginglymoid (a structure divided in two parts) and the hyperextensible articular surface of the penultimate phalanx. He could neither confirm nor rule out whether pedal digit II had lost or retained its function.
A 2020 study by Gianechini et al. also indicates that velociraptorines, dromaeosaurines and other eudromaeosaurs in Laurasia differed greatly in their locomotive and killing techniques from the unenlagiine dromaeosaurids of Gondwana. The shorter second phalanx in the second digit of the foot allowed increased force to be generated by that digit, which, combined with a shorter and wider metatarsus and a markedly hinge-like morphology of the articular surfaces of the metatarsals and phalanges, possibly allowed eudromaeosaurs to exert a greater gripping strength than unenlagiines, allowing more efficient subduing and killing of large prey. In comparison, the unenlagiine dromaeosaurids had a longer and more slender subarctometatarsus and less well-marked hinge joints, traits that possibly gave them greater cursorial capacities and allowed for greater speed. Additionally, the longer second phalanx of the second digit allowed unenlagiines to make fast movements with their feet's second digits to hunt smaller and more elusive types of prey. These differences in locomotor and predatory specializations may have been a key feature that influenced the evolutionary pathways that shaped both groups of dromaeosaurs in the northern and southern hemispheres.

Group behavior
Deinonychus fossils have been uncovered in small groups near the remains of the herbivore Tenontosaurus, a larger ornithischian dinosaur. This had been interpreted as evidence that these dromaeosaurids hunted in coordinated packs like some modern mammals. However, not all paleontologists found the evidence conclusive, and a subsequent study published in 2007 by Roach and Brinkman suggests that the Deinonychus may actually have displayed disorganized mobbing behavior. Modern diapsids, including birds and crocodiles (the closest relatives of dromaeosaurids), display minimal long-term cooperative hunting (exceptions include the aplomado falcon and Harris's hawk); instead, they are usually solitary hunters, either joining forces from time to time to increase hunting success (as crocodilians sometimes do), or being drawn to previously killed carcasses, where conflict often occurs between individuals of the same species. For example, in situations where groups of Komodo dragons are eating together, the largest individuals eat first and might attack smaller Komodo dragons that attempt to feed; if the smaller animal dies, it is usually cannibalized. When this information is applied to the sites containing putative pack-hunting behavior in dromaeosaurids, it appears somewhat consistent with a Komodo dragon-like feeding strategy. Deinonychus skeletal remains found at these sites are from subadults, with missing parts that may have been eaten by other Deinonychus, which a study by Roach et al. presented as evidence against the idea that the animals cooperated in the hunt. Different dietary preferences between juvenile and adult Deinonychus, published in 2020, indicate that the animal did not exhibit the complex, cooperative behavior seen in pack-hunting animals. Whether this extended to other dromaeosaurs is currently unknown. A third possible option is that dromaeosaurids did not exhibit long-term cooperative behaviour, but did show short-term cooperative behaviour as seen in crocodilians, which display both true cooperation and competition for prey. In 2001, multiple Utahraptor specimens, ranging in age from fully grown adult to tiny three-foot-long baby, were found at a site considered by some to be a quicksand predator trap.
Some consider this to be evidence of family hunting behavior; however, the full sandstone block has yet to be opened, and researchers are unsure whether the animals died at the same time. In 2007, scientists described the first known extensive dromaeosaurid trackway, in Shandong, China. In addition to confirming the hypothesis that the sickle claw was held retracted off the ground, the trackway (made by a large, Achillobator-sized species) showed evidence of six individuals of about equal size moving together along a shoreline. The individuals were spaced about one meter apart, traveling in the same direction and walking at a fairly slow pace. The authors of the paper describing these footprints interpreted the trackways as evidence that some species of dromaeosaurids lived in groups. While the trackways clearly do not represent hunting behavior, the idea that groups of dromaeosaurids may have hunted together, according to the authors, could not be ruled out.

Flying and gliding

The forearms of dromaeosaurids appear well adapted to resisting the torsional and bending stresses associated with flapping and gliding, and the ability to fly or glide has been suggested for at least five dromaeosaurid species. The first, Rahonavis ostromi (originally classified as an avian bird, but found to be a dromaeosaurid in later studies), may have been capable of powered flight, as indicated by its long forelimbs with evidence of quill knob attachments for long, sturdy flight feathers. The forelimbs of Rahonavis were more powerfully built than those of Archaeopteryx, and show evidence that they bore the strong ligament attachments necessary for flapping flight. Luis Chiappe concluded that, given these adaptations, Rahonavis could probably fly but would have been clumsier in the air than modern birds. Another species of dromaeosaurid, Microraptor gui, may have been capable of gliding using its well-developed wings on both the fore and hind limbs. A 2005 study by Sankar Chatterjee suggested that the wings of Microraptor functioned like a split-level "biplane", and that it likely employed a phugoid style of gliding, in which it would launch from a perch and swoop downward in a U-shaped curve, then lift again to land on another tree, with the tail and hind wings helping to control its position and speed. Chatterjee also found that Microraptor had the basic requirements to sustain level powered flight in addition to gliding. Changyuraptor yangi, a close relative of Microraptor gui, is also thought to have been a glider or flyer, based on the presence of four wings and similar limb proportions. However, it is a considerably larger animal, around the size of a wild turkey, and is among the largest known flying Mesozoic paravians. Another dromaeosaurid species, Deinonychus antirrhopus, may display partial flight capacities. The young of this species bore longer arms and more robust pectoral girdles than adults, similar to those seen in other flapping theropods, implying that they may have been capable of flight when young and then lost the ability as they grew. The possibility that Sinornithosaurus millenii was capable of gliding or even powered flight has also been raised several times, though no further studies have occurred. Zhenyuanlong preserves wing feathers that are aerodynamically shaped, with particularly bird-like coverts as opposed to the longer, wider-spanning coverts of forms like Archaeopteryx and Anchiornis, as well as fused sternal plates.
Due to its size and short arms, it is unlikely that Zhenyuanlong was capable of powered flight (though the importance of biomechanical modelling in this regard is stressed), but this may suggest a relatively close descent from flying ancestors, or even some capacity for gliding or wing-assisted incline running.

Paleopathology

In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to reflect regular behavior than other types of injuries are. The researchers found lesions like those caused by stress fractures on a dromaeosaurid hand claw, one of only two such claw lesions discovered in the course of the study. Stress fractures in the hands have special behavioral significance compared with those found in the feet, since stress fractures in the feet can be obtained while running or during migration. Hand injuries, by contrast, are more likely to be obtained while in contact with struggling prey.

Swimming

At least one dromaeosaurid group, the Halszkaraptorinae, is likely to have been specialized for aquatic or semiaquatic habits, having developed limb proportions, tooth morphology, and a rib cage akin to those of diving birds. Fishing habits have been proposed for unenlagiines, including comparisons to the putatively semi-aquatic spinosaurids, but aquatic propulsion mechanisms have not yet been discussed.

Reproduction

In 2006, Grellet-Tinner and Makovicky reported an egg associated with a specimen of Deinonychus. The egg shares similarities with oviraptorid eggs, and the authors interpreted the association as potentially indicative of brooding. A study published in November 2018 by Norell, Yang and Wiemann indicates that Deinonychus laid blue eggs, likely to camouflage them, and that it created open nests. Other dromaeosaurids may have done the same, and it is theorized that they and other maniraptoran dinosaurs may have been an origin point for the laying of colored eggs and the creation of open nests, as in many birds today.

In popular culture

Velociraptor, a dromaeosaurid, gained much attention after it was featured prominently in the 1993 Steven Spielberg film Jurassic Park. However, the dimensions of the Velociraptor in the film are much larger than those of the largest members of the genus. Robert Bakker recalled that Spielberg had been disappointed with the dimensions of Velociraptor and so upsized it. Gregory S. Paul, in his 1988 book Predatory Dinosaurs of the World, also considered Deinonychus antirrhopus a species of Velociraptor, and so rechristened the species Velociraptor antirrhopus. This taxonomic opinion has not been widely followed.

Timeline of dromaeosaurid genera
https://en.wikipedia.org/wiki/Plant%20physiology
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants. Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy, stomatal function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.

Aims

The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many scales of size and time. At the smallest scale are the molecular interactions of photosynthesis and the internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.

First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or foul-smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while still others are used to attract pollinators or herbivores to spread ripe seeds.

Secondly, plant physiology includes the study of the biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from the cells of animals, and which lead to major differences in the way plant life behaves and responds compared with animal life. For example, plant cells have a cell wall, which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.

Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, the minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.

Fourthly, plant physiologists study the ways that plants control or regulate internal functions.
Like animals, plants produce chemicals called hormones, which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and the loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.

Finally, plant physiology includes the study of plant responses to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.

Biochemistry of plants

The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary. Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid, from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.

Constituent elements

Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey. The following tables list the element nutrients essential to plants; uses within plants are generalized.

Pigments

Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.

Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess only chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
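The distribution of chlorophyll forms just described can be collected into a small lookup structure. A minimal sketch in Python, using only the facts stated above (the group labels are my own shorthand):

```python
# Chlorophyll forms by photosynthetic group, as summarized in the text.
CHLOROPHYLLS = {
    "land plants": {"a", "b"},
    "green algae": {"a", "b"},
    "photosynthetic heterokonts": {"a", "c"},  # kelps, diatoms, etc.
    "red algae": {"a"},
}

def shared_forms(group1: str, group2: str) -> set:
    """Chlorophyll forms two groups have in common."""
    return CHLOROPHYLLS[group1] & CHLOROPHYLLS[group2]

# Chlorophyll a is common to every group listed:
assert all("a" in forms for forms in CHLOROPHYLLS.values())
print(shared_forms("red algae", "land plants"))  # {'a'}
```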
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.

Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, depending on pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light.

Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cacti and amaranths), and never co-occurs with anthocyanins in the same plant. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains serve in the plants that possess them, but there is some preliminary evidence that they may have fungicidal properties.

Signals and regulators

Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.

Plant hormones

Plant hormones, also known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, that occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs; production is not limited to specific locations, and plant hormones are often not transported to other parts of the plant. Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth, affecting processes from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death. The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.

Photomorphogenesis

While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis).
The use of light to control structural development is called photomorphogenesis, and it is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light. Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long-wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.

The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses, including the germination of seeds, the elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.

Photoperiodism

Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long-day plants, short-day plants, or day-neutral plants, depending on their particular response to changes in day length. Long-day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short-day plants flower when the length of daylight falls below a certain critical level. Day-neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.

Although a short-day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short-day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short day length) before floral development can begin. It has been determined experimentally that a short-day (long-night) plant does not flower if a flash of phytochrome-activating light is used on the plant during the night. Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, as in the poinsettia (Euphorbia pulcherrima).
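The night-length rule described above lends itself to a small decision sketch. The following Python fragment is only a schematic illustration of the logic; the critical-night threshold and the function name are hypothetical, and real responses vary by species:

```python
def will_flower(plant_type: str, uninterrupted_dark_hours: float,
                critical_night_hours: float = 11.0) -> bool:
    """Schematic photoperiodism rule.

    Short-day (long-night) plants flower only when the longest
    uninterrupted dark period exceeds a critical length; a night-time
    flash of phytochrome-activating light resets that period.
    Long-day plants show the opposite response; day-neutral plants
    are returned as True purely for illustration, since photoperiod
    does not gate their flowering.
    """
    if plant_type == "short-day":
        return uninterrupted_dark_hours > critical_night_hours
    if plant_type == "long-day":
        return uninterrupted_dark_hours < critical_night_hours
    return True  # day-neutral

# A florist's night break: a 16 h night interrupted at its midpoint
# leaves only 8 h of uninterrupted darkness, keeping a short-day
# plant vegetative even though the total night is long.
print(will_flower("short-day", 16.0))  # True  (undisturbed long night)
print(will_flower("short-day", 8.0))   # False (night broken by light)
```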
Environmental physiology

Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.

Environmental physiologists examine plant responses to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with the pressure bomb) and the stress of drought or inundation, the exchange of gases with the atmosphere, and the cycling of nutrients such as nitrogen and carbon. Environmental physiologists also examine plant responses to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.

While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack pain receptors, nerves, and a brain, and, by extension, consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not, are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding its abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since plants lack any nervous system. The primary reason for this is that, unlike the members of the animal kingdom, whose evolutionary successes and failures are shaped by suffering, the evolution of plants is shaped simply by life and death.

Tropisms and nastic movements

Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sunlight, is called a tropism. A response to a non-directional stimulus, such as temperature or humidity, is a nastic movement. Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and the production of one or more plant hormones.

Nastic movements result from differential cell growth (e.g. epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g. nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus flytrap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
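The geometry behind the differential growth described above is simple enough to state numerically: treating a stem segment as an arc, the difference in elongation between its two flanks, divided by the stem width, gives the bending angle in radians. A toy calculation in Python, with purely illustrative numbers:

```python
import math

def bend_angle_deg(growth_shaded_mm: float, growth_lit_mm: float,
                   stem_width_mm: float) -> float:
    """Bend angle of a stem segment whose two flanks elongate unequally.

    Treating the segment as an arc, inner length = r * theta and outer
    length = (r + w) * theta, so the length difference is w * theta and
    theta = delta_L / w (in radians). The stem curves toward the side
    that grew less, e.g. toward the light in phototropism.
    """
    theta = (growth_shaded_mm - growth_lit_mm) / stem_width_mm
    return math.degrees(theta)

# If the shaded flank of a 2 mm wide seedling stem elongates 0.10 mm
# while the lit flank elongates 0.03 mm, the segment bends about two
# degrees toward the light:
print(f"{bend_angle_deg(0.10, 0.03, 2.0):.1f} degrees")
```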
Plant disease

Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms. Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.

One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.

History

Early history

Francis Bacon published one of the first plant physiology experiments in 1627 in the book Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight, and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on the growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water. Stephen Hales is considered the father of plant physiology for the many experiments in his 1727 book Vegetable Staticks, though it was Julius von Sachs who unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.

Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb them readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises and crop production, and as a hobby.
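The force of van Helmont's willow experiment described above is easiest to appreciate as a simple mass balance (16 ounces to the pound):

$$\frac{\Delta m_{\text{soil}}}{m_{\text{soil}}} = \frac{2\ \text{oz}}{200 \times 16\ \text{oz}} = \frac{2}{3200} \approx 0.06\%$$

With the soil essentially unchanged, van Helmont attributed all of the tree's added mass to water; the contribution of atmospheric carbon dioxide was not yet known.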
Economic applications

Food production

In horticulture and agriculture, along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include: climatic requirements, fruit drop, nutrition, ripening and fruit set. The production of food crops also hinges on the study of plant physiology, covering such topics as optimal planting and harvesting times and post-harvest storage of plant products for human consumption, as well as the production of secondary products such as drugs and cosmetics. Crop physiology steps back and looks at a field of plants as a whole, rather than at each plant individually: it considers how plants respond to one another and how to maximize results such as food production by determining factors like optimal planting density.
https://en.wikipedia.org/wiki/Dentures
Dentures
Dentures (also known as false teeth) are prosthetic devices constructed to replace missing teeth, supported by the surrounding soft and hard tissues of the oral cavity. Conventional dentures are removable (removable partial dentures or complete dentures). However, there are many denture designs, some of which rely on bonding or clasping onto teeth or dental implants (fixed prosthodontics). There are two main categories of dentures, the distinction being whether they fit onto the mandibular arch or the maxillary arch.

Medical uses

Dentures can help people via:
Mastication: chewing ability is improved by the replacement of edentulous (lacking teeth) areas with denture teeth.
Aesthetics: the presence of teeth gives a natural appearance to the face, and wearing a denture to replace missing teeth provides support for the lips and cheeks and corrects the collapsed appearance that results from the loss of teeth.
Pronunciation: replacing missing teeth, especially the anteriors, enables patients to speak better, enunciating sibilants and fricatives in particular more easily.
Self-esteem: improved looks and speech boost confidence in patients' ability to interact socially.

Complications

Stomatitis

Denture stomatitis is an inflammatory condition of the mucosa under the dentures. It can affect both partial and complete denture wearers, and is most commonly seen on the palatal mucosa. Clinically, it appears as simple localized inflammation (Type I), generalized erythema covering the denture-bearing area (Type II) or inflammatory papillary hyperplasia (Type III). People with denture stomatitis are more likely to have angular cheilitis. Denture stomatitis is caused by a mixed infection of Candida albicans (90%) and a number of bacteria such as Staphylococcus, Streptococcus, Fusobacterium and Bacteroides species. Acrylic resin is more susceptible to fungal colonization, adherence and proliferation. In poorly fitting dentures, the resulting inflammation may present as a common mouth sore, depending on its severity. Denture stomatitis ranks among the most prevalent conditions affecting denture wearers, affecting approximately 70% of this population. Early recognition of its signs and symptoms is vital for prompt treatment; these include white or red oral patches, sore throat, pain or discomfort when swallowing, and sores in the mouth. Common risk factors for denture stomatitis include denture trauma, poor denture hygiene and nocturnal denture wear. Additionally, systemic risk factors such as nutritional deficiencies, immunosuppression, smoking, diabetes, use of steroid inhalers, and xerostomia play a significant role. It is therefore important to conduct thorough examinations to detect any underlying systemic diseases.

Precautions

Denture wearers should have the fit of ill-fitting dentures improved to eliminate dental trauma. Good denture hygiene, including cleaning the denture, soaking it in a disinfectant solution and not wearing it during sleep at night, is the key to treating all types of denture stomatitis. Topical application and systemic use of antifungal agents can be used to treat cases of denture stomatitis that fail to respond to local conservative measures.

Ulceration

Mouth ulceration is the most common lesion in people with dentures.
It can be caused by repetitive minor trauma, such as from poorly fitting dentures, including over-extension of a denture. Pressure-indicating paste can be used to check the fit of dentures: it allows areas of premature contact to be distinguished from areas of physiologic tissue contact, so that the offending area can be polished down with an acrylic bur. Leaching of residual methyl methacrylate monomer from inadequately cured denture acrylic resin can also cause mucosal irritation and hence oral ulceration. Patients are advised to use warm salt-water mouth rinses, and a betamethasone rinse can help the ulcer heal. Oral ulcerations persisting for more than three weeks should be reviewed.

Tooth loss

People can become entirely edentulous for many reasons, the most prevalent being removal due to dental disease, which typically relates to oral flora control, i.e., periodontal disease and tooth decay. Other reasons include pregnancy, tooth developmental defects caused by severe malnutrition, genetic defects such as dentinogenesis imperfecta, trauma, or drug use. Periodontitis is defined as an inflammatory lesion, mediated by host-pathogen interaction, that results in the loss of connective tissue fiber attachment to the root surface and ultimately to the alveolar bone. It is the loss of connective tissue attachment to the root surface that leads to teeth falling out.

The hormones associated with pregnancy increase the risk of gingivitis and vomiting. Hormones released during pregnancy soften the cardia, the muscular ring that keeps food within the stomach. Hydrochloric acid is the acid involved in the gastric reflux and vomiting of morning sickness. This acid, at a pH of 1.5–3.5, coats the enamel on the teeth, mainly affecting the palatal surfaces of the maxillary teeth. Eventually the enamel is softened and easily wears away. Dental trauma refers to trauma (injury) to the teeth and/or periodontium (gums, periodontal ligament, alveolar bone). Strong force may cause the root of a tooth to dislocate completely from its socket, while mild trauma may cause the tooth to chip.

Types

Removable partial dentures

Removable partial dentures are for patients who are missing some of their teeth on a particular arch. Fixed partial dentures, also known as "crown and bridge" dentures, are made from crowns that are fitted on the remaining teeth. These act as abutments and pontics and are made from materials resembling the missing teeth. Fixed bridges are more expensive than removable appliances but are more stable. Another option in this category is the flexible partial, which takes advantage of innovations in digital technology; flexible partial fabrication involves only non-invasive procedures. Dentures can be difficult to clean and can affect oral hygiene.

Complete dentures

Complete dentures are worn by patients who are missing all of the teeth in a single arch—i.e. the maxillary (upper) or mandibular (lower) arch—or, more commonly, in both arches. The full denture is removable, as it is held in place by suction. Complete dentures are painful at first and can take some time to get used to. There are two types of full dentures: immediate dentures and conventional dentures.

Copy dentures

Copy dentures can be made for partial but mainly for complete denture patients. These dentures require fewer visits to make and are usually made for older patients, patients who would have difficulty adjusting to new dentures, patients who would like a spare pair of dentures, or those who already like the aesthetics of their existing dentures.
This requires taking an impression of the patient's current denture and remaking it.

Materials

Dentures are mainly made from acrylic, due to the ease of manipulating the material and its likeness to intra-oral tissues, i.e. the gums. Most dentures are composed of heat-cured polymethyl methacrylate acrylic and rubber-reinforced polymethyl methacrylate. Coloring agents and synthetic fibers are added to obtain the tissue-like shade and to mimic the small capillaries of the oral mucosa, respectively. However, dentures made from acrylic can be fragile and fracture easily if the patient has trouble adapting their neuromuscular control. This can be overcome by reinforcing the denture base with cobalt chromium (Co-Cr); such bases are often thinner (and therefore more comfortable) and stronger (to prevent repeated fractures).

History

As early as the 7th century BC, Etruscans in northern Italy made partial dentures out of human or other animal teeth fastened together with gold bands. The Romans had likely borrowed this technique by the 5th century BC. A text by Martial (c. AD 40–103) referenced Cascellius, who extracted or repaired painful teeth; H. L. Strömgren (1935) postulated that by "repairing" was meant tooth replacement, not tooth filling.

Wooden full dentures were invented in Japan around the early 16th century. Softened beeswax was inserted into the patient's mouth to create an impression, which was then filled with harder beeswax. Wooden dentures were then meticulously carved based on that model. The earliest of these dentures were entirely wooden, but later versions used natural human teeth or sculpted pagodite, ivory, or animal horn for the teeth. These dentures were built with a broad base, exploiting the principles of adhesion to stay in place. This was an advanced technique for the era; it would not be replicated in the West until the late 18th century. Wooden dentures continued to be used in Japan until the opening of Japan to the West in the 19th century.

In 1728, Pierre Fauchard described the construction of dentures using a metal frame and teeth sculpted from animal bone. The first porcelain dentures were made around 1770 by Alexis Duchâteau. In 1791, the first British patent was granted to Nicholas Dubois De Chemant, previously assistant to Duchâteau, for "De Chemant's Specification". He began selling his wares in 1792, with most of his porcelain paste supplied by Wedgwood. In 17th-century London, Peter de la Roche is believed to have been one of the first "operators for the teeth", men who advertised themselves as specialists in dental work. They were often professional goldsmiths, ivory turners or students of barber-surgeons. In 1820, Samuel Stockton, a goldsmith by trade, began manufacturing high-quality porcelain dentures mounted on 18-carat gold plates. Later dentures, from the 1850s onwards, were made of Vulcanite, a form of hardened rubber into which porcelain teeth were set. In the 20th century, acrylic resin and other plastics were used. In Britain, sequential Adult Dental Health Surveys revealed that in 1968, 79% of those aged 65–74 had no natural teeth; by 1998, this proportion had fallen to 36%.

George Washington

George Washington (1732–1799) suffered from problems with his teeth throughout his life, and historians have tracked his experiences in great detail. He lost his first adult tooth when he was twenty-two and had only one left by the time he became president. He had several sets of false teeth made, four of them by a dentist named John Greenwood.
None of the sets, contrary to popular belief, were made from wood or contained any wood. The set made when he became president was carved from hippopotamus and elephant ivory, held together with gold springs. Prior to these, he had a set made with real human teeth, likely ones he purchased from "several unnamed Negroes, presumably Mount Vernon slaves" in 1784.

Manufacturing

Modern dentures are most often fabricated in a commercial dental laboratory or by a denturist using tissue-shaded powders of polymethyl methacrylate (PMMA) acrylic. These acrylics are available as heat-cured or cold-cured types. Commercially produced acrylic teeth are widely available in hundreds of shapes and tooth colors.

The process of fabricating a denture usually begins with an initial dental impression of the maxillary and mandibular ridges, taken with standard impression materials. The initial impression is used to create a simple stone model that represents the maxillary and mandibular arches of the patient's mouth; this is not a detailed impression at this stage. The stone model is then used to create a custom impression tray, which is used to take a second, much more detailed and accurate impression of the patient's maxillary and mandibular ridges. Polyvinyl siloxane is one of several very accurate impression materials used when the final impression of the ridges is taken. A wax rim is fabricated to assist the dentist or denturist in establishing the vertical dimension of occlusion. After this, a bite registration is created to marry the position of one arch to the other.

Once the relative position of each arch to the other is known, the wax rim can be used as a base to place the selected denture teeth in the correct position. This arrangement of teeth is tested in the mouth so that adjustments can be made to the occlusion. After the occlusion has been verified by the dentist or denturist and the patient, and all phonetic requirements are met, the denture is processed.

Processing a denture is usually performed using a lost-wax technique, whereby the form of the final denture, including the acrylic denture teeth, is invested in stone. This investment is then heated, and when the wax melts it is removed through a spruing channel. The remaining cavity is then filled, by forced injection or pouring, with the uncured denture acrylic, which is either a heat-cured or cold-cured type. During the processing period, heat-cured acrylics—also called permanent denture acrylics—go through a process called polymerization, which causes the acrylic materials to bond very tightly and takes several hours to complete. After the curing period, the stone investment is removed, the acrylic is polished, and the denture is complete. The end result is a denture that looks much more natural, is much stronger and more durable than a cold-cured temporary denture, resists stains and odors, and will last for many years.

Cold-cured or cold-pour dentures, also known as temporary dentures, do not look as natural, are less durable, tend to be highly porous and are only used as a temporary expedient until a more permanent solution is found. These dentures tend to cost much less due to their quick production time (usually minutes) and their composition of low-cost materials. It is not advisable for a patient to wear a cold-cured denture for a long period of time, as they are prone to cracks and can break rather easily.
Prosthodontic principles

Support

Support is the principle that describes how well the underlying mucosa (oral tissues, including gums) keeps the denture from moving vertically towards the arch in question during chewing, and thus from being excessively depressed and moving deeper into the arch. For the mandibular arch, this function is provided primarily by the buccal shelf, a region extending laterally from the posterior ridges, and by the pear-shaped pad (the most posterior area of keratinized gingiva, formed by the scaling down of the retromolar papilla after the extraction of the last molar tooth). Secondary support for the complete mandibular denture is provided by the alveolar ridge crest. The maxillary arch receives primary support from the horizontal hard palate and the posterior alveolar ridge crest. The larger the denture flanges (the parts of the denture that extend into the vestibule), the better the stability (another parameter used to assess the fit of a complete denture). Flanges extending beyond the functional depth of the sulcus are a common error in denture construction, often (but not always) leading to movement in function and to ulcerations (denture sore spots).

Stability

Stability is the principle that describes how well the denture base is prevented from moving in the horizontal plane, and thus from sliding side to side or front to back. The more the denture base (the pink material) is in smooth and continuous contact with the edentulous ridge (the hill upon which the teeth used to reside, now residual alveolar bone with overlying mucosa), the better the stability. Of course, the higher and broader the ridge, the better the stability will be, but this is usually a matter of patient anatomy, barring surgical intervention (bone grafts, etc.).

Retention

Retention is the principle that describes how well the denture is prevented from moving vertically in the direction opposite to that of insertion. The better the topographical mimicry of the intaglio (interior) surface of the denture base to the surface of the underlying mucosa, the better the retention will be (in removable partial dentures, the clasps are a major provider of retention), as surface tension, suction and friction will aid in keeping the denture base from breaking intimate contact with the mucosal surface. The most critical element in the retentive design of a maxillary complete denture is a complete and total border seal (complete peripheral seal), needed to achieve "suction". The border seal is composed of the edges of the anterior and lateral aspects and the posterior palatal seal. The posterior palatal seal is accomplished by covering the entire hard palate and extending not beyond the soft palate, ending 1–2 mm from the vibrating line. Prosthodontists use a scale called the Kapur index to quantify denture stability and retention.

Implant technology can vastly improve the patient's denture-wearing experience by increasing stability and preventing bone from wearing away. Implants can also aid retention: instead of merely placing the implants to serve as a blocking mechanism against the denture pressing on the alveolar bone, small retentive appliances can be attached to the implants that snap into a modified denture base, allowing for tremendously increased retention. Available options include a metal "Hader bar" or precision ball attachments.
Fit, maintenance and relining

Generally speaking, partial dentures tend to be held in place by the presence of the remaining natural teeth, while complete dentures tend to rely on muscular coordination and limited suction to stay in place. The maxilla very commonly has more favorable denture-bearing anatomy, as the ridge tends to be well formed and there is a larger area on the palate for suction to retain the denture. Conversely, the mandible tends to make lower dentures much less retentive, due to the displacing presence of the tongue and the higher rate of resorption, frequently leading to significantly resorbed lower ridges. Disto-lingual regions tend to offer retention even in highly resorbed mandibles, and extension of the flange into these regions tends to produce a more retentive lower denture. An implant-supported lower denture is another option for improving retention.

Dentures that fit well during the first few years after creation will not necessarily fit well for the rest of the wearer's lifetime. This is because the bone and mucosa of the mouth are living tissues, which are dynamic over decades. Bone remodeling never stops in living bone. Edentulous jaw ridges tend to resorb progressively over the years, especially the alveolar ridge of the lower jaw. Mucosa reacts to being chronically rubbed by the dentures. Poorly fitting dentures hasten both of those processes compared to the rates with well-fitting dentures, and may also lead to the development of conditions such as epulis fissuratum. In addition, the occlusion (the chewing surfaces of the teeth) tends to wear away over time, which reduces chewing efficacy and decreases the vertical dimension of occlusion (the "openness" of the jaws and mouth).

Costs

In countries where denturism is legally performed by denturists, it is typically a denturist association that publishes the fee guide; in countries where it is performed by dentists, it is typically a dental association. Some governments also provide additional coverage for the purchase of dentures by seniors. Typically, only standard low-cost dentures are covered by insurance, and because many individuals would prefer a premium cosmetic denture or a premium precision denture, they rely on consumer dental patient financing options.

A low-cost denture starts at about $300–$500 per denture, or $600–$1,000 for a complete set of upper and lower dentures. These tend to be cold-cured dentures, which are considered temporary because of the lower-quality materials and streamlined processing methods used in their manufacture. In many cases, there is no opportunity to try them on for fit before they are finished, and they tend to look artificial rather than natural, unlike higher-quality, higher-priced dentures.

A mid-priced (and better-quality) heat-cured denture typically costs $500–$1,500 per denture, or $1,000–$3,000 for a complete set. The teeth look much more natural and are much longer-lasting than cold-cured or temporary dentures. In many cases, they may be tried out before they are finished to ensure that all the teeth occlude (meet) properly and look esthetically pleasing. These usually come with a 90-day to two-year warranty and, in some cases, a money-back guarantee if the customer is not satisfied. In some cases, the cost of subsequent adjustments to the dentures is included. Premium heat-cured dentures can cost $2,000–$4,000 per denture, or $4,000–$8,000 or more for a set.
Dentures in this price range are usually completely customized and personalized, use high-end materials to simulate the lifelike look of gums and teeth as closely as possible, last a long time and are warrantied against chipping and cracking for 5–10 years or longer. Often the price includes several follow-up visits to fine-tune the fit. In the United Kingdom, as of 13 March 2018, an NHS patient must pay £244.30 for a denture to be made. This is a flat rate, and no additional charges may be made for the materials used or the appointments needed. Privately, the cost can run upwards of £300.

Care

Daily cleaning of dentures is recommended. Plaque and tartar can build up on false teeth, just as they do on natural teeth. Cleaning can be done using chemical or mechanical denture cleaners. Dentures should not be worn continuously, but rather taken out of the mouth during sleep. This is to give the tissues a chance to recover: wearing dentures at night has been likened to sleeping in shoes. The main risk is the development of fungal infections, especially denture-related stomatitis. Dentures should also be removed while smoking, as the heat can damage the denture acrylic, and overheated acrylic can burn the soft tissues.

Deposits such as microbial plaque, calculus and food debris can accumulate on dentures, which may lead to issues such as angular stomatitis, denture stomatitis, undesirable odors and tastes, and staining. These deposits can also hasten the degradation of some denture materials. Because of these deposits, there is an increased risk of the denture wearer and people around them developing a systemic disease caused by organisms such as methicillin-resistant Staphylococcus aureus (MRSA); research shows, however, that denture cleaners are effective against MRSA. Denture cleaning is therefore imperative for the overall health of denture wearers as well as for the health of the people they come into contact with.

Brushing

After receiving dentures, the patient should brush them often with soap, water and a soft nylon toothbrush with a small head, as this will enable the brush to reach into all areas of the denture surface. The bristles must be soft in order to conform easily to the contours of the dentures for adequate cleaning; stiff bristles will not conform well and are likely to cause abrasion of the denture acrylic resin. If a patient finds it difficult to use a toothbrush, e.g. a patient with arthritis, a brush with easy-grip modifications may be used. Disclosing solutions can be used at home to make less obvious plaque deposits visible and so ensure thorough cleaning; food dyes can serve as a disclosing solution when used correctly. Instead of brushing their dentures with soap and water, patients can use pastes designed for dentures or conventional toothpaste; however, the American Dental Association advises against toothpaste, as it can be too harsh for cleaning dentures.

Immersion

Patients should combine brushing their dentures with soaking them in an immersion cleaner from time to time, as this combined cleaning strategy has been shown to control denture plaque. Owing to microbial invasion, a lack of immersion cleaning and inadequate denture plaque control will cause rapid deterioration of the soft linings of a denture.

Cleansers and methods

Liquid cleansers that dentures can be immersed in include: bleaches (e.g. sodium hypochlorite); effervescent solutions (e.g.
alkaline peroxides, perborates and persulfates); and acid cleansers.

Sodium hypochlorite cleansers

Sodium hypochlorite (NaOCl) cleansers have a disinfectant action and remove non-viable organisms and other deposits from the surface, but they are poor at eliminating calculus from the denture surface. Occasionally immersing dentures in a hypochlorite solution for more than six hours will eliminate plaque and staining; furthermore, as microbial invasion is prevented, deterioration of the soft lining material does not occur. Corrosion of cobalt chromium has occurred when hypochlorite cleansers have been used, and they may also cause fading of the acrylic and of silicone linings, though the softness and elasticity of the linings are not greatly changed.

Effervescent cleansers

Effervescent cleansers are the most popular immersion cleansers and include alkaline peroxides, perborates and persulfates. Their cleansing action occurs through the formation of small bubbles, which displace loosely attached material from the surface of the denture. They are not very effective as cleansers and have a restricted ability to eliminate microbial plaque. Nevertheless, they are safe to use and do not cause deterioration of the acrylic resin or the metals used in denture construction, although they can cause rapid damage to some short-term soft lining materials. Discoloration of the acrylic resin towards white often occurs; however, this can be due to the use of very hot water with the cleaning agents, against manufacturer instructions.

Acid cleansers

Sulfamic acid is a type of acid cleanser used to prevent the formation of calculus on dentures. Sulfamic acid has very good compatibility with many denture materials, including the metals used in denture construction. Five percent hydrochloric acid is another type of acid cleanser: the denture is immersed in the hydrochloric cleanser to soften the calculus so that it can be brushed away. The acid can damage clothes if accidentally spilt, and can cause corrosion of cobalt-chromium or stainless steel if these are immersed in the acid often and for long periods of time.

Other denture cleaning methods

Other denture cleaning methods include enzymes, ultrasonic cleansers and microwave exposure. A Cochrane review found that there is weak evidence to support soaking dentures in effervescent tablets or in enzymatic solutions, and while the most effective method for eliminating plaque is not clear, the review shows that brushing with paste eliminates microbial plaque better than inactive methods. There is a need for studies to report the cost of materials and the negative effects that may be associated with their use, as these factors could affect the acceptability of such materials to patients, which will in turn affect their effectiveness in a daily setting in the long term. Putting dentures into a dishwasher overnight can be a useful shortcut when away from home. Additionally, further studies comparing the different methods of cleaning dentures are needed.

Broken dentures

Dentures sometimes break, often during eating or when dropped during cleaning. A repair or replacement should be sought as soon as possible to restore function and aesthetics; the continued wearing of a broken denture causes unnecessary intra-oral tissue irritation, which may increase the risk of infection and other pathologies, including malignancies.
https://en.wikipedia.org/wiki/Nosebleed
Nosebleed
A nosebleed, also known as epistaxis, is an instance of bleeding from the nose. Blood can flow down into the stomach and cause nausea and vomiting. In more severe cases, blood may come out of both nostrils. Rarely, bleeding may be so significant that low blood pressure occurs. Blood may also be forced up through the nasolacrimal duct and out of the eye, producing bloody tears. Risk factors include trauma (including nose picking), blood thinners, high blood pressure, alcoholism, seasonal allergies, dry weather, and inhaled corticosteroids. There are two types: anterior, which is more common, and posterior, which is less common but more serious. Anterior nosebleeds generally arise from Kiesselbach's plexus, while posterior bleeds generally arise from the sphenopalatine artery or Woodruff's plexus. The diagnosis is by direct observation.

Prevention may include the use of petroleum jelly in the nose. Initially, treatment is generally the application of pressure for at least five minutes over the lower half of the nose. If this is not sufficient, nasal packing may be used. Tranexamic acid may also be helpful. If bleeding episodes continue, endoscopy is recommended.

About 60% of people have a nosebleed at some point in their life, and about 10% of nosebleeds are serious. Nosebleeds are rarely fatal, accounting for only 4 of the 2.4 million deaths in the U.S. in 1999. Nosebleeds most commonly affect those younger than 10 and older than 50.

Cause

Nosebleeds can occur for a variety of reasons. Some of the most common causes include trauma from nose picking, blunt trauma (such as a motor vehicle accident), or insertion of a foreign object (more likely in children). Low relative humidity (such as in centrally heated buildings), respiratory tract infections, chronic sinusitis, rhinitis or environmental irritants can cause inflammation and thinning of the tissue in the nose, leading to a greater likelihood of bleeding from the nose. Most causes of nose bleeding are self-limiting and do not require medical attention. However, if nosebleeds are recurrent or do not respond to home therapies, an underlying cause may need to be investigated. Some rarer causes are listed below:

Coagulopathy
Thrombocytopenia (thrombotic thrombocytopenic purpura, idiopathic thrombocytopenic purpura)
Von Willebrand's disease
Hemophilia
Leukemia
HIV
Chronic liver disease—cirrhosis causes deficiency of factors II, VII, IX and X

Dietary
Sulfur dioxide (E220, a food preservative used particularly in wines, dried fruits, etc.)
Sulfites as food preservatives
Salicylates naturally occurring in some fruits and vegetables

Inflammatory
Granulomatosis with polyangiitis
Systemic lupus erythematosus

Medications/drugs
Anticoagulants (warfarin, heparin, aspirin, etc.)
Insufflated drugs (particularly cocaine)
Nasal sprays (particularly prolonged or improper use of nasal steroids)

Neoplastic
Squamous cell carcinoma
Adenoid cystic carcinoma
Melanoma
Nasopharyngeal carcinoma
Nasopharyngeal angiofibroma

Nosebleeds can be a sign of cancer in the sinus area, which is rare, or of tumors starting at the base of the brain, such as meningioma. Due to the sensitive location, nosebleeds caused by tumors are typically associated with other symptoms, such as hearing or vision problems.

Traumatic
Anatomical deformities (e.g. septal spurs)
septal spurs) Blunt trauma (usually a sharp blow to the face such as a punch, sometimes accompanying a nasal fracture) Foreign bodies (such as fingers during nose-picking) Digital trauma (nose picking) Middle ear barotrauma (such as from descent in aircraft or ascent in scuba diving) Nasal bone fracture Septal fracture/perforation Intranasal tumors (e.g. nasopharyngeal carcinoma or nasopharyngeal angiofibroma) Nasal cannula O2 (tending to dry the olfactory mucosa) Nasal sprays (particularly prolonged or improper use of nasal steroids) Surgery (e.g. septoplasty and functional endoscopic sinus surgery) Leech infestation Nasal bleeds may also be due to fractures of the facial bones, namely the maxilla and zygoma. Vascular Hereditary hemorrhagic telangiectasia (Osler–Weber–Rendu disease) Angioma Aneurysm of the carotid artery Pathophysiology The nasal mucosa contains a rich blood supply that can be easily ruptured and cause bleeding. Rupture may be spontaneous or initiated by trauma. Nosebleeds are reported in up to 60% of the population, with peak incidences in those under the age of ten and over the age of 50, and appear to occur more often in males than in females. An increase in blood pressure (e.g. due to general hypertension) tends to increase the duration of spontaneous epistaxis. Anticoagulant medication and disorders of blood clotting can promote and prolong bleeding. Spontaneous epistaxis is more common in the elderly as the nasal mucosa (lining) becomes dry and thin and blood pressure tends to be higher. The elderly are also more prone to prolonged nosebleeds as their blood vessels are less able to constrict and control the bleeding. The vast majority of nosebleeds occur in the anterior (front) part of the nose, from the nasal septum. This area is richly endowed with blood vessels (Kiesselbach's plexus). This region is also known as Little's area. Bleeding farther back in the nose is known as a posterior bleed and is usually due to bleeding from Woodruff's plexus, a venous plexus situated in the posterior part of the inferior meatus. Posterior bleeds are often prolonged and difficult to control. They can be associated with bleeding from both nostrils and with a greater flow of blood into the mouth. Sometimes blood flowing from other sources of bleeding passes through the nasal cavity and exits the nostrils. It is thus blood coming from the nose but is not a true nosebleed, that is, not truly originating from the nasal cavity. Such bleeding is called "pseudoepistaxis" (pseudo + epistaxis). Examples include blood coughed up through the airway and ending up in the nasal cavity, then dripping out. Prevention People with uncomplicated nosebleeds can use conservative methods to prevent future nosebleeds, such as sleeping in a humidified environment or applying petroleum jelly to the nares. Individuals who suffer from nosebleeds regularly, especially children, are encouraged to use over-the-counter nasal saline sprays and avoid vigorous nose-blowing as preventative measures. Treatment Most anterior nosebleeds can be stopped by applying direct pressure, which helps by promoting blood clots. Those who have a nosebleed should first attempt to blow out any blood clots and then apply pressure to the soft anterior part of the nose (by pinching the nasal ala, not the bony nasal bridge) for at least five minutes and up to 30 minutes. Pressure should be firm, and tilting the head forward helps decrease the chance of nausea and airway obstruction due to blood dripping into the airway.
When attempting to stop a nosebleed at home, the head should not be tilted back. Swallowing excess blood can irritate the stomach and cause vomiting. Vasoconstrictive medications such as oxymetazoline (Afrin) or phenylephrine are widely available over the counter for treatment of allergic rhinitis and may also be used to control benign cases of epistaxis. For example, a few sprays of oxymetazoline may be applied into the bleeding side(s) of the nose, followed by application of direct pressure. Those with nosebleeds that last longer than 30 minutes (despite use of direct pressure and vasoconstrictive medications such as oxymetazoline) should seek medical attention. Chemical cauterization This method involves applying a chemical such as silver nitrate to the nasal mucosa, which burns and seals off the bleeding. Eventually the nasal tissue to which the chemical is applied will undergo necrosis. This form of treatment is best for mild bleeds, especially in children, that are clearly visible. A topical anesthetic (such as lidocaine) is usually applied prior to cauterization. Silver nitrate can cause blackening of the skin due to silver sulfide deposits, though this will fade with time. Once the silver nitrate is deposited, saline may be used to neutralize any excess silver nitrate via formation of a silver chloride precipitate. Nasal packing If pressure and chemical cauterization cannot stop bleeding, nasal packing is the mainstay of treatment. Nasal packing is typically categorized into anterior nasal packing and posterior nasal packing. Nasal packing may also be categorized into dissolvable and non-dissolvable types. Dissolvable nasal packing materials stop bleeding through use of thrombotic agents that promote blood clots, such as Surgicel and Gelfoam. The thrombogenic foams and gels do not require removal and dissolve after a few days. Typically, dissolvable nasal packing is first attempted; if the bleeding persists, non-dissolvable nasal packing is the next option. Traditionally, nasal packing was accomplished by packing gauze into the nose, thereby placing pressure on the vessels in the nose and stopping the bleeding. Traditional gauze packing has been replaced with other non-dissolvable nasal packing products such as Merocel and the Rapid Rhino. The Merocel nasal tampon is similar to gauze packing except that it is a synthetic foam polymer (made of polyvinyl alcohol) that expands in the nose after application of water and provides a less hospitable medium for bacteria. The Rapid Rhino stops nosebleeds using a balloon catheter, made of carboxymethylcellulose, which has a cuff that is inflated with air to stop bleeding through extra pressure in the nasal cavity. Systematic review articles have demonstrated that the efficacy in stopping nosebleeds is similar between the Rapid Rhino and Merocel packs; however, the Rapid Rhino has been shown to have greater ease of insertion and reduced discomfort. Posterior nasal packing can be achieved by using a Foley catheter, blowing up the balloon when it is in the back of the throat, and applying anterior traction so that the inflated balloon occludes the choanae. Patients who receive non-dissolvable nasal packing need to return to a medical professional in 24–72 hours in order to have the packing removed. Complications of non-dissolvable nasal packing include abscesses, septal hematomas, sinusitis, and pressure necrosis. In rare cases, toxic shock syndrome can occur with prolonged nasal packing.
As a result, any patient who has non-dissolvable nasal packing should be given prophylactic antibiotic medication to be taken as long as the nasal packing remains in the nose. Surgery Ongoing bleeding despite good nasal packing is a surgical emergency and can be treated by endoscopic evaluation of the nasal cavity under general anesthesia to identify an elusive bleeding point or to directly ligate (tie off) the blood vessels supplying the nose. These blood vessels include the sphenopalatine artery and the anterior and posterior ethmoidal arteries. More rarely, the maxillary artery or a branch of the external carotid artery can be ligated. The bleeding can also be stopped by intra-arterial embolization using a catheter placed in the groin and threaded up the aorta to the bleeding vessel by an interventional radiologist. There is no difference in outcomes between embolization and ligation as treatment options, but embolization is considerably more expensive. Continued bleeding may be an indication of more serious underlying conditions. Tranexamic acid Tranexamic acid helps promote blood clotting. For nosebleeds, it can be applied to the site of bleeding, taken by mouth, or injected into a vein. Other The utility of local cooling of the head and neck is controversial. Some state that applying ice to the nose or forehead is not useful. Others feel that it may promote vasoconstriction of the nasal blood vessels and thus be useful. In Indonesian traditional medicine, betel leaf is used to stop nosebleeds, as it contains tannin, which causes blood to coagulate, thus stopping active bleeding. Society and culture In the visual language of Japanese manga and anime, a nosebleed often indicates that the bleeding person is sexually aroused. In Western fiction, nosebleeds often signify intense mental focus or effort, particularly during the use of psychic powers. In American and Canadian usage, "nosebleed section" and "nosebleed seats" are common slang for seating at sporting or other spectator events that is the highest up and farthest away from the event. The reference alludes to the propensity for nasal hemorrhage at high altitudes, usually owing to lower barometric pressure. The oral history of the Native American Sioux tribe includes reference to women who experience nosebleeds as a result of a lover's playing of music, implying sexual arousal. In the Finnish language, "picking blood from one's nose" and "begging for a nosebleed" are common expressions used figuratively to describe self-destructive behaviour, for example ignoring safety procedures or deliberately aggravating stronger parties. In Filipino slang, to "have a nosebleed" is to have serious difficulty conversing in English with a fluent or native English speaker. It can also refer to anxiety brought on by a stressful event such as an examination or a job interview. In the Dutch language, "pretending to have a nosebleed" is a saying that means pretending not to know anything about something. Etymology The word epistaxis is from Greek epistazo, "to bleed from the nose", from epi, "above, over", and stazo, "to drip [from the nostrils]".
Biology and health sciences
Types
Health
14828359
https://en.wikipedia.org/wiki/Pulse%20%28physics%29
Pulse (physics)
In physics, a pulse is a generic term describing a single disturbance that moves through a transmission medium. This medium may be vacuum (in the case of electromagnetic radiation) or matter, and may be indefinitely large or finite. Pulse reflection Consider a pulse moving through a medium, perhaps through a rope or a slinky. When the pulse reaches the end of that medium, what happens to it depends on whether the medium is fixed in space or free to move at its end. For example, if the pulse is moving through a rope and the end of the rope is held firmly by a person, then it is said that the pulse is approaching a fixed end. On the other hand, if the end of the rope is attached to a stick such that it is free to move up or down along the stick when the pulse reaches its end, then it is said that the pulse is approaching a free end. Free end A pulse will reflect off a free end and return with the same direction of displacement that it had before reflection. That is, a pulse with an upward displacement will reflect off the end and return with an upward displacement. This is illustrated by figures 1 and 2, which were obtained by numerical integration of the wave equation. Fixed end A pulse will reflect off a fixed end and return with the opposite direction of displacement. In this case, the pulse is said to have inverted. That is, a pulse with an upward displacement will reflect off the end and return with a downward displacement. This is illustrated by figures 3 and 4, which were obtained by numerical integration of the wave equation, and also by the animation of figure 5. Crossing media When a pulse travels in a medium that is connected to a lighter or less dense medium, the pulse will reflect as if it were approaching a free end (no inversion). Conversely, when a pulse travels in a medium connected to a heavier or denser medium, the pulse will reflect as if it were approaching a fixed end (inversion). Optical pulse Dark pulse Dark pulses are characterized by being formed from a localized reduction of intensity compared to a more intense continuous wave background. Scalar dark solitons (linearly polarized dark solitons) can be formed in all-normal dispersion fiber lasers mode-locked by the nonlinear polarization rotation method and can be rather stable. Vector dark solitons are much less stable due to the cross-interaction between the two polarization components. Therefore, it is interesting to investigate how the polarization state of these two polarization components evolves. In 2008, the first dark pulse laser was reported in a quantum dot diode laser with a saturable absorber. In 2009, the dark pulse fiber laser was successfully achieved in an all-normal dispersion erbium-doped fiber laser with a polarizer in the cavity. Experimentation has revealed that apart from the bright pulse emission, under appropriate conditions the fiber laser could also emit single or multiple dark pulses. Based on numerical simulations, the dark pulse formation in the laser is a result of dark soliton shaping. In 2022, the first free-space dark pulse laser, using a nonlinear crystal inside of a solid-state laser, was demonstrated.
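The reflection behaviour described in the Pulse reflection section can be reproduced numerically. Below is a minimal finite-difference sketch of the one-dimensional wave equation (not the integration used to produce the article's figures), assuming unit wave speed and a Gaussian initial pulse; the left boundary is held fixed and the right boundary is left free:

import numpy as np

n, steps = 200, 300
dx = 1.0 / n
dt = 0.5 * dx                          # Courant number 0.5 < 1 for stability

x = np.linspace(0.0, 1.0, n + 1)
u = np.exp(-((x - 0.5) / 0.05) ** 2)   # upward Gaussian pulse in the middle
u_prev = u.copy()                      # zero initial velocity

r2 = (dt / dx) ** 2
for _ in range(steps):
    u_next = np.empty_like(u)
    # Standard second-order scheme for u_tt = u_xx at interior points:
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_next[0] = 0.0                    # fixed end: reflected pulse inverts
    u_next[-1] = u_next[-2]            # free end: reflected pulse stays upright
    u_prev, u = u, u_next

print(u.min(), u.max())  # a negative trough appears only from the fixed end

The fixed-end condition pins the displacement to zero, which forces the reflected half of the pulse to invert, while the free-end condition sets the spatial derivative to zero, so that half returns with its original upward displacement.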
Physical sciences
Waves
Physics
364976
https://en.wikipedia.org/wiki/Bolide
Bolide
A bolide is normally taken to mean an exceptionally bright meteor, but the term is subject to more than one definition, according to context. It may refer to any large crater-forming body, or to one that explodes in the atmosphere. It can be a synonym for a fireball, sometimes specific to those with an apparent magnitude of −4 or brighter. Definitions The word bolide (from Italian via Latin) may refer to somewhat different phenomena depending on the context in which the word appears, and readers may need to make inferences to determine which meaning is intended in a particular publication. An early usage occurs in Natural History, where Pliny the Elder describes two types of prodigies, "those which are called lampades and those which are called bolides". At least one of the prodigies described by Pliny (a "spark" that fell, grew to the "size of the moon", and "returned into the heavens") has been interpreted by astronomers as a bolide in the modern sense. His description of an object coming near the earth and continuing back into the sky matches the expected trajectory of a fireball crossing above an observer. A 1771 fireball that burst above Melun, France, was widely discussed by contemporary astronomers as a "bolide" and was the subject of an official French Academy of Sciences investigation led by Jean-Baptiste Le Roy. In 1794, Ernst Chladni published a book proposing that meteors were small objects that fell to Earth from space and that small bodies existed in space beyond the moon. Astronomers use the word to describe any extremely bright meteor (or fireball), especially one that explodes in the atmosphere. Geologists use the word to describe a very large impact event. One definition describes a bolide as a fireball reaching an apparent magnitude of −4 or brighter. Another definition describes a bolide as any generic large crater-forming impacting body whose composition (for example, whether it is a rocky or metallic asteroid, or an icy comet) is unknown. A superbolide is a bolide that reaches an apparent magnitude of −17 or brighter, which is roughly 100 times brighter than the full moon. Recent examples of superbolides include the Sutter's Mill meteorite in California and the Chelyabinsk meteor in Russia. Astronomy The IAU has no official definition of "bolide", and generally considers the term synonymous with fireball, a brighter-than-usual meteor; however, the term generally applies to fireballs reaching an apparent magnitude of −4 or brighter. Astronomers tend to use bolide to identify an exceptionally bright fireball, particularly one that explodes (sometimes called a detonating fireball). It may also be used to mean a fireball that is audible. Superbolide Selected superbolide air bursts: Tunguska event (Russia, 1908) 2009 Sulawesi superbolide (Indonesia, 2009) Chelyabinsk meteor (Russia, 2013) Geology Geologists use the term bolide differently from astronomers. In geology, it indicates a very large impactor. For example, the Woods Hole Coastal and Marine Science Center of the USGS uses bolide for any large crater-forming impacting body whose origin and composition are unknown, as, for example, whether it was a stony or metallic asteroid, or a less dense, icy comet made of volatiles, such as water, ammonia, and methane. The most notable example is the bolide that caused the Chicxulub crater 66 million years ago.
Scientific consensus holds that this event directly led to the extinction of all non-avian dinosaurs; it is evidenced by a thin layer of iridium found in the geological stratum marking the K–Pg boundary. Gallery
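The magnitude thresholds used in these definitions follow the standard astronomical rule that a difference of 5 apparent magnitudes corresponds to a factor of 100 in brightness. A minimal sketch of that arithmetic, using the −4 and −17 thresholds from the definitions above (the specific comparison is only illustrative):

# A difference of 5 magnitudes is a factor of 100 in brightness, so the
# brightness ratio between two apparent magnitudes is 100 ** (delta_m / 5).
def brightness_ratio(m_brighter, m_fainter):
    return 100.0 ** ((m_fainter - m_brighter) / 5.0)

# A -17 superbolide compared with a fireball at the -4 bolide threshold:
print(brightness_ratio(-17.0, -4.0))  # about 160,000 times brighter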
Physical sciences
Planetary science
Astronomy
365101
https://en.wikipedia.org/wiki/Sciatica
Sciatica
Sciatica is pain going down the leg from the lower back. This pain may go down the back, outside, or front of the leg. Onset is often sudden following activities such as heavy lifting, though gradual onset may also occur. The pain is often described as shooting. Typically, symptoms are only on one side of the body. Certain causes, however, may result in pain on both sides. Lower back pain is sometimes present. Weakness or numbness may occur in various parts of the affected leg and foot. About 90% of sciatica is due to a spinal disc herniation pressing on one of the lumbar or sacral nerve roots. Spondylolisthesis, spinal stenosis, piriformis syndrome, pelvic tumors, and pregnancy are other possible causes of sciatica. The straight-leg-raising test is often helpful in diagnosis. The test is positive if, when the leg is raised while a person is lying on their back, pain shoots below the knee. In most cases medical imaging is not needed. However, imaging may be obtained if bowel or bladder function is affected, there is significant loss of feeling or weakness, symptoms are long-standing, or there is a concern for tumor or infection. Conditions that may present similarly are diseases of the hip and infections such as early shingles (prior to rash formation). Initial treatment typically involves pain medications. However, evidence for the effectiveness of pain medications and muscle relaxants is lacking. It is generally recommended that people continue with normal activity to the best of their abilities. Often, all that is required for sciatica to resolve is time; in about 90% of people symptoms resolve in less than six weeks. If the pain is severe and lasts for more than six weeks, surgery may be an option. While surgery often speeds pain improvement, its long-term benefits are unclear. Surgery may be required if complications occur, such as loss of normal bowel or bladder function. Many treatments, including corticosteroids, gabapentin, pregabalin, acupuncture, heat or ice, and spinal manipulation, have limited or poor evidence for their use. Depending on how it is defined, between less than 1% and 40% of people have sciatica at some point in time. Sciatica is most common between the ages of 40 and 59, and men are more frequently affected than women. The condition has been known since ancient times. The first known modern use of the word sciatica dates from 1451, although Dioscorides (1st century CE) mentions it in his Materia Medica. Definition The term "sciatica" usually describes a symptom, namely pain along the sciatic nerve pathway, rather than a specific condition, illness, or disease. Some use it to mean any pain starting in the lower back and going down the leg. The pain is characteristically described as shooting or shock-like, quickly traveling along the course of the affected nerves. Others use the term as a diagnosis (i.e. an indication of cause and effect) for nerve dysfunction caused by compression of one or more lumbar or sacral nerve roots from a spinal disc herniation. Pain typically occurs in the distribution of a dermatome and goes below the knee to the foot. It may be associated with neurological dysfunction, such as weakness and numbness. Causes Risk factors Modifiable risk factors for sciatica include smoking, obesity, occupation, and physical sports in which the back muscles and heavy weights are involved. Non-modifiable risk factors include increasing age, being male, and having a personal history of low back pain.
Spinal disc herniation Spinal disc herniation pressing on one of the lumbar or sacral nerve roots is the most frequent cause of sciatica, being present in about 90% of cases. This is particularly true in those under age 50. Disc herniation most often occurs during heavy lifting. Pain typically increases when bending forward or sitting, and reduces when lying down or walking. Spinal stenosis Other compressive spinal causes include lumbar spinal stenosis, a condition in which the spinal canal, the space the spinal cord runs through, narrows and compresses the spinal cord, cauda equina, or sciatic nerve roots. This narrowing can be caused by bone spurs, spondylolisthesis, inflammation, or a herniated disc, which decreases available space for the spinal cord, thus pinching and irritating nerves from the spinal cord that become the sciatic nerve. This is the most frequent cause after age 50. Sciatic pain due to spinal stenosis is most commonly brought on by standing, walking, or sitting for extended periods of time, and reduces when bending forward. However, pain can arise with any position or activity in severe cases. The pain is most commonly relieved by rest. Piriformis syndrome Piriformis syndrome is a condition that, depending on the analysis, varies from a "very rare" cause to contributing up to 8% of low back or buttock pain. In 17% of people, the sciatic nerve runs through the piriformis muscle rather than beneath it. When the piriformis shortens or spasms due to trauma or overuse, it is posited that this causes compression of the sciatic nerve. Piriformis syndrome has colloquially been referred to as "wallet sciatica", since a wallet carried in a rear hip pocket compresses the buttock muscles and sciatic nerve when the bearer sits down. Piriformis syndrome may be suspected as a cause of sciatica when the spinal nerve roots contributing to the sciatic nerve are normal and no herniation of a spinal disc is apparent. Deep gluteal syndrome Deep gluteal syndrome is non-discogenic, extrapelvic sciatic nerve entrapment in the deep gluteal space. Piriformis syndrome was once the traditional model of sciatic nerve entrapment in this anatomic region. The understanding of non-discogenic sciatic nerve entrapment has changed significantly with improved knowledge of posterior hip anatomy, nerve kinematics, and advances in endoscopic techniques to explore the sciatic nerve. There are now many known causes of sciatic nerve entrapment, such as fibrous bands restricting nerve mobility, that are unrelated to the piriformis in the deep gluteal space. The term deep gluteal syndrome was introduced as an improved classification for the many distinct causes of sciatic nerve entrapment in this anatomic region. Piriformis syndrome is now considered one of many causes of deep gluteal syndrome. Endometriosis Sciatic endometriosis, also called catamenial or cyclical sciatica, is sciatica caused by endometriosis. Its incidence is unknown. Diagnosis is usually made by MRI or CT myelography. Pregnancy Sciatica may also occur during pregnancy, especially during later stages, as a result of the weight of the fetus pressing on the sciatic nerve during sitting or during leg spasms. While most cases do not directly harm the woman or the fetus, indirect harm may come from the numbing effect on the legs, which can cause loss of balance and falls. There is no standard treatment for pregnancy-induced sciatica.
Other Pain that does not improve when lying down suggests a nonmechanical cause, such as cancer, inflammation, or infection. Sciatica can be caused by tumors impinging on the spinal cord or the nerve roots. Severe back pain extending to the hips and feet, loss of bladder or bowel control, or muscle weakness may result from spinal tumors or cauda equina syndrome. Trauma to the spine, such as from a car accident or hard fall onto the heel or buttocks, may also lead to sciatica. A relationship has been proposed with a latent Cutibacterium acnes infection in the intervertebral discs, but the role it plays is not yet clear. Pathophysiology The sciatic nerve comprises nerve roots L4, L5, S1, S2, and S3 in the spine. These nerve roots merge in the pelvic cavity to form the sacral plexus, from which the sciatic nerve branches. Sciatica symptoms can occur when there is pathology anywhere along the course of these nerves. Intraspinal sciatica Intraspinal, or discogenic, sciatica refers to sciatica whose pathology involves the spine; in about 90% of cases it results from a spinal disc bulge or herniation. Sciatica is generally caused by the compression of lumbar nerves L4 or L5 or sacral nerve S1. Less commonly, sacral nerves S2 or S3 may cause sciatica. Intervertebral spinal discs consist of an outer anulus fibrosus and an inner nucleus pulposus. The anulus fibrosus forms a rigid ring around the nucleus pulposus early in human development, and the gelatinous contents of the nucleus pulposus are thus contained within the disc. Discs separate the spinal vertebrae, thereby increasing spinal stability and allowing nerve roots to properly exit through the spaces between the vertebrae from the spinal cord. As an individual ages, the anulus fibrosus weakens and becomes less rigid, putting it at greater risk of tearing. When there is a tear in the anulus fibrosus, the nucleus pulposus may extrude through the tear and press against spinal nerves within the spinal cord, cauda equina, or exiting nerve roots, causing inflammation, numbness, or excruciating pain. Inflammation of spinal tissue can then spread to adjacent facet joints and cause facet syndrome, which is characterized by lower back pain and referred pain in the posterior thigh. Other causes of sciatica secondary to spinal nerve entrapment include the roughening, enlarging, or misalignment (spondylolisthesis) of vertebrae, or disc degeneration that reduces the diameter of the lateral foramen through which nerve roots exit the spine. When sciatica is caused by compression of a dorsal nerve root, it is considered a lumbar radiculopathy, or radiculitis when accompanied by an inflammatory response. Extraspinal sciatica The sciatic nerve is highly mobile during hip and leg movements. Any pathology which restricts normal movement of the sciatic nerve can put abnormal pressure, strain, or tension on the nerve in certain positions or during normal movements. For example, the presence of scar tissue around a nerve can cause traction neuropathy. A well-known muscular cause of extraspinal sciatica is piriformis syndrome. The piriformis muscle is directly adjacent to the course of the sciatic nerve as it traverses the intrapelvic space. Pathologies of the piriformis muscle such as injury (e.g. swelling and scarring), inflammation (release of cytokines affecting the local cellular environment), or space-occupying lesions (e.g. tumor, cyst, hypertrophy) can affect the sciatic nerve.
Anatomic variations in nerve branching can also predispose the sciatic nerve to further compression by the piriformis muscle, such as if the sciatic nerve pierces the piriformis muscle. The sciatic nerve can also be entrapped outside of the pelvic space, and this is called deep gluteal syndrome. Surgical research has identified new causes of entrapment such as fibrovascular scar bands, vascular abnormalities, heterotopic ossification, gluteal muscles, hamstring muscles, and the gemelli-obturator internus complex. In almost half of the endoscopic surgery cases, fibrovascular scar bands were found to be the cause of entrapment, impeding the movement of the sciatic nerve. Diagnosis Sciatica is typically diagnosed by physical examination and the history of the symptoms. Physical tests Generally, if a person reports the typical radiating pain in one leg, as well as one or more neurological indications of nerve root tension or neurological deficit, sciatica can be diagnosed. The most frequently used diagnostic test is the straight leg raise to produce Lasègue's sign, which is considered positive if pain in the distribution of the sciatic nerve is reproduced with passive flexion of the straight leg between 30 and 70 degrees. While this test is positive in about 90% of people with sciatica, approximately 75% of people with a positive test do not have sciatica. Straight leg raising of the leg unaffected by sciatica may produce sciatica in the leg on the affected side; this is known as the Fajersztajn sign. The presence of the Fajersztajn sign is a more specific finding for a herniated disc than Lasègue's sign. Maneuvers that increase intraspinal pressure, such as coughing, flexion of the neck, and bilateral compression of the jugular veins, may transiently worsen sciatica pain. Medical imaging Imaging modalities such as computed tomography or magnetic resonance imaging can help with the diagnosis of lumbar disc herniation. Both are equally effective at diagnosing lumbar disc herniation, but computed tomography involves a higher radiation dose. Radiography is not recommended because discs cannot be visualized by X-rays. The utility of MR neurography in the diagnosis of piriformis syndrome is controversial. Discography could be considered to determine a specific disc's role in an individual's pain. Discography involves the insertion of a needle into a disc to determine the pressure of the disc space. Radiocontrast is then injected into the disc space to assess for visual changes that may indicate an anatomic abnormality of the disc. The reproduction of an individual's pain during discography is also diagnostic. Differential diagnosis Cancer should be suspected if there is a previous history of it, unexplained weight loss, or unremitting pain. Spinal epidural abscess is more common among those who have diabetes mellitus or immunodeficiency, or who have had spinal surgery, an injection, or a catheter; it typically causes fever, leukocytosis and an increased erythrocyte sedimentation rate. If cancer or spinal epidural abscess is suspected, urgent magnetic resonance imaging is recommended for confirmation. Proximal diabetic neuropathy typically affects middle-aged and older people with well-controlled type 2 diabetes mellitus; onset is sudden, causing pain, usually in multiple dermatomes, quickly followed by weakness. Diagnosis typically involves electromyography and lumbar puncture.
Shingles is more common among the elderly and immunocompromised; typically, pain is followed by the appearance of a rash with small blisters along a single dermatome. Acute Lyme radiculopathy may follow a history of outdoor activities during warmer months in likely tick habitats in the previous 1–12 weeks. In the U.S., Lyme is most common in New England and Mid-Atlantic states and parts of Wisconsin and Minnesota, but it is expanding to other areas. The first manifestation is usually an expanding rash, possibly accompanied by flu-like symptoms. Lyme can also cause a milder, chronic radiculopathy an average of 8 months after the acute illness. Management Sciatica can be managed with a number of different treatments with the goal of restoring a person's normal functional status and quality of life. When the cause of sciatica is lumbar disc herniation (90% of cases), most cases resolve spontaneously over weeks to months. Initial treatment in the first 6–8 weeks should be conservative. More than 75% of sciatica cases are managed without surgery. Smokers with sciatica are strongly urged to quit in order to promote healing. Treatment of the underlying cause of nerve compression is needed in cases of epidural abscess, epidural tumors, and cauda equina syndrome. Physical activity Physical activity is often recommended for the conservative management of sciatica for persons who are physically able. Bed rest is not recommended. Although structured exercises provide a small, short-term benefit for leg pain, in the long term no difference is seen between exercise and simply staying active. The evidence for physical therapy in sciatica is unclear, though such programs appear safe. Physical therapy is commonly used. Nerve mobilization techniques for the sciatic nerve are supported by tentative evidence. Medication There is no one medication regimen used to treat sciatica. Evidence supporting the use of opioids and muscle relaxants is poor. Low-quality evidence indicates that NSAIDs do not appear to improve immediate pain, and all NSAIDs appear to be nearly equivalent in their ability to relieve sciatica. Nevertheless, NSAIDs are commonly recommended as a first-line treatment for sciatica. In those with sciatica due to piriformis syndrome, botulinum toxin injections may improve pain and function. While there is little evidence supporting the use of epidural or systemic steroids, systemic steroids may be offered to individuals with confirmed disc herniation if there is a contraindication to NSAID use. Low-quality evidence supports the use of gabapentin for acute pain relief in those with chronic sciatica. Anticonvulsants and biologics have not been shown to improve acute or chronic sciatica. Antidepressants have demonstrated some efficacy in treating chronic sciatica and may be offered to individuals for whom NSAIDs are unsuitable or who have failed NSAID therapy. Surgery If sciatica is caused by a herniated disc, the disc's partial or complete removal, known as a discectomy, has tentative evidence of benefit in the short term. A modest reduction in pain is seen after 26 weeks, but not after one year. If the cause is spondylolisthesis or spinal stenosis, surgery appears to provide pain relief for up to two years. For non-discogenic sciatica, the surgical treatment is typically a nerve decompression. A decompression seeks to remove tissue around the nerve that may be compressing it or restricting movement of the nerve.
Alternative medicine Low- to moderate-quality evidence suggests that spinal manipulation is an effective treatment for acute sciatica. For chronic sciatica, the evidence supporting spinal manipulation as treatment is poor. Spinal manipulation has been found generally safe for the treatment of disc-related pain; however, case reports have found an association with cauda equina syndrome, and it is contraindicated when there are progressive neurological deficits. Prognosis About 39% to 50% of people with sciatica still have symptoms after one to four years. In one study, around 20% were unable to work at their one-year follow-up, and 10% had surgery for the condition. Epidemiology Depending on how it is defined, between less than 1% and 40% of people have sciatica at some point in time. Sciatica is most common between the ages of 40 and 59, and men are more frequently affected than women.
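The straight-leg-raise figures quoted in the diagnosis section (positive in about 90% of people with sciatica, yet about 75% of positives without sciatica) illustrate how a sensitive test can still have a low positive predictive value. A minimal sketch of the underlying Bayes arithmetic; the sensitivity comes from the text above, while the specificity and prevalence are illustrative assumptions only:

# Positive predictive value from sensitivity, specificity and prevalence.
# Sensitivity 0.90 is from the text; specificity and prevalence are
# illustrative assumptions, not figures from the article.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With a low-specificity test in a population where few have sciatica,
# most positives are false positives:
print(ppv(sensitivity=0.90, specificity=0.70, prevalence=0.10))  # ~0.25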
Biology and health sciences
Types
Health
365159
https://en.wikipedia.org/wiki/CIELAB%20color%20space
CIELAB color space
The CIELAB color space, also referred to as L*a*b*, is a color space defined by the International Commission on Illumination (abbreviated CIE) in 1976. It expresses color as three values: L* for perceptual lightness and a* and b* for the four unique colors of human vision: red, green, blue and yellow. CIELAB was intended as a perceptually uniform space, where a given numerical change corresponds to a similar perceived change in color. While the LAB space is not truly perceptually uniform, it nevertheless is useful in industry for detecting small differences in color. Like the CIEXYZ space it derives from, CIELAB color space is a device-independent, "standard observer" model. The colors it defines are not relative to any particular device such as a computer monitor or a printer, but instead relate to the CIE standard observer, which is an averaging of the results of color matching experiments under laboratory conditions. Coordinates The CIELAB space is three-dimensional and covers the entire gamut (range) of human color perception. It is based on the opponent model of human vision, where red and green form an opponent pair and blue and yellow form an opponent pair. The lightness value, L* (pronounced "L star"), defines black at 0 and white at 100. The a* axis is relative to the green–red opponent colors, with negative values toward green and positive values toward red. The b* axis represents the blue–yellow opponents, with negative numbers toward blue and positive toward yellow. The a* and b* axes are unbounded and, depending on the reference white, they can easily exceed ±150 to cover the human gamut. Nevertheless, software implementations often clamp these values for practical reasons. For instance, if integer math is being used, it is common to clamp a* and b* in the range of −128 to 127. CIELAB is calculated relative to a reference white, for which the CIE recommends the use of CIE Standard illuminant D65. D65 is used in the vast majority of industries and applications, with the notable exception being the printing industry, which uses D50. The International Color Consortium largely supports the printing industry and uses D50 with either CIEXYZ or CIELAB in the Profile Connection Space, for v2 and v4 ICC profiles. While the intention behind CIELAB was to create a space that was more perceptually uniform than CIEXYZ using only a simple formula, CIELAB is known to lack perceptual uniformity, particularly in the area of blue hues. The lightness value, L*, in CIELAB is calculated using the cube root of the relative luminance with an offset near black. This results in an effective power curve with an exponent of approximately 0.43, which represents the human eye's response to light under daylight (photopic) conditions. The three coordinates of CIELAB represent the lightness of the color (L* = 0 yields black and L* = 100 indicates white), its position between red and green (a*, where negative values indicate green and positive values indicate red) and its position between yellow and blue (b*, where negative values indicate blue and positive values indicate yellow). The asterisks (*) after L*, a*, and b* are pronounced star and are part of the full name to distinguish L*a*b* from Hunter's Lab, described below. Since the L*a*b* model has three axes, it requires a three-dimensional space to be represented completely. Also, because each axis is non-linear, it is not possible to create a two-dimensional chromaticity diagram.
Additionally, the visual representations shown in the plots of the full CIELAB gamut on this page are an approximation, as it is impossible for a monitor to display the full gamut of LAB colors. The green-red and blue-yellow opponent channels relate to the human vision system's opponent color process. This makes CIELAB a Hering opponent color space. The nature of the transformations also characterizes it as an Adams chromatic value color space. Perceptual differences The nonlinear relations for L*, a* and b* are intended to mimic the nonlinear response of the visual system. Furthermore, uniform changes of components in the L*a*b* color space aim to correspond to uniform changes in perceived color, so the relative perceptual differences between any two colors in L*a*b* can be approximated by treating each color as a point in a three-dimensional space (with three components: L*, a*, b*) and taking the Euclidean distance between them. RGB and CMYK conversions In order to convert RGB or CMYK values to or from L*a*b*, the RGB or CMYK data must be linearized relative to light. The reference illuminant of the RGB or CMYK data must be known, as well as the RGB primary coordinates or the CMYK printer's reference data in the form of a color lookup table (CLUT). In color-managed systems, ICC profiles contain these needed data, which are then used to perform the conversions. Range of coordinates As mentioned previously, the L* coordinate nominally ranges from 0 to 100. The range of the a* and b* coordinates is technically unbounded, though it is commonly clamped to the range of −128 to 127 for use with integer code values; this potentially clips some colors, depending on the size of the source color space. The gamut's large size and the inefficient utilization of the coordinate space mean the best practice is to use floating-point values for all three coordinates. Advantages Unlike the RGB and CMYK color models, CIELAB is designed to approximate human vision. The L* component closely matches human perception of lightness, though it does not take the Helmholtz–Kohlrausch effect into account. CIELAB is less uniform in the color axes, but is useful for predicting small differences in color. The CIELAB coordinate space represents the entire gamut of human photopic (daylight) vision and far exceeds the gamut of sRGB or CMYK. In an integer implementation such as TIFF, ICC or Photoshop, the large coordinate space results in substantial data inefficiency due to unused code values. Only about 35% of the available coordinate code values are inside the CIELAB gamut with an integer format. Using CIELAB in an 8-bit-per-channel integer format typically results in significant quantization errors. Even 16 bits per channel can result in clipping, as the full gamut extends past the bounding coordinate space. Ideally, CIELAB should be used with floating-point data to minimize obvious quantization errors. CIE standards and documents are copyrighted by the CIE and must be purchased; however, the formulas for CIELAB are available on the CIE website. Converting between CIELAB and CIEXYZ coordinates From CIEXYZ to CIELAB The forward transformation is L* = 116 f(Y/Yn) − 16, a* = 500 (f(X/Xn) − f(Y/Yn)) and b* = 200 (f(Y/Yn) − f(Z/Zn)), where f(t) = t^(1/3) if t > δ^3 and f(t) = t/(3δ^2) + 4/29 otherwise, with δ = 6/29. X, Y and Z describe the color stimulus considered, and Xn, Yn and Zn describe a specified white achromatic reference illuminant. For the CIE 1931 (2°) standard colorimetric observer and assuming normalization where the reference white has Yn = 100, the values are, for Standard Illuminant D65: Xn = 95.0489, Yn = 100, Zn = 108.8840; and for illuminant D50, which is used in the printing industry: Xn = 96.4212, Yn = 100, Zn = 82.5188. The division of the domain of the function f into two parts was done to prevent an infinite slope at t = 0. The function was assumed to be linear below some t = t0 and was assumed to match the t^(1/3) part of the function at t0 in both value and slope. In other words: t0^(1/3) = m t0 + c (match in value) and m = (1/3) t0^(−2/3) (match in slope). The intercept c was chosen so that L* would be 0 for Y = 0: c = 16/116 = 4/29. The above two equations can be solved for m and t0: m = (1/3) δ^(−2) ≈ 7.787 and t0 = δ^3 ≈ 0.008856, where δ = 6/29. From CIELAB to CIEXYZ The reverse transformation is most easily expressed using the inverse of the function f above: X = Xn f^(-1)((L* + 16)/116 + a*/500), Y = Yn f^(-1)((L* + 16)/116) and Z = Zn f^(-1)((L* + 16)/116 − b*/200), where f^(-1)(t) = t^3 if t > δ and f^(-1)(t) = 3δ^2 (t − 4/29) otherwise. Cylindrical model The "CIELCh" or "CIEHLC" space is a color space based on CIELAB, which uses the polar coordinates C* (chroma, colorfulness of the color) and h° (hue angle, angle of the hue in the CIELAB color wheel) instead of the Cartesian coordinates a* and b*. The CIELAB lightness L* remains unchanged. The conversion of a* and b* to C* and h° is performed as follows: C* = sqrt((a*)^2 + (b*)^2) and h° = atan2(b*, a*), expressed as an angle in degrees. Conversely, given the polar coordinates, conversion to Cartesian coordinates is achieved with a* = C* cos(h°) and b* = C* sin(h°).
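A minimal numerical sketch of the forward transformation and the cylindrical form just given, assuming the D65 reference white and 2° observer values above; it illustrates the formulas and is not a color-management implementation:

import math

DELTA = 6 / 29
XN, YN, ZN = 95.0489, 100.0, 108.8840  # D65 reference white, 2-degree observer

def f(t):
    # Cube root above delta^3; linear segment below to avoid the infinite
    # slope at t = 0, as described in the text.
    return t ** (1 / 3) if t > DELTA ** 3 else t / (3 * DELTA ** 2) + 4 / 29

def xyz_to_lab(x, y, z):
    fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_lch(L, a, b):
    # Cylindrical form: chroma and hue angle in degrees.
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360

print(xyz_to_lab(XN, YN, ZN))  # the reference white maps to (100.0, 0.0, 0.0)

As a quick check, feeding the reference white itself through the transform returns L* = 100 with a* = b* = 0, i.e. a point on the achromatic axis.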
The LCh (or HLC) color space is not the same as the HSV, HSL or HSB color models, although their values can also be interpreted as a base color, saturation and lightness of a color. The HSL values are a polar coordinate transformation of the technically defined RGB cube color space, whereas LCh retains the (approximate) perceptual uniformity of CIELAB. Further, H and h are not identical, because HSL space uses as primary colors the three additive primary colors red, green and blue (H = 0, 120, 240°), while the LCh system uses the four colors red, yellow, green and blue (h = 0, 90, 180, 270°). Regardless of the angle h, C = 0 means the achromatic colors (unsaturated), that is, the gray axis. The simplified spellings LCh, LCh(ab), LCH, LCH(ab) and HLC are common, but the latter presents a different letter order. The HCL color space (Hue-Chroma-Luminance), on the other hand, is a commonly used alternative name for the L*C*h(uv) color space, also known as the cylindrical representation or polar CIELUV. This name is commonly used by information visualization practitioners who want to present data without the bias implicit in using varying saturation. The name LCh(ab) is sometimes used to differentiate it from L*C*h(uv). Other related color spaces A related color space, the CIE 1976 L*u*v* color space (a.k.a. CIELUV), preserves the same L* as L*a*b* but has a different representation of the chromaticity components. CIELAB and CIELUV can also be expressed in cylindrical form (CIELChab and CIELChuv, respectively), with the chromaticity components replaced by correlates of chroma and hue. Since the work on CIELAB and CIELUV, the CIE has been incorporating an increasing number of color appearance phenomena into its models and difference equations to better predict human color perception. These color appearance models, of which CIELAB is a simple example, culminated with CIECAM02. Oklab is built on the same spatial structure and achieves greater perceptual uniformity. Usage Some systems and software applications that support CIELAB include: CIELAB is used by Datacolor spectrophotometers, including the related color difference calculations. CIELAB is used by the PantoneLive library. CIELAB is used extensively by X-Rite as a color space with their hardware and software color measuring systems. CIELAB D50 is available in Adobe Photoshop, where it is called "Lab mode".
CIELAB is available in Affinity Photo by changing the document's Colour Format to "Lab (16 bit)". The white point, which defaults to D50, can be changed by ICC profile. CIELAB D50 is available in ICC profiles as a profile connection space named "Lab color space". CIELAB (any white point) is a supported color space in TIFF image files. CIELAB (any white point) is available in PDF documents, where it is called the "Lab color space". CIELAB is an option in Digital Color Meter on macOS, described as "L*a*b*". CIELAB is available in the RawTherapee photo editor, where it is called the "Lab color space". CIELAB is used by GIMP for the hue-chroma adjustment filter, fuzzy-select and paint-bucket. There is also an LCh(ab) color picker. Web browser support for CIELAB was introduced as part of CSS Color Module Level 4, and is supported in all major browsers. Hunter Lab
Physical sciences
Basics
Physics
365196
https://en.wikipedia.org/wiki/Beetroot
Beetroot
The beetroot (British English) or beet (North American English) is the taproot portion of a Beta vulgaris subsp. vulgaris plant in the Conditiva Group. The plant is a root vegetable also known as the table beet, garden beet, dinner beet, or else categorized by color: red beet or golden beet. It is also a leaf vegetable called beet greens. Beetroot can be eaten raw, roasted, steamed, or boiled. Beetroot can also be canned, either whole or cut up, and is often pickled, spiced, or served in a sweet-and-sour sauce. It is one of several cultivated varieties of Beta vulgaris subsp. vulgaris grown for their edible taproots or leaves, classified as belonging to the Conditiva Group. Other cultivars of the same subspecies include the sugar beet, the leaf vegetable known as spinach beet (Swiss chard), and the fodder crop mangelwurzel. Etymology Beta is the ancient Latin name for beetroot, possibly of Celtic origin, becoming bete in Old English. Root derives from the late Old English rōt, itself from Old Norse rót. History The domestication of beetroot can be traced to the emergence of an allele that enables biennial harvesting of leaves and taproot. Beetroot was domesticated in the ancient Middle East, primarily for its greens, and was grown by the Ancient Egyptians, Greeks, and Romans. By the Roman era, it is thought that it was also cultivated for its roots. From the Middle Ages, beetroot was used to treat various conditions, especially illnesses relating to digestion and the blood. Bartolomeo Platina recommended taking beetroot with garlic to nullify the effects of "garlic-breath". During the middle of the 17th century, wine was often colored with beetroot juice. Food shortages in Europe following World War I caused great hardships, including cases of "mangelwurzel disease", as relief workers called it, a condition brought on by eating only beetroot. Culinary use Usually, the deep purple roots of beetroot are eaten boiled, roasted, or raw, and either alone or combined with any salad vegetable. The green, leafy portion of the beetroot is also edible. The young leaves can be added raw to salads, while the mature leaves are most commonly served boiled or steamed, in which case they have a taste and texture similar to spinach. Beetroot can be roasted, boiled or steamed, peeled, and then eaten warm with or without butter; cooked, pickled, and then eaten cold as a condiment; or peeled, shredded raw, and then eaten as a salad. Pickled beetroot is a traditional food in many countries. Australia and New Zealand In Australia and New Zealand, sliced pickled beetroot is a common ingredient in traditional hamburgers. Eastern Europe In Eastern Europe, beetroot soup, such as borscht (Ukrainian) and barszcz czerwony (Polish), is common. In Ukraine, a related dish called "shpundra" is also common; this hearty beetroot stew, often made with pork belly or ribs, is sometimes referred to as a thicker version of borscht. In Poland and Ukraine, beetroot is combined with horseradish to form ćwikła or бурячки (buryachky), which is traditionally used with cold cuts and sandwiches, but often also added to a meal consisting of meat and potatoes. Similarly, in Serbia, beetroot (referred to by the local name cvekla) is used as a winter salad, seasoned with salt and vinegar, with meat dishes. As an addition to horseradish, it is also used to produce the "red" variety of chrain, a condiment in Ashkenazi Jewish, Hungarian, Polish, Lithuanian, Russian, and Ukrainian cuisine.
Cold beetroot soup called "šaltibarščiai" is very popular in Lithuania. Traditionally it consists of kefir, boiled beetroot, cucumber, dill and spring onions, and can be eaten with boiled eggs and potatoes. Botvinya is an old-time traditional Russian cold soup made from leftover beet greens and chopped beetroots, typically with bread and kvass added. Botvinya got its name from the Russian botva, which means "root vegetable greens", referring to beet plant leaves. Svekolnik, or svyokolnik, is yet another Russian beet-based soup, typically distinguished from borscht in that the vegetables for svekolnik are cooked without being sauteed first, while many types of borscht typically include sauteed carrots and other vegetables. Svekolnik got its name from svyokla, the Russian word for "beet". Sometimes, various types of cold borscht are also called "svekolnik". India In Indian cuisine, chopped, cooked, spiced beetroot is a common side dish. Yellow-colored beetroots are grown on a very small scale for home consumption. North America Besides standard fruit and vegetable dishes, certain varieties of beets are sometimes used as a garnish to a tart. Northern Europe A common dish in Sweden and elsewhere in the Nordic countries is Biff à la Lindström, a variant of meatballs or burgers, with chopped or grated beetroot added to the minced meat. In Northern Germany, beetroot is mashed with Labskaus or served as a side order. Industrial production and other uses A large proportion of commercial production is processed into boiled and sterilized beetroot or pickles. Betanin, obtained from the roots, is used industrially as a red food colorant to enhance the color and flavor of tomato paste, sauces, desserts, jams and jellies, ice cream, candy, and breakfast cereals. When beetroot juice is used, it is most stable in foods with low water content, such as frozen novelties and fruit fillings. Beetroot can be used to make wine. Nutrition Raw beetroot is 88% water, 10% carbohydrates, 2% protein, and less than 1% fat. In a 100-gram reference amount, raw beetroot is a rich source (27% of the Daily Value (DV)) of folate and a moderate source (16% DV) of manganese, with other nutrients present in insignificant amounts. Health effects A clinical trial review reported that consumption of beetroot juice modestly reduced systolic blood pressure but not diastolic blood pressure. Pigment The red color compound betanin is a betalain in the category of betacyanins. It is not broken down in the body, and in higher concentrations may temporarily cause urine or stools to assume a reddish color; in the case of urine this condition is called beeturia. Although harmless, this effect may initially be mistaken for a medical problem due to a visual similarity with blood in the stool, blood passing through the anus (hematochezia), or blood in the urine (hematuria). Nitrosamine formation in beetroot juice can reliably be prevented by adding ascorbic acid. Cultivars Below is a list of several commonly available cultivars of beetroot. Generally, 55 to 65 days are needed from germination to harvest of the root. All cultivars can be harvested earlier for use as greens. Unless otherwise noted, the root colors are shades of red and dark red, with different degrees of zoning noticeable in slices. Gallery
Biology and health sciences
Caryophyllales
null
365435
https://en.wikipedia.org/wiki/Crystallite
Crystallite
A crystallite is a small or even microscopic crystal which forms, for example, during the cooling of many materials. Crystallites are also referred to as grains. Bacillite is a type of crystallite. It is rodlike with parallel longulites. Structure The orientation of crystallites can be random with no preferred direction, called random texture, or directed, possibly due to growth and processing conditions. While the structure of a single crystal is highly ordered and its lattice is continuous and unbroken, amorphous materials, such as glass and many polymers, are non-crystalline and do not display long-range order, as their constituents are not arranged in an ordered manner. Polycrystalline structures and paracrystalline phases are in between these two extremes. Polycrystalline materials, or polycrystals, are solids that are composed of many crystallites of varying size and orientation. Most materials are polycrystalline, made of a large number of crystallites held together by thin layers of amorphous solid. Most inorganic solids are polycrystalline, including all common metals, many ceramics, rocks, and ice. The areas where crystallites meet are known as grain boundaries. Size Crystallite size in monodisperse microstructures is usually approximated from X-ray diffraction patterns, while grain size is measured by other experimental techniques, such as transmission electron microscopy. Solid objects large enough to see and handle are rarely composed of a single crystal, except for a few cases (gems, silicon single crystals for the electronics industry, certain types of fiber, single crystals of a nickel-based superalloy for turbojet engines, and some ice crystals which can exceed 0.5 meters in diameter). The crystallite size can vary from a few nanometers to several millimeters. Effects on material physical properties The extent to which a solid is crystalline (crystallinity) has important effects on its physical properties. Sulfur, while usually polycrystalline, may also occur in other allotropic forms with completely different properties. Although crystallites are referred to as grains, powder grains are different, as they can be composed of smaller polycrystalline grains themselves. Generally, polycrystals cannot be superheated; they will melt promptly once they are brought to a high enough temperature. This is because grain boundaries are amorphous and serve as nucleation points for the liquid phase. By contrast, if no solid nucleus is present as a liquid cools, it tends to become supercooled. Since this is undesirable for mechanical materials, alloy designers often take steps against it (by grain refinement). Material fractures can be either intergranular or transgranular. There is an ambiguity with powder grains: a powder grain can be made of several crystallites. Thus, the (powder) "grain size" found by laser granulometry can be different from the "grain size" (rather, crystallite size) found by X-ray diffraction (e.g. the Scherrer method), by optical microscopy under polarised light, or by scanning electron microscopy (backscattered electrons). If the individual crystallites are oriented completely at random, a large enough volume of polycrystalline material will be approximately isotropic. This property helps the simplifying assumptions of continuum mechanics to apply to real-world solids. However, most manufactured materials have some alignment to their crystallites, resulting in texture that must be taken into account for accurate predictions of their behavior and characteristics.
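As an illustration of the Scherrer method mentioned above, the sketch below estimates crystallite size from X-ray diffraction peak broadening via D = Kλ/(β cos θ); the shape factor K ≈ 0.9 is a common choice, and the numerical inputs are illustrative assumptions rather than measured data.

import math

# Scherrer estimate of crystallite size D from X-ray diffraction peak
# broadening: D = K * wavelength / (beta * cos(theta)), where beta is the
# peak width (FWHM) in radians and theta is the Bragg angle. The shape
# factor K ~ 0.9 and the inputs below are illustrative assumptions.
def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation (0.15406 nm), a 0.5-degree-wide peak at 2-theta = 44 degrees:
print(scherrer_size(0.15406, 0.5, 44.0))  # roughly 17 nm crystallites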
When the crystallites are mostly ordered with a random spread of orientations, one has a mosaic crystal. Abnormal grain growth, where a small number of crystallites are significantly larger than the mean crystallite size, is commonly observed in diverse polycrystalline materials, and results in mechanical and optical properties that diverge from similar materials having a monodisperse crystallite size distribution with a similar mean crystallite size. Coarse grained rocks are formed very slowly, while fine grained rocks are formed quickly, on geological time scales. If a rock forms very quickly, such as from the solidification of lava ejected from a volcano, there may be no crystals at all. This is how obsidian forms. Grain boundaries Grain boundaries are interfaces where crystals of different orientations meet. A grain boundary is a single-phase interface, with crystals on each side of the boundary being identical except in orientation. The term "crystallite boundary" is sometimes, though rarely, used. Grain boundary areas contain those atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower energy grain boundary. Treating a grain boundary geometrically as an interface of a single crystal cut into two parts, one of which is rotated, we see that there are five variables required to define a grain boundary. The first two numbers come from the unit vector that specifies a rotation axis. The third number designates the angle of rotation of the grain. The final two numbers specify the plane of the grain boundary (or a unit vector that is normal to this plane). Grain boundaries disrupt the motion of dislocations through a material. Dislocation propagation is impeded because of the stress field of the grain boundary defect region and the lack of slip planes and slip directions and overall alignment across the boundaries. Reducing grain size is therefore a common way to improve strength, often without any sacrifice in toughness because the smaller grains create more obstacles per unit area of slip plane. This crystallite size-strength relationship is given by the Hall–Petch relationship. The high interfacial energy and relatively weak bonding in grain boundaries makes them preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. Grain boundary migration plays an important role in many of the mechanisms of creep. Grain boundary migration occurs when a shear stress acts on the grain boundary plane and causes the grains to slide. This means that fine-grained materials actually have a poor resistance to creep relative to coarser grains, especially at high temperatures, because smaller grains contain more atoms in grain boundary sites. Grain boundaries also cause deformation in that they are sources and sinks of point defects. Voids in a material tend to gather in a grain boundary, and if this happens to a critical extent, the material could fracture. During grain boundary migration, the rate determining step depends on the angle between two adjacent grains. In a small angle dislocation boundary, the migration rate depends on vacancy diffusion between dislocations. In a high angle dislocation boundary, this depends on the atom transport by single atom jumps from the shrinking to the growing grains. Grain boundaries are generally only a few nanometers wide. In common materials, crystallites are large enough that grain boundaries account for a small fraction of the material. 
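For reference, a common statement of the Hall–Petch relationship mentioned above (standard form, not quoted from this article) is

$$\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}$$

where σ_y is the yield stress, σ_0 is a friction stress opposing dislocation motion, k_y is a material-specific strengthening coefficient, and d is the average grain diameter; strength therefore rises as grain size falls, down to roughly the tens-of-nanometres scale, below which the relation breaks down.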
However, very small grain sizes are achievable. In nanocrystalline solids, grain boundaries become a significant volume fraction of the material, with profound effects on such properties as diffusion and plasticity. In the limit of small crystallites, as the volume fraction of grain boundaries approaches 100%, the material ceases to have any crystalline character, and thus becomes an amorphous solid. Grain boundaries are also present in magnetic domains in magnetic materials. A computer hard disk, for example, is made of a hard ferromagnetic material that contains regions of atoms whose magnetic moments can be realigned by an inductive head. The magnetization varies from region to region, and the misalignment between these regions forms boundaries that are key to data storage. The inductive head measures the orientation of the magnetic moments of these domain regions and reads out either a “1” or “0”. These bits are the data being read. Grain size is important in this technology because it limits the number of bits that can fit on one hard disk. The smaller the grain sizes, the more data that can be stored. Because of the dangers of grain boundaries in certain materials such as superalloy turbine blades, great technological leaps were made to minimize as much as possible the effect of grain boundaries in the blades. The result was directional solidification processing in which grain boundaries were eliminated by producing columnar grain structures aligned parallel to the axis of the blade, since this is usually the direction of maximum tensile stress felt by a blade during its rotation in an airplane. The resulting turbine blades consisted of a single grain, improving reliability.
Physical sciences
Solid mechanics
Physics
365558
https://en.wikipedia.org/wiki/Nicotinamide%20adenine%20dinucleotide
Nicotinamide adenine dinucleotide
Nicotinamide adenine dinucleotide (NAD) is a coenzyme central to metabolism. Found in all living cells, NAD is called a dinucleotide because it consists of two nucleotides joined through their phosphate groups. One nucleotide contains an adenine nucleobase and the other, nicotinamide. NAD exists in two forms: an oxidized and a reduced form, abbreviated as NAD+ and NADH (H for hydrogen), respectively. In cellular metabolism, NAD is involved in redox reactions, carrying electrons from one reaction to another, so it is found in two forms: NAD+ is an oxidizing agent, accepting electrons from other molecules and becoming reduced; with H+, this reaction forms NADH, which can be used as a reducing agent to donate electrons. These electron transfer reactions are the main function of NAD. It is also used in other cellular processes, most notably as a substrate of enzymes in adding or removing chemical groups to or from proteins, in posttranslational modifications. Because of the importance of these functions, the enzymes involved in NAD metabolism are targets for drug discovery. In organisms, NAD can be synthesized from simple building-blocks (de novo) from either tryptophan or aspartic acid, both amino acids. Alternatively, more complex components of the coenzymes are taken up from nutritive compounds such as niacin; similar compounds are produced by reactions that break down the structure of NAD, providing a salvage pathway that recycles them back into their respective active form. Some NAD is converted into the coenzyme nicotinamide adenine dinucleotide phosphate (NADP), whose chemistry largely parallels that of NAD, though its predominant role is as a coenzyme in anabolic metabolism. In the name NAD+, the superscripted plus sign indicates the positive formal charge on one of its nitrogen atoms. NADP is a reducing agent in anabolic reactions like the Calvin cycle and lipid and nucleic acid syntheses. NADP exists in two forms: NADP+, the oxidized form, and NADPH, the reduced form. NADP is similar to nicotinamide adenine dinucleotide (NAD), but has a phosphate group at the C-2′ position of the adenosyl ribose. Physical and chemical properties Nicotinamide adenine dinucleotide consists of two nucleosides joined by pyrophosphate. The nucleosides each contain a ribose ring, one with adenine attached to the first carbon atom (the 1' position) (adenosine diphosphate ribose) and the other with nicotinamide at this position. The compound accepts or donates the equivalent of H−. Such reactions (summarized in the formula below) involve the removal of two hydrogen atoms from the reactant (R), in the form of a hydride ion (H−), and a proton (H+). The proton is released into solution, while the reductant RH2 is oxidized and NAD+ reduced to NADH by transfer of the hydride to the nicotinamide ring. RH2 + NAD+ → NADH + H+ + R; From the hydride electron pair, one electron is attracted to the slightly more electronegative nitrogen atom of the nicotinamide ring of NAD+, becoming part of the nicotinamide moiety. The second electron and the proton are transferred to the C4 carbon atom opposite the ring nitrogen. The midpoint potential of the NAD+/NADH redox pair is −0.32 volts, which makes NADH a moderately strong reducing agent. The reaction is easily reversible: NADH reduces another molecule and is re-oxidized to NAD+. This means the coenzyme can continuously cycle between the NAD+ and NADH forms without being consumed.
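As a worked illustration (not part of the original text) of how the midpoint potential quoted above combines with cellular conditions, the half-cell potential of this two-electron couple follows the Nernst equation:

$$E = E_m + \frac{RT}{2F}\ln\frac{[\mathrm{NAD}^+]}{[\mathrm{NADH}]}$$

With E_m = −0.32 V and a free cytoplasmic NAD+/NADH ratio of about 700:1 (a figure given below), the potential at 25 °C shifts by (0.0257 V / 2) × ln 700 ≈ +0.08 V, to roughly −0.24 V.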
In appearance, all forms of this coenzyme are white amorphous powders that are hygroscopic and highly water-soluble. The solids are stable if stored dry and in the dark. Solutions of NAD are colorless and stable for about a week at 4 °C and neutral pH, but decompose rapidly in acidic or alkaline solutions. Upon decomposition, they form products that are enzyme inhibitors. Both NAD and NADH strongly absorb ultraviolet light because of the adenine. For example, peak absorption of NAD+ is at a wavelength of 259 nanometers (nm), with an extinction coefficient of 16,900 M−1cm−1. NADH also absorbs at higher wavelengths, with a second peak in UV absorption at 339 nm with an extinction coefficient of 6,220 M−1cm−1. This difference in the ultraviolet absorption spectra between the oxidized and reduced forms of the coenzymes at higher wavelengths makes it simple to measure the conversion of one to another in enzyme assays – by measuring the amount of UV absorption at 340 nm using a spectrophotometer. NAD and NADH also differ in their fluorescence. Freely diffusing NADH in aqueous solution, when excited at the nicotinamide absorbance of ~335 nm (near-UV), fluoresces at 445–460 nm (violet to blue) with a fluorescence lifetime of 0.4 nanoseconds, while NAD does not fluoresce. The properties of the fluorescence signal change when NADH binds to proteins, so these changes can be used to measure dissociation constants, which are useful in the study of enzyme kinetics. These changes in fluorescence are also used to measure changes in the redox state of living cells, through fluorescence microscopy. NADH can be converted to NAD+ in a reaction catalysed by copper, which requires hydrogen peroxide. Thus, the supply of NAD+ in cells requires mitochondrial copper(II). Concentration and state in cells In rat liver, the total amount of NAD and NADH is approximately 1 μmole per gram of wet weight, about 10 times the concentration of NADP and NADPH in the same cells. The actual concentration of NAD in cell cytosol is harder to measure, with recent estimates in animal cells ranging around 0.3 mM, and approximately 1.0 to 2.0 mM in yeast. However, more than 80% of NADH fluorescence in mitochondria is from the bound form, so the concentration in solution is much lower. NAD concentrations are highest in the mitochondria, constituting 40% to 70% of the total cellular NAD. NAD in the cytosol is carried into the mitochondrion by a specific membrane transport protein, since the coenzyme cannot diffuse across membranes. The intracellular half-life of NAD+ was claimed to be between 1–2 hours by one review, whereas another review gave varying estimates based on compartment: intracellular 1–4 hours, cytoplasmic 2 hours, and mitochondrial 4–6 hours. The balance between the oxidized and reduced forms of nicotinamide adenine dinucleotide is called the NAD/NADH ratio. This ratio is an important component of what is called the redox state of a cell, a measurement that reflects both the metabolic activities and the health of cells. The effects of the NAD/NADH ratio are complex, controlling the activity of several key enzymes, including glyceraldehyde 3-phosphate dehydrogenase and pyruvate dehydrogenase. In healthy mammalian tissues, estimates of the ratio of free NAD to NADH in the cytoplasm typically lie around 700:1; the ratio is thus favorable for oxidative reactions. The ratio of total NAD/NADH is much lower, with estimates ranging from 3–10 in mammals.
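The spectrophotometric enzyme assay described above is a direct application of the Beer–Lambert law; a minimal sketch in Python (function name and example values are illustrative assumptions, not from the article):

```python
# Estimate NADH concentration from absorbance at 340 nm via the
# Beer-Lambert law, A = epsilon * c * l. NAD+ does not absorb at 340 nm,
# so the reading tracks NADH alone. Extinction coefficient as quoted above.
EPSILON_NADH_340 = 6220.0  # M^-1 cm^-1, NADH near 339-340 nm

def nadh_concentration(absorbance: float, path_cm: float = 1.0) -> float:
    """Return NADH concentration in mol/L for a cuvette of given path length."""
    return absorbance / (EPSILON_NADH_340 * path_cm)

# Example: an A340 reading of 0.311 in a standard 1 cm cuvette
print(nadh_concentration(0.311))  # ~5.0e-05 M, i.e. about 50 micromolar
```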
In contrast, the NADP/NADPH ratio is normally about 0.005, so NADPH is the dominant form of this coenzyme. These different ratios are key to the different metabolic roles of NADH and NADPH. Biosynthesis NAD is synthesized through two metabolic pathways. It is produced either in a de novo pathway from amino acids or in salvage pathways by recycling preformed components such as nicotinamide back to NAD. Although most tissues synthesize NAD by the salvage pathway in mammals, much more de novo synthesis occurs in the liver from tryptophan, and in the kidney and macrophages from nicotinic acid. De novo production Most organisms synthesize NAD from simple components. The specific set of reactions differs among organisms, but a common feature is the generation of quinolinic acid (QA) from an amino acid: either tryptophan (Trp) in animals and some bacteria, or aspartic acid (Asp) in some bacteria and plants. The quinolinic acid is converted to nicotinic acid mononucleotide (NaMN) by transfer of a phosphoribose moiety. An adenylate moiety is then transferred to form nicotinic acid adenine dinucleotide (NaAD). Finally, the nicotinic acid moiety in NaAD is amidated to a nicotinamide (Nam) moiety, forming nicotinamide adenine dinucleotide. In a further step, some NAD is converted into NADP by NAD kinase, which phosphorylates NAD. In most organisms, this enzyme uses adenosine triphosphate (ATP) as the source of the phosphate group, although several bacteria, such as Mycobacterium tuberculosis, and the hyperthermophilic archaeon Pyrococcus horikoshii use inorganic polyphosphate as an alternative phosphoryl donor. Salvage pathways Despite the presence of the de novo pathway, the salvage reactions are essential in humans; a lack of niacin in the diet causes the vitamin deficiency disease pellagra. This high requirement for NAD results from the constant consumption of the coenzyme in reactions such as posttranslational modifications, since the cycling of NAD between oxidized and reduced forms in redox reactions does not change the overall levels of the coenzyme. The major source of NAD in mammals is the salvage pathway which recycles the nicotinamide produced by enzymes utilizing NAD. The first step, and the rate-limiting enzyme in the salvage pathway, is nicotinamide phosphoribosyltransferase (NAMPT), which produces nicotinamide mononucleotide (NMN). NMN is the immediate precursor to NAD+ in the salvage pathway. Besides assembling NAD de novo from simple amino acid precursors, cells also salvage preformed compounds containing a pyridine base. The three vitamin precursors used in these salvage metabolic pathways are nicotinic acid (NA), nicotinamide (Nam) and nicotinamide riboside (NR). These compounds can be taken up from the diet and are termed vitamin B3 or niacin. However, these compounds are also produced within cells and by digestion of cellular NAD. Some of the enzymes involved in these salvage pathways appear to be concentrated in the cell nucleus, which may compensate for the high level of reactions that consume NAD in this organelle. There are some reports that mammalian cells can take up extracellular NAD from their surroundings, and both nicotinamide and nicotinamide riboside can be absorbed from the gut. The salvage pathways used in microorganisms differ from those of mammals.
Some pathogens, such as the yeast Candida glabrata and the bacterium Haemophilus influenzae are NAD auxotrophs – they cannot synthesize NAD – but possess salvage pathways and thus are dependent on external sources of NAD or its precursors. Even more surprising is the intracellular pathogen Chlamydia trachomatis, which lacks recognizable candidates for any genes involved in the biosynthesis or salvage of both NAD and NADP, and must acquire these coenzymes from its host. Functions Nicotinamide adenine dinucleotide has several essential roles in metabolism. It acts as a coenzyme in redox reactions, as a donor of ADP-ribose moieties in ADP-ribosylation reactions, as a precursor of the second messenger molecule cyclic ADP-ribose, as well as acting as a substrate for bacterial DNA ligases and a group of enzymes called sirtuins that use NAD to remove acetyl groups from proteins. In addition to these metabolic functions, NAD+ emerges as an adenine nucleotide that can be released from cells spontaneously and by regulated mechanisms, and can therefore have important extracellular roles. Oxidoreductase binding of NAD The main role of NAD in metabolism is the transfer of electrons from one molecule to another. Reactions of this type are catalyzed by a large group of enzymes called oxidoreductases. The correct names for these enzymes contain the names of both their substrates: for example NADH-ubiquinone oxidoreductase catalyzes the oxidation of NADH by coenzyme Q. However, these enzymes are also referred to as dehydrogenases or reductases, with NADH-ubiquinone oxidoreductase commonly being called NADH dehydrogenase or sometimes coenzyme Q reductase. There are many different superfamilies of enzymes that bind NAD / NADH. One of the most common superfamilies includes a structural motif known as the Rossmann fold. The motif is named after Michael Rossmann, who was the first scientist to notice how common this structure is within nucleotide-binding proteins. An example of a NAD-binding bacterial enzyme involved in amino acid metabolism that does not have the Rossmann fold is found in Pseudomonas syringae pv. tomato. When bound in the active site of an oxidoreductase, the nicotinamide ring of the coenzyme is positioned so that it can accept a hydride from the other substrate. Depending on the enzyme, the hydride donor is positioned either "above" or "below" the plane of the planar C4 carbon. Class A oxidoreductases transfer the atom from above; class B enzymes transfer it from below. Since the C4 carbon that accepts the hydrogen is prochiral, this can be exploited in enzyme kinetics to give information about the enzyme's mechanism. This is done by mixing an enzyme with a substrate that has deuterium atoms substituted for the hydrogens, so the enzyme will reduce NAD by transferring deuterium rather than hydrogen. In this case, an enzyme can produce one of two stereoisomers of NADH. Despite the similarity in how proteins bind the two coenzymes, enzymes almost always show a high level of specificity for either NAD or NADP. This specificity reflects the distinct metabolic roles of the respective coenzymes, and is the result of distinct sets of amino acid residues in the two types of coenzyme-binding pocket. For instance, in the active site of NADP-dependent enzymes, an ionic bond is formed between a basic amino acid side-chain and the acidic phosphate group of NADP. Conversely, in NAD-dependent enzymes the charge in this pocket is reversed, preventing NADP from binding.
However, there are a few exceptions to this general rule, and enzymes such as aldose reductase, glucose-6-phosphate dehydrogenase, and methylenetetrahydrofolate reductase can use both coenzymes in some species. Role in redox metabolism The redox reactions catalyzed by oxidoreductases are vital in all parts of metabolism, but one particularly important area where these reactions occur is in the release of energy from nutrients. Here, reduced compounds such as glucose and fatty acids are oxidized, thereby releasing energy. This energy is transferred to NAD by reduction to NADH, as part of beta oxidation, glycolysis, and the citric acid cycle. In eukaryotes the electrons carried by the NADH that is produced in the cytoplasm are transferred into the mitochondrion (to reduce mitochondrial NAD) by mitochondrial shuttles, such as the malate-aspartate shuttle. The mitochondrial NADH is then oxidized in turn by the electron transport chain, which pumps protons across a membrane and generates ATP through oxidative phosphorylation. These shuttle systems also have the same transport function in chloroplasts. Since both the oxidized and reduced forms of nicotinamide adenine dinucleotide are used in these linked sets of reactions, the cell maintains significant concentrations of both NAD and NADH, with the high NAD/NADH ratio allowing this coenzyme to act as both an oxidizing and a reducing agent. In contrast, the main function of NADPH is as a reducing agent in anabolism, with this coenzyme being involved in pathways such as fatty acid synthesis and photosynthesis. Since NADPH is needed to drive redox reactions as a strong reducing agent, the NADP/NADPH ratio is kept very low. Although it is important in catabolism, NADH is also used in anabolic reactions, such as gluconeogenesis. This need for NADH in anabolism poses a problem for prokaryotes growing on nutrients that release only a small amount of energy. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, which releases sufficient energy to pump protons and generate ATP, but not enough to produce NADH directly. As NADH is still needed for anabolic reactions, these bacteria use a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, generating NADH. Non-redox roles The coenzyme NAD is also consumed in ADP-ribose transfer reactions. For example, enzymes called ADP-ribosyltransferases add the ADP-ribose moiety of this molecule to proteins, in a posttranslational modification called ADP-ribosylation. ADP-ribosylation involves either the addition of a single ADP-ribose moiety, in mono-ADP-ribosylation, or the transferral of ADP-ribose to proteins in long branched chains, which is called poly(ADP-ribosyl)ation. Mono-ADP-ribosylation was first identified as the mechanism of a group of bacterial toxins, notably cholera toxin, but it is also involved in normal cell signaling. Poly(ADP-ribosyl)ation is carried out by the poly(ADP-ribose) polymerases. The poly(ADP-ribose) structure is involved in the regulation of several cellular events and is most important in the cell nucleus, in processes such as DNA repair and telomere maintenance. In addition to these functions within the cell, a group of extracellular ADP-ribosyltransferases has recently been discovered, but their functions remain obscure. NAD may also be added onto cellular RNA as a 5'-terminal modification. 
Another function of this coenzyme in cell signaling is as a precursor of cyclic ADP-ribose, which is produced from NAD by ADP-ribosyl cyclases, as part of a second messenger system. This molecule acts in calcium signaling by releasing calcium from intracellular stores. It does this by binding to and opening a class of calcium channels called ryanodine receptors, which are located in the membranes of organelles, such as the endoplasmic reticulum, and inducing the activation of the transcription factor NFATC3. NAD is also consumed by different NAD+-consuming enzymes, such as CD38, CD157, PARPs and the NAD-dependent deacetylases (sirtuins, such as Sir2). These enzymes act by transferring an acetyl group from their substrate protein to the ADP-ribose moiety of NAD; this cleaves the coenzyme and releases nicotinamide and O-acetyl-ADP-ribose. The sirtuins mainly seem to be involved in regulating transcription through deacetylating histones and altering nucleosome structure. However, non-histone proteins can be deacetylated by sirtuins as well. These activities of sirtuins are particularly interesting because of their importance in the regulation of aging. Other NAD-dependent enzymes include bacterial DNA ligases, which join two DNA ends by using NAD as a substrate to donate an adenosine monophosphate (AMP) moiety to the 5' phosphate of one DNA end. This intermediate is then attacked by the 3' hydroxyl group of the other DNA end, forming a new phosphodiester bond. This contrasts with eukaryotic DNA ligases, which use ATP to form the DNA-AMP intermediate. Li et al. have found that NAD directly regulates protein-protein interactions. They also show that one of the causes of age-related decline in DNA repair may be increased binding of the protein DBC1 (Deleted in Breast Cancer 1) to PARP1 (poly[ADP–ribose] polymerase 1) as NAD levels decline during aging. The decline in cellular concentrations of NAD during aging likely contributes to the aging process and to the pathogenesis of the chronic diseases of aging. Thus, the modulation of NAD may protect against cancer, radiation, and aging. Extracellular actions of NAD+ In recent years, NAD+ has also been recognized as an extracellular signaling molecule involved in cell-to-cell communication. NAD+ is released from neurons in blood vessels, urinary bladder, large intestine, from neurosecretory cells, and from brain synaptosomes, and is proposed to be a novel neurotransmitter that transmits information from nerves to effector cells in smooth muscle organs. In plants, extracellular nicotinamide adenine dinucleotide induces resistance to pathogen infection and the first extracellular NAD receptor has been identified. Further studies are needed to determine the underlying mechanisms of its extracellular actions and their importance for human health and life processes in other organisms. Clinical significance The enzymes that make and use NAD and NADH are important in both pharmacology and the research into future treatments for disease. Drug design and drug development exploits NAD in three ways: as a direct target of drugs, by designing enzyme inhibitors or activators based on its structure that change the activity of NAD-dependent enzymes, and by trying to inhibit NAD biosynthesis. Because cancer cells utilize increased glycolysis, and because NAD enhances glycolysis, nicotinamide phosphoribosyltransferase (the NAD salvage pathway enzyme) is often amplified in cancer cells.
NAD has been studied for its potential use in the therapy of neurodegenerative diseases such as Alzheimer's and Parkinson's disease as well as multiple sclerosis. A placebo-controlled clinical trial of NADH (which excluded NADH precursors) in people with Parkinson's failed to show any effect. NAD is also a direct target of the drug isoniazid, which is used in the treatment of tuberculosis, an infection caused by Mycobacterium tuberculosis. Isoniazid is a prodrug and once it has entered the bacteria, it is activated by a peroxidase enzyme, which oxidizes the compound into a free radical form. This radical then reacts with NADH to produce adducts that are very potent inhibitors of the enzymes enoyl-acyl carrier protein reductase and dihydrofolate reductase. Since many oxidoreductases use NAD and NADH as substrates, and bind them using a highly conserved structural motif, the idea that inhibitors based on NAD could be specific to one enzyme is surprising. However, this can be possible: for example, inhibitors based on the compounds mycophenolic acid and tiazofurin inhibit IMP dehydrogenase at the NAD binding site. Because of the importance of this enzyme in purine metabolism, these compounds may be useful as anti-cancer, anti-viral, or immunosuppressive drugs. Other drugs are not enzyme inhibitors, but instead activate enzymes involved in NAD metabolism. Sirtuins are a particularly interesting target for such drugs, since activation of these NAD-dependent deacetylases extends lifespan in some animal models. Compounds such as resveratrol increase the activity of these enzymes, which may be important in their ability to delay aging in both vertebrate and invertebrate model organisms. In one experiment, mice given NAD for one week had improved nuclear-mitochondrial communication. Because of the differences in the metabolic pathways of NAD biosynthesis between organisms, such as between bacteria and humans, this area of metabolism is a promising area for the development of new antibiotics. For example, the enzyme nicotinamidase, which converts nicotinamide to nicotinic acid, is a target for drug design, as this enzyme is absent in humans but present in yeast and bacteria. In bacteriology, NAD, sometimes referred to as factor V, is used as a supplement to culture media for some fastidious bacteria. History The coenzyme NAD was first discovered by the British biochemists Arthur Harden and William John Young in 1906. They noticed that adding boiled and filtered yeast extract greatly accelerated alcoholic fermentation in unboiled yeast extracts. They called the unidentified factor responsible for this effect a coferment. Through a long and difficult purification from yeast extracts, this heat-stable factor was identified as a nucleotide sugar phosphate by Hans von Euler-Chelpin. In 1936, the German scientist Otto Heinrich Warburg showed the function of the nucleotide coenzyme in hydride transfer and identified the nicotinamide portion as the site of redox reactions. Vitamin precursors of NAD were first identified in 1938, when Conrad Elvehjem showed that liver has an "anti-black tongue" activity in the form of nicotinamide. Then, in 1939, he provided the first strong evidence that niacin is used to synthesize NAD. In the early 1940s, Arthur Kornberg was the first to detect an enzyme in the biosynthetic pathway. In 1949, the American biochemists Morris Friedkin and Albert L.
Lehninger proved that NADH linked metabolic pathways such as the citric acid cycle with the synthesis of ATP in oxidative phosphorylation. In 1958, Jack Preiss and Philip Handler discovered the intermediates and enzymes involved in the biosynthesis of NAD; salvage synthesis from nicotinic acid is termed the Preiss-Handler pathway. In 2004, Charles Brenner and co-workers uncovered the nicotinamide riboside kinase pathway to NAD. The non-redox roles of NAD(P) were discovered later. The first to be identified was the use of NAD as the ADP-ribose donor in ADP-ribosylation reactions, observed in the early 1960s. Studies in the 1980s and 1990s revealed the activities of NAD and NADP metabolites in cell signaling – such as the action of cyclic ADP-ribose, which was discovered in 1987. The metabolism of NAD remained an area of intense research into the 21st century, with interest heightened after the discovery of the NAD-dependent protein deacetylases called sirtuins in 2000, by Shin-ichiro Imai and coworkers in the laboratory of Leonard P. Guarente. In 2009 Imai proposed the "NAD World" hypothesis that key regulators of aging and longevity in mammals are sirtuin 1 and the primary NAD synthesizing enzyme nicotinamide phosphoribosyltransferase (NAMPT). In 2016 Imai expanded his hypothesis to "NAD World 2.0", which postulates that extracellular NAMPT from adipose tissue maintains NAD in the hypothalamus (the control center) in conjunction with myokines from skeletal muscle cells. In 2018, Napa Therapeutics was formed to develop drugs against a novel aging-related target based on the research in NAD metabolism conducted in the lab of Eric Verdin.
Biology and health sciences
Coenzymes
Biology
365765
https://en.wikipedia.org/wiki/Machining
Machining
Machining is a manufacturing process where a desired shape or part is created using the controlled removal of material, most often metal, from a larger piece of raw material by cutting. Machining is a form of subtractive manufacturing, which utilizes machine tools, in contrast to additive manufacturing (e.g. 3D printing), which uses controlled addition of material. Machining is a major process of the manufacture of many metal products, but it can also be used on other materials such as wood, plastic, ceramic, and composites. A person who specializes in machining is called a machinist. As a commercial venture, machining is generally performed in a machine shop, which consists of one or more workrooms containing primary machine tools. Although a machine shop can be a standalone operation, many businesses maintain internal machine shops or tool rooms that support their specialized needs. Much modern-day machining uses computer numerical control (CNC), in which computers control the movement and operation of mills, lathes, and other cutting machines. History and terminology The precise meaning of the term machining has changed over the past one and a half centuries as technology has advanced in a number of ways. In the 18th century, the word machinist meant a person who built or repaired machines. This person's work was primarily done by hand, using processes such as the carving of wood and the hand-forging and hand-filing of metal. At the time, millwrights and builders of new kinds of engines (meaning, more or less, machines of any kind), such as James Watt or John Wilkinson, would fit the definition. The noun machine tool and the verb to machine (machined, machining) did not yet exist. Around the middle of the 19th century, the latter words were coined as the concepts they described evolved into widespread existence. Therefore, during the Machine Age, machining referred to (what we today might call) the "traditional" machining processes, such as turning, boring, drilling, milling, broaching, sawing, shaping, planing, abrasive cutting, reaming, and tapping. In these "traditional" or "conventional" machining processes, machine tools, such as lathes, milling machines, drill presses, or others, are used with a sharp cutting tool to remove material to achieve a desired geometry. Since the advent of new technologies in the post–World War II era, such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, the retronym "conventional machining" can be used to differentiate those classic technologies from the newer ones. Currently, "machining" without qualification usually implies the traditional machining processes. In the decades of the 2000s and 2010s, as additive manufacturing (AM) evolved beyond its earlier laboratory and rapid prototyping contexts and began to become standard throughout all phases of manufacturing, the term subtractive manufacturing became common retronymously in logical contrast with AM, covering essentially any removal processes also previously covered by the term machining. The two terms are effectively synonymous, although the long-established usage of the term machining continues. This is comparable to the idea that the verb sense of contact evolved because of the proliferation of ways to contact someone (telephone, email, IM, SMS, and so on) but did not entirely replace the earlier terms such as call, talk to, or write to.
Machining operations Machining is any process in which a cutting tool removes material from the workpiece (the workpiece is often called the "work"). Relative motion is required in traditional machining between the tool and the work to remove material; non-traditional machining processes use other methods of material removal, such as electric current in EDM (electro-discharge machining). This relative motion is achieved in most machining operations by moving (by rotary or linear motion) either the tool or the workpiece. The shape of the tool, the relative motion, and its penetration into the work produce the desired shape of the resulting work surface. Machining operations can be broken down into traditional and non-traditional operations. Within the traditional operations, there are two categories of machining based on the shape they produce: circular shapes, which include turning, boring, drilling, reaming, threading and more; and various/straight shapes, which include milling, broaching, sawing, grinding and shaping. Cutting tool A cutting tool has one or more sharp cutting edges and is made of a harder material than the work material. The cutting edge serves to separate the chip from the parent work material. Connected to the cutting edge are the two surfaces of the tool: the rake face and the flank. The rake face, which directs the flow of the newly formed chip, is oriented at a certain angle, called the rake angle α, measured relative to the plane perpendicular to the work surface. The rake angle can be positive or negative. The flank of the tool provides a clearance between the tool and the newly formed work surface, thus protecting the surface from abrasion, which would degrade the finish. This angle between the work and flank surfaces is called the relief angle. There are two basic types of cutting tools: single-point tools and multiple-cutting-edge tools. A single-point tool has one cutting edge and is used for turning, boring, and planing. During machining, the tool's point penetrates below the original work surface of the part. The point is sometimes rounded to a certain radius, called the nose radius. Multiple-cutting-edge tools have more than one cutting edge and usually achieve their motion relative to the work part by rotating. Drilling and milling use rotating multiple-cutting-edge tools. Although the shapes of these tools are different from a single-point tool, many elements of tool geometry are similar. Traditional machining Circular machining operations Turning operations involve rotating the workpiece against a non-rotating cutting tool that is moved into the workpiece. The rotation of the workpiece is the method of producing a relative motion against the tool. Lathes are the principal machine tool used in turning. Boring involves the machining of an internal surface of a hole to increase its diameter; this can be performed either by turning the workpiece on a lathe (also called internal turning), or on a mill, where a tool is rotated around the circumference of the hole. Drilling operations are those in which holes are produced or refined by a rotating cutting tool (often a drill bit) with cutting edges on its lower face and edge that is brought into contact axially with the workpiece. Drilling operations can be performed on a lathe, mill or drill press, or even by hand.
Threading or tapping involves the cutting of a defined helix (thread) into a hole (tapping or threading), or onto a shaft (threading), with a constant pitch and a specific geometry designed to accept the opposite thread, so that the two objects engage in a turning motion to fasten items together (e.g. a nut and bolt). Various shape machining Sawing aims to create smaller cut lengths of bar stock material, using a saw or cut-off machine that passes a spinning (circular saw) or linear (band saw) toothed blade against the material to cut a kerf (thickness) from the material until it is cut in two. Depending on the material, a certain blade speed (in metres per minute, or feet per minute), measured as the linear speed of the teeth, may be required, ranging from as low as 200 to around 1,000 feet per minute. Milling operations are operations in which a cutting tool with cutting edges along its cylindrical face is brought against a workpiece to remove material in the profile of the spinning tool's shaft and lower edge. Milling machines are the principal machine tool used in milling. Advanced CNC machines may combine lathe and milling operations. Broaching can refer to two operations: linear broaching, where a multi-toothed tool is pressed through a hole to cut a desired shape (e.g. a spline, square, or hex shape) or along a surface, taking increasingly larger cuts through the increasing-sized teeth of the broach; or rotary broaching, where a drafted tool is rotated in a special toolholder that rocks the tool around an offset axis, the tool and workpiece being mated together during machining in order to cut the desired shape. When performed in a lathe, the workpiece and cutting tool rotate together while the toolholder remains static in the tailstock; when milling, the cutting tool stops once in contact with the workpiece, only rocking around the offset axis, with the toolholder rotating in the mill. Shaping operations are those which remove material from a workpiece through the linear movement of a non-rotating cutting tool that is pushed along the surface of a workpiece and designed to cut flat geometry. A shaper often uses high-speed steel tooling similar in shape and geometry to lathe tooling. Shaping is similar to turning, but along a linear axis as opposed to a circular one. Shaping operations are performed using a shaper machine, which strokes back and forth but cuts only in one direction. A clapper box is used to raise the tool up from the workpiece so that it can move backwards. Grinding operations involve passing a fast-moving/rotating abrasive material, such as stone, aluminium oxide, or diamond, against a workpiece to remove material by grinding it away using the abrasive surface of the tool. Non-traditional machining Non-traditional processes include plasma beam machining, waterjet machining, and electrical discharge machining. Waterjet machining involves the cutting of a workpiece by use of a jet of water (usually combined with an abrasive material like garnet) to cut all the way through the thickness of the workpiece. A waterjet cutter may be 2-axis, to produce 2-dimensional shapes, or 5-axis, to produce almost any 3-dimensional shape. Electrical discharge machining (EDM) operations involve the removal of material from a workpiece using an electrically charged metal rod, or wire (wire EDM), that vaporizes the material from the workpiece. This may be used to machine holes, or to cut out a specific shape from another piece.
An advantage of EDM is that it can have a very small kerf, and the wire can be passed through a hole, allowing intricate shapes to be cut from a piece without cutting through the edge of the workpiece, allowing the machining of a plug and socket that fit together perfectly. An unfinished workpiece requiring machining must have some material cut away to create a finished product. A finished product would be a workpiece that meets the specifications set out for that workpiece by engineering drawings or blueprints. For example, a workpiece may require a specific outside diameter. A lathe is a machine tool that can create that diameter by rotating a metal workpiece so that a cutting tool can cut metal away, creating a smooth, round surface matching the required diameter and surface finish. A drill can remove the metal in the shape of a cylindrical hole. Other tools that may be used for metal removal are milling machines, saws, and grinding machines. Many of these same techniques are used in woodworking. Machining requires attention to many details for a workpiece to meet the specifications in the engineering drawings or blueprints. Besides the obvious problems related to correct dimensions, there is the problem of achieving the right finish or surface smoothness on the workpiece. An inferior finish found on the machined surface of a workpiece may be caused by incorrect clamping, a dull tool, or inappropriate presentation of the tool. Frequently, this poor surface finish, known as chatter, is evident by an undulating or regular finish of waves on the machined surfaces of the workpiece. Cutting conditions Relative motion is required between the tool and work to perform a machining operation. The primary motion is the cutting motion, performed at a specific cutting speed. In addition, the tool must be moved laterally across the work. This is a much slower motion, called the feed. The remaining dimension of the cut is the penetration of the cutting tool below the original work surface, called the depth of cut. Speed, feed, and depth of cut are called the cutting conditions. They form the three dimensions of the machining process, and for certain operations their product can be used to obtain the material removal rate for the process: RMR = v · f · d, where RMR is the material removal rate in mm³/s (in³/min), v is the cutting speed in mm/s (in/min), f is the feed in mm (in), and d is the depth of cut in mm (in). Note: all units must be converted to the corresponding decimal (or USCU) units. Stages in metal cutting Machining operations usually divide into two categories, distinguished by purpose and cutting conditions: roughing cuts and finishing cuts. Roughing cuts are used to remove a large amount of material from the starting work part as rapidly as possible, i.e., with a significant material removal rate (MRR), to produce a shape close to the desired form but leaving some material on the piece for a subsequent finishing operation. Finishing cuts complete the part and achieve the final dimension, tolerances, and surface finish. In production machining jobs, one or more roughing cuts are usually performed on the work, followed by one or two finishing cuts. Roughing operations are done at high feeds and depths – feeds of 0.4–1.25 mm/rev (0.015–0.050 in/rev) and depths of 2.5–20 mm (0.100–0.750 in) are typical, but actual values depend on the workpiece materials. Finishing operations are carried out at low feeds and depths – feeds of 0.0125–0.04 mm/rev (0.0005–0.0015 in/rev) and depths of 0.75–2.0 mm (0.030–0.075 in) are typical.
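A minimal sketch (illustrative values, not from the text) of the material removal rate relation given above:

```python
# Material removal rate for a single-point cut, RMR = v * f * d,
# with v in mm/s (cutting speed), f in mm (feed per revolution) and
# d in mm (depth of cut), giving RMR in mm^3/s.
def material_removal_rate(v_mm_s: float, f_mm: float, d_mm: float) -> float:
    """Return the material removal rate in mm^3/s."""
    return v_mm_s * f_mm * d_mm

# A roughing cut: v = 1500 mm/s (90 m/min), f = 0.5 mm/rev, d = 3.0 mm
print(material_removal_rate(1500.0, 0.5, 3.0))  # 2250.0 mm^3/s
```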
Cutting speeds are lower in roughing than in finishing. A cutting fluid is often applied to the machining operation to cool and lubricate the cutting tool. Determining whether a cutting fluid should be used and, if so, choosing the proper cutting fluid is usually included within the scope of the cutting conditions. Today other forms of metal cutting are becoming increasingly popular. An example of this is water jet cutting. Water jet cutting involves pressurizing water to over 620 MPa (90,000 psi) and can cut metal and produce a finished product. This process is called cold cutting; it eliminates the damage caused by a heat-affected zone, as opposed to laser and plasma cutting. Relationship of subtractive and additive techniques With the recent proliferation of additive manufacturing technologies, conventional machining has been retronymously classified, in thought and language, as a subtractive manufacturing method. In narrow contexts, additive and subtractive methods may compete with each other. In the broad context of entire industries, their relationship is complementary. Each method has its advantages over the other. While additive manufacturing methods can produce very intricate prototype designs impossible to replicate by machining, strength and material selection may be limited.
Technology
Metallurgy
null
365876
https://en.wikipedia.org/wiki/Distribution%20function%20%28physics%29
Distribution function (physics)
In molecular kinetic theory in physics, a system's distribution function is a function of seven variables, f(x, y, z, t; v_x, v_y, v_z), which gives the number of particles per unit volume in single-particle phase space. It is the number of particles per unit volume having approximately the velocity (v_x, v_y, v_z) near the position (x, y, z) and time t. The usual normalization of the distribution function is

$$N = \int n \,\mathrm{d}^3r = \iint f \,\mathrm{d}^3r \,\mathrm{d}^3v,$$

where N is the total number of particles and n is the number density of particles – the number of particles per unit volume, or the density divided by the mass of individual particles. A distribution function may be specialised with respect to a particular set of dimensions. E.g. take the quantum mechanical six-dimensional phase space, and multiply by the total space volume, to give the momentum distribution, i.e. the number of particles in the momentum phase space having approximately the momentum (p_x, p_y, p_z). Particle distribution functions are often used in plasma physics to describe wave–particle interactions and velocity-space instabilities. Distribution functions are also used in fluid mechanics, statistical mechanics and nuclear physics. The basic distribution function uses the Boltzmann constant k and temperature T with the number density n to modify the normal distribution:

$$f = n \left( \frac{m}{2\pi kT} \right)^{3/2} \exp\!\left( -\frac{m v^2}{2kT} \right).$$

Related distribution functions may allow bulk fluid flow, in which case the velocity origin is shifted, so that the exponent's numerator is m(\mathbf{v} - \mathbf{u})^2, where \mathbf{u} is the bulk velocity of the fluid. Distribution functions may also feature non-isotropic temperatures, in which each term in the exponent is divided by a different temperature. Plasma theories such as magnetohydrodynamics may assume the particles to be in thermodynamic equilibrium. In this case, the distribution function is Maxwellian. This distribution function allows fluid flow and different temperatures in the directions parallel to, and perpendicular to, the local magnetic field. More complex distribution functions may also be used, since plasmas are rarely in thermal equilibrium. The mathematical analogue of a distribution is a measure; the time evolution of a measure on a phase space is the topic of study in dynamical systems.
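A minimal numerical sketch of the Maxwellian distribution function reconstructed above (the constants and the choice of electrons are illustrative assumptions, not from the article):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837015e-31  # electron mass, kg (illustrative species choice)

def maxwellian(n, T, vx, vy, vz, m=M_E):
    """Evaluate f(v) = n (m / 2 pi k T)^(3/2) exp(-m v^2 / 2 k T),
    the number of particles per unit volume of six-dimensional phase
    space, for number density n (m^-3) and temperature T (K)."""
    v2 = vx * vx + vy * vy + vz * vz
    norm = n * (m / (2.0 * math.pi * K_B * T)) ** 1.5
    return norm * math.exp(-m * v2 / (2.0 * K_B * T))

# Example: a 1e18 m^-3 plasma at 1e4 K, evaluated at the velocity-space origin
print(maxwellian(1e18, 1e4, 0.0, 0.0, 0.0))
```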
Physical sciences
Statistical mechanics
Physics
365938
https://en.wikipedia.org/wiki/Germline
Germline
In biology and genetics, the germline is the population of a multicellular organism's cells that develop into germ cells. In other words, they are the cells that form gametes (eggs and sperm), which can come together to form a zygote. They differentiate in the gonads from primordial germ cells into gametogonia, which develop into gametocytes, which develop into the final gametes. This process is known as gametogenesis. Germ cells pass on genetic material through the process of sexual reproduction. This includes fertilization, recombination and meiosis. These processes help to increase genetic diversity in offspring. Certain organisms reproduce asexually via processes such as apomixis, parthenogenesis, autogamy, and cloning. Apomixis and parthenogenesis both refer to the development of an embryo without fertilization. The former typically occurs in plant seeds, while the latter tends to be seen in nematodes, as well as certain species of reptiles, birds, and fish. Autogamy is a term used to describe self-pollination in plants. Cloning is a technique used to create genetically identical cells or organisms. In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this definition, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but changes in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo. In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, as August Weismann proposed, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indefinitely. However, it is now known in some detail that this distinction between somatic and germ cells is partly artificial and depends on particular circumstances and internal cellular mechanisms such as telomeres and controls such as the selective application of telomerase in germ cells, stem cells and the like. Not all multicellular organisms differentiate into somatic and germ lines, but in the absence of specialised technical human intervention practically all but the simplest multicellular structures do so. In such organisms somatic cells tend to be practically totipotent, and for over a century sponge cells have been known to reassemble into new sponges after having been separated by forcing them through a sieve. Germline can refer to a lineage of cells spanning many generations of individuals – for example, the germline that links any living individual to the hypothetical last universal common ancestor, from which all plants and animals descend. Evolution Plants and basal metazoans such as sponges (Porifera) and corals (Anthozoa) do not sequester a distinct germline, generating gametes from multipotent stem cell lineages that also give rise to ordinary somatic tissues. It is therefore likely that germline sequestration first evolved in complex animals with sophisticated body plans, i.e. bilaterians. There are several theories on the origin of the strict germline-soma distinction.
Setting aside an isolated germ cell population early in embryogenesis might promote cooperation between the somatic cells of a complex multicellular organism. Another recent theory suggests that early germline sequestration evolved to limit the accumulation of deleterious mutations in mitochondrial genes in complex organisms with high energy requirements and fast mitochondrial mutation rates. DNA damage, mutation and repair Reactive oxygen species (ROS) are produced as byproducts of metabolism. In germline cells, ROS are likely a significant cause of DNA damages that, upon DNA replication, lead to mutations. 8-Oxoguanine, an oxidized derivative of guanine, is produced by spontaneous oxidation in the germline cells of mice, and during the cell's DNA replication causes GC to TA transversion mutations. Such mutations occur throughout the mouse chromosomes as well as during different stages of gametogenesis. The mutation frequencies for cells in different stages of gametogenesis are about 5 to 10-fold lower than in somatic cells, both for spermatogenesis and oogenesis. The lower frequencies of mutation in germline cells compared to somatic cells appear to be due to more efficient DNA repair of DNA damages, particularly homologous recombinational repair, during germline meiosis. Among humans, about five percent of live-born offspring have a genetic disorder, and of these, about 20% are due to newly arisen germline mutations. Epigenetic alterations Epigenetic alterations of DNA include modifications that affect gene expression, but are not caused by changes in the sequence of bases in DNA. A well-studied example of such an alteration is the methylation of DNA cytosine to form 5-methylcytosine. This usually occurs in the DNA sequence CpG, changing the DNA at the CpG site from CpG to 5-mCpG. Methylation of cytosines in CpG sites in promoter regions of genes can reduce or silence gene expression. About 28 million CpG dinucleotides occur in the human genome, and about 24 million CpG sites in the mouse genome (which is 86% as large as the human genome). In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-mCpG). In the mouse, by days 6.25 to 7.25 after fertilization of an egg by a sperm, cells in the embryo are set aside as primordial germ cells (PGCs). These PGCs will later give rise to germline sperm cells or egg cells. At this point the PGCs typically have high levels of methylation. The primordial germ cells of the mouse then undergo genome-wide DNA demethylation, followed by subsequent new methylation to reset the epigenome in order to form an egg or sperm. In the mouse, PGCs undergo DNA demethylation in two phases. The first phase, starting at about embryonic day 8.5, occurs during PGC proliferation and migration, and it results in genome-wide loss of methylation, involving almost all genomic sequences. This loss of methylation occurs through passive demethylation due to repression of the major components of the methylation machinery. The second phase occurs during embryonic days 9.5 to 13.5 and causes demethylation of most remaining specific loci, including germline-specific and meiosis-specific genes. This second phase of demethylation is mediated by the TET enzymes TET1 and TET2, which carry out the first step in demethylation by converting 5-mC to 5-hydroxymethylcytosine (5-hmC) during embryonic days 9.5 to 10.5. This is likely followed by replication-dependent dilution during embryonic days 11.5 to 13.5.
At embryonic day 13.5, PGC genomes display the lowest level of global DNA methylation of all cells in the life cycle. In the mouse, the great majority of differentially expressed genes in PGCs from embryonic day 9.5 to 13.5, when most genes are demethylated, are upregulated in both male and female PGCs. Following erasure of DNA methylation marks in mouse PGCs, male and female germ cells undergo new methylation at different time points during gametogenesis. While undergoing mitotic expansion in the developing gonad, the male germline starts the re-methylation process by embryonic day 14.5. The sperm-specific methylation pattern is maintained during mitotic expansion. DNA methylation levels in primary oocytes before birth remain low, and re-methylation occurs after birth in the oocyte growth phase.
Biology and health sciences
Biological reproduction
Biology
366112
https://en.wikipedia.org/wiki/Chikungunya
Chikungunya
Chikungunya is an infection caused by the Alphavirus chikungunya (CHIKV). The disease was first identified in 1952 in Tanzania and named based on the Kimakonde words for "to become contorted". Symptoms include fever and joint pain. These typically occur two to twelve days after exposure. Other symptoms may include headache, muscle pain, joint swelling, and a rash. Symptoms usually improve within a week; however, occasionally the joint pain may last for months or years. The risk of death is around 1 in 1,000. The very young, old, and those with other health problems are at risk of more severe disease. The virus is spread between people by two types of mosquitoes: Aedes albopictus and Aedes aegypti, which mainly bite during the day. The virus may circulate within a number of animals, including birds and rodents. Diagnosis is done by either testing the blood for viral RNA or antibodies to the virus. The symptoms can be mistaken for those of dengue fever and Zika fever. It is believed most people become immune after a single infection. The best means of prevention are overall mosquito control and the avoidance of bites in areas where the disease is common. This may be partly achieved by decreasing mosquitoes' access to water, as well as the use of insect repellent and mosquito nets. In November 2023 the USFDA approved an adults-only vaccine (Ixchiq) for prevention of the disease. For those infected and symptomatic, recommendations include rest, fluids, and medications to help with fever and joint pain. In 2014, more than a million suspected cases occurred globally. While the disease is endemic in Africa and Asia, outbreaks have been reported in Europe and the Americas since the 2000s; in 2014, an outbreak was reported in Florida in the continental United States, but as of 2016 there were no further locally-acquired cases. Signs and symptoms Around 85% of people infected with the chikungunya virus experience symptoms, typically beginning with a sudden high fever above 39 °C (102 °F). The fever is soon followed by severe muscle and joint pain. Pain usually affects multiple joints in the arms and legs, and is symmetric – i.e. if one elbow is affected, the other is as well. People with chikungunya also frequently experience headache, back pain, nausea, and fatigue. Around half of those affected develop a rash, with reddening and sometimes small bumps on the palms, foot soles, torso, and face. For some, the rash remains constrained to a small part of the body; for others, the rash can be extensive, covering more than 90% of the skin. Some people experience gastrointestinal issues, with abdominal pain and vomiting. Others experience eye problems, namely sensitivity to light, conjunctivitis, and pain behind the eye. This first set of symptoms – called the "acute phase" of chikungunya – lasts around a week, after which most symptoms resolve on their own. Many people continue to have symptoms after the "acute phase" resolves, termed the "post-acute phase" for symptoms lasting three weeks to three months, and the "chronic stage" for symptoms lasting longer than three months. In both cases, the lasting symptoms tend to be joint pains: arthritis, tenosynovitis, and/or bursitis. If the affected person has pre-existing joint issues, these tend to worsen. Overuse of a joint can result in painful swelling, stiffness, nerve damage, and neuropathic pain. Typically the joint pain improves with time; however, the chronic stage can last anywhere from a few months to several years.
Joint pain is reported in 87–98% of cases, and nearly always occurs in more than one joint, though joint swelling is uncommon. Typically the affected joints are located in both arms and legs. Joints are more likely to be affected if they have previously been damaged by disorders such as arthritis. Pain most commonly occurs in peripheral joints, such as the wrists, ankles, and joints of the hands and feet, as well as some of the larger joints, typically the shoulders, elbows, and knees. Pain may also occur in the muscles or ligaments. In more than half of cases, normal activity is limited by significant fatigue and pain. Infrequently, inflammation of the eyes may occur, in the form of iridocyclitis or uveitis, and retinal lesions may occur. Temporary damage to the liver may occur. People with chikungunya occasionally develop neurologic disorders, most frequently swelling or degeneration of the brain, inflammation or degeneration of the myelin sheaths around neurons, Guillain–Barré syndrome, acute disseminated encephalomyelitis, hypotonia (in newborns), and issues with visual processing. In particularly rare cases, people may develop behavioral changes, seizures, irritation of the cerebellum or meninges, oculomotor nerve palsy, or paralysis of the eye muscles. Newborns are susceptible to particularly severe effects of chikungunya infection. Signs of infection typically begin with fever, rash, and swelling in the extremities. Around half of newborns have a mild case of the disease that resolves on its own; the other half have severe disease with inflammation of the brain and seizures. In severe cases, affected newborns may also have issues with bleeding and blood flow, and problems with heart function. In addition to newborns, the elderly and those with diabetes, heart disease, liver or kidney disease, or human immunodeficiency virus infection tend to have more severe cases of chikungunya. Around 1 to 5 in 1,000 people with symptomatic chikungunya die of the disease. Cause Virology Chikungunya virus (CHIKV) is a member of the genus Alphavirus and the family Togaviridae. It was first isolated in 1953 in Tanzania and is an RNA virus with a positive-sense single-stranded genome of about 11.6 kb. It is a member of the Semliki Forest virus complex and is closely related to Ross River virus, O'nyong'nyong virus, and Semliki Forest virus. Because it is transmitted by arthropods, namely mosquitoes, it can also be referred to as an arbovirus (arthropod-borne virus). In the United States, it is classified as a category B priority pathogen, and work requires biosafety level III precautions. Three genotypes of this virus have been described, each with a distinct genetic and antigenic character: the West African, East/Central/South African, and Asian genotypes. The Asian lineage originated in 1952 and has subsequently split into two lineages – the Indian (Indian Ocean lineage) and South East Asian clades. This virus was first reported in the Americas in late 2013. Phylogenetic investigations have shown two strains in Brazil – the Asian and East/Central/South African types – and that the Asian strain arrived in the Caribbean (most likely from Oceania) in about March 2013. The rate of molecular evolution was estimated at a mean of 5 × 10⁻⁴ substitutions per site per year (95% highest posterior density 2.9–7.9 × 10⁻⁴). Transmission Chikungunya is generally transmitted from mosquitoes to humans.
Less common modes of transmission include vertical transmission, i.e. transmission from mother to child during pregnancy or at birth. Transmission via infected blood products and through organ donation is also theoretically possible during times of outbreak, though no cases have yet been documented. The incubation period ranges from one to twelve days, and is most typically three to seven. The spread of chikungunya is shaped by mosquitoes, their environments, and human behavior. The adaptation of mosquitoes to the changing climate of North Africa around 5,000 years ago made them seek out environments where humans stored water. Human habitation and the mosquitoes' environments were then very closely connected. During epidemics, humans are the reservoir of the virus. Because high amounts of virus are present in the blood at the beginning of acute infection, the virus can be spread from a viremic human to a mosquito, and back to a human. During other times, monkeys, birds, and other vertebrates have served as reservoirs. Chikungunya is spread through bites from Aedes mosquitoes; the species A. aegypti has been identified as the most common vector, though the virus has recently been associated with many other species, including A. albopictus. Research by the Pasteur Institute in Paris has suggested that chikungunya virus strains in the 2005–2006 Réunion Island outbreak acquired a mutation that facilitated transmission by the Asian tiger mosquito (A. albopictus). Other species potentially able to transmit chikungunya virus include Ae. furcifer-taylori, Ae. africanus, and Ae. luteocephalus. Mechanism Chikungunya virus is passed to humans when a bite from an infected mosquito breaks the skin and introduces the virus into the body. The pathogenesis of chikungunya infection in humans is still poorly understood, despite recent outbreaks. In vitro, chikungunya virus is able to replicate in human epithelial and endothelial cells, primary fibroblasts, and monocyte-derived macrophages. Viral replication is highly cytopathic, but susceptible to type-I and -II interferon. In vivo, that is, in studies in living organisms, chikungunya virus appears to replicate in fibroblasts, skeletal muscle progenitor cells, and myofibers. The type-1 interferon response is important in the host's response to chikungunya infection. Upon infection with chikungunya, the host's fibroblasts produce type-1 alpha and beta interferon (IFN-α and IFN-β). In mouse studies, deficiencies in IFN-1 in mice exposed to the virus cause increased morbidity and mortality. The chikungunya-specific upstream components of the type-1 interferon pathway involved in the host's response to chikungunya infection are still unknown. Nonetheless, mouse studies suggest that IPS-1 is an important factor, and that IRF3 and IRF7 are important in an age-dependent manner. Mouse studies also suggest that chikungunya evades host defenses and counters the type-I interferon response by producing NS2, a nonstructural protein that degrades RBP1 and turns off the host cell's ability to transcribe DNA. NS2 interferes with the JAK-STAT signaling pathway and prevents STAT from becoming phosphorylated. In the acute phase of chikungunya, the virus is typically present in the areas where symptoms present, specifically skeletal muscle and joints. In the chronic phase, it is suggested that viral persistence (the inability of the body to entirely rid itself of the virus), lack of clearance of the antigen, or both contribute to joint pain.
The inflammation response during both the acute and chronic phases of the disease results in part from interactions between the virus and monocytes and macrophages. Chikungunya virus disease in humans is associated with elevated serum levels of specific cytokines and chemokines. High levels of specific cytokines have been linked to more severe acute disease: interleukin-6 (IL-6), IL-1β, RANTES, monocyte chemoattractant protein 1 (MCP-1), monokine induced by gamma interferon (MIG), and interferon gamma-induced protein 10 (IP-10). Cytokines may also contribute to chronic chikungunya virus disease, as persistent joint pain has been associated with elevated levels of IL-6 and granulocyte-macrophage colony-stimulating factor (GM-CSF). In those with chronic symptoms, a mild elevation of C-reactive protein (CRP) has been observed, suggesting ongoing chronic inflammation. However, there is little evidence linking chronic chikungunya virus disease and the development of autoimmunity. Viral replication The virus consists of four nonstructural proteins and three structural proteins. The structural proteins are the capsid and two envelope glycoproteins, E1 and E2, which form heterodimeric spikes on the virion surface. E2 binds to cellular receptors in order to enter the host cell through receptor-mediated endocytosis. E1 contains a fusion peptide which, when exposed to the acidity of the endosome in eukaryotic cells, dissociates from E2 and initiates membrane fusion that allows the release of nucleocapsids into the host cytoplasm, promoting infection. The mature virion contains 240 heterodimeric E2/E1 spikes; after replication, new virions bud from the surface of the infected cell and are released by exocytosis to infect other cells. Diagnosis Chikungunya is diagnosed on the basis of clinical, epidemiological, and laboratory criteria. Clinically, acute onset of high fever and severe joint pain would lead to suspicion of chikungunya. Epidemiological criteria consist of whether the individual has traveled to or spent time in an area in which chikungunya is present within the last twelve days (i.e., the potential incubation period). Laboratory criteria include a decreased lymphocyte count consistent with viremia. However, a definitive laboratory diagnosis can be accomplished through viral isolation, RT-PCR, or serological diagnosis. The differential diagnosis may include other mosquito-borne diseases, such as dengue or malaria, or other infections such as influenza. Chronic recurrent polyarthralgia occurs in at least 20% of chikungunya patients one year after infection, whereas such symptoms are uncommon in dengue. Virus isolation provides the most definitive diagnosis, but takes one to two weeks for completion and must be carried out in biosafety level III laboratories. The technique involves exposing specific cell lines to samples from whole blood and identifying chikungunya virus-specific responses. RT-PCR using nested primer pairs is used to amplify several chikungunya-specific genes from whole blood, generating thousands to millions of copies of the genes to identify them. RT-PCR can also quantify the viral load in the blood. Using RT-PCR, diagnostic results can be available in one to two days. Serological diagnosis requires a larger amount of blood than the other methods, and uses an ELISA assay to measure chikungunya-specific IgM levels in the blood serum.
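As an aside on the RT-PCR step just described: amplification is, to a first approximation, a doubling process, which is what lets a handful of viral RNA copies become the "thousands to millions" the text mentions. A rough sketch of that arithmetic (the cycle counts and efficiency figure are illustrative assumptions, not values from the source):

```python
# Idealized PCR amplification: copies after n cycles = initial * (1 + efficiency)**n.
# efficiency = 1.0 means perfect doubling every cycle (an illustrative assumption).

def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Return the expected copy number after a given number of thermal cycles."""
    return initial_copies * (1 + efficiency) ** cycles

# Starting from a single cDNA copy, ~20 cycles yields about a million copies
# and ~30 cycles about a billion, consistent with the "thousands to millions
# of copies" mentioned above for diagnostic amplification.
for n in (10, 20, 30):
    print(f"{n} cycles: ~{pcr_copies(1, n):,.0f} copies")
```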
Returning to serological diagnosis: one advantage it offers is that serum IgM is detectable from 5 days to months after the onset of symptoms, but drawbacks are that results may require two to three days, and false positives can occur with infection due to other related viruses, such as o'nyong'nyong virus and Semliki Forest virus. Presently, there is no specific way to test for the chronic signs and symptoms associated with chikungunya fever, although nonspecific laboratory findings such as C-reactive protein and elevated cytokines can correlate with disease activity. Prevention Although an approved vaccine exists, the most effective means of prevention are protection against contact with disease-carrying mosquitoes and controlling mosquito populations by limiting their habitat. Mosquito control focuses on eliminating the standing water where mosquitoes lay eggs and develop as larvae; if elimination of the standing water is not possible, insecticides or biological control agents can be added. Methods of protection against contact with mosquitoes include using insect repellents with substances such as DEET, icaridin, PMD (p-menthane-3,8-diol, a substance derived from the lemon eucalyptus tree), or ethyl butylacetylaminopropionate (IR3535). However, increasing insecticide resistance presents a challenge to chemical control methods. Wearing bite-proof long sleeves and trousers also offers protection, and garments can be treated with pyrethroids, a class of insecticides that often has repellent properties. Vaporized pyrethroids (for example in mosquito coils) are also insect repellents. As infected mosquitoes often feed and rest inside homes, securing screens on windows and doors will help to keep mosquitoes out of the house. In the case of the day-active A. aegypti and A. albopictus, however, this will have only a limited effect, since many contacts between the mosquitoes and humans occur outdoors. Vaccination Treatment Currently, no specific treatment for chikungunya is available. Supportive care is recommended, and symptomatic treatment of fever and joint swelling includes the use of nonsteroidal anti-inflammatory drugs such as naproxen, non-aspirin analgesics such as paracetamol (acetaminophen), and fluids. Aspirin is not recommended due to the increased risk of bleeding. Despite their anti-inflammatory effects, corticosteroids are not recommended during the acute phase of disease, as they may cause immunosuppression and worsen infection. Passive immunotherapy has potential benefit in the treatment of chikungunya. Studies in animals using passive immunotherapy have been effective, and clinical studies using passive immunotherapy in those particularly vulnerable to severe infection are currently in progress. Passive immunotherapy involves administration of anti-CHIKV hyperimmune human intravenous antibodies (immunoglobulins) to those exposed to a high risk of chikungunya infection. No antiviral treatment for chikungunya virus is currently available, though testing has shown several medications to be effective in vitro. Chronic arthritis In those who have more than two weeks of arthritis, ribavirin may be useful. The effect of chloroquine is not clear: it does not appear to help acute disease, but tentative evidence indicates it might help those with chronic arthritis. Steroids do not appear to be an effective treatment. NSAIDs and simple analgesics can be used to provide partial symptom relief in most cases.
Methotrexate, a drug used in the treatment of rheumatoid arthritis, has been shown to have a benefit in treating inflammatory polyarthritis resulting from chikungunya, though the drug's mechanism for improving viral arthritis is unclear. Prognosis The mortality rate of chikungunya is slightly less than 1 in 1,000. Those over the age of 65, neonates, and those with underlying chronic medical problems are most likely to have severe complications. Neonates are vulnerable as it is possible to vertically transmit chikungunya from mother to infant during delivery, which results in high rates of morbidity, as infants lack fully developed immune systems. The likelihood of prolonged symptoms or chronic joint pain is increased with increased age and prior rheumatological disease. Epidemiology Historically, chikungunya has been present mostly in the developing world. The disease causes an estimated 3 million infections each year. Epidemics in the Indian Ocean, the Pacific Islands, and the Americas continue to change the distribution of the disease. In Africa, chikungunya is spread by a sylvatic cycle in which the virus largely circulates between non-human primates, small mammals, and mosquitoes between human outbreaks. During outbreaks, due to the high concentration of virus in the blood of those in the acute phase of infection, the virus can circulate from humans to mosquitoes and back to humans. Urban transmission cycles between humans and mosquitoes were established on multiple occasions from strains circulating in non-human primate hosts in the eastern half of Africa. This emergence and spread beyond Africa may have started as early as the 18th century. Currently available data do not indicate whether the introduction of chikungunya into Asia occurred in the 19th century or more recently, but this epidemic Asian strain causes outbreaks in India and continues to circulate in Southeast Asia. In Africa, outbreaks were typically tied to heavy rainfall causing increased mosquito populations. In recent outbreaks in urban centers, the virus has spread by circulating between humans and mosquitoes. Global rates of chikungunya infection are variable, depending on outbreaks. When chikungunya was first identified in 1952, it had a low-level circulation in West Africa, with infection rates linked to rainfall. Beginning in the 1960s, periodic outbreaks were documented in Asia and Africa. However, since 2005, following several decades of relative inactivity, chikungunya has re-emerged and caused large outbreaks in Africa, Asia, and the Americas. In India, for instance, chikungunya re-appeared following 32 years of absence of viral activity. Outbreaks have occurred in Europe, the Caribbean, and South America, areas in which chikungunya was not previously transmitted. Local transmission has also occurred in the United States and Australia, countries in which the virus was previously unknown. In 2005, an outbreak on the island of Réunion was the largest then documented, with an estimated 266,000 cases on an island with a population of approximately 770,000. In a 2006 outbreak, India reported 1.25 million suspected cases. Chikungunya was introduced to the Americas in 2013, first detected on the French island of Saint Martin, and over the next two years 1,118,763 suspected cases and 24,682 confirmed cases in the Americas were reported to the Pan American Health Organization (PAHO).
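A quick back-of-the-envelope calculation with the outbreak figures quoted above gives a sense of scale (the inputs are the estimates in the text, not precise surveillance data):

```python
# Rough attack-rate arithmetic from the outbreak figures in the text.
reunion_cases = 266_000          # estimated cases, 2005 Réunion outbreak
reunion_population = 770_000     # approximate island population

attack_rate = reunion_cases / reunion_population
print(f"Réunion attack rate: ~{attack_rate:.0%}")           # ~35% of the population

# Americas, 2013-2015: suspected vs laboratory-confirmed case counts (PAHO).
suspected, confirmed = 1_118_763, 24_682
print(f"Confirmed fraction: ~{confirmed / suspected:.1%}")  # ~2.2%
```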
An analysis of the genetic code of chikungunya virus suggests that the increased severity of the 2005–present outbreak may be due to a change in the genetic sequence which altered the E1 segment of the viral coat protein, a variant called E1-A226V. This mutation potentially allows the virus to multiply more easily in mosquito cells. The change allows the virus to use the Asian tiger mosquito (an invasive species) as a vector in addition to the more strictly tropical main vector, Aedes aegypti. Enhanced transmission of chikungunya virus by A. albopictus could mean an increased risk for outbreaks in other areas where the Asian tiger mosquito is present. A. albopictus is an invasive species which has spread through Europe, the Americas, the Caribbean, Africa, and the Middle East. After the detection of Zika virus in Brazil in April 2015, the first ever in the Western Hemisphere, it is now thought that some chikungunya and dengue cases could in fact be Zika virus cases or coinfections. History The disease was first described by Marion Robinson and W.H.R. Lumsden in a pair of 1955 papers, following an outbreak in 1952 on the Makonde Plateau, along the border between Mozambique and Tanganyika (the mainland part of modern-day Tanzania). Since then, outbreaks have occurred occasionally in Africa, South Asia, and Southeast Asia; recent outbreaks have spread the disease over a wider range. The first recorded outbreak may have been in 1779. This is in agreement with molecular genetic evidence that suggests the virus evolved around the year 1700. According to the original paper by Lumsden, the term 'chikungunya' is derived from the Makonde root verb kungunyala, meaning to dry up or become contorted. In concurrent research, Robinson glossed the Makonde term more specifically as "that which bends up". It is understood to refer to the contorted posture of people affected with the severe joint pain and arthritic symptoms associated with this disease. Subsequent authors overlooked the references to the Makonde language and assumed the term to have been derived from Swahili, the lingua franca of the region. The erroneous attribution to Swahili has been repeated in numerous print sources. Erroneous spellings of the name of the disease are also in common use. Research Chikungunya is one of more than a dozen agents researched as a potential biological weapon. The disease is part of the group of neglected tropical diseases.
Biology and health sciences
Viral diseases
Health
366168
https://en.wikipedia.org/wiki/Mesoplodon
Mesoplodon
Mesoplodont whales are 16 species of toothed whale in the genus Mesoplodon, making it the largest genus in the cetacean order. Two species were described as recently as 1991 (pygmy beaked whale) and 2002 (Perrin's beaked whale), and marine biologists predict the discovery of more species in the future; a new species, Ramari's beaked whale, was described in 2021. They are the most poorly known group of large mammals. The generic name "mesoplodon" comes from the Greek meso- (middle), hopla (arms), and odon (tooth), and may be translated as 'armed with a tooth in the centre of the jaw'. Physical description Mesoplodont beaked whales are small whales, ranging in length from the pygmy beaked whale at the small end to the strap-toothed whale at the large end, even compared with closely related whales such as the bottlenose whales and giant beaked whales. The spindle-shaped body has a small dorsal fin and short, narrow flippers. The head is small and tapered and has a semicircular blowhole that is sometimes asymmetric. The beak, which varies in length between species, blends with the small melon without a crease. Sexual dimorphism is poorly known, but the females tend to be the same size as or larger than the males, at least in some species. The males typically have bolder coloration and a unique dentition, and the males of most species are covered in scars from the teeth of other males. The lower jaw forms a huge arch in some species, sometimes extending above the rostrum in a shape comparable to a playground slide. Every species has large (sometimes tusk-like) teeth of variable size, shape, and position. Both sexes often bear bite scars from cookie-cutter sharks. The dorsal fin is rather small and located between two-thirds and three-quarters of the way down the back of the animal. Information on longevity and lactation is non-existent, and information on gestation is nearly so. Behavior Most species are very rarely observed, and little is known about their behavior. They are typically found in groups, possibly segregated between sexes. Some species are so uncommon that they have yet to be observed alive. On the surface, they are typically very slow swimmers and do not make obvious blows. They have never been observed raising their flukes above the water. They are all very deep divers, and many feed entirely on squid. Conservation The mesoplodonts are completely unknown as far as population estimates are concerned. They have occasionally been taken by Japanese whalers, but have never been hunted directly. They are also accidentally captured in drift nets. It is not known what effect this has on the populations. Species
Andrews' beaked whale (M. bowdoini) Andrews, 1908
Blainville's beaked whale (M. densirostris) Blainville, 1817
Deraniyagala's beaked whale (M. hotaula) Deraniyagala, 1963
Gervais's beaked whale (M. europaeus) Gervais, 1855
Ginkgo-toothed beaked whale (M. ginkgodens) Nishiwaki and Kamiya, 1958
Gray's beaked whale (M. grayi) von Haast, 1876
Hector's beaked whale (M. hectori) Gray, 1871
Hubbs' beaked whale (M. carlhubbsi) Moore, 1963
Perrin's beaked whale (M. perrini) Dalebout, Mead, Baker, Baker & van Helden, 2002
Pygmy beaked whale (M. peruvianus) Reyes, Mead, and Van Waerebeek, 1991
Ramari's beaked whale (M. eueu) Carroll et al., 2021
Sowerby's beaked whale (M. bidens) Sowerby, 1804
Spade-toothed whale (M. traversii) Gray, 1874
Stejneger's beaked whale (M. stejnegeri) True, 1885
Strap-toothed whale (M. layardii) Gray, 1865
True's beaked whale (M. mirus) True, 1913
Longman's beaked whale (Indopacetus pacificus, also known as the Indo-Pacific beaked whale or the tropical bottlenose whale) was originally assigned to Mesoplodon, but Joseph Curtis Moore placed it in its own genus, Indopacetus, a taxonomic assignment which has been followed by all researchers. Four extinct species of Mesoplodon are known: M. longirostris, M. posti, M. slangkopi, and M. tumidirostris.
Biology and health sciences
Toothed whale
Animals
366273
https://en.wikipedia.org/wiki/Brickwork
Brickwork
Brickwork is masonry produced by a bricklayer, using bricks and mortar. Typically, rows of bricks called courses are laid on top of one another to build up a structure such as a brick wall. Bricks may be differentiated from blocks by size: in the UK, for example, a brick is defined as a unit with all dimensions below a specified maximum, and a block as a unit having one or more dimensions greater than the largest possible brick. Brick is a popular medium for constructing buildings, and examples of brickwork are found throughout history as far back as the Bronze Age. The fired-brick faces of the ziggurat of ancient Dur-Kurigalzu in Iraq date from around 1400 BC, and the brick buildings of ancient Mohenjo-daro in modern-day Pakistan were built around 2600 BC. Much older examples of brickwork made with dried (but not fired) bricks may be found in such ancient locations as Jericho in Palestine, Çatal Höyük in Anatolia, and Mehrgarh in Pakistan. These structures have survived from the Stone Age to the modern day. Brick dimensions are expressed in construction or technical documents in two ways: co-ordinating dimensions and working dimensions. Co-ordinating dimensions are the physical dimensions of the brick plus the mortar allowance on one header face, one stretcher face, and one bed. Working dimensions are the size of the manufactured brick itself, also called its nominal size; the actual size may differ slightly owing to shrinkage or distortion in firing. An example of a co-ordinating metric commonly used for bricks in the UK is as follows: bricks of dimensions 215 mm × 102.5 mm × 65 mm; mortar beds (horizontal) and perpends (vertical) of a uniform 10 mm. In this case the co-ordinating metric works because the length of a single brick (215 mm) is equal to the width of a brick (102.5 mm) plus a perpend (10 mm) plus the width of a second brick (102.5 mm). There are many other brick sizes worldwide, and many of them use this same co-ordinating principle. Terminology As the most common bricks are rectangular prisms, their six surfaces are named as follows: the top and bottom surfaces are called beds; the ends or narrow surfaces are called headers or header faces; the sides or wider surfaces are called stretchers or stretcher faces. Mortar placed between bricks is also given separate names with respect to its position: mortar placed horizontally below or on top of a brick is called a bed, and mortar placed vertically between bricks is called a perpend. A brick made with just rectilinear dimensions is called a solid brick. Bricks might have a depression on both beds or on a single bed. The depression is called a frog, and the bricks are known as frogged bricks. Frogs can be deep or shallow but should never exceed 20% of the total volume of the brick. Cellular bricks have depressions exceeding 20% of the volume of the brick. Perforated bricks have holes passing all the way through the brick from bed to bed. Most building standards and good construction practice recommend that the volume of holes should not exceed 20% of the total volume of the brick. Parts of brickwork include bricks, beds, and perpends. The bed is the mortar upon which a brick is laid. A perpend is a vertical joint between any two bricks and is usually—but not always—filled with mortar.
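The co-ordinating principle just described is easy to check numerically. A minimal sketch using the UK figures from this section (the derived co-ordinating sizes printed at the end are my arithmetic, not figures quoted from the text):

```python
# UK co-ordinating metric from the text: 215 x 102.5 x 65 mm bricks, 10 mm joints.
LENGTH, WIDTH, HEIGHT = 215.0, 102.5, 65.0   # working (manufactured) size, mm
JOINT = 10.0                                 # mortar bed / perpend thickness, mm

# One stretcher should coordinate with two headers plus one perpend:
assert LENGTH == 2 * WIDTH + JOINT           # 215 == 102.5 + 10 + 102.5

# Co-ordinating size = working size + one joint on each co-ordinating face:
coordinating = (LENGTH + JOINT, WIDTH + JOINT, HEIGHT + JOINT)
print(coordinating)                          # (225.0, 112.5, 75.0)
```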
A "face brick" is a higher-quality brick, designed for use in visible external surfaces in face-work, as opposed to a "filler brick" for internal parts of the wall, or where the surface is to be covered with stucco or a similar coating, or where the filler bricks will be concealed by other bricks (in structures more than two bricks thick). Orientation A brick is given a classification based on how it is laid, and how the exposed face is oriented relative to the face of the finished wall. Stretcher or stretching brick A brick laid flat with its long narrow side exposed. Header or heading brick A brick laid flat with its width exposed. Soldier A brick laid vertically with its long narrow side exposed. Sailor A brick laid vertically with the broad face of the brick exposed. Rowlock A brick laid on the long narrow side with the short end of the brick exposed. Shiner or rowlock stretcher A brick laid on the long narrow side with the broad face of the brick exposed. Cut The practice of laying uncut full-sized bricks wherever possible gives brickwork its maximum possible strength. In the diagrams below, such uncut full-sized bricks are coloured as follows: Occasionally though a brick must be cut to fit a given space, or to be the right shape for fulfilling some particular purpose such as generating an offset—called a lap—at the beginning of a course. In some cases these special shapes or sizes are manufactured. In the diagrams below, some of the cuts most commonly used for generating a lap are coloured as follows: A brick cut to three-quarters of its length, and laid flat with its long, narrow side exposed. A brick cut to three-quarters of its length, and laid flat with its short side exposed. A brick cut in half across its length, and laid flat. A brick cut in half down its width, and laid with its smallest face exposed and standing vertically. A queen closer is often used for the purpose of creating a lap. Less frequently used cuts are all coloured as follows: A brick cut to a quarter of its length. A queen closer cut to three-quarters of its length. A brick with one corner cut away, leaving one header face at half its standard width. Bonding A nearly universal rule in brickwork is that perpends should not be contiguous across courses. Walls, running linearly and extending upwards, can be of varying depth or thickness. Typically, the bricks are laid also running linearly and extending upwards, forming wythes or leafs. It is as important as with the perpends to bond these leaves together. Historically, the dominant method for consolidating the leaves together was to lay bricks across them, rather than running linearly. Brickwork observing either or both of these two conventions is described as being laid in one or another bond. Thickness (and leaves) A leaf is as thick as the width of one brick, but a wall is said to be one brick thick if it as wide as the length of a brick. Accordingly, a single-leaf wall is a half brick thickness; a wall with the simplest possible masonry transverse bond is said to be one brick thick, and so on. The thickness specified for a wall is determined by such factors as damp proofing considerations, whether or not the wall has a cavity, load-bearing requirements, expense, and the era during which the architect was or is working. 
Wall thickness specifications vary considerably: while some non-load-bearing brick walls may be as little as half a brick thick, or even less when shiners are laid in stretcher bond in partition walls, other brick walls are much thicker. The Monadnock Building in Chicago, for example, is a very tall masonry building, and has load-bearing brick walls nearly two metres thick at the base. The majority of brick walls are, however, usually between one and three bricks thick. At these more modest wall thicknesses, distinct patterns have emerged allowing for a structurally sound layout of bricks internal to each particular specified thickness of wall. Cavity walls and ties The advent during the mid-twentieth century of the cavity wall saw the popularisation and development of another method of strengthening brickwork—the wall tie. A cavity wall comprises two totally discrete walls, separated by an air gap, which serves as a barrier to both moisture and heat. Typically the main loads taken by the foundations are carried there by the inner leaf, and the major functions of the external leaf are to protect the whole from weather and to provide a fitting aesthetic finish. Despite there being no masonry connection between the leaves, their transverse rigidity still needs to be guaranteed. The device used to satisfy this need is the insertion at regular intervals of wall ties into the cavity wall's mortar beds. Load-bearing bonds Courses of mixed headers and stretchers Flemish bond Flemish bond has stretchers between headers, with the headers centred over the stretchers in the course below. Where a course begins with a quoin stretcher, the course will ordinarily terminate with a quoin stretcher at the other end. The next course up will begin with a quoin header. For the course's second brick, a queen closer is laid, generating the lap of the bond. The third brick along is a stretcher, and is—on account of the lap—centred above the header below. This second course then resumes its paired run of stretcher and header, until the final pair is reached, whereupon a second and final queen closer is inserted as the penultimate brick, mirroring the arrangement at the beginning of the course, and duly closing the bond. Some examples of Flemish bond incorporate stretchers of one colour and headers of another. This effect is commonly a product of treating the header face of the heading bricks while the bricks are being baked as part of the manufacturing process. Some of the header faces are exposed to wood smoke, generating a grey-blue colour, while others are simply vitrified until they reach a deeper blue colour. Some headers have a glazed face, caused by using salt in the firing. Sometimes Staffordshire Blue bricks are used for the heading bricks. Brickwork that appears as Flemish bond from both the front and the rear is double Flemish bond, so called on account of the front and rear duplication of the pattern. If the wall is arranged such that the bricks at the rear do not have this pattern, then the brickwork is said to be single Flemish bond. Flemish bond brickwork with a thickness of one brick is the repeating pattern of a stretcher laid immediately to the rear of the face stretcher, and then, next along the course, a header. A lap (correct overlap) is generated by a queen closer on every alternate course. Double Flemish bond of one brick's thickness: overhead sections of alternate (odd and even) courses, and side elevation. The colour-coded plans highlight facing bricks in the east–west wall.
An elevation for this east–west wall is shown to the right. A simple way to add some width to the wall would be to add stretching bricks at the rear, making a single Flemish bond one and a half bricks thick: overhead sections of alternate (odd and even) courses of single Flemish bond of one and a half bricks' thickness. The colour-coded plans highlight facing bricks in the east–west wall. An elevation for this east–west wall is shown to the right. For a double Flemish bond of one and a half bricks' thickness, facing bricks and the bricks behind the facing bricks may be laid in groups of four bricks and a half-bat. The half-bat sits at the centre of the group and the four bricks are placed about the half-bat, in a square formation. These groups are laid next to each other for the length of a course, making brickwork one and a half bricks thick. To preserve the bond, it is necessary to lay a three-quarter bat instead of a header following a quoin stretcher at the corner of the wall. This has no bearing on the appearance of the wall; the brick appears to the spectator like any ordinary header. Overhead plans of alternate (odd and even) courses of double Flemish bond of one and a half bricks' thickness. The colour-coded plans highlight facing bricks in the east–west wall. An elevation for this east–west wall is shown to the right. For a more substantial wall, a header may be laid directly behind the face header, a further two headers laid at 90° behind the face stretcher, and then finally a stretcher laid to the rear of these two headers. This pattern generates brickwork a full two bricks thick: overhead sections of alternate (odd and even) courses of double Flemish bond of two bricks' thickness. The colour-coded plans highlight facing bricks in the east–west wall. An elevation for this east–west wall is shown to the right. Overhead sections of alternate (odd and even) courses of double Flemish bond of two and a half bricks' thickness: the colour-coded plans highlight facing bricks in the east–west wall, and an elevation for this east–west wall is shown to the right. For a still more substantial wall, two headers may be laid directly behind the face header, a further two pairs of headers laid at 90° behind the face stretcher, and then finally a stretcher laid to the rear of these four headers. This pattern generates brickwork a full three bricks thick: overhead sections of alternate (odd and even) courses of double Flemish bond of three bricks' thickness. The colour-coded plans highlight facing bricks in the east–west wall. An elevation for this east–west wall is shown to the right. Monk bond In its most symmetric form, this bond has two stretchers between every header, with the headers centred over the perpend between the two stretchers in the course below. The great variety of monk bond patterns allows for many possible layouts at the quoins, and many possible arrangements for generating a lap. A quoin brick may be a stretcher, a three-quarter bat, or a header. Queen closers may be used next to the quoins, but the practice is not mandatory. Raking monk bonds Monk bond may, however, take any of a number of arrangements for course staggering. The disposal of bricks in these often highly irregular raking patterns can be a challenging task for the bricklayer to correctly maintain while constructing a wall whose courses are partially obscured by scaffold, and interrupted by door or window openings, or other bond-disrupting obstacles.
If the bricklayer frequently stops to check that bricks are correctly arranged, then masonry in a raking monk bond can be expensive to build. Occasionally, brickwork in such a raking monk bond may contain minor errors of header and stretcher alignment, some of which may have been silently corrected by incorporating a compensating irregularity into the brickwork in a course further up the wall. In spite of these complexities and their associated costs, the bond has proven a common choice for constructing brickwork in the north of Europe. Raking courses in monk bond may—for instance—be staggered in such a way as to generate the appearance of diagonal lines of stretchers. One method of achieving this effect relies on the use of a repeating sequence of courses with back-and-forth header staggering. In this grouping, a header appears at a given point in the group's first course. In the next course up, a header is offset one and a half stretcher lengths to the left of the header in the course below, and then in the third course, a header is offset one stretcher length to the right of the header in the middle course. This accented swing of headers, one and a half to the left and one to the right, generates the appearance of lines of stretchers running from the upper left-hand side of the wall down to the lower right. Such an example of a raking monk bond layout is found at the New Malden Library, Kingston upon Thames, Greater London. Elsewhere, raking courses in monk bond may be staggered in such a way as to generate a subtle appearance of indented pyramid-like diagonals; such an arrangement appears, for instance, on a building in Solna, Sweden. Many other particular adjustments of course alignment exist in monk bond, generating a variety of visual effects which differ in detail, but often having the effect of directing a viewing eye diagonally down the wall. Overhead plan for alternate courses of monk bond of one brick's thickness: the colour-coded plans highlight facing bricks in the east–west wall, and an elevation for this east–west wall is shown to the right. Sussex bond This bond has three stretchers between every header, with the headers centred above the midpoint of the three stretchers in the course below. The bond's horizontally extended proportion suits long stretches of masonry such as garden walls or the run of brickwork over a ribbon window; conversely, the bond is less suitable for a surface occupied by many features, such as a Georgian façade. The relatively infrequent use of headers makes Sussex bond one of the less expensive bonds in which to build a wall, as it allows the bricklayer to proceed rapidly with run after run of three stretchers at a time. One stretching course per heading course One of the two kinds of course in this family of bonds is called a stretching course, and this typically comprises nothing but stretchers at the face from quoin to quoin. The other kind of course is the heading course, and this usually consists of headers, with two queen closers—one by the quoin header at either end—to generate the lap. English bond This bond has alternating stretching and heading courses, with the headers centred over the midpoint of the stretchers, and perpends in each alternate course aligned. Queen closers appear as the second brick, and the penultimate brick, in heading courses. A muted colour scheme for occasional headers is sometimes used in English bond to lend a subtle texture to the brickwork.
Examples of such schemes include blue-grey headers among otherwise red bricks—seen in the south of England—and light brown headers in a dark brown wall, more often found in parts of the north of England. Overhead plan for alternate courses of English bond of one brick's thickness: the colour-coded plans highlight facing bricks in the east–west wall, and an elevation for this east–west wall is shown to the right. Overhead plan for alternate courses of English bond of one and a half bricks' thickness: the colour-coded plans highlight facing bricks in the east–west wall, and an elevation for this east–west wall is shown to the right. Overhead plan for alternate courses of English bond of two bricks' thickness: the colour-coded plans highlight facing bricks in the east–west wall, and an elevation for this east–west wall is shown to the right. English cross bond This bond also has alternating stretching and heading courses. However, whilst the heading courses are identical with those found in the standard English bond, the stretching courses alternate between a course composed entirely of stretchers and a course composed of stretchers half off-set relative to the stretchers two courses above or below, by reason of a header placed just before the quoins at either end. The bond is widely found in Northern France, Belgium, and the Netherlands. Large areas of English cross bond can appear to have a twill-like characteristic, an effect caused by the unbroken series of perpends moving diagonally down the bond. Dutch bond This bond is exactly like English cross bond except in the generating of the lap at the quoins. In Dutch bond, all quoins are three-quarter bats—placed in alternately stretching and heading orientation with successive courses—and no use whatever is made of queen closers. To the Dutch, this is simply a variant of what they call a cross bond. Two or more stretching courses per heading course English garden wall bond This bond has three courses of stretchers between every course of headers; a schematic sketch of such course schedules follows below. For the standard English garden wall bond, headers are used as quoins for the middle stretching course in order to generate the lap, with queen closers as the penultimate brick at either end of the heading courses. A more complex set of quoins and queen closers is necessary to achieve the lap for a raking English garden wall bond. The heading course in English garden wall bond sometimes features bricks of a different colour from its surrounding stretchers. In English chalk districts, flint is substituted for the stretchers, and the headers constitute a lacing course. Scottish bond This bond has five courses of stretchers between every course of headers. The lap is generated by the use of headers as quoins for the even-numbered stretching courses, counting up from the previous heading course, with queen closers as the penultimate brick at either end of the heading courses. American, or common, bond This bond may have between three and nine courses of stretchers between each course of headers; six is the most common number. Headers are used as quoins for the even-numbered stretching courses, counting up from the previous heading course, in order to achieve the necessary off-set in a standard American bond, with queen closers as the penultimate brick at either end of the heading courses.
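Most of the bonds in this family are, at the level of the facing pattern, periodic schedules of heading and stretching courses, so their repeating bodies can be sketched programmatically. A toy illustration only (it deliberately ignores the quoins, closers, and laps that the text above explains are essential to the real bonds):

```python
# Toy facing-pattern printer for some bonds described above.
# 'H' = header, 'S' = stretcher. Quoins, queen closers, and lap offsets
# are deliberately omitted; only the repeating body of each course is shown.

def course(pattern: str, faces: int) -> str:
    """Repeat a course pattern out to the given number of brick faces."""
    return (pattern * faces)[:faces]

BONDS = {
    # bond name: cycle of course patterns, repeating up the wall
    "English":             ["H", "S"],            # alternating heading/stretching courses
    "English garden wall": ["H", "S", "S", "S"],  # one heading course per three stretching
    "American (common)":   ["H"] + ["S"] * 6,     # six stretching courses is most common
    "Flemish":             ["SH"],                # headers and stretchers alternate in every course
}

for name, cycle in BONDS.items():
    print(name)
    for i in range(len(cycle) + 1):               # print one full cycle plus one course
        print("  " + course(cycle[i % len(cycle)], 12))
```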
The brick Clarke-Palmore House in Henrico County, Virginia, has a lower level built in 1819 described as being American bond of three to five stretching courses between each heading course, and an upper level built in 1855 with American bond of six to seven stretching courses between each heading course. Only stretching or heading courses Header bond All bricks in this bond are headers, but for the lap-generating quoin three-quarter bat, which offsets each successive course by half a header. Header bond is often used on curving walls with a small radius of curvature. In Lewes, Sussex, England, many small buildings are constructed in this bond, using blue-coloured bricks with vitrified surfaces. Stretcher, or running bond All bricks in this bond are stretchers, with the bricks in each successive course staggered by half a stretcher. Headers are used as quoins on alternating stretching courses in order to achieve the necessary off-set. It is the simplest repeating pattern, and will create a wall only half a brick (one header) thick. Raking stretcher bond This bond also consists entirely of courses of stretchers, but with the bricks in each successive course staggered in some pattern other than that of standard stretcher bond. One or more stretching courses per alternating course Flemish stretcher bond Flemish stretcher bond separates courses of alternately laid stretchers and headers with a number of courses of stretchers alone. Brickwork in this bond may have between one and four courses of stretchers to one course laid after the Flemish manner. The courses of stretchers are often, but not always, staggered in a raking pattern. Courses of mixed rowlocks and shiners Rat-trap bond Rat-trap bond (also Chinese bond) substantially observes the same pattern as Flemish bond, but consists of rowlocks and shiners instead of headers and stretchers. This gives a wall with an internal cavity bridged by the rowlocks, hence the reference to rat-traps. One shiner course per heading course Dearne's bond Dearne's bond substantially observes the same pattern as English bond, but uses shiners in place of stretchers. Non-load-bearing bonds Courses of mixed shiners and sailors Single basket weave bond A row of single basket weave bond comprises pairs of sailors laid side by side, capped with a shiner, alternating with pairs of sailors laid side by side atop a shiner. Subsequent rows are identical and aligned with those above. Double basket weave bond A row of double basket weave bond comprises pairs of shiners laid atop one another, alternating with pairs of sailors laid side by side. The following row is off-set so that the pair of shiners sits below the pair of sailors in the row above. This results in bricks arranged in pairs in a square grid, so that the join between each pair is perpendicular to the join of the four pairs around it. Herringbone bond The herringbone pattern (opus spicatum) is made by placing soldiers next to stretchers, or vice versa (i.e. with headers perpendicular), making 'L' shapes, and nesting each 'L' in the same order of laying. Thin bricks are more commonly used. The pattern is usually rotated by 45° to create a completely vertical (plumb) succession of 'V' shapes; either the left or the right brick consistently forms the tip of the 'V' in any given wall. Herringbone is sometimes used as infill in timber-framed buildings. Brickwork built around square fractional-sized bricks Pinwheel bond Pinwheel bond is made of four bricks surrounding a square half-brick, repeated in a square grid.
Della Robbia bond A pattern made of four bricks surrounding a square brick, one-quarter the size of a half-brick. It is designed to resemble woven cloth. Another, similar pattern is called the interlacing bond. Diapering Brickwork formed into a diamond pattern is called diapering. Flemish diagonal bond Flemish diagonal bond comprises a complex pattern of stretcher courses alternating with courses of one or two stretchers between headers, at various offsets, such that over ten courses a diamond-shaped pattern appears. Damp-proof courses Moisture may ascend into a building from the foundation of a wall, or enter from a wet patch of ground where it meets a solid wall. The visible result of this process is called damp. One of many methods of resisting such ingress of water is to construct the lowest few courses of the wall from dense engineering bricks such as Staffordshire blue bricks. This method of damp-proofing appears as a distinctive navy blue band running around the circumference of a building. It is only partially effective: although the lower courses of brick are more moisture-resistant, the mortar beds and perpends joining them remain permeable.
Technology
Building materials
null
366445
https://en.wikipedia.org/wiki/Mathematical%20puzzle
Mathematical puzzle
Mathematical puzzles make up an integral part of recreational mathematics. They have specific rules, but they do not usually involve competition between two or more players. Instead, to solve such a puzzle, the solver must find a solution that satisfies the given conditions. Mathematical puzzles require mathematics to solve them. Logic puzzles are a common type of mathematical puzzle. Conway's Game of Life and fractals, as two examples, may also be considered mathematical puzzles even though the solver interacts with them only at the beginning, by providing a set of initial conditions. After these conditions are set, the rules of the puzzle determine all subsequent changes and moves; a sketch of this kind of rule-driven evolution follows the list below. Many of the puzzles are well known because they were discussed by Martin Gardner in his "Mathematical Games" column in Scientific American. Mathematical puzzles are sometimes used to motivate students in teaching elementary school math problem-solving techniques. Creative thinking, or "thinking outside the box", often helps to find the solution. List of mathematical puzzles Numbers, arithmetic, and algebra:
Cross-figures or cross number puzzles
Dyson numbers
Four fours
KenKen
Water pouring puzzle
The monkey and the coconuts
Pirate loot problem
Verbal arithmetic
24 Game
Combinatorial:
Cryptograms
Fifteen Puzzle
Kakuro
Rubik's Cube and other sequential movement puzzles
Str8ts, a number puzzle based on sequences
Sudoku
Sujiko
Think-a-Dot
Tower of Hanoi
Bridges Game
Analytical or differential:
Ant on a rubber rope
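Conway's Game of Life, mentioned above, illustrates the rule-driven kind of puzzle well: the solver supplies only the initial conditions, and the rules determine everything that follows. A minimal sketch of one generation step under the standard Life rules (the glider used to test it is a conventional example pattern):

```python
# One generation of Conway's Game of Life on a set of live-cell coordinates.
from itertools import product

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Apply the standard rules: a live cell survives with 2-3 live neighbours,
    and a dead cell becomes live with exactly 3 live neighbours."""
    counts: dict[tuple[int, int], int] = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after four steps it reappears translated by one cell diagonally,
# entirely determined by the initial conditions, with no further input.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```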
Mathematics
Basics
null
366555
https://en.wikipedia.org/wiki/Biomolecule
Biomolecule
A biomolecule or biological molecule is loosely defined as a molecule produced by a living organism and essential to one or more typically biological processes. Biomolecules include large macromolecules such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A general name for this class of material is biological material. Biomolecules are an important element of living organisms. They are often endogenous, i.e. produced within the organism, but organisms usually also need exogenous biomolecules, for example certain nutrients, to survive. Biomolecules and their reactions are studied in biology and its subfields of biochemistry and molecular biology. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts. The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways is an invariant feature among the wide diversity of life forms; these biomolecules and metabolic pathways are thus referred to as "biochemical universals" or the "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory. Types of biomolecules A diverse range of biomolecules exist, including small molecules (lipids, fatty acids, glycolipids, sterols, monosaccharides; vitamins; hormones and neurotransmitters; metabolites) and monomers, oligomers, and polymers. Nucleosides and nucleotides Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T). Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides. Both DNA and RNA are polymers, consisting of long, linear molecules assembled by polymerase enzymes from repeating structural units, or monomers, of mononucleotides. DNA uses the deoxynucleotides C, G, A, and T, while RNA uses the ribonucleotides (which have an extra hydroxyl (OH) group on the pentose ring) C, G, A, and U. Modified bases are fairly common (such as those with methyl groups on the base ring), as found in ribosomal RNA or transfer RNAs, or for discriminating the new from old strands of DNA after replication. Each nucleotide is made of a heterocyclic nitrogenous base, a pentose, and one to three phosphate groups. They contain carbon, nitrogen, oxygen, hydrogen, and phosphorus. They serve as sources of chemical energy (adenosine triphosphate and guanosine triphosphate), participate in cellular signaling (cyclic guanosine monophosphate and cyclic adenosine monophosphate), and are incorporated into important cofactors of enzymatic reactions (coenzyme A, flavin adenine dinucleotide, flavin mononucleotide, and nicotinamide adenine dinucleotide phosphate). DNA and RNA structure DNA structure is dominated by the well-known double helix formed by Watson–Crick base-pairing of C with G and A with T. This is known as B-form DNA, and is overwhelmingly the most favorable and common state of DNA; its highly specific and stable base-pairing is the basis of reliable genetic information storage. DNA can sometimes occur as single strands (often needing to be stabilized by single-strand binding proteins) or as A-form or Z-form helices, and occasionally in more complex 3D structures such as the crossover at Holliday junctions during DNA replication.
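Watson–Crick complementarity, as described above, is a simple pairing rule, which makes it easy to illustrate in a few lines. A minimal sketch for B-form DNA only (it ignores the modified bases and alternative helices mentioned in the text):

```python
# Watson-Crick complement of a DNA strand: C pairs with G, and A pairs with T.
PAIRS = str.maketrans("ACGT", "TGCA")

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read 5'->3' (hence the reversal)."""
    return strand.translate(PAIRS)[::-1]

seq = "ATGCGT"
print(reverse_complement(seq))                      # ACGCAT
# Pairing is its own inverse: complementing twice recovers the original,
# which is what makes the double helix a reliable information store.
assert reverse_complement(reverse_complement(seq)) == seq
```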
RNA, in contrast, forms large and complex 3D tertiary structures reminiscent of proteins, as well as the loose single strands with locally folded regions that constitute messenger RNA molecules. Those RNA structures contain many stretches of A-form double helix, connected into definite 3D arrangements by single-stranded loops, bulges, and junctions. Examples are tRNA, ribosomes, ribozymes, and riboswitches. These complex structures are facilitated by the fact that the RNA backbone has less local flexibility than DNA but a large set of distinct conformations, apparently because of both positive and negative interactions of the extra OH on the ribose. Structured RNA molecules can bind other molecules with high specificity and can themselves be recognized specifically; in addition, they can perform enzymatic catalysis (in which case they are known as "ribozymes", as initially discovered by Tom Cech and colleagues). Saccharides Monosaccharides are the simplest form of carbohydrates, with only one simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-; similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides are the hexoses (glucose, fructose, and galactose), the pentoses (ribose and deoxyribose), and the trioses, tetroses, and heptoses. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed, and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration. Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose. Polysaccharides are polymerized monosaccharides, or complex carbohydrates. They have multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides. A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides; it successfully discriminated three brands of orange juice beverage. The resulting change in the fluorescence intensity of the sensing films is directly related to the saccharide concentration. Lignin Lignin is a complex polyphenolic macromolecule composed mainly of beta-O4-aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin, which occurs via free-radical coupling reactions in which there is no preference for either configuration at a chiral center. Lipid Lipids (oleaginous) are chiefly fatty acid esters, and are the basic building blocks of biological membranes.
Another biological role of lipids is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol-based) and one to three nonpolar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14-24 carbon atoms long, and nearly always contain an even number of carbons. For lipids present in biological membranes, the hydrophilic head is from one of three classes:
Glycolipids, whose heads contain an oligosaccharide with 1-15 saccharide residues.
Phospholipids, whose heads contain a positively charged group that is linked to the tail by a negatively charged phosphate group.
Sterols, whose heads contain a planar steroid ring, for example, cholesterol.
Other lipids include prostaglandins and leukotrienes, which are both 20-carbon fatty acyl units synthesized from arachidonic acid; they are also known as eicosanoids.

Amino acids
Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline, which is strictly speaking an imino acid rather than an amino acid.) Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle. Only two amino acids other than the standard twenty are known to be incorporated into proteins during translation, in certain organisms: Selenocysteine is incorporated into some proteins at a UGA codon, which is normally a stop codon. Pyrrolysine is incorporated into some proteins at a UAG codon, for instance in some methanogens, in enzymes that are used to produce methane. Besides those used in protein synthesis, other biologically important amino acids include carnitine (used in lipid transport within a cell), ornithine, GABA, and taurine.

Protein structure
The particular series of amino acids that form a protein is known as that protein's primary structure. This sequence is determined by the genetic makeup of the individual. It specifies the order of side-chain groups along the linear polypeptide "backbone". Proteins have two types of well-classified, frequently occurring elements of local structure defined by a particular pattern of hydrogen bonds along the backbone: alpha helix and beta sheet. Their number and arrangement are called the secondary structure of the protein. Alpha helices are regular spirals stabilized by hydrogen bonds between the backbone CO group (carbonyl) of one amino acid residue and the backbone NH group (amide) of the residue four positions further along (i+4). The spiral has about 3.6 amino acids per turn, and the amino acid side chains stick out from the cylinder of the helix. Beta pleated sheets are formed by backbone hydrogen bonds between individual beta strands, each of which is in an "extended", or fully stretched-out, conformation. The strands may lie parallel or antiparallel to each other, and the side-chain direction alternates above and below the sheet. Hemoglobin contains only helices, natural silk is formed of beta pleated sheets, and many enzymes have a pattern of alternating helices and beta-strands.
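The 3.6-residues-per-turn figure above lends itself to a quick calculation. The short Python sketch below estimates the axial length of an idealized alpha helix; the 5.4 angstrom pitch (rise per full turn) is a standard textbook value assumed here, not stated in the text:

# Idealized alpha-helix geometry (a sketch under the stated assumptions).
RESIDUES_PER_TURN = 3.6   # from the text above
PITCH_ANGSTROMS = 5.4     # assumed standard rise along the helix axis per turn

def helix_length_angstroms(n_residues: int) -> float:
    # Number of turns times the rise per turn gives the axial length.
    return (n_residues / RESIDUES_PER_TURN) * PITCH_ANGSTROMS

print(helix_length_angstroms(36))  # 54.0: a 36-residue helix spans 10 turns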
The secondary-structure elements are connected by "loop" or "coil" regions of non-repetitive conformation, which are sometimes quite mobile or disordered but usually adopt a well-defined, stable arrangement. The overall, compact, 3D structure of a protein is termed its tertiary structure or its "fold". It is formed as a result of various attractive forces such as hydrogen bonds, disulfide bridges, hydrophobic and hydrophilic interactions, and van der Waals forces. When two or more polypeptide chains (either of identical or of different sequence) cluster to form a protein, the quaternary structure of the protein is formed. Quaternary structure is an attribute of homomeric (same-sequence chains) or heteromeric (different-sequence chains) proteins like hemoglobin, which consists of two "alpha" and two "beta" polypeptide chains.

Apoenzymes
An apoenzyme (or, more generally, an apoprotein) is a protein without any small-molecule cofactors, substrates, or inhibitors bound. It is often important as an inactive storage, transport, or secretory form of a protein. This is required, for instance, to protect the secretory cell from the activity of that protein. Apoenzymes become active enzymes on the addition of a cofactor. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either prosthetic groups, which are tightly bound to an enzyme, or coenzymes, which are released from the enzyme's active site during the reaction.

Isoenzymes
Isoenzymes, or isozymes, are multiple forms of an enzyme, with slightly different protein sequences and closely similar but usually not identical functions. They are either products of different genes, or else different products of alternative splicing. They may either be produced in different organs or cell types to perform the same function, or several isoenzymes may be produced in the same cell type under differential regulation to suit the needs of changing development or environment. LDH (lactate dehydrogenase) has multiple isozymes, while fetal hemoglobin is an example of a developmentally regulated isoform of a non-enzymatic protein. The relative levels of isoenzymes in blood can be used to diagnose problems in the organ of secretion.
Indeterminate form
An indeterminate form is a mathematical expression that can take on any value, depending on circumstances. In calculus, it is usually possible to compute the limit of the sum, difference, product, quotient or power of two functions by taking the corresponding combination of the separate limits of each respective function. For example, $\lim_{x \to c} \bigl(f(x) + g(x)\bigr) = \lim_{x \to c} f(x) + \lim_{x \to c} g(x)$, and likewise for other arithmetic operations; this is sometimes called the algebraic limit theorem. However, certain combinations of particular limiting values cannot be computed in this way, and knowing the limit of each function separately does not suffice to determine the limit of the combination. In these particular situations, the limit is said to take an indeterminate form, described by one of the informal expressions $\frac{0}{0}$, $\frac{\infty}{\infty}$, $0 \times \infty$, $\infty - \infty$, $0^0$, $1^\infty$, $\infty^0$, among a wide variety of uncommon others, where each expression stands for the limit of a function constructed by an arithmetical combination of two functions whose limits respectively tend to $0$, $1$, or $\infty$ as indicated. A limit taking one of these indeterminate forms might tend to zero, might tend to any finite value, might tend to infinity, or might diverge, depending on the specific functions involved. A limit which unambiguously tends to infinity, for instance $\lim_{x \to 0} 1/x^2 = \infty$, is not considered indeterminate. The term was originally introduced by Cauchy's student Moigno in the middle of the 19th century.
The most common example of an indeterminate form is the quotient of two functions each of which converges to zero. This indeterminate form is denoted by $\frac{0}{0}$. For example, as $x$ approaches $0$, the ratios $\frac{x}{x^3}$, $\frac{x}{x}$, and $\frac{x^2}{x}$ go to $\infty$, $1$, and $0$ respectively. In each case, if the limits of the numerator and denominator are substituted, the resulting expression is $\frac{0}{0}$, which is indeterminate. In this sense, $\frac{0}{0}$ can take on the values $0$, $1$, or $\infty$, by appropriate choices of functions to put in the numerator and denominator. A pair of functions for which the limit is any particular given value may in fact be found. Even more surprising, perhaps, the quotient of the two functions may in fact diverge, and not merely diverge to infinity. For example, $\frac{x \sin(1/x)}{x} = \sin(1/x)$ oscillates and has no limit as $x \to 0$. So the fact that two functions $f(x)$ and $g(x)$ converge to $0$ as $x$ approaches some limit point $c$ is insufficient to determine the limit $\lim_{x \to c} \frac{f(x)}{g(x)}$.
An expression that arises by ways other than applying the algebraic limit theorem may have the same form as an indeterminate form. However, it is not appropriate to call an expression an "indeterminate form" if it arises outside the context of determining limits. An example is the expression $0^0$. Whether this expression is left undefined, or is defined to equal $1$, depends on the field of application and may vary between authors. For more, see the article Zero to the power of zero. Note that $0^\infty$ and other expressions involving infinity are not indeterminate forms.

Some examples and non-examples
Indeterminate form 0/0
The indeterminate form $\frac{0}{0}$ is particularly common in calculus, because it often arises in the evaluation of derivatives using their definition in terms of a limit. As mentioned above, $\lim_{x \to 0} \frac{x}{x} = 1$, while $\lim_{x \to 0} \frac{x^2}{x} = 0$. This is enough to show that $\frac{0}{0}$ is an indeterminate form. Other examples with this indeterminate form include $\lim_{x \to 0} \frac{\sin x}{x}$ and $\lim_{x \to 1} \frac{\ln x}{x - 1}$. Direct substitution of the number that $x$ approaches into any of these expressions shows that these examples correspond to the indeterminate form $\frac{0}{0}$, but these limits can assume many different values.
Any desired value $a$ can be obtained for this indeterminate form as follows: $\lim_{x \to 0} \frac{ax}{x} = a$. The value $\infty$ can also be obtained (in the sense of divergence to infinity): $\lim_{x \to 0^+} \frac{x}{x^3} = \infty$.

Indeterminate form 0^0
The following limits illustrate that the expression $0^0$ is an indeterminate form: $\lim_{x \to 0^+} x^x = 1$, while $\lim_{x \to 0^+} \bigl(e^{-1/x^2}\bigr)^x = 0$. Thus, in general, knowing that $\lim_{x \to c} f(x) = 0$ and $\lim_{x \to c} g(x) = 0$ is not sufficient to evaluate the limit $\lim_{x \to c} f(x)^{g(x)}$. If the functions $f$ and $g$ are analytic at $c$, and $f$ is positive for $x$ sufficiently close (but not equal) to $c$, then the limit of $f(x)^{g(x)}$ will be $1$. Otherwise, use the transformation in the table below to evaluate the limit.

Expressions that are not indeterminate forms
The expression $\frac{1}{0}$ is not commonly regarded as an indeterminate form, because there is no ambiguity as to the behavior of the quotient: it always diverges. Specifically, if $f$ approaches $1$ and $g$ approaches $0$, then $f$ and $g$ may be chosen so that:
$\frac{f}{g}$ approaches $+\infty$,
$\frac{f}{g}$ approaches $-\infty$, or
the limit fails to exist.
In each case the absolute value $\left|\frac{f}{g}\right|$ approaches $+\infty$, and so the quotient must diverge, in the sense of the extended real numbers (in the framework of the projectively extended real line, the limit is the unsigned infinity $\infty$ in all three cases). Similarly, any expression of the form $\frac{a}{0}$ with $a \neq 0$ (including $a = +\infty$ and $a = -\infty$) is not an indeterminate form, since a quotient giving rise to such an expression will always diverge.
The expression $0^\infty$ is not an indeterminate form. The expression $0^{+\infty}$ gives the limit $0$, provided that $f(x)$ remains nonnegative as $x$ approaches $c$. The expression $0^{-\infty}$ is similarly equivalent to $\frac{1}{0}$; if $f(x) > 0$ as $x$ approaches $c$, the limit comes out as $+\infty$. To see why, let $L = \lim_{x \to c} f(x)^{g(x)}$, where $\lim_{x \to c} f(x) = 0$ and $\lim_{x \to c} g(x) = +\infty$. By taking the natural logarithm of both sides and using $\lim_{x \to c} \ln f(x) = -\infty$, we get that $\ln L = \lim_{x \to c} g(x) \ln f(x) = -\infty$, which means that $L = e^{-\infty} = 0$.

Evaluating indeterminate forms
The adjective indeterminate does not imply that the limit does not exist, as many of the examples above show. In many cases, algebraic elimination, L'Hôpital's rule, or other methods can be used to manipulate the expression so that the limit can be evaluated.

Equivalent infinitesimal
When two variables $\alpha$ and $\beta$ converge to zero at the same limit point and $\lim \frac{\beta}{\alpha} = 1$, they are called equivalent infinitesimals (written $\alpha \sim \beta$). Moreover, if variables $\alpha'$ and $\beta'$ are such that $\alpha \sim \alpha'$ and $\beta \sim \beta'$, then $\lim \frac{\beta}{\alpha} = \lim \frac{\beta'}{\alpha'}$. Here is a brief proof: since $\alpha \sim \alpha'$ and $\beta \sim \beta'$, $\lim \frac{\beta}{\alpha} = \lim \left( \frac{\beta}{\beta'} \cdot \frac{\beta'}{\alpha'} \cdot \frac{\alpha'}{\alpha} \right) = \lim \frac{\beta'}{\alpha'}$.
For the evaluation of the indeterminate form $\frac{0}{0}$, one can make use of the following facts about equivalent infinitesimals as $x$ becomes closer to zero: $\sin x \sim x$, $\tan x \sim x$, $\arcsin x \sim x$, $\arctan x \sim x$, $1 - \cos x \sim \frac{x^2}{2}$, $e^x - 1 \sim x$, $\ln(1+x) \sim x$, and $(1+x)^a - 1 \sim ax$.
For example: $\lim_{x \to 0} \frac{1}{x^3} \left[ \left( \frac{2 + \cos x}{3} \right)^x - 1 \right] = \lim_{x \to 0} \frac{e^{x \ln \frac{2+\cos x}{3}} - 1}{x^3} = \lim_{x \to 0} \frac{x \ln \frac{2+\cos x}{3}}{x^3} = \lim_{x \to 0} \frac{1}{x^2} \ln \left( \frac{\cos x - 1}{3} + 1 \right) = \lim_{x \to 0} \frac{\cos x - 1}{3x^2} = -\frac{1}{6}$. In the 2nd equality, $e^y - 1 \sim y$ is used, where $y = x \ln \frac{2+\cos x}{3}$ becomes closer to $0$; $\ln(1+y) \sim y$ with $y = \frac{\cos x - 1}{3}$ is used in the 4th equality; and $1 - \cos x \sim \frac{x^2}{2}$ is used in the 5th equality.

L'Hôpital's rule
L'Hôpital's rule is a general method for evaluating the indeterminate forms $\frac{0}{0}$ and $\frac{\infty}{\infty}$. This rule states that (under appropriate conditions) $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$, where $f'$ and $g'$ are the derivatives of $f$ and $g$. (Note that this rule does not apply to expressions such as $\frac{\infty}{0}$ and $\frac{1}{0}$, as these expressions are not indeterminate forms.) These derivatives will allow one to perform algebraic simplification and eventually evaluate the limit. L'Hôpital's rule can also be applied to other indeterminate forms, using first an appropriate algebraic transformation. For example, to evaluate the form $0^0$: $\ln \lim_{x \to c} f(x)^{g(x)} = \lim_{x \to c} \frac{\ln f(x)}{1/g(x)}$. The right-hand side is of the form $\frac{\infty}{\infty}$, so L'Hôpital's rule applies to it. Note that this equation is valid (as long as the right-hand side is defined) because the natural logarithm (ln) is a continuous function; it is irrelevant how well-behaved $f$ and $g$ may (or may not) be as long as $f$ is asymptotically positive (the domain of logarithms is the set of all positive real numbers).
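As a concrete instance of this logarithmic transformation (an illustrative example added here, not from the original text), the classic $1^\infty$ form $\lim_{x \to 0} (1+x)^{1/x}$ can be handled the same way:

$\ln \lim_{x \to 0} (1+x)^{1/x} = \lim_{x \to 0} \frac{\ln(1+x)}{x} = \lim_{x \to 0} \frac{1/(1+x)}{1} = 1,$

where the middle limit is a $\frac{0}{0}$ form evaluated by L'Hôpital's rule; exponentiating gives $\lim_{x \to 0} (1+x)^{1/x} = e$.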
Although L'Hôpital's rule applies to both $\frac{0}{0}$ and $\frac{\infty}{\infty}$, one of these forms may be more useful than the other in a particular case (because of the possibility of algebraic simplification afterwards). One can change between these forms by transforming $\frac{f(x)}{g(x)}$ to $\frac{1/g(x)}{1/f(x)}$.

List of indeterminate forms
The following table lists the most common indeterminate forms and the transformations for applying l'Hôpital's rule.
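(The table itself did not survive extraction; what follows is a reconstruction of the standard transformations, each an elementary identity the reader can verify directly. Here $f \to a$ abbreviates $\lim_{x \to c} f(x) = a$.)

Form: $\frac{0}{0}$; conditions: $f \to 0$, $g \to 0$; transformation: $\lim \frac{f}{g} = \lim \frac{1/g}{1/f}$ (to $\frac{\infty}{\infty}$).
Form: $\frac{\infty}{\infty}$; conditions: $f \to \infty$, $g \to \infty$; transformation: $\lim \frac{f}{g} = \lim \frac{1/g}{1/f}$ (to $\frac{0}{0}$).
Form: $0 \cdot \infty$; conditions: $f \to 0$, $g \to \infty$; transformation: $\lim f g = \lim \frac{f}{1/g}$ (to $\frac{0}{0}$).
Form: $\infty - \infty$; conditions: $f \to \infty$, $g \to \infty$; transformation: $\lim (f - g) = \lim \frac{1/g - 1/f}{1/(fg)}$ (to $\frac{0}{0}$).
Form: $0^0$; conditions: $f \to 0^+$, $g \to 0$; transformation: $\lim f^g = \exp \lim \frac{\ln f}{1/g}$ (to $\frac{\infty}{\infty}$ inside the exponential).
Form: $1^\infty$; conditions: $f \to 1$, $g \to \infty$; transformation: $\lim f^g = \exp \lim \frac{\ln f}{1/g}$ (to $\frac{0}{0}$ inside the exponential).
Form: $\infty^0$; conditions: $f \to \infty$, $g \to 0$; transformation: $\lim f^g = \exp \lim \frac{\ln f}{1/g}$ (to $\frac{\infty}{\infty}$ inside the exponential).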