4168
https://en.wikipedia.org/wiki/Utility%20knife
Utility knife
A utility knife is any type of knife used for general manual work purposes. Such knives were originally fixed-blade knives with durable cutting edges suitable for rough work such as cutting cordage, cutting/scraping hides, butchering animals, cleaning fish scales, reshaping timber, and other tasks. Craft knives are small utility knives used as precision-oriented tools for finer, more delicate tasks such as carving and papercutting. Today, the term "utility knife" also includes small folding-, retractable- and/or replaceable-blade knives suited for use in the general workplace or in the construction industry. The latter type is sometimes generically called a Stanley knife, after a prominent brand designed by the American tool manufacturing company Stanley Black & Decker. There is also a utility knife for kitchen use, which is sized between a chef's knife and paring knife. History The fixed-blade utility knife was developed some 500,000 years ago, when human ancestors began to make stone knives. These knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes. By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the Camper or Boy Scout pocketknife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives—as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter—significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials. Contemporary utility knives The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a Stanley knife, box cutter, or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material. Names In British, Australian and New Zealand English, along with Dutch, Danish and Austrian German, a utility knife is often referred to as a Stanley knife. This name is a generic trademark named after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as Japanese knives. In Brazil they are known as estiletes or cortadores Olfa (the latter, being another genericised trademark). In Portugal, Panama and Canada they are also known as X-Acto (yet another genericised trademark ). In India, Russia, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called cutter. In the Flemish region of Belgium it is called cuttermes(je) (cutter knife). 
In general Spanish, they are known as cortaplumas (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as cutters; in Argentina and Uruguay the segmented fixed-blade knives are known as "Trinchetas". In Turkey, they are known as maket bıçağı (which literally translates as model knife). Other names for the tool are box cutter or boxcutter, blade knife, carpet knife, pen knife, stationery knife, sheetrock knife, or drywall knife. Design Utility knives may use fixed, folding, or retractable or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint. Fixed blade utility knife Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades from in length, with edge geometry designed to resist chipping and breakage. The term "utility knife" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control. Workplace utility knives The largest construction or workplace utility knives typically feature retractable and replaceable blades, and are made of either die-cast metal or molded plastic. Some use standard blades, others specialized double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials. Another style is a snap-off utility knife that contains a long, segmented blade that slides out from it. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built-in, or a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away, or, more often, refilled with a replacement blade. This design was introduced by Japanese manufacturer OLFA in 1956 as the world's first snap-off blade and was inspired from analyzing the sharp cutting edge produced when glass is broken and how pieces of a chocolate bar break into segments. 
The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather one long edge of the whole blade is sharpened, and there are scored diagonal breakoff lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge. Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize chances of damaging contents of cardboard boxes. Use as weapon Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small-bladed utility knives may sometimes find use as slashing weapons. The 9/11 Commission report stated that passengers making cell phone calls reported that knives or "box-cutters" (as well as Mace or a bomb) were used as weapons in the hijacking of airplanes in the September 11, 2001 terrorist attacks against the United States, though the exact design of the knives used is unknown. Two of the hijackers were known to have purchased Leatherman knives, which feature a slip-joint blade and were not prohibited on U.S. flights at the time. Those knives were not found in the possessions the two hijackers left behind. Similar cutters, including paper cutters, have also been known to be used as lethal weapons. Small work-type utility knives have also been used to commit robbery and other crimes. In June 2004, a Japanese student was slashed to death with a segmented-type utility knife. In the United Kingdom, the law was changed (effective 1 October 2007) to raise the age limit for purchasing knives, including utility knives, from 16 to 18, and to make it illegal to carry a utility knife in public without a good reason.
Technology
Knives
null
4169
https://en.wikipedia.org/wiki/Bronze
Bronze
Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (including aluminium, manganese, nickel, or zinc) and sometimes non-metals (such as phosphorus) or metalloids (such as arsenic or silicon). These additions produce a range of alloys some of which are harder than copper alone or have other useful properties, such as strength, ductility, or machinability. The archaeological period during which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BCE (~3500 BCE), and to the early 2nd millennium BCE in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age, which started about 1300 BCE and reached most of Eurasia by about 500 BCE, although bronze continued to be much more widely used than it is in modern times. Because historical artworks were often made of bronzes and brasses (alloys of copper and zinc) of different metallic compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term "copper alloy" instead of the names of individual alloys. This is done (at least in part) to prevent database searches from failing merely because of errors or disagreements in the naming of historic copper alloys. Etymology The word bronze (1730–1740) is borrowed from Middle French (1511), itself borrowed from Italian (13th century, transcribed in Medieval Latin as ) from either: , back-formation from Byzantine Greek (, 11th century), perhaps from (, ), reputed for its bronze; or originally: in its earliest form from Old Persian , (, , modern ) and () , from which also came Georgian (), Turkish from "bir" (one) "birinç" (primary), and Armenian (), also meaning . History The discovery of bronze enabled people to create metal objects that were harder and more durable than had previously been possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made out of copper and arsenic or from naturally or artificially mixed ores of those metals, forming arsenic bronze. The earliest known arsenic-copper-alloy artifacts come from a Yahya Culture (Period V 3800-3400 BCE) site, at Tal-i-Iblis on the Iranian plateau, and were smelted from native arsenical copper and copper-arsenides, such as algodonite and domeykite. The earliest tin-copper-alloy artifact has been dated to , in a Vinča culture site in Pločnik (Serbia), and is believed to have been smelted from a natural tin-copper ore, stannite. Other early examples date to the late 4th millennium BCE in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran), Tepe Sialk (Iran), Mundigak (Afghanistan), and Mesopotamia (Iraq). Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike those of arsenic, metallic tin and the fumes from tin refining are not toxic. Tin became the major non-copper ingredient of bronze in the late 3rd millennium BCE. Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in the United Kingdom, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade with other regions. 
Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean. In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings. Transition to iron Though bronze, whose Vickers hardness is 60–258, is generally harder than wrought iron, with a hardness of 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BCE reduced the shipment of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As later cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths also learned how to make steel, which is stronger and harder than bronze and holds a sharper edge longer. Bronze was still used during the Iron Age and has continued in use for many purposes to the modern day. Composition There are many different bronze alloys, but typically modern bronze is about 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, arsenic and an unusually large amount of silver – ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is sometimes described as bronze and sometimes as brass. During the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were primarily cast from classic bronze while helmets and armor were hammered from mild bronze. Modern commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications. Plastic bronze contains a significant quantity of lead, which makes for improved plasticity, and may have been used by the ancient Greeks in ship construction. Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance. Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal, bismuth bronze, and cymbal alloys. 
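As a simple illustration of how these nominal compositions translate into batch quantities (the 25 kg melt below is an arbitrary worked example, not a figure from the text above), the masses of copper and tin for typical modern 88/12 bronze follow directly from the mass fractions:

$$m_{\mathrm{Cu}} = 0.88 \times 25\ \mathrm{kg} = 22\ \mathrm{kg}, \qquad m_{\mathrm{Sn}} = 0.12 \times 25\ \mathrm{kg} = 3\ \mathrm{kg}.$$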
Properties Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze conducts heat and electricity better than most steels. Copper-base alloys are generally more costly than steels but less so than nickel-base alloys. Bronzes are typically ductile alloys and are considerably less brittle than cast iron. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, the low-friction properties of bearing bronze (bronze that has a high lead content— 6–8%), the resonant qualities of bell bronze (20% tin, 80% copper), and the resistance to corrosion by seawater of several bronze alloys. The melting point of bronze is about but varies depending on the ratio of the alloy components. Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties. Bronze typically oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. If copper chlorides are formed, a corrosion-mode called "bronze disease" will eventually destroy it completely. Uses Bronze, or bronze-like alloys and mixtures, were used for coins over a longer period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings. In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is also used for the structural metal aluminum bronze. Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs. Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings. Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak. Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways. 
The Doehler Die Casting Co. of Toledo, Ohio was known for the production of Brastil, a high-tensile, corrosion-resistant bronze alloy. Architectural bronze The Seagram Building on New York City's Park Avenue is the "iconic glass box sheathed in bronze, designed by Mies van der Rohe." The Seagram Building was the first building to have its entire exterior sheathed in bronze. The General Bronze Corporation fabricated 3,200,000 pounds (1,600 tons) of bronze at its plant in Garden City, New York. The Seagram Building is a 38-story, 516-foot bronze-and-topaz-tinted glass building. The building looks like a "squarish 38-story tower clad in a restrained curtain wall of metal and glass." Bronze was selected because of its color, both before and after aging, its corrosion resistance, and its extrusion properties. When completed in 1958, it was not only the most expensive building of its time, at $36 million, but also the first building in the world with floor-to-ceiling glass walls. Mies van der Rohe achieved its crisp edges through custom detailing fabricated by General Bronze, and "even the screws that hold in the fixed glass-plate windows were made of brass." Sculptures Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould. The Assyrian king Sennacherib (704–681 BCE) claimed to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method. Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive. In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis (craftsmen) working in the areas of Swamimalai and Chennai. In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples. Bronze continues into modern times as one of the materials of choice for monumental statuary. Lamps Tiffany Glass Studios, made famous by Louis C. Tiffany, who commonly referred to its product as favrile glass or "Tiffany glass", used bronze in its artisan work for Tiffany lamps. Fountains and doors The largest and most ornate bronze fountain known to have been cast was made by the Roman Bronze Works and General Bronze Corporation in 1952. The material used for the fountain, known as statuary bronze, is a quaternary alloy made of copper, zinc, tin, and lead, and traditionally golden brown in color. This was made for the Andrew W. Mellon Memorial Fountain in Federal Triangle in Washington, DC. 
Another massive, ornate bronze project attributed to General Bronze/Roman Bronze Works was the set of bronze doors for the United States Supreme Court Building in Washington, DC. Mirrors Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries. Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BCE), and China from at least . In Europe, the Etruscans were making bronze mirrors in the sixth century BCE, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, and Western glass mirrors had largely taken over, bronze mirrors were still being made in Japan and elsewhere in the eighteenth century, and are still made on a small scale in Kerala, India. Musical instruments Bronze is the preferred metal for bells in the form of a high tin bronze alloy known as bell metal, which is typically about 23% tin. Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops. Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on the pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel. Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BCE, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3600 BCE. Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy (usually ) folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp, powerful lower register and clear bell-like treble register. Coins and medals Bronze has also been used in coins; most "copper" coins are actually bronze, with about 4 percent tin and 1 percent zinc. As with coins, bronze has been used in the manufacture of various types of medals for centuries, and "bronze medals" are known in contemporary times for being awarded for third place in sporting competitions and other events. The term is now often used for third place even when no actual bronze medal is awarded. The usage in part arose from the trio of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, when youth lasted a hundred years; and the Bronze Age, the era of heroes. 
It was first adopted for a sports event at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while at 1900 other prizes were given rather than medals. Bronze is the normal material for the related form of the plaquette, normally a rectangular work of art with a scene in relief, for a collectors' market. Bronze is also associated with eighth wedding anniversaries. Biblical references There are over 125 references to bronze ('nehoshet'), which appears to be the Hebrew word used for copper and any of its alloys. However, the Old Testament era Hebrews are not thought to have had the capability to manufacture zinc (needed to make brass) and so it is likely that 'nehoshet' refers to copper and its alloys with tin, now called bronze. In the King James Version, there is no use of the word 'bronze' and 'nehoshet' was translated as 'brass'. Modern translations use 'bronze'. Bronze (nehoshet) was used widely in the Tabernacle for items such as the bronze altar (Exodus Ch.27), bronze laver (Exodus Ch.30), utensils, and mirror (Exodus Ch.38). It was mentioned in the account of Moses holding up a bronze snake on a pole in Numbers Ch.21. In First Kings, it is mentioned that Hiram was very skilled in working with bronze, and he made many furnishings for Solomon's Temple including pillars, capitals, stands, wheels, bowls, and plates, some of which were highly decorative (see I Kings 7:13-47). Bronze was also widely used as battle armor and helmet, as in the battle of David and Goliath in I Samuel 17:5-6;38 (also see II Chron. 12:10).
Physical sciences
Chemistry
null
4177
https://en.wikipedia.org/wiki/Barge
Barge
A barge is a typically flat-bottomed vessel which does not have its own means of mechanical propulsion. Original use was on inland waterways, while modern use is in both inland and marine environments. The first modern barges were pulled by tugs, but on inland waterways, most are pushed by pusher boats or other vessels. The term barge has a rich history, and therefore there are many types of barges. History of the barge Etymology Barge is attested from 1300, from Old French barge, from Vulgar Latin barga. The word originally could refer to any small boat; the modern meaning arose around 1480. Bark "small ship" is attested from 1420, from Old French barque, from Vulgar Latin barca (400 AD). A more precise meaning (see Barque) arose in the 17th century and often takes the French spelling for disambiguation. Both are probably derived from the Latin barica, from Greek baris "Egyptian boat", from Coptic bari "small boat", hieroglyphic Egyptian D58-G29-M17-M17-D21-P1 and similar ba-y-r for "basket-shaped boat". By extension, the term "embark" literally means to board the kind of boat called a "barque". British river barges 18th century In Great Britain, a merchant barge was originally a flat bottomed merchant vessel for use on navigable rivers. Most of these barges had sails. For traffic on the River Severn, the barge was described thus: "The lesser sort are called barges and frigates, being from forty to sixty feet in length, having a single mast and square sail, and carrying from twenty to forty tons burthen." The larger vessels were called trows. On the River Irwell, there was reference to barges passing below Barton Aqueduct with their mast and sails standing. Early barges on the Thames were called west country barges. 19th century In the United Kingdom, the word barge had many meanings by the 1890s, and these varied locally. On the Mersey, a barge was called a 'Flat', on the Thames a Lighter or barge, and on the Humber a 'Keel'. A Lighter had neither mast nor rigging. A keel did have a single mast with sails. Barge and lighter were used indiscriminately. A local distinction was that any flat that was not propelled by steam was a barge, although it might be a sailing flat. The term dumb barge was probably taken into use to end the confusion. It surfaced in the early nineteenth century and first denoted the use of a barge as a mooring platform in a fixed place. As it went up and down with the tides, it made a very convenient mooring place for steam vessels. Within a few decades, the term dumb barge evolved and came to mean: 'a vessel propelled by oars only'. By the 1890s, dumb barge was still used only on the Thames. By 1880, barges on British rivers and canals were often towed by steam tugboats. On the Thames, many dumb barges still relied on their poles, oars and the tide. Other dumb barges made use of about 50 tugboats to tow them to their destinations. While many coal barges were towed, many dumb barges that handled single parcels were not. The Thames barge and Dutch barge today On the British river system and larger waterways, the Thames sailing barge, the Dutch barge and unspecified other styles of barge are still known as barges. The term Dutch barge is nowadays often used to refer to an accommodation ship, but originally referred to the slightly larger Dutch version of the Thames sailing barge. British canals: narrowboats and widebeams During the Industrial Revolution, a substantial network of canals was developed in Great Britain from 1750 onward. 
Whilst the largest of these could accommodate ocean-going vessels, e.g the later Manchester Ship Canal, a complex network of smaller canals was also developed. These smaller canals had locks, bridges and tunnels that were at minimum only wide at the waterline. On wider sections, standard barges and other vessels could trade, but full access to the network necessitated the parallel development of the narrowboat, which usually had a beam a couple of inches less to allow for clearance, e.g. . It was soon realized that the narrow locks were too limiting, and later locks were therefore doubled in width to . This led to the development of the widebeam canal boat. The narrowboat (one word) definition in the Oxford English Dictionary is: The narrowboats were initially also known as barges, and the new canals were constructed with an adjacent towpath along which draft horses walked, towing the barges. These types of canal craft are so specific that on the British canal system the term 'barge' is no longer used to describe narrowboats and widebeams. Narrowboats and widebeams are still seen on canals, mostly for leisure cruising, and now engine-powered. Crew and pole The people who moved barges were known as lightermen. Poles are used on barges to fend off other nearby vessels or a wharf. These are often called 'pike poles'. The long pole used to maneuver or propel a barge has given rise to the saying "I wouldn't touch that [subject/thing] with a barge pole." The 19th century American barge In the United States a barge was not a sailing vessel by the end of the 19th century. Indeed, barges were often created by cutting down (razeeing) sailing vessels. In New York this was an accepted meaning of the term barge. The somewhat smaller scow was built as such, but the scow also had its sailing counterpart the sailing scow. The modern barge The iron barge The innovation that led to the modern barge was the use of iron barges towed by a steam tugboat. These were first used to transport grain and other bulk products. From about 1840 to 1870 the towed iron barge was quickly introduced on the Rhine, Danube, Don, Dniester, and rivers in Egypt, India and Australia. Many of these barges were built in Great Britain. Nowadays 'barge' generally refers to a dumb barge. In Europe, a Dumb barge is: An inland waterway transport freight vessel designed to be towed which does not have its own means of mechanical propulsion. In America, a barge is generally pushed. Modern use Barges are used today for transporting low-value bulk items, as the cost of hauling goods that way is very low and for larger project cargo, such as offshore wind turbine blades. Barges are also used for very heavy or bulky items; a typical American barge measures , and can carry up to about of cargo. The most common European barges measure and can carry up to about . As an example, on June 26, 2006, in the US a catalytic cracking unit reactor was shipped by barge from the Tulsa Port of Catoosa in Oklahoma to a refinery in Pascagoula, Mississippi. Extremely large objects are normally shipped in sections and assembled after delivery, but shipping an assembled unit reduces costs and avoids reliance on construction labor at the delivery site, which in the case of the reactor was still recovering from Hurricane Katrina. Of the reactor's journey, only about were traveled overland, from the final port to the refinery. 
The Transportation Institute at Texas A&M found that inland barge transportation in the US produces far fewer emissions of carbon dioxide for each ton of cargo moved than transport by truck or rail. According to the study, transporting cargo by barge produces 43% less greenhouse gas emissions than rail, while trucks produce more than 800% more emissions than barges. Environmentalists claim that barges, tugboats and towboats may produce more emissions in areas where they idle, such as at the locks and dams of the Mississippi River. Self-propelled barges may be used for traveling downstream or upstream in placid waters; they are operated as unpowered barges, with the assistance of a tugboat, when traveling upstream in faster waters. Canal barges are usually made for the particular canal in which they will operate. Unpowered vessels (barges) may be used for other purposes, such as large accommodation vessels, towed to where they are needed and stationed there as long as necessary. An example is the Bibby Stockholm. Types Named barge types include the accommodation barge, the ferrocement barge, the Spitz barge, and the Severn barge.
Technology
Maritime transport
null
4183
https://en.wikipedia.org/wiki/Botany
Botany
Botany, also called plant science or phytology, is the branch of natural science and biology studying plants, especially their anatomy, taxonomy, and ecology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants, including some 391,000 species of vascular plants (of which approximately 369,000 are flowering plants) and approximately 20,000 bryophytes. Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately. Modern botany is a broad subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st-century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity. Etymology The term "botany" comes from the Ancient Greek word () meaning "pasture", "herbs" "grass", or "fodder"; is in turn derived from (Greek: ), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. History Early botany Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Avestan writings, and in works from China purportedly from before 221 BCE. 
Modern botany traces its roots back to Ancient Greece specifically to Theophrastus (–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later. Another work from Ancient Greece that made an early impact on botany is , a five-volume encyclopedia about preliminary herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) the Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati, and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner. In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621. German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification. Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, Polymath Robert Hooke discovered cells (a term he coined) in cork, and a short time later in living plant tissue. Early modern botany During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. 
This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi. Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity. In the 19th century botany was a socially acceptable hobby for upper-class women. These women would collect and paint flowers and plants from around the world with scientific accuracy. The paintings were used to record many species that could not be transported or maintained in other environments. Marianne North illustrated over 900 species in extreme detail with watercolor and oil paintings. Her work and many other women's botany work was the beginning of popularizing botany to a wider audience. Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's , published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems. Late modern botany Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century. The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. 
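Fick's laws mentioned earlier in this section take their standard textbook form; in one dimension, the first law relates the diffusive flux J to the concentration gradient, and the second law describes how concentration changes over time (this is the general physical statement rather than a formulation specific to any botanical source cited here):

$$J = -D\,\frac{\partial \varphi}{\partial x}, \qquad \frac{\partial \varphi}{\partial t} = D\,\frac{\partial^{2} \varphi}{\partial x^{2}},$$

where φ is the concentration, x is position, t is time, and D is the diffusion coefficient of the diffusing substance, for example water vapour or carbon dioxide moving through a stomatal pore.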
Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants. Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides. 20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield. Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern Molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research. Branches of botany Botany is divided along several axes. Some subfields of botany relate to particular groups of organisms. 
Divisions related to the broader historical sense of botany include bacteriology, mycology (or fungology) and phycology - the study of bacteria, fungi and algae respectively - with lichenology as a subfield of mycology. The narrower sense of botany, the study of embryophytes (land plants), is disambiguated as phytology. Bryology is the study of mosses (and in the broader sense also liverworts and hornworts). Pteridology (or filicology) is the study of ferns and allied plants. A number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology (or graminology) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. Study can also be divided by guild rather than clade or grade. Dendrology is the study of woody plants. Many divisions of biology have botanical subfields. These are commonly denoted by prefixing the word plant (e.g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics), or prefixing or substituting the prefix phyto- (e.g. phytochemistry, phytogeography). The study of fossil plants is palaeobotany. Other fields are denoted by adding or substituting the word botany (e.g. systematic botany). Phytosociology is a subfield of plant ecology that classifies and studies communities of plants. The intersection of fields from the above pair of categories gives rise to fields such as bryogeography (the study of the distribution of mosses). Different parts of plants also give rise to their own subfields, including xylology, carpology (or fructology) and palynology, these being the study of wood, fruit and pollen/spores respectively. Botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology and phytopharmacology. Scope and importance The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms carrying out aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life. 
The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses. Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. Human nutrition Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. 
When applied to the investigation of historical plant–people relationships, ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, who learned to distinguish edible plants from inedible ones. These relationships between indigenous peoples and plants were recorded by ethnobotanists. Plant biochemistry Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product. The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that are used to make molecules of ATP and NADPH, which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch, which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant. Unlike animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including the synthesis of all their fatty acids and most of their amino acids. The fatty acids that chloroplasts make are used for many purposes, such as building cell membranes and making the polymer cutin, which is found in the plant cuticle that protects land plants from drying out. Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan, from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant draws water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. 
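The photosynthetic carbon chemistry described earlier in this section is often summarised in the familiar net equation for oxygenic photosynthesis, a simplification that lumps the light reactions and the Calvin cycle together; glyceraldehyde 3-phosphate is an intermediate on the way to the glucose shown on the right-hand side:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light energy}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]

Cellular respiration runs this chemistry essentially in reverse, releasing the stored chemical energy for use by the plant or by the organisms that eat it.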
Sporopollenin is a chemically resistant polymer found in the outer cell walls of the spores and pollen of land plants; it is responsible for the survival of early land plant spores and of the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple, and some dicots like the Asteraceae, have since independently evolved pathways like crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis, which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants. Medicine and materials Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals, as with opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (the active ingredient in cannabis), caffeine, morphine and nicotine, come directly from plants. Others are simple derivatives of botanical natural products. For example, the painkiller aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants to treat illness and disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as a route to drug discovery. Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine; yellow weld and blue woad, used together to produce Lincoln green; indoxyl, the source of the blue dye indigo traditionally used to dye denim; and the artist's pigments gamboge and rose madder. Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent, and as an artist's material, and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels such as biodiesel, important alternatives to fossil fuels. 
Sweetgrass was used by Native Americans to ward off insects such as mosquitoes. These insect-repelling properties of sweetgrass were later traced, in work reported by the American Chemical Society, to the molecules phytol and coumarin. Plant ecology Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data gathered from indigenous peoples by ethnobotanists. Such information can reveal a great deal about how the land was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of plants' distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants, as well as similar abiotic and biotic factors, climate, and geography, make up biomes like tundra or tropical rainforest. Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food; ants are recruited by ant plants to provide protection; honey bees, bats and other animals pollinate flowers; and humans and other animals act as dispersal vectors to spread spores and seeds. Plants, climate and environmental change Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates of atmospheric carbon dioxide concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. Genetics Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as seed shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. 
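As an illustration of the Mendelian inheritance mentioned above, the short Python sketch below simulates a monohybrid cross between two heterozygous pea plants; the allele symbols (R for the dominant allele, r for the recessive) and the number of offspring are illustrative choices, not details from the text.

import random

def monohybrid_cross(n_offspring=10000, seed=1):
    """Simulate an Rr x Rr cross and count dominant versus recessive phenotypes."""
    random.seed(seed)
    dominant = recessive = 0
    for _ in range(n_offspring):
        # Each parent contributes one allele at random (Mendel's law of segregation).
        genotype = random.choice("Rr") + random.choice("Rr")
        if "R" in genotype:
            dominant += 1    # at least one dominant allele gives the dominant phenotype
        else:
            recessive += 1   # only the rr genotype shows the recessive phenotype
    return dominant, recessive

dom, rec = monohybrid_cross()
print(f"dominant : recessive = {dom} : {rec} (about {dom / rec:.1f} : 1)")

With a large number of simulated offspring the phenotype ratio converges on the 3:1 value that Mendel observed for traits such as seed shape in peas.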
Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms. Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." An important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. This beneficial effect is also known as hybrid vigor or heterosis. Once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. 
Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomictic seed. As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. Molecular genetics A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. Epigenetics Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. 
Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others. Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. Epigenetic changes can lead to paramutations, which do not follow the Mendelian heritage rules. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. Plant evolution The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina. Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. 
Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. Plant physiology Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. Plant hormones Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids. The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle . . acts like the brain of one of the lower animals . . directing the several movements". About the same time, the role of auxins (from the Greek , to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. 
The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem, where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in the regulation of plant height by controlling stem elongation, and in the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon, which is rapidly metabolised to produce ethylene, is used on an industrial scale to promote the ripening of cotton, pineapples and other climacteric crops. Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light. Plant anatomy and morphology Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells, and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. 
Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means. Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations. Systematic botany Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. 
While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress. Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available). The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related. Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent. From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." 
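Cladograms of the kind described above are often exchanged as plain text in the parenthesised Newick convention. The minimal Python sketch below is illustrative only: it stores a three-taxon tree as a nested tuple and prints its Newick form, encoding the relationship discussed in the next paragraph, in which fungi group with animals rather than with plants.

def to_newick(node) -> str:
    """Recursively convert a nested tuple of clade names into Newick notation."""
    if isinstance(node, str):   # a leaf taxon
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

# Fungi and animals form a clade that excludes plants.
tree = (("Fungi", "Animalia"), "Plantae")
print(to_newick(tree) + ";")    # prints ((Fungi,Animalia),Plantae);

The nesting of the parentheses mirrors the branching pattern of the cladogram, which is why this textual form is a convenient way to record the results of the molecular phylogenetic analyses described above.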
As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than to animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants. In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and the speed at which data can be analysed. Symbols A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used the planetary symbols ♂ (Mars) for biennial plants, ♃ (Jupiter) for herbaceous perennials and ♄ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willdenow used ♄ (Saturn) for neuter in addition to ☿ (Mercury) for hermaphroditic. The following symbols are still used: ♀ female, ♂ male, ⚥ hermaphrodite/bisexual, ⚲ vegetative (asexual) reproduction, ◊ sex unknown, ☉ annual, ⚇ biennial, ♾ perennial, ☠ poisonous, 🛈 further information, × crossbred hybrid, + grafted hybrid.
Biology and health sciences
Biology
null
4185
https://en.wikipedia.org/wiki/Bacteriophage
Bacteriophage
A bacteriophage, also known informally as a phage, is a virus that infects and replicates within bacteria and archaea. The term is derived from "bacteria" and the Greek phagein, meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) and as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm. Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10³¹ bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second largest component of biomass after prokaryotes; up to 9×10⁸ virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by bacteriophages. Bacteriophages were used from the 1920s as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy). Bacteriophages are known to interact with the immune system both indirectly via bacterial expression of phage-encoded proteins and directly by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research. Classification Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid. It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals. There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus, containing linear ssRNA genomes, and the unassigned genus Lilyvirus of the order Caudovirales, containing a linear dsDNA genome. History In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and that it could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following: a stage in the life cycle of the bacteria, an enzyme produced by the bacteria themselves, or a virus that grew on and destroyed the bacteria. Twort's research was interrupted by the onset of World War I, as well as a shortage of funding and the discoveries of antibiotics. Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe... a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacterium-eater (from the Greek phagein, meaning "to devour"). 
He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy. In 1919, in Paris, France, d'Hérelle conducted the first clinical application of a bacteriophage, with the first reported use in the United States being in 1922. Nobel prizes awarded for phage research In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically the work of Hershey, as contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles. Uses Phage therapy Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons: Antibiotics were discovered and marketed widely. They were easier to make, store, and prescribe. Medical trials of phages were carried out, but a basic lack of understanding of phages raised questions about the validity of these trials. Publication of research in the Soviet Union was mainly in the Russian or Georgian languages and for many years was not followed internationally. The Soviet technology was widely discouraged and in some cases illegal due to the red scare. The use of phages has continued since the end of the Cold War in Russia, Georgia, and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009, which evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concludes that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. 
On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection. Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, and engineering the phage genes responsible for coding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. There have been results showing that T4 phages that are small in size and short-tailed can be helpful in detecting E. coli in the human body. Therapeutic efficacy of a phage cocktail was evaluated in a mouse model with nasal infection of multi-drug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate than untreated mice at seven days post-infection. In 2017, a 68-year-old diabetic patient with necrotizing pancreatitis complicated by a pseudocyst infected with MDR A. baumannii strains had been treated with a cocktail of azithromycin, rifampicin, and colistin for 4 months without results, and his overall health was rapidly declining. Because discussion had begun of the clinical futility of further treatment, an Emergency Investigational New Drug (eIND) application was filed and approved, as a last effort to at the very least gain valuable medical data from the situation. The patient was then given phage therapy using a percutaneously (PC) injected cocktail containing nine different phages that had been identified as effective against the primary infection strain by rapid isolation and testing techniques (a process which took under a day). This proved effective for a very brief period, although the patient remained unresponsive and his health continued to worsen; soon isolates of an A. baumannii strain that showed resistance to this cocktail were being collected from drainage of the cyst, and a second cocktail, tested to be effective against this new strain, was added, this time by intravenous (IV) injection, as it had become clear that the infection was more pervasive than originally thought. Once on the combination of the IV and PC therapy, the patient's downward clinical trajectory reversed, and within two days he had awoken from his coma and become responsive. As his immune system began to function, he had to be temporarily removed from the cocktail because his fever was spiking, but after two days the phage cocktails were re-introduced at levels he was able to tolerate. The original three-antibiotic cocktail was replaced by minocycline after the bacterial strain was found not to be resistant to it, and he rapidly regained full lucidity, although he was not discharged from the hospital until roughly 145 days after phage therapy began. Towards the end of the therapy it was discovered that the bacteria had become resistant to both of the original phage cocktails, but they were continued because they seemed to be preventing minocycline resistance from developing in the bacterial samples collected, and so were having a useful synergistic effect. Other Food industry Phages have increasingly been used to improve the safety of food products and to suppress spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. 
In that same year, the FDA approved LISTEX (developed and produced by Micreos), which uses bacteriophages on cheese to kill Listeria monocytogenes bacteria, giving them generally recognized as safe (GRAS) status. In July 2007, the same bacteriophages were approved for use on all food products. In 2011, the USDA confirmed that LISTEX is a clean-label processing aid. Research in the field of food safety is continuing to see if lytic phages are a viable option to control other food-borne pathogens in various food products. Water indicators Bacteriophages, including those specific to Escherichia coli, have been employed as indicators of fecal contamination in water sources. Due to the structural and biological characteristics they share with enteric viruses, coliphages can serve as proxies for viral fecal contamination and the presence of pathogenic viruses such as rotavirus, norovirus, and HAV. Research conducted on wastewater treatment systems has revealed significant disparities in the behavior of coliphages compared to fecal coliforms, demonstrating a distinct correlation with the recovery of pathogenic viruses at the treatment's conclusion. To establish a secure discharge threshold, studies have determined that discharges below 3000 PFU/100 mL can be considered safe in terms of limiting the release of pathogenic viruses. Diagnostics In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA. Counteracting bioweapons and toxins Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. Other applications for bacteriophages are as biocides for environmental surfaces, e.g., in hospitals, and as preventative treatments for catheters and medical devices before use in clinical settings. The technology for phages to be applied to dry surfaces, e.g., uniforms, curtains, or even sutures for surgery, now exists. Clinical trials reported in Clinical Otolaryngology show success in veterinary treatment of pet dogs with otitis. Bacterium sensing and identification The sensing of phage-triggered ion cascades (SEPTIC) bacterium sensing and identification method uses the ion emission and its dynamics during phage infection and offers high specificity and speed for detection. Phage display Phage display is a different use of phages involving a library of phages with a variable peptide linked to a surface protein. Each phage genome encodes the variant of the protein displayed on its surface (hence the name), providing a link between the peptide variant and its encoding gene. Variant phages from the library may be selected through their binding affinity to an immobilized molecule (e.g., botulism toxin) to neutralize it. 
The bound, selected phages can be multiplied by reinfecting a susceptible bacterial strain, thus allowing them to retrieve the peptides encoded in them for further study. Antimicrobial drug discovery Phage proteins often have antimicrobial activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria. Basic research Bacteriophages are important model organisms for studying principles of evolution and ecology. Detriments Dairy industry Bacteriophages present in the environment can cause cheese to not ferment. In order to avoid this, mixed-strain starter cultures and culture rotation regimes can be used. Genetic engineering of culture microbes – especially Lactococcus lactis and Streptococcus thermophilus – have been studied for genetic analysis and modification to improve phage resistance. This has especially focused on plasmid and recombinant chromosomal modifications. Some research has focused on the potential of bacteriophages as antimicrobial against foodborne pathogens and biofilm formation within the dairy industry. As the spread of antibiotic resistance is a main concern within the dairy industry, phages can serve as a promising alternative. Replication The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors. With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant and usually is temporary. In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients, then, the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow the lysogenic cycle and the lytic cycle is the phage lambda of E. coli. Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed. 
Attachment and penetration Bacterial cells are protected by a cell wall of polysaccharides, which are important virulence factors protecting bacterial cells against both immune host defenses and antibiotics. Host growth conditions also influence the ability of the phage to attach to and invade them. As phage virions do not move independently, they must rely on random encounters with the correct receptors when in solution, such as blood, lymphatic circulation, irrigation, soil water, etc. Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell. After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a sort of bending motion in the shaft: it shifts to the side, contracts closer to the cell and pushes back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead, they use their small, tooth-like tail fibers enzymatically to degrade a portion of the cell membrane before inserting their genetic material. Synthesis of proteins and nucleic acid Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so that it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assembly of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene and, in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection. Virion assembly In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes. Early studies of bacteriophage T4 (1962–1964) provided an opportunity to gain understanding of virtually all of the genes that are essential for growth of the bacteriophage under laboratory conditions. These studies were made possible by the availability of two classes of conditional lethal mutants. One class of such mutants was referred to as amber mutants. 
The other class of conditional lethal mutants was referred to as temperature-sensitive mutants. Studies of these two classes of mutants led to considerable insight into the functions and interactions of the proteins employed in the machinery of DNA replication, repair and recombination, and into how viruses are assembled from protein and nucleic acid components (molecular morphogenesis). Release of virions Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis, by tailed phages, is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free, and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host and instead become long-term residents as prophages. Communication Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it. Genome structure Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, with only a few kilobases. However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid varies along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb. Bacteriophage genomes can be highly mosaic, i.e. the genomes of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of the nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle. Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems, which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A). Systems biology The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters into a bacterial host cell may express hundreds of phage proteins which will affect the expression of numerous host genes or the host's metabolism.
All of these complex interactions can be described and simulated in computer models. For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect, hence the challenge becomes identifying the direct interactions between bacteria and phage. Several attempts have been made to map protein–protein interactions among phage and their host. For instance, bacteriophage lambda was found to interact with its host, E. coli, through dozens of protein–protein interactions. Again, the significance of many of these interactions remains unclear, but these studies suggest that there most likely are several key interactions and many indirect interactions whose role remains uncharacterized. Host resistance Bacteriophages are a major threat to bacteria, and prokaryotes have evolved numerous mechanisms to block infection or to block the replication of bacteriophages within host cells. The CRISPR system is one such mechanism, as are retrons and the anti-toxin system encoded by them. The Thoeris defense system is known to deploy a unique strategy for bacterial antiphage resistance via NAD+ degradation. Bacteriophage–host symbiosis Temperate phages are bacteriophages that integrate their genetic material into the host as extrachromosomal episomes or as a prophage during a lysogenic cycle. Some temperate phages can confer fitness advantages to their host in numerous ways, including giving antibiotic resistance through the transfer or introduction of antibiotic resistance genes (ARGs), protecting hosts from phagocytosis, protecting hosts from secondary infection through superinfection exclusion, enhancing host pathogenicity, or enhancing bacterial metabolism or growth. Bacteriophage–host symbiosis may benefit bacteria by providing selective advantages while passively replicating the phage genome. In the environment Metagenomics has allowed the in-water detection of bacteriophages that was not possible previously. Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through ground waters and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10⁸ bacteriophages per ml. Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation. Metagenomics-based studies also have revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance. Recent findings have mapped the complex and intertwined arsenal of anti-phage defense tools in environmental bacteria. In humans Although phages do not infect humans, there are countless phage particles in the human body, given the extensive human microbiome. One's phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophage) has been estimated to comprise dozens to thousands of different viruses. There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially.
Preliminary studies have indicated that common bacteriophages are found in 62% of healthy individuals on average, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD). The abundance of phages may also decline in the elderly. The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may be transmitted locally. Each person develops their own unique crAssphage clusters. CrAss-like phages also may be present in primates besides humans. Commonly studied bacteriophages Among the countless phages, only a few have been studied in detail, including some historically important phages that were discovered in the early days of microbial genetics. These, especially the T-phages, helped to discover important principles of gene structure and function. Commonly studied examples include: 186 phage, λ phage, Φ6 phage, Φ29 phage, ΦX174, Bacteriophage φCb5, G4 phage, M13 phage, MS2 phage (23–28 nm in size), N4 phage, P1 phage, P2 phage, P4 phage, R17 phage, T2 phage, T4 phage (169 kbp genome, 200 nm long), T7 phage, and T12 phage. Bacteriophage databases and resources include Phagesdb and Phagescope.
Biology and health sciences
Biology basics
Biology
4194
https://en.wikipedia.org/wiki/Bohrium
Bohrium
Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes. In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements. Introduction History Discovery Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55, respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough. In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262: 209Bi + 54Cr → 262Bh + n. This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report. Proposed names In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107. There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element.
In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers, as there was some concern that the name might be confused with boron, in particular the difficulty of distinguishing the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony. Isotopes Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed 278Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission. The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for 260Bh, 261Bh, 262Bh, and 262mBh were observed. 264Bh, 265Bh, 266Bh, and 271Bh are more stable at around 1 s, and 267Bh and 272Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with 270Bh and 274Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope 278Bh appearing to have an even longer half-life of about 11.5 minutes. The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion; those with masses 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of 282Nh, 287Mc, 288Mc, 294Ts, and 290Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for 262mBh to about one minute for 270Bh and 274Bh, extending to about 11.5 minutes for the unconfirmed 278Bh, which may have one of the longest half-lives among reported superheavy nuclides. Predicted properties Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available. Chemical Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily display their group oxidation state of +7 and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, BhO4−, analogous to the lighter permanganate, pertechnetate, and perrhenate.
Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV). The lighter group 7 elements are known to form volatile heptoxides M2O7 (M = Mn, Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4. Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical and should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol. Physical and atomic Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (c/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm3, but newer calculations predict a somewhat lower value of 26–27 g/cm3. The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh+ ion is predicted to have an electron configuration of [Rn] 5f14 6d4 7s2, giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh2+ ion is expected to have an electron configuration of [Rn] 5f14 6d3 7s2; in contrast, the Re2+ ion is expected to have a [Xe] 4f14 5d5 configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm. Experimental chemistry In 1995, the first report on attempted isolation of the element was unsuccessful, prompting new theoretical studies into how best to investigate bohrium (using its lighter homologs technetium and rhenium for comparison) and how to remove unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium. In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) conducted a chemistry experiment using six atoms of 267Bh produced in the reaction between 249Bk and 22Ne ions. The resulting atoms were thermalised and reacted with a HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium (as 108Tc) and rhenium (as 169Re).
The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl. The oxychloride-forming reaction can be written 2 Bh + 3 O2 + 2 HCl → 2 BhO3Cl + H2. The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope 274Bh requires a rare and highly radioactive berkelium target for its production, the isotopes 272Bh, 271Bh, and 270Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes.
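The short half-lives quoted in the isotope discussion above are what make such single-atom chemistry so demanding. A minimal sketch (Python, illustrative only) of the standard exponential-decay relation, using the roughly 10-second half-life quoted for 267Bh and a few assumed processing timescales, shows how quickly a freshly produced atom is likely to be lost:

```python
# Fraction of nuclei surviving after time t, given half-life T: N(t)/N0 = 0.5 ** (t / T).
# The ~10 s half-life for 267Bh is taken from the isotope discussion above;
# the elapsed times are assumed, illustrative transport/processing timescales.
half_life_s = 10.0

for t in (10, 30, 60, 120):
    fraction = 0.5 ** (t / half_life_s)
    print(f"t = {t:>3d} s: {fraction:.4f} of the atoms remain")
```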
Physical sciences
Group 7
Chemistry
4196
https://en.wikipedia.org/wiki/Barnard%27s%20Star
Barnard's Star
Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of nearly 6 light-years from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and is the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light. The star is named after Edward Emerson Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890. Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998. Barnard's Star hosts at least one planet, Barnard's Star b, a close-orbiting sub-Earth discovered in 2024, with additional candidates suspected. Previously, it was subject to multiple claims of planets that were disproven. Naming In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Barnard's Star for this star on 1 February 2017 and it is now included in the List of IAU-approved Star Names. Description Barnard's Star is a red dwarf of the dim spectral type M4 and is too faint to see without a telescope; its apparent magnitude is 9.5. At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy; the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity. The proper motion of Barnard's Star corresponds to a relative lateral speed of 90 km/s. The 10.3 arcseconds it travels in a year amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon. The radial velocity of Barnard's Star is about 110 km/s, as measured from the blueshift due to its motion toward the Sun. Combined with its proper motion and distance, this gives a "space velocity" (actual speed relative to the Sun) of roughly 140 km/s. Barnard's Star will make its closest approach to the Sun around 11,800 CE, when it will approach to within about 3.75 light-years. Proxima Centauri is the closest star to the Sun at a position currently 4.24 light-years distant from it.
However, despite Barnard's Star's even closer pass to the Sun in 11,800 CE, it will still not then be the nearest star, since by that time Proxima Centauri will have moved to a yet-nearer proximity to the Sun. At the time of the star's closest pass by the Sun, Barnard's Star will still be too dim to be seen with the naked eye, since its apparent magnitude will only have increased by one magnitude to about 8.5 by then, still being 2.5 magnitudes short of visibility to the naked eye. Barnard's Star has a mass of about 0.16 solar masses and a radius about 0.2 times that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter, its radius is only roughly 2 times larger, due to its much higher density. Its effective temperature is about 3,220 kelvin, and it has a luminosity of only 0.0034 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units. Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium and helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, has led to the designation "intermediate population II star", between a halo and disk star. However, some recently published scientific papers have given much higher estimates for the metallicity of the star, very close to the Sun's level, between 75 and 125% of the solar metallicity. Planetary system In August 2024, by using data from the ESPRESSO spectrograph of the Very Large Telescope, the existence of an exoplanet with a sub-Earth minimum mass and an orbital period of 3.15 days was confirmed. This constituted the first convincing evidence for a planet orbiting Barnard's Star. Additionally, three other candidate low-mass planets were proposed in this study. All of these planets orbit closer to the star than the habitable zone. The confirmed planet is designated Barnard's Star b (or Barnard b), a re-use of the designation originally used for the refuted super-Earth candidate. Previous planetary claims Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion was reported to orbit Barnard's Star. It was believed to have a minimum mass of a few Earth masses and to orbit at about 0.4 AU. However, work presented in July 2021 refuted the existence of this planet. Astrometric planetary claims For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, by using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter.
Van de Kamp had been observing the star from 1938, attempting, with colleagues at the Sproul Observatory at Swarthmore College, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systematic individual errors. Van de Kamp's initial suggestion was a single planet, comparable in mass with Jupiter, at a distance of 4.4 AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were two planets, one of them of about 1.1 Jupiter masses. Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate measuring techniques, failed to verify the planetary companion. Another paper published by John L. Hershey four months earlier, also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated to the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was attributed to an artifact of maintenance and upgrade work. The affair has been discussed as part of a broader scientific review. Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged because of this. Refuted 2018 planetary claim In November 2018, an international team of astronomers announced the detection by radial velocity of a candidate super-Earth orbiting in relatively close proximity to Barnard's Star. Led by Ignasi Ribas of Spain, their work, conducted over two decades of observation, provided strong evidence of the planet's existence. However, the existence of the planet was refuted in 2021, when the radial velocity signal was found to originate from long-term activity on the star itself, related to its rotation. Further studies in the following years confirmed this result. Dubbed Barnard's Star b, the planet was thought to be near the stellar system's snow line, which is an ideal spot for the icy accretion of proto-planetary material. It was thought to orbit at 0.4 AU every 233 days, with a proposed minimum mass a few times that of Earth. The planet would most likely have been frigid and lie outside Barnard's Star's presumed habitable zone. Direct imaging of the planet and its tell-tale light signature would have been possible in the decade after its discovery. Further faint and unaccounted-for perturbations in the system suggested there may be a second planetary companion even farther out. Refining planetary boundaries For the more than four decades between van de Kamp's rejected claim and the eventual announcement of a planet candidate, Barnard's Star was carefully studied and the mass and orbital boundaries for possible planets were slowly tightened. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious.
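The radial-velocity limits discussed in this part of the article constrain only the quantity "M sin i", the planet's mass scaled by its unknown orbital inclination, through the reflex motion it induces in the star. The sketch below (Python) is a minimal, illustrative implementation of the standard two-body semi-amplitude relation; the 3-Earth-mass planet, 233-day period, and circular, edge-on orbit are assumptions chosen to match the scale of the refuted 2018 candidate, while the 0.16-solar-mass stellar mass is the value quoted earlier in the article. The low stellar mass is what makes such a small planet's signal comparatively large.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86400.0        # s

def rv_semi_amplitude(m_planet, m_star, period, inclination_deg=90.0, ecc=0.0):
    """Stellar radial-velocity semi-amplitude K (m/s) induced by a single planet."""
    sin_i = math.sin(math.radians(inclination_deg))
    return ((2.0 * math.pi * G / period) ** (1.0 / 3.0)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

# Assumed illustrative case: a 3 Earth-mass planet on a circular 233-day orbit
# around a 0.16 solar-mass star.
K = rv_semi_amplitude(3 * M_EARTH, 0.16 * M_SUN, 233 * DAY)
print(f"K ≈ {K:.2f} m/s")  # roughly 1 m/s, near the precision limit of current spectrographs
```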
Null results for planetary companions continued throughout the 1980s and 1990s, including interferometric work with the Hubble Space Telescope in 1999. Gatewood was able to show in 1995 that massive planets were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, the Hubble work further excluded planetary companions with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an "M sin i" value greater than 7.5 times the mass of the Earth, or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value). In 2013, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements, taken over a period of 25 years, from the Lick and Keck Observatories and applying Monte Carlo analysis for both circular and eccentric orbits, upper masses for planets out to 1,000-day orbits were determined. Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. It was also discovered that the habitable zone of the star seemed to be devoid of roughly Earth-mass planets or larger, save for face-on orbits. Even though this research greatly restricted the possible properties of planets around Barnard's Star, it did not rule them out completely, as terrestrial planets were always going to be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target; however, the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal, but was stripped of funding in 2007. The analysis of radial velocities that eventually led to the announcement of a candidate super-Earth orbiting Barnard's Star was also used to set more precise upper mass limits for possible planets, up to and within the habitable zone: a maximum of roughly an Earth mass at the inner edge and slightly above an Earth mass at the outer edge of the optimistic habitable zone, corresponding to orbital periods of up to 10 and 40 days respectively. Therefore, it appears that Barnard's Star indeed does not host Earth-mass planets or larger, in hot and temperate orbits, unlike other M-dwarf stars that commonly have these types of planets in close-in orbits. Stellar flares 1998 In 1998, a stellar flare on Barnard's Star was detected based on changes in the spectral emissions on 17 July during an unrelated search for variations in the proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000 K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that "the star would be fantastic for amateurs to observe". The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly.
For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggests it ought to be quiescent; 1998 research showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days. Stellar activity of this sort has created interest in using Barnard's Star as a proxy to understand similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planet located therein would be strongly affected by solar flares, stellar winds, and plasma ejection events. 2019 In 2019, two additional ultraviolet stellar flares were detected, each with far-ultraviolet energy of 3×10²² joules, together with one X-ray stellar flare with energy 1.6×10²² joules. The flare rate observed to date is enough to cause loss of 87 Earth atmospheres per billion years through thermal processes and ≈3 Earth atmospheres per billion years through ion loss processes on Barnard's Star b. Environment Barnard's Star shares much the same neighborhood as the Sun. The neighbors of Barnard's Star are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun (5.98 light-years) and Alpha Centauri (6.47 light-years) are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky, in the westernmost part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs, it would be a first-magnitude star, as Pollux is from the Earth. Proposed exploration Project Daedalus Barnard's Star was studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, uncrewed travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets. The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed. The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years.
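Several figures quoted above for Barnard's Star can be cross-checked with elementary relations: the accumulated proper motion over a human lifetime, the luminosity implied by the Stefan–Boltzmann law, and the Sun's apparent magnitude as seen from the star via the distance modulus. The sketch below (Python) uses only values quoted in the article plus two assumed reference numbers, an 87-year "lifetime" and a solar effective temperature of about 5772 K.

```python
import math

# 1. Proper motion: 10.3 arcseconds/year (quoted above) accumulated over an
#    assumed 87-year human lifetime, compared with the Moon's ~0.5-degree diameter.
drift_deg = 10.3 * 87 / 3600.0
print(f"Drift over 87 yr: {drift_deg:.2f} deg, about {drift_deg / 0.5:.2f} of the Moon's diameter")

# 2. Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4,
#    using R = 0.19 Rsun and T = 3220 K from the article and an assumed Tsun of 5772 K.
lum = 0.19 ** 2 * (3220.0 / 5772.0) ** 4
print(f"Implied luminosity: {lum:.4f} Lsun (the article quotes 0.0034)")

# 3. Distance modulus: the Sun (absolute magnitude 4.83) seen from 1.834 parsecs,
#    m = M + 5 * log10(d / 10 pc).
m_sun = 4.83 + 5 * math.log10(1.834 / 10.0)
print(f"The Sun seen from Barnard's Star: magnitude {m_sun:.2f}, a first-magnitude star")
```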
Physical sciences
Notable stars
Astronomy
4200
https://en.wikipedia.org/wiki/Bo%C3%B6tes
Boötes
Boötes is a constellation in the northern sky, located between 0° and +60° declination, and 13 and 16 hours of right ascension on the celestial sphere. The name comes from the Greek Βοώτης (Boōtēs), meaning 'herdsman' or 'plowman' (literally, 'ox-driver'; from boûs 'cow'). One of the 48 constellations described by the 2nd-century astronomer Ptolemy, Boötes is now one of the 88 modern constellations. It contains the fourth-brightest star in the night sky, the orange giant Arcturus. Epsilon Boötis, or Izar, is a colourful multiple star popular with amateur astronomers. Boötes is home to many other bright stars, including eight above the fourth magnitude and an additional 21 above the fifth magnitude, making a total of 29 stars easily visible to the naked eye. History and mythology In ancient Babylon, the stars of Boötes were known as SHU.PA. They were apparently depicted as the god Enlil, who was the leader of the Babylonian pantheon and special patron of farmers. Boötes may have been represented by the animal foreleg constellation in ancient Egypt, resembling that of an ox sufficiently to have been originally proposed as the "foreleg of ox" by Berio. Homer mentions Boötes in the Odyssey as a celestial reference for navigation, describing it as "late-setting" or "slow to set". Exactly whom Boötes is supposed to represent in Greek mythology is not clear. According to one version, he was a son of Demeter, Philomenus, twin brother of Plutus, a plowman who drove the oxen in the constellation Ursa Major. This agrees with the constellation's name. The ancient Greeks saw the asterism now called the "Big Dipper" or "Plough" as a cart with oxen. Some myths say that Boötes invented the plow and was memorialized for his ingenuity as a constellation. Another myth associated with Boötes by Hyginus is that of Icarius, who was schooled as a grape farmer and winemaker by Dionysus. Icarius made wine so strong that those who drank it appeared poisoned, which caused shepherds to avenge their supposedly poisoned friends by killing Icarius. Maera, Icarius' dog, brought his daughter Erigone to her father's body, whereupon both she and the dog died by suicide. Zeus then chose to honor all three by placing them in the sky as constellations: Icarius as Boötes, Erigone as Virgo, and Maera as Canis Major or Canis Minor. Following another reading, the constellation is identified with Arcas and also referred to as Arcas and Arcturus, son of Zeus and Callisto. Arcas was brought up by his maternal grandfather Lycaon, whom Zeus one day visited for a meal. To verify that the guest was really the king of the gods, Lycaon killed his grandson and prepared a meal made from his flesh. Zeus noticed and became very angry, transforming Lycaon into a wolf and giving life back to his son. In the meantime Callisto had been transformed into a she-bear by Zeus's wife Hera, who was angry at Zeus's infidelity. This is corroborated by the Greek name for Boötes, Arctophylax, which means "Bear Watcher". Callisto, in the form of a bear, was almost killed by her son, who was out hunting. Zeus rescued her, taking her into the sky where she became Ursa Major, "the Great Bear". Arcturus, the name of the constellation's brightest star, comes from the Greek word meaning "guardian of the bear". Sometimes Arcturus is depicted as leading the hunting dogs of nearby Canes Venatici and driving the bears of Ursa Major and Ursa Minor. Several former constellations were formed from stars now included in Boötes.
Quadrans Muralis, the Quadrant, was a constellation created near Beta Boötis from faint stars. It was designated in 1795 by Jérôme Lalande, an astronomer who used a quadrant to perform detailed astrometric measurements. Lalande worked with Nicole-Reine Lepaute and others to predict the 1758 return of Halley's Comet. Quadrans Muralis was formed from the stars of eastern Boötes, western Hercules and Draco. It was originally called Le Mural by Jean Fortin in his 1795 Atlas Céleste; it was not given the name Quadrans Muralis until Johann Bode's 1801 Uranographia. The constellation was quite faint, with its brightest stars reaching the 5th magnitude. Mons Maenalus, representing the Maenalus mountains, was created by Johannes Hevelius in 1687 at the foot of the constellation's figure. The mountain was named for the son of Lycaon, Maenalus. The mountain, one of Diana's hunting grounds, was also holy to Pan. Non-Western astronomy The stars of Boötes were incorporated into many different Chinese constellations. Arcturus was part of the most prominent of these, variously designated as the celestial king's throne (Tian Wang) or the Blue Dragon's horn (Daijiao); the name Daijiao, meaning "great horn", is more common. Arcturus was given such importance in Chinese celestial mythology because of its status marking the beginning of the lunar calendar, as well as its status as the brightest star in the northern night sky. Two constellations flanked Daijiao: Yousheti to the right and Zuosheti to the left; they represented companions that orchestrated the seasons. Zuosheti was formed from modern Zeta, Omicron and Pi Boötis, while Yousheti was formed from modern Eta, Tau and Upsilon Boötis. Dixi, the Emperor's ceremonial banquet mat, was north of Arcturus, consisting of the stars 12, 11 and 9 Boötis. Another northern constellation was Qigong, the Seven Dukes, which mostly straddled the Boötes-Hercules border. It included either Delta Boötis or Beta Boötis as its terminus. The other Chinese constellations made up of the stars of Boötes existed in the modern constellation's north; they are all representations of weapons. Tianqiang, the spear, was formed from Iota, Kappa and Theta Boötis; Genghe, variously representing a lance or shield, was formed from Epsilon, Rho and Sigma Boötis. There were also two weapons made up of a single star. Xuange, the halberd, was represented by Lambda Boötis, and Zhaoyao, either the sword or the spear, was represented by Gamma Boötis. Two Chinese constellations have an uncertain placement in Boötes. Kangchi, the lake, was placed south of Arcturus, though its specific location is disputed. It may have been placed entirely in Boötes, on either side of the Boötes-Virgo border, or on either side of the Virgo-Libra border. The constellation Zhouding, a bronze tripod-mounted container used for food, was sometimes cited as the stars 1, 2 and 6 Boötis. However, it has also been associated with three stars in Coma Berenices. Boötes is also known to Native American cultures. In the Yup'ik language, Boötes is Taluyaq, literally "fish trap," and the funnel-shaped part of the fish trap is known as Ilulirat. Characteristics Boötes is a constellation bordered by Virgo to the south, Coma Berenices and Canes Venatici to the west, Ursa Major to the northwest, Draco to the northeast, and Hercules, Corona Borealis and Serpens Caput to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Boo".
The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 16 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between approximately 13 and 16 hours, while the declination coordinates stretch from +7.36° to +55.1°. Covering 907 square degrees, Boötes culminates at midnight around 2 May and ranks 13th in area. Colloquially, its pattern of stars has been likened to a kite or ice cream cone. However, depictions of Boötes have varied historically. Aratus described him circling the north pole, herding the two bears. Later ancient Greek depictions, described by Ptolemy, have him holding the reins of his hunting dogs (Canes Venatici) in his left hand, with a spear, club, or staff in his right hand. After Hevelius introduced Mons Maenalus in 1681, Boötes was often depicted standing on the Peloponnese mountain. By 1801, when Johann Bode published his Uranographia, Boötes had acquired a sickle, which was also held in his left hand. The placement of Arcturus has also been mutable through the centuries. Traditionally, Arcturus lay between his thighs, as Ptolemy depicted him. However, Germanicus Caesar deviated from this tradition by placing Arcturus "where his garment is fastened by a knot". Features Stars In his Uranometria, Johann Bayer used the Greek letters alpha through to omega and then A to k to label what he saw as the most prominent 35 stars in the constellation, with subsequent astronomers splitting Kappa, Mu, Nu and Pi as two stars each. Nu is also the same star as Psi Herculis. John Flamsteed numbered 54 stars for the constellation. Located 36.7 light-years from Earth, Arcturus, or Alpha Boötis, is the brightest star in Boötes and the fourth-brightest star in the sky at an apparent magnitude of −0.05; it is also the brightest star north of the celestial equator, just shading out Vega and Capella. Its name comes from the Greek for "bear-keeper". An orange giant of spectral class K1.5III, Arcturus is an ageing star that has exhausted its core supply of hydrogen and cooled and expanded to a diameter of 27 solar diameters, equivalent to approximately 32 million kilometers. Though its mass is approximately one solar mass, Arcturus shines with 133 times the luminosity of the Sun. Bayer located Arcturus above the Herdman's left knee in his Uranometria. Nearby Eta Boötis, or Muphrid, is the uppermost star denoting the left leg. It is a 2.68-magnitude star 37 light-years distant with a spectral class of G0IV, indicating it has just exhausted its core hydrogen and is beginning to expand and cool. It is 9 times as luminous as the Sun and has 2.7 times its diameter. Analysis of its spectrum reveals that it is a spectroscopic binary. Muphrid and Arcturus lie only 3.3 light-years away from each other. Viewed from Arcturus, Muphrid would have a visual magnitude of −2½, while Arcturus would be around visual magnitude −4½ when seen from Muphrid. Marking the herdsman's head is Beta Boötis, or Nekkar, a yellow giant of magnitude 3.5 and spectral type G8IIIa. Like Arcturus, it has expanded and cooled off the main sequence—likely to have lived most of its stellar life as a blue-white B-type main sequence star. Its common name comes from the Arabic phrase for "ox-driver". It is 219 light-years away. Located 86 light-years distant, Gamma Boötis, or Seginus, is a white giant star of spectral class A7III, with a luminosity 34 times and diameter 3.5 times that of the Sun.
It is a Delta Scuti variable, ranging between magnitudes 3.02 and 3.07 every 7 hours. These stars are short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. Delta Boötis is a wide double star with a primary of magnitude 3.5 and a secondary of magnitude 7.8. The primary is a yellow giant that has cooled and expanded to 10.4 times the diameter of the Sun. Of spectral class G8IV, it is around 121 light-years away, while the secondary is a yellow main sequence star of spectral type G0V. The two are thought to take 120,000 years to orbit each other. Mu Boötis, known as Alkalurops, is a triple star popular with amateur astronomers. It has an overall magnitude of 4.3 and is 121 light-years away. Its name is from the Arabic phrase for "club" or "staff". The primary appears to be of magnitude 4.3 and is blue-white. The secondary appears to be of magnitude 6.5, but is actually a close double star itself with a primary of magnitude 7.0 and a secondary of magnitude 7.6. The secondary and tertiary stars have an orbital period of 260 years. The primary has an absolute magnitude of 2.6 and is of spectral class F0. The secondary and tertiary stars are separated by 2 arcseconds; the primary and secondary are separated by 109.1 arcseconds at an angle of 171 degrees. Nu Boötis is an optical double star. The primary is an orange giant of magnitude 5.0 and the secondary is a white star of magnitude 5.0. The primary is 870 light-years away and the secondary is 430 light-years. Epsilon Boötis, also known as Izar or Pulcherrima, is a close triple star popular with amateur astronomers and the most prominent binary star in Boötes. The primary is a yellow- or orange-hued magnitude 2.5 giant star, the secondary is a magnitude 4.6 blue-hued main-sequence star, and the tertiary is a magnitude 12.0 star. The system is 210 light-years away. The name "Izar" comes from the Arabic word for "girdle" or "loincloth", referring to its location in the constellation. The name "Pulcherrima" comes from the Latin phrase for "most beautiful", referring to its contrasting colors in a telescope. The primary and secondary stars are separated by 2.9 arcseconds at an angle of 341 degrees; the primary's spectral class is K0. To the naked eye, Izar has a magnitude of 2.37. Nearby Rho and Sigma Boötis denote the herdsman's waist. Rho is an orange giant of spectral type K3III located around 160 light-years from Earth. It is ever so slightly variable, wavering by 0.003 of a magnitude from its average of 3.57. Sigma, a yellow-white main-sequence star of spectral type F3V, is suspected of varying in brightness from 4.45 to 4.49. It is around 52 light-years distant. Traditionally known as Aulād al Dhiʼbah (أولاد الضباع – aulād al dhiʼb), "the Whelps of the Hyenas", Theta, Iota, Kappa and Lambda Boötis (or Xuange) are a small group of stars in the far north of the constellation. The magnitude 4.05 Theta Boötis has a spectral type of F7 and an absolute magnitude of 3.8. Iota Boötis is a triple star with a primary of magnitude 4.8 and spectral class of A7, a secondary of magnitude 7.5, and a tertiary of magnitude 12.6. The primary is 97 light-years away. The primary and secondary stars are separated by 38.5 arcseconds, at an angle of 33 degrees. The primary and tertiary stars are separated by 86.7 arcseconds at an angle of 194 degrees. Both the primary and tertiary appear white in a telescope, but the secondary appears yellow-hued.
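The mutual brightness figures quoted earlier for Arcturus and Muphrid follow from the way apparent magnitude scales with distance, m2 = m1 + 5 log10(d2/d1). A minimal check (Python) using only the values quoted above for Muphrid:

```python
import math

def magnitude_at(m_ref: float, d_ref_ly: float, d_new_ly: float) -> float:
    """Apparent magnitude at a new distance: m_new = m_ref + 5*log10(d_new / d_ref)."""
    return m_ref + 5.0 * math.log10(d_new_ly / d_ref_ly)

# Muphrid: magnitude 2.68 at 37 light-years from Earth, but only 3.3 light-years from Arcturus.
print(round(magnitude_at(2.68, 37.0, 3.3), 1))   # about -2.6, i.e. roughly the -2.5 quoted above
```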
Kappa Boötis is another wide double star. The primary is 155 light-years away and has a magnitude of 4.5. The secondary is 196 light-years away and has a magnitude of 6.6. The two components are separated by 13.4 arcseconds, at an angle of 236 degrees. The primary, with spectral class A7, appears white and the secondary appears bluish. An apparent magnitude 4.18 type A0p star, Lambda Boötis is the prototype of a class of chemically peculiar stars, only some of which pulsate as Delta Scuti-type stars. The distinction between the Lambda Boötis stars as a class of stars with peculiar spectra, and the Delta Scuti stars whose class describes pulsation in low-overtone pressure modes, is an important one. While many Lambda Boötis stars pulsate and are Delta Scuti stars, not many Delta Scuti stars have Lambda Boötis peculiarities, since the Lambda Boötis stars are a much rarer class whose members can be found both inside and outside the Delta Scuti instability strip. Lambda Boötis stars are dwarf stars that can be either spectral class A or F. Like BL Boötis-type stars they are metal-poor. Scientists have had difficulty explaining the characteristics of Lambda Boötis stars, partly because only around 60 confirmed members exist, but also due to heterogeneity in the literature. Lambda has an absolute magnitude of 1.8. There are two dimmer F-type stars, magnitude 4.83 12 Boötis, class F8; and magnitude 4.93 45 Boötis, class F5. Xi Boötis is a G8 yellow dwarf of magnitude 4.55, and absolute magnitude is 5.5. Two dimmer G-type stars are magnitude 4.86 31 Boötis, class G8, and magnitude 4.76 44 Boötis, class G0. Of apparent magnitude 4.06, Upsilon Boötis has a spectral class of K5 and an absolute magnitude of −0.3. Dimmer than Upsilon Boötis is magnitude 4.54 Phi Boötis, with a spectral class of K2 and an absolute magnitude of −0.1. Just slightly dimmer than Phi at magnitude 4.60 is O Boötis, which, like Izar, has a spectral class of K0. O Boötis has an absolute magnitude of 0.2. The other four dim stars are magnitude 4.91 6 Boötis, class K4; magnitude 4.86 20 Boötis, class K3; magnitude 4.81 Omega Boötis, class K4; and magnitude 4.83 A Boötis, class K1. There is one bright B-class star in Boötes; magnitude 4.93 Pi1 Boötis, also called Alazal. It has a spectral class of B9 and is 40 parsecs from Earth. There is also one M-type star, magnitude 4.81 34 Boötis. It is of class gM0. Multiple stars Besides Pulcherrima and Alkalurops, there are several other binary stars in Boötes: Xi Boötis is a quadruple star popular with amateur astronomers. The primary is a yellow star of magnitude 4.7 and the secondary is an orange star of magnitude 6.8. The system is 22 light-years away and has an orbital period of 150 years. The primary and secondary have a separation of 6.7 arcseconds at an angle of 319 degrees. The tertiary is a magnitude 12.6 star (though it may be observed to be brighter) and the quaternary is a magnitude 13.6 star. Pi Boötis is a close triple star. The primary is a blue-white star of magnitude 4.9, the secondary is a blue-white star of magnitude 5.8, and the tertiary is a star of magnitude 10.4. The primary and secondary components are separated by 5.6 arcseconds at an angle of 108 degrees; the primary and tertiary components are separated by 128 arcseconds at an angle of 128 degrees. Zeta Boötis is a triple star that consists of a physical binary pair with an optical companion. 
Lying 205 light-years away from Earth, the physical pair has a period of 123.3 years and consists of a magnitude 4.5 and a magnitude 4.6 star. The two components are separated by 1.0 arcseconds at an angle of 303 degrees. The optical companion is of magnitude 10.9, separated by 99.3 arcseconds at an angle of 259 degrees. 44 Boötis is an eclipsing variable star. The primary is of variable magnitude and the secondary is of magnitude 6.2; they have an orbital period of 225 years. The components are separated by 1.0 arcsecond at an angle of 40 degrees. 44 Boötis (i Boötis) is a double variable star 42 light-years away. It has an overall magnitude of 4.8 and appears yellow to the naked eye. The primary is of magnitude 5.3 and the secondary is of magnitude 6.1; their orbital period is 220 years. The secondary is itself an eclipsing variable star with a range of 0.6 magnitudes; its orbital period is 6.4 hours. It is a W Ursae Majoris variable that ranges in magnitude from a minimum of 7.1 to a maximum of 6.5 every 0.27 days. Both stars are G-type stars. Another eclipsing binary star is ZZ Boötis, which has two F2-type components of almost equal mass, and ranges in magnitude from a minimum of 6.79 to a maximum of 7.44 over a period of 5.0 days. Variable stars Two of the brighter Mira-type variable stars in the constellation are R and S Boötis. Both are red giants that range greatly in magnitude—from 6.2 to 13.1 over 223.4 days, and 7.8 to 13.8 over a period of 270.7 days, respectively. Also red giants, V and W Boötis are semi-regular variable stars that range in magnitude from 7.0 to 12.0 over a period of 258 days, and magnitude 4.7 to 5.4 over 450 days, respectively. BL Boötis is the prototype of its class of pulsating variable stars, the anomalous Cepheids. These stars are somewhat similar to Cepheid variables, but they do not have the same relationship between their period and luminosity. Their periods are similar to those of RRAB variables; however, they are far brighter than these stars. BL Boötis is a member of the cluster NGC 5466. Anomalous Cepheids are metal-poor and have masses not much larger than the Sun's. BL Boötis type stars are a subtype of RR Lyrae variables. T Boötis was a nova observed in April 1860 at a magnitude of 9.7. It has never been observed since, but that does not preclude the possibility of it being a highly irregular variable star or a recurrent nova. Stars with planetary systems Extrasolar planets have been discovered encircling ten stars in Boötes as of 2012. Tau Boötis is orbited by a large planet, discovered in 1999. The host star itself is a magnitude 4.5 star of type F7V, 15.6 parsecs from Earth. It has a radius of 1.331 solar radii; a companion, GJ527B, orbits at a distance of 240 AU. Tau Boötis b, the sole planet discovered in the system, orbits at a distance of 0.046 AU every 3.31 days. Discovered through radial velocity measurements, it has a mass of 5.95 Jupiter masses. This makes it a hot Jupiter. The host star and planet are tidally locked, meaning that the planet's orbit and the star's particularly high rotation are synchronized. Furthermore, a slight variability in the host star's light may be caused by magnetic interactions with the planet. Carbon monoxide is present in the planet's atmosphere. Tau Boötis b does not transit its star; rather, its orbit is inclined 46 degrees. Like Tau Boötis b, HAT-P-4b is also a hot Jupiter. It is noted for orbiting a particularly metal-rich host star and being of low density.
Discovered in 2007, HAT-P-4 b has a mass of and a radius of . It orbits every 3.05 days at a distance of 0.04 AU. HAT-P-4, the host star, is an F-type star of magnitude 11.2, 310 parsecs from Earth. It is larger than the Sun, with a mass of and a radius of . Boötes is also home to multiple-planet systems. HD 128311 is the host star for a two-planet system, consisting of HD 128311 b and HD 128311 c, discovered in 2002 and 2005, respectively. HD 128311 b is the smaller planet, with a mass of ; it was discovered through radial velocity observations. It orbits at almost the same distance as Earth, at 1.099 AU; however, its orbital period is significantly longer at 448.6 days. The larger of the two, HD 128311 c, has a mass of and was discovered in the same manner. It orbits every 919 days inclined at 50°, and is 1.76 AU from the host star. The host star, HD 128311, is a K0V-type star located 16.6 parsecs from Earth. It is smaller than the Sun, with a mass of and a radius of ; it also appears below the threshold of naked-eye visibility at an apparent magnitude of 7.51. There are several single-planet systems in Boötes. HD 132406 is a Sun-like star of spectral type G0V with an apparent magnitude of 8.45, 231.5 light-years from Earth. It has a mass of and a radius of . The star is orbited by a gas giant, HD 132406 b, discovered in 2007. HD 132406 b orbits 1.98 AU from its host star with a period of 974 days and has a mass of . The planet was discovered by the radial velocity method. WASP-23 is a star with one orbiting planet, WASP-23 b. The planet, discovered by the transit method in 2010, orbits every 2.944 days very close to its Sun, at 0.0376 AU. It is smaller than Jupiter, at and . Its star is a K1V-type star of apparent magnitude 12.7, far below naked-eye visibility, and smaller than the Sun at and . HD 131496 is also encircled by one planet, HD 131496 b. The star is of type K0 and is located 110 parsecs from Earth; it appears at a visual magnitude of 7.96. It is significantly larger than the Sun, with a mass of and a radius of 4.6 solar radii. Its one planet, discovered in 2011 by the radial velocity method, has a mass of ; its radius is as yet undetermined. HD 131496 b orbits at a distance of 2.09 AU with a period of 883 days. Another single planetary system in Boötes is the HD 132563 system, a triple star system. The parent star, technically HD 132563B, is a star of magnitude 9.47, 96 parsecs from Earth. It is almost exactly the size of the Sun, with the same radius and a mass only 1% greater. Its planet, HD 132563B b, was discovered in 2011 by the radial velocity method. It orbits 2.62 AU from its star with a period of 1544 days. Its orbit is somewhat elliptical, with an eccentricity of 0.22. HD 132563B b is one of very few planets found in triple star systems; it orbits the isolated member of the system, which is separated from the other components, a spectroscopic binary, by 400 AU. Also discovered through the radial velocity method, albeit a year earlier, is HD 136418 b, a two-Jupiter-mass planet that orbits the star HD 136418 at a distance of 1.32 AU with a period of 464.3 days. Its host star is a magnitude 7.88 G5-type star, 98.2 parsecs from Earth. It has a radius of and a mass of . WASP-14 b is one of the most massive and dense exoplanets known, with a mass of and a radius of . Discovered via the transit method, it orbits 0.036 AU from its host star with a period of 2.24 days. WASP-14 b has a density of 4.6 grams per cubic centimeter. 
Its host star, WASP-14, is an F5V-type star of magnitude 9.75, 160 parsecs from Earth. It has a radius of and a mass of . It also has a very high proportion of lithium. Deep-sky objects Boötes is in a part of the celestial sphere facing away from the plane of our home Milky Way galaxy, and so does not have open clusters or nebulae. Instead, it has one bright globular cluster and many faint galaxies. The globular cluster NGC 5466 has an overall magnitude of 9.1 and a diameter of 11 arcminutes. It is a very loose globular cluster with fairly few stars and may appear as a rich, concentrated open cluster in a telescope. NGC 5466 is classified as a Shapley–Sawyer Concentration Class 12 cluster, reflecting its sparsity. Its fairly large diameter means that it has a low surface brightness, so it appears far dimmer than the catalogued magnitude of 9.1 and requires a large amateur telescope to view. Only approximately 12 stars are resolved by an amateur instrument. Boötes has two bright galaxies. NGC 5248 (Caldwell 45) is a type Sc galaxy (a variety of spiral galaxy) of magnitude 10.2. It measures 6.5 by 4.9 arcminutes. Fifty million light-years from Earth, NGC 5248 is a member of the Virgo Cluster of galaxies; it has dim outer arms and obvious H II regions, dust lanes and young star clusters. NGC 5676 is another type Sc galaxy of magnitude 10.9. It measures 3.9 by 2.0 arcminutes. Other galaxies include NGC 5008, a type Sc emission-line galaxy, NGC 5548, a type S Seyfert galaxy, NGC 5653, a type S HII galaxy, NGC 5778 (also classified as NGC 5825), a type E galaxy that is the brightest of its cluster, NGC 5886, and NGC 5888, a type SBb galaxy. NGC 5698 is a barred spiral galaxy, notable for being the host of the 2005 supernova SN 2005bc, which peaked at magnitude 15.3. Further away lies the 250-million-light-year-diameter Boötes void, a huge space largely empty of galaxies. Discovered by Robert Kirshner and colleagues in 1981, it is roughly 700 million light-years from Earth. Beyond it and within the bounds of the constellation, lie two superclusters at around 830 million and 1 billion light-years distant. The Hercules–Corona Borealis Great Wall, the largest-known structure in the Universe, covers a significant part of Boötes. Meteor showers Boötes is home to the Quadrantid meteor shower, the most prolific annual meteor shower. It was discovered in January 1835 and named in 1864 by Alexander Herschel. The radiant is located in northern Boötes near Kappa Boötis, in its namesake former constellation of Quadrans Muralis. Quadrantid meteors are dim, but have a peak visible hourly rate of approximately 100 per hour on January 3–4. The zenithal hourly rate of the Quadrantids is approximately 130 meteors per hour at their peak; it is also a very narrow shower. The Quadrantids are notoriously difficult to observe because of a low radiant and often inclement weather. The parent body of the meteor shower has been disputed for decades; however, Peter Jenniskens has proposed 2003 EH1, a minor planet, as the parent. 2003 EH1 may be linked to C/1490 Y1, a comet previously thought to be a potential parent body for the Quadrantids. 2003 EH1 is a short-period comet of the Jupiter family; 500 years ago, it experienced a catastrophic breakup event. It is now dormant. The Quadrantids had notable displays in 1982, 1985 and 2004. Meteors from this shower often appear to have a blue hue and travel at a moderate speed of 41.5–43 kilometers per second. 
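The difference between the quoted zenithal hourly rate of roughly 130 and the peak visible rate of roughly 100 reflects the standard correction that visual observers apply for radiant elevation and sky brightness: the zenithal rate assumes the radiant is overhead and the sky is dark to magnitude 6.5. A minimal sketch of that correction, using illustrative (assumed) observing conditions and an assumed population index for the shower, none of which are given in the text:

```python
import math

def observed_rate(zhr, radiant_elevation_deg, limiting_magnitude, population_index):
    """Estimate the hourly rate a single observer might count from a given
    zenithal hourly rate (ZHR): HR ~= ZHR * sin(elevation) / r**(6.5 - lm)."""
    elevation = math.radians(radiant_elevation_deg)
    return zhr * math.sin(elevation) / population_index ** (6.5 - limiting_magnitude)

# Assumed conditions: radiant 70 degrees up, limiting magnitude 6.3, r ~= 2.1.
print(round(observed_rate(130, 70, 6.3, 2.1)))  # -> roughly 105 meteors per hour
```

Under darker skies or with the radiant lower in the sky, the same zenithal rate of 130 translates into a noticeably different visible count, which is why the two figures quoted above are not in conflict.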
On April 28, 1984, a remarkable outburst of the normally placid Alpha Bootids was observed by visual observer Frank Witte from 00:00 to 2:30 UTC. In a 6 cm telescope, he observed 433 meteors in a field of view near Arcturus with a diameter of less than 1°. Peter Jenniskens comments that this outburst resembled a "typical dust trail crossing". The Alpha Bootids normally begin on April 14, peaking on April 27 and 28, and finishing on May 12. Its meteors are slow-moving, with a velocity of 20.9 kilometers per second. They may be related to Comet 73P/Schwassmann–Wachmann 3, but this connection is only theorized. The June Bootids, also known as the Iota Draconids, is a meteor shower associated with the comet 7P/Pons–Winnecke, first recognized on May 27, 1916, by William F. Denning. The shower, with its slow meteors, was not observed prior to 1916 because Earth did not cross the comet's dust trail until Jupiter perturbed Pons–Winnecke's orbit, causing it to come within of Earth's orbit the first year the June Bootids were observed. In 1982, E. A. Reznikov discovered that the 1916 outburst was caused by material released from the comet in 1819. Another outburst of the June Bootids was not observed until 1998, because Comet Pons–Winnecke's orbit was not in a favorable position. However, on June 27, 1998, an outburst of meteors radiating from Boötes, later confirmed to be associated with Pons-Winnecke, was observed. They were incredibly long-lived, with trails of the brightest meteors lasting several seconds at times. Many fireballs, green-hued trails, and even some meteors that cast shadows were observed throughout the outburst, which had a maximum zenithal hourly rate of 200–300 meteors per hour. Two Russian astronomers determined in 2002 that material ejected from the comet in 1825 was responsible for the 1998 outburst. Ejecta from the comet dating to 1819, 1825 and 1830 was predicted to enter Earth's atmosphere on June 23, 2004. The predictions of a shower less spectacular than the 1998 showing were borne out in a display that had a maximum zenithal hourly rate of 16–20 meteors per hour that night. The June Bootids are not expected to have another outburst in the next 50 years. Typically, only 1–2 dim, very slow meteors are visible per hour; the average June Bootid has a magnitude of 5.0. It is related to the Alpha Draconids and the Bootids-Draconids. The shower lasts from June 27 to July 5, with a peak on the night of June 28. The June Bootids are classified as a class III shower (variable), and has an average entry velocity of 18 kilometers per second. Its radiant is located 7 degrees north of Beta Boötis. The Beta Bootids is a weak shower that begins on January 5, peaks on January 16, and ends on January 18. Its meteors travel at 43 km/s. The January Bootids is a short, young meteor shower that begins on January 9, peaks from January 16 to January 18, and ends on January 18. The Phi Bootids is another weak shower radiating from Boötes. It begins on April 16, peaks on April 30 and May 1, and ends on May 12. Its meteors are slow-moving, with a velocity of 15.1 km/s. They were discovered in 2006. The shower's peak hourly rate can be as high as six meteors per hour. Though named for a star in Boötes, the Phi Bootid radiant has moved into Hercules. The meteor stream is associated with three different asteroids: 1620 Geographos, 2062 Aten and 1978 CA. The Lambda Bootids, part of the Bootid-Coronae Borealid Complex, are a weak annual shower with moderately fast meteors; 41.75 km/s. 
The complex includes the Lambda Bootids, as well as the Theta Coronae Borealids and Xi Coronae Borealids. All of the Bootid-Coronae Borealid showers are Jupiter family comet showers; the streams in the complex have highly inclined orbits. There are several minor showers in Boötes, some of whose existence is yet to be verified. The Rho Bootids radiate from near the namesake star, and were hypothesized in 2010. The average Rho Bootid has an entry velocity of 43 km/s. It peaks in November and lasts for three days. The Rho Bootid shower is part of the SMA complex, a group of meteor showers related to the Taurids, which is in turn linked to the comet 2P/Encke. However, the link to the Taurid shower remains unconfirmed and may be a chance correlation. Another such shower is the Gamma Bootids, which were hypothesized in 2006. Gamma Bootids have an entry velocity of 50.3 km/s. The Nu Bootids, hypothesized in 2012, have faster meteors, with an entry velocity of 62.8 km/s.
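The apparent magnitudes, absolute magnitudes, and distances quoted for the constellation's stars above are tied together by the distance modulus, m − M = 5 log10(d) − 5 with d in parsecs. A minimal worked check, using the figures given earlier for Xi Boötis (apparent magnitude 4.55 at about 22 light-years); the small difference from the quoted absolute magnitude of 5.5 comes from rounding in the quoted values:

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: M = m - 5*log10(d_parsec) + 5."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc) + 5

# Xi Bootis: m = 4.55, d ~= 22 light-years.
print(round(absolute_magnitude(4.55, 22), 1))  # -> 5.4, close to the quoted 5.5
```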
Physical sciences
Other
Astronomy
4210
https://en.wikipedia.org/wiki/Bipedalism
Bipedalism
Bipedalism is a form of terrestrial locomotion where an animal moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped , meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping. Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently. A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally. Etymology The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'. Advantages Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater field of vision with improved detection of distant dangers or resources, access to deeper water for wading animals and allows the animals to reach higher food sources with their mouths. While upright, non-locomotory limbs become free for other uses, including manipulation (in primates and rodents), flight (in birds), digging (in the giant pangolin), combat (in bears, great apes and the large monitor lizard) or camouflage. The maximum bipedal speed appears slower than the maximum speed of quadrupedal movement with a flexible backbone – both the ostrich and the red kangaroo can reach speeds of , while the cheetah can exceed . Even though bipedalism is slower at first, over long distances, it has allowed humans to outrun most other animals according to the endurance running hypothesis. Bipedality in kangaroo rats has been hypothesized to improve locomotor performance, which could aid in escaping from predators. Facultative and obligate bipedalism Zoologists often label behaviors, including bipedalism, as "facultative" (i.e. optional) or "obligate" (the animal has no reasonable alternative). Even this distinction is not completely clear-cut — for example, humans other than infants normally walk and run in biped fashion, but almost all can crawl on hands and knees when necessary. There are even reports of humans who normally walk on all fours with their feet but not their knees on the ground, but these cases are a result of conditions such as Uner Tan syndrome — very rare genetic neurological disorders rather than normal behavior. 
Even if one ignores exceptions caused by some kind of injury or illness, there are many unclear cases, including the fact that "normal" humans can crawl on hands and knees. This article therefore avoids the terms "facultative" and "obligate", and focuses on the range of styles of locomotion normally used by various groups of animals. Normal humans may be considered "obligate" bipeds because the alternatives are very uncomfortable and usually only resorted to when walking is impossible. Movement There are a number of states of movement commonly associated with bipedalism. Standing. Staying still on both legs. In most bipeds this is an active process, requiring constant adjustment of balance. Walking. One foot in front of another, with at least one foot on the ground at any time. Running. One foot in front of another, with periods where both feet are off the ground. Jumping/hopping. Moving by a series of jumps with both feet moving together. Bipedal animals The great majority of living terrestrial vertebrates are quadrupeds, with bipedalism exhibited by only a handful of living groups. Humans, gibbons and large birds walk by raising one foot at a time. On the other hand, most macropods, smaller birds, lemurs and bipedal rodents move by hopping on both legs simultaneously. Tree kangaroos are able to walk or hop, most commonly alternating feet when moving arboreally and hopping on both feet simultaneously when on the ground. Extant reptiles Many species of lizards become bipedal during high-speed, sprint locomotion, including the world's fastest lizard, the spiny-tailed iguana (genus Ctenosaura). Early reptiles and lizards The first known biped is the bolosaurid Eudibamus whose fossils date from 290 million years ago. Its long hind-legs, short forelegs, and distinctive joints all suggest bipedalism. The species became extinct in the early Permian. Archosaurs (includes crocodilians and dinosaurs) Birds All birds are bipeds, as is the case for all theropod dinosaurs. However, hoatzin chicks have claws on their wings which they use for climbing. Other archosaurs Bipedalism evolved more than once in archosaurs, the group that includes both dinosaurs and crocodilians. All dinosaurs are thought to be descended from a fully bipedal ancestor, perhaps similar to Eoraptor. Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian-Triassic extinction event wiped out an estimated 95 percent of all life on Earth. Radiometric dating of fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists suspect Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Bipedal movement also re-evolved in a number of other dinosaur lineages such as the iguanodonts. Some extinct members of Pseudosuchia, a sister group to the avemetatarsalians (the group including dinosaurs and relatives), also evolved bipedal forms – a poposauroid from the Triassic, Effigia okeeffeae, is thought to have been bipedal. Pterosaurs were previously thought to have been bipedal, but recent trackways have all shown quadrupedal locomotion. 
Mammals A number of groups of extant mammals have independently evolved bipedalism as their main form of locomotion: for example humans, ground pangolins, numerous species of jumping rodents, and macropods, as well as the extinct giant ground sloths. Humans, whose bipedalism has been studied extensively, are covered in the next section. Macropods are believed to have evolved bipedal hopping only once in their evolution, at some time no later than 45 million years ago. Bipedal movement is less common among mammals, most of which are quadrupedal. All primates possess some bipedal ability, though most species primarily use quadrupedal locomotion on land. Primates aside, the macropods (kangaroos, wallabies and their relatives), kangaroo rats and mice, hopping mice and springhare move bipedally by hopping. Very few non-primate mammals commonly move bipedally with an alternating leg gait. Exceptions are the ground pangolin and in some circumstances the tree kangaroo. One black bear, Pedals, became famous locally and on the internet for having a frequent bipedal gait, although this is attributed to injuries on the bear's front paws. A two-legged fox was filmed in a Derbyshire garden in 2023, most likely having been born that way. Primates Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support. Chimpanzees, bonobos, gorillas, gibbons and baboons exhibit forms of bipedalism. On the ground, sifakas move like all indriids, with bipedal sideways hopping movements of the hind legs, holding their forelimbs up for balance. Geladas, although usually quadrupedal, will sometimes move between adjacent feeding patches with a squatting, shuffling bipedal form of locomotion. However, they can only do so for brief periods, as their bodies are not adapted for constant bipedal locomotion. Humans are the only primates that are normally bipedal, due to an extra curve in the spine which stabilizes the upright position, as well as shorter arms relative to the legs than is the case for the nonhuman great apes. The evolution of human bipedalism began in primates about four million years ago, or as early as seven million years ago with Sahelanthropus or about 12 million years ago with Danuvius guggenmosi. One hypothesis for human bipedalism is that it evolved as a result of differentially successful survival from carrying food to share with group members, although there are alternative hypotheses. Injured individuals Injured chimpanzees and bonobos have been capable of sustained bipedalism. Three captive primates, the macaque Natasha and the chimpanzees Oliver and Poko, were found to move bipedally. Natasha switched to exclusive bipedalism after an illness, while Poko was discovered in captivity in a tall, narrow cage. Oliver reverted to knuckle-walking after developing arthritis. Non-human primates often use bipedal locomotion when carrying food, or while moving through shallow water. Limited bipedalism Limited bipedalism in mammals Other mammals engage in limited, non-locomotory bipedalism. 
A number of other animals, such as rats, raccoons, and beavers will squat on their hindlegs to manipulate some objects but revert to four limbs when moving (the beaver will move bipedally if transporting wood for their dams, as will the raccoon when holding food). Bears will fight in a bipedal stance to use their forelegs as weapons. A number of mammals will adopt a bipedal stance in specific situations such as for feeding or fighting. Ground squirrels and meerkats will stand on hind legs to survey their surroundings, but will not walk bipedally. Dogs (e.g. Faith) can stand or move on two legs if trained, or if birth defect or injury precludes quadrupedalism. The gerenuk antelope stands on its hind legs while eating from trees, as did the extinct giant ground sloth and chalicotheres. The spotted skunk will walk on its front legs when threatened, rearing up on its front legs while facing the attacker so that its anal glands, capable of spraying an offensive oil, face its attacker. Limited bipedalism in non-mammals (and non-birds) Bipedalism is unknown among the amphibians. Among the non-archosaur reptiles bipedalism is rare, but it is found in the "reared-up" running of lizards such as agamids and monitor lizards. Many reptile species will also temporarily adopt bipedalism while fighting. One genus of basilisk lizard can run bipedally across the surface of water for some distance. Among arthropods, cockroaches are known to move bipedally at high speeds. Bipedalism is rarely found outside terrestrial animals, though at least two species of octopus walk bipedally on the sea floor using two of their arms, allowing the remaining arms to be used to camouflage the octopus as a mat of algae or a floating coconut. Evolution of human bipedalism There are at least twelve distinct hypotheses as to how and why bipedalism evolved in humans, and also some debate as to when. Bipedalism evolved well before the large human brain or the development of stone tools. Bipedal specializations are found in Australopithecus fossils from 4.2 to 3.9 million years ago and recent studies have suggested that obligate bipedal hominid species were present as early as 7 million years ago. Nonetheless, the evolution of bipedalism was accompanied by significant evolutions in the spine including the forward movement in position of the foramen magnum, where the spinal cord leaves the cranium. Recent evidence regarding modern human sexual dimorphism (physical differences between male and female) in the lumbar spine has been seen in pre-modern primates such as Australopithecus africanus. This dimorphism has been seen as an evolutionary adaptation of females to bear lumbar load better during pregnancy, an adaptation that non-bipedal primates would not need to make. Adapting bipedalism would have required less shoulder stability, which allowed the shoulder and other limbs to become more independent of each other and adapt for specific suspensory behaviors. In addition to the change in shoulder stability, changing locomotion would have increased the demand for shoulder mobility, which would have propelled the evolution of bipedalism forward. The different hypotheses are not necessarily mutually exclusive and a number of selective forces may have acted together to lead to human bipedalism. It is important to distinguish between adaptations for bipedalism and adaptations for running, which came later still. The form and function of modern-day humans' upper bodies appear to have evolved from living in a more forested setting. 
In this kind of environment, the ability to travel arboreally would have been advantageous. Although different to human walking, bipedal locomotion in trees was thought to be advantageous. It has also been proposed that, like some modern-day apes, early hominins had undergone a knuckle-walking stage prior to adapting the back limbs for bipedality while retaining forearms capable of grasping. Numerous proposed causes for the evolution of human bipedalism involve freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye-position, and a reduction in the amount of skin exposed to the tropical sun. It is possible that bipedalism provided a variety of benefits to the hominin species, and scientists have suggested multiple reasons for the evolution of human bipedalism. There is also not only the question of why the earliest hominins were partially bipedal but also why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal for the benefit of reaching food in trees, while the savanna-based theory describes how the late hominins that started to settle on the ground became increasingly bipedal. Multiple factors Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts, arguing that no single factor explains bipedalism and that such behaviour instead served as a preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: (1) why were the earliest hominins partially bipedal? and (2) why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as the savanna-based, postural feeding, and provisioning models. Savannah-based theory According to the savanna-based theory, hominines came down from the trees and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in Australopithecus anamensis and Australopithecus afarensis forelimbs, and that it is less parsimonious to assume that knuckle walking developed twice in the genera Pan and Gorilla instead of evolving once as a synapomorphy for Pan and Gorilla before being lost in Australopithecus. The evolution of an orthograde posture would have been very helpful on a savanna, as it would allow the animal to look over tall grasses to watch for predators, or to hunt and sneak up on prey terrestrially. It was also suggested in P. E. 
Wheeler's "The evolution of bipedality and loss of functional body hair in hominids", that a possible advantage of bipedalism in the savanna was reducing the amount of surface area of the body exposed to the sun, helping regulate body temperature. In fact, Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and caused the need for hominids to acquire bipedality. Others state hominines had already achieved the bipedal adaptation that was used in the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees, and was later applied to the savanna as a vestigial trait. Humans and orangutans are both unique to a bipedal reactive adaptation when climbing on thin branches, in which they have increased hip and knee extension in relation to the diameter of the branch, which can increase an arboreal feeding range and can be attributed to a convergent evolution of bipedalism evolving in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments because no hominine fossils were found in forested areas. However, fossilization is a rare occurrence—the conditions must be just right in order for an organism that dies to become fossilized for somebody to find later, which is also a rare occurrence. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years. Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis, found in Hadar in Ethiopia, which may have been forested at the time of Lucy's death, had curved fingers that would still give her the ability to grasp tree branches, but she walked bipedally. "Little Foot", a nearly-complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things using his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil in the locations in which these fossils were found suggest that the area used to be much more wet and covered in thick vegetation and has only recently become the arid desert it is now. Traveling efficiency hypothesis An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, it was found that the quadrupedal and bipedal energy costs were very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costing. This increased travel efficiency is likely to have been selected for as it assisted foraging across widely dispersed resources. Postural feeding hypothesis The postural feeding hypothesis has been recently supported by Dr. Kevin Hunt, a professor at Indiana University. 
This hypothesis asserts that chimpanzees are only bipedal when they eat. While on the ground, they reach up for fruit hanging from small trees; while in trees, bipedalism is used to reach up and grab an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Hunt's hypothesis also states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. In its fossil anatomy, Australopithecus afarensis has hand and shoulder features very similar to those of the chimpanzee, which indicates arm-hanging. The Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotion compared to that of humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture. A related study by Professor Susannah Thorpe of the University of Birmingham examined the most arboreal great ape, the orangutan, holding onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of the forests where A. afarensis, as well as other ancestors of modern humans and other apes, resided could have contributed to this increase in bipedalism as a way to navigate the diminishing forests. These findings could also shed light on discrepancies observed in the anatomy of A. afarensis, such as the ankle joint, which allowed it to "wobble", and its long, highly flexible forelimbs. If bipedalism started from upright navigation in trees, it could explain both the increased flexibility of the ankle and the long forelimbs that grab hold of branches. Provisioning model One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as "male provisioning". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying the food in his arms, walking on his legs. This model is supported by the reduction ("feminization") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which, along with low body-size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, this model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderate-sized testes, low sperm mid-piece volume) that argue against recent adaptation to a polygynous reproductive system. However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size. 
That is, sexual dimorphism is minimal; other studies, however, have suggested that Australopithecus afarensis males were nearly twice the weight of females. However, Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother - hampering the mother's freedom and thus making her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend also to be territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group-living primate. Early bipedalism in homininae model Recent studies of the 4.4-million-year-old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in homininae and was reduced in chimpanzees and gorillas when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors. This possibly places the species close to the true connection between fully bipedal hominins and quadrupedal apes. According to Richard Dawkins in his book "The Ancestor's Tale", chimps and bonobos are descended from gracile Australopithecus-type species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by the australopithecines from whom hominins eventually evolved. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later independently evolved towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans. It has also been proposed that one cause of Neanderthal extinction was less efficient running. Warning display (aposematic) model Joseph Jordania from the University of Melbourne suggested in 2011 that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible all the time. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external objects. Slow locomotion and strong body odor (both characteristic of hominids and humans) are other features often employed by aposematic species to advertise their unprofitability to potential predators. Other behavioural models There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. 
For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture. Thermoregulatory model The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises more of the body's surface area higher above the ground, which results in a reduction in heat gain and helps heat dissipation. When a hominid is higher above the ground, the organism accesses more favorable wind speeds and temperatures. During hot seasons, greater wind flow results in a higher heat loss, which makes the organism more comfortable. Also, Wheeler explains that a vertical posture minimizes direct exposure to the sun, whereas quadrupedalism exposes more of the body to direct sunlight. Analysis and interpretations of Ardipithecus reveal that this hypothesis needs modification to consider that the forest and woodland environmental preadaptation of early-stage hominid bipedalism preceded further refinement of bipedalism by the pressure of natural selection. This then allowed for the more efficient exploitation of the ecological niche of hotter conditions, rather than the hotter conditions being hypothetically bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn make a forest preadaptation solidify as a permanent state. Carrying models Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will". Darwin (1871:52) and many models on bipedal origins are based on this line of thought. Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested stone tools and weapons drove the change. This stone-tools theory is very unlikely: although ancient humans were known to hunt, stone tools did not appear until millions of years after the origin of bipedalism, chronologically precluding them from being a driving force of the change. (Wooden tools and spears fossilize poorly and therefore it is difficult to make a judgment about their potential usage.) Wading models The observation that large primates, especially the great apes, which predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist-deep water has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. 
This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory" (). Other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors promoting adaptations which later assisted full-time bipedalism. It has also been thought that consistent water-based food sources had developed early hominid dependency and facilitated dispersal along seas and rivers. Consequences Prehistoric fossil records show that early hominins first developed bipedalism before being followed by an increase in brain size. The consequences of these two changes in particular resulted in painful and difficult labor due to the increased favor of a narrow pelvis for bipedalism being countered by larger heads passing through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma. Non-human primates habitually deliver their young on their own, but the same cannot be said for modern-day humans. Isolated birth appears to be rare and actively avoided cross-culturally, even if birthing methods may differ between said cultures. This is due to the fact that the narrowing of the hips and the change in the pelvic angle caused a discrepancy in the ratio of the size of the head to the birth canal. The result of this is that there is greater difficulty in birthing for hominins in general, let alone to be doing it by oneself. Physiology Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below. Biomechanics Standing Energy-efficient means of standing bipedally involve constant adjustment of balance, and of course these must avoid overcorrection. The difficulties associated with simple standing in upright humans are highlighted by the greatly increased risk of falling present in the elderly, even with minimal reductions in control system effectiveness. Shoulder stability Shoulder stability would decrease with the evolution of bipedalism. Shoulder mobility would increase because the need for a stable shoulder is only present in arboreal habitats. Shoulder mobility would support suspensory locomotion behaviors which are present in human bipedalism. The forelimbs are freed from weight-bearing requirements, which makes the shoulder a place of evidence for the evolution of bipedalism. 
Walking Unlike non-human apes capable of bipedality, such as Pan and Gorilla, hominins can move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and the knee joints. This ability is made possible by the spinal curvature that humans have and non-human apes lack. Instead, human walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify the whole-body kinetic and potential energy, with walking displaying an out-of-phase relationship indicating exchange between the two. This model applies to all walking organisms regardless of the number of legs, and thus bipedal locomotion does not differ in terms of whole-body kinetics. In humans, walking is composed of several separate processes: vaulting over a stiff stance leg; passive ballistic movement of the swing leg; a short 'push' from the ankle prior to toe-off, propelling the swing leg; rotation of the hips about the axis of the spine, to increase stride length; and rotation of the hips about the horizontal axis to improve balance during stance. Running Early hominins underwent post-cranial changes in order to better adapt to bipedality, especially running. One of these changes was the evolution of hindlimbs longer in proportion to the forelimbs, which had several effects. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while simultaneously allowing for more space for cooling winds. Additionally, having longer limbs is more energy-efficient, since longer limbs mean that overall muscle strain is lessened. Better energy efficiency, in turn, means higher endurance, particularly when running long distances. Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored and released from a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to those of animals with more limbs. Musculature Bipedalism requires strong leg muscles, particularly in the thighs. Contrast, in domesticated poultry, the well-muscled legs with the small and bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation as it provides support and stability to the trunk and lessens the amount of stress on the joints when running. Respiration Quadrupeds have more restricted breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running." 
Respiration through bipedality means that there is better breath control in bipeds, which can be associated with brain growth. The modern brain utilizes approximately 20% of energy input gained through breathing and eating, as opposed to species like chimpanzees who use up twice as much energy as humans for the same amount of movement. This excess energy, leading to brain growth, also leads to the development of verbal communication. This is because breath control means that the muscles associated with breathing can be manipulated into creating sounds. This means that the onset of bipedality, leading to more efficient breathing, may be related to the origin of verbal language. Bipedal robots For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption.
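The "inverted pendulum" picture of walking described above also implies a simple speed limit: the stance leg's centripetal acceleration, v²/L for leg length L, cannot exceed gravitational acceleration g, so walking is bounded near v = √(gL) (a Froude number of 1). A minimal sketch under that assumption, with an illustrative leg length that is not taken from the text:

```python
import math

def max_walking_speed(leg_length_m, g=9.81):
    """Inverted-pendulum bound on walking speed: the stance leg's centripetal
    acceleration v**2 / L cannot exceed g, so v_max = sqrt(g * L)."""
    return math.sqrt(g * leg_length_m)

# Assumed adult leg length of 0.9 m (illustrative only).
print(round(max_walking_speed(0.9), 1))  # -> about 3.0 m/s
```

In practice people switch from walking to running well below this bound, at a Froude number of roughly 0.5, which is consistent with the pendulum-versus-spring-mass distinction drawn between walking and running above.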
Biology and health sciences
Ethology
4214
https://en.wikipedia.org/wiki/Bioinformatics
Bioinformatics
Bioinformatics () is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, data science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The process of analyzing and interpreting data can sometimes be referred to as computational biology, however this distinction between the two terms is often disputed. To some, the term computational biology refers to building and using models of biological systems. Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reused specific analysis "pipelines", particularly in the field of genomics, such as by the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences. Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions. History The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems). Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology. Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics. Sequences There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less. 
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released online with Tai Te Wu between 1980 and 1991. In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and thus served as proof of the concept that bioinformatics would be insightful. Goals In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures. Important sub-disciplines within bioinformatics and computational biology include: the development and implementation of computer programs to efficiently access, manage, and use various types of information, and the development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences. The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, and the modeling of evolution and cell division/mitosis. Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data. Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures. 
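As a toy illustration of the sequence alignment task listed among these activities, the sketch below scores a global alignment of two short DNA strings with the classic Needleman–Wunsch dynamic program; the match, mismatch, and gap scores are arbitrary illustrative choices rather than values from the text:

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]  # score[i][j]: best for a[:i] vs b[:j]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Two toy DNA fragments differing by a single-base deletion.
print(needleman_wunsch_score("GATTACA", "GATACA"))  # -> 4 (six matches, one gap)
```

Database search tools such as BLAST build heuristics and statistical scoring around this basic dynamic-programming idea so that sequences can be compared against billions of stored nucleotides.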
Sequence analysis Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides. DNA sequencing Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing. Sequence assembly Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research. Genome annotation In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics. Genome annotation can be classified into three levels: the nucleotide, protein, and process levels. Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome. The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
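To make the gene-finding idea behind nucleotide-level annotation concrete, the Python sketch below scans the forward strand of a DNA string for open reading frames (an ATG start codon followed, in frame, by a stop codon). It is a deliberately naive illustration, not a real ab initio gene predictor, which would also handle the reverse strand, splicing, and probabilistic sequence models.

# Naive open-reading-frame (ORF) scan on the forward strand only.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=10):
    dna = dna.upper()
    orfs = []
    for frame in range(3):                      # three forward reading frames
        pos = frame
        while pos + 3 <= len(dna):
            if dna[pos:pos + 3] == "ATG":       # potential start codon
                for end in range(pos + 3, len(dna) - 2, 3):
                    if dna[end:end + 3] in STOP_CODONS:
                        if (end - pos) // 3 >= min_codons:
                            orfs.append((pos, end + 3))   # half-open interval
                        break
            pos += 3
    return orfs

print(find_orfs("CCATGAAACCC" + "GGT" * 12 + "TAGTT", min_codons=5))  # -> [(2, 50)]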
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. One obstacle to process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem. The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs, and ribosomal RNAs, in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is continually being refined and improved. Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error). Gene function prediction While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein–protein interactions. Computational evolutionary biology Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to: trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone; compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation; build complex computational population genetics models to predict the outcome of the system over time; and track and share information on an increasingly large number of species and organisms. Future work endeavours to reconstruct the now more complex tree of life. Comparative genomics The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation.
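As a toy illustration of the lowest level of this hierarchy, the short Python sketch below counts point substitutions between two already-aligned sequences; real comparative-genomics pipelines operate on whole-genome alignments and also model insertions, deletions and large rearrangements.

# Count point differences (substitutions) between two aligned sequences.
# Columns containing a gap ('-') are skipped, since gaps reflect
# insertions or deletions rather than point mutations.
def count_substitutions(seq1, seq2):
    if len(seq1) != len(seq2):
        raise ValueError("aligned sequences must have equal length")
    pairs = zip(seq1.upper(), seq2.upper())
    return sum(1 for a, b in pairs if a != b and a != "-" and b != "-")

print(count_substitutions("ACGT-TGCA", "ACGTATGGA"))  # -> 1 (C vs G in the 8th column)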
The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models. Many of these studies are based on the detection of sequence homology to assign sequences to protein families. Pan genomics Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species. Genetics of disease As of 2013, efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult to analyze. Association studies have found many genetic regions that individually are weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as not knowing which genes are important, or how stable the choices an algorithm provides are. Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare-variant association analysis in whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes. Analysis of mutations in cancer In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment.
The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes. Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers. Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer driver mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors. Gene and protein expression Analysis of gene expression The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells. Analysis of protein expression Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays. Analysis of regulation Gene regulation is the complex process by which a signal, such as an extracellular signal like a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state.
In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods. Analysis of cellular organization Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases. Microscopy and image analysis Microscopy images allow for the localization of organelles as well as molecules, which may be the source of abnormalities in diseases. Protein localization Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well-developed protein subcellular localization prediction resources available, including protein subcellular location databases and prediction tools. Nuclear organization of chromatin Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as topologically associating domains (TADs), which are organised together in three-dimensional space. Structural bioinformatics Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups from around the world submit predicted protein models, which are then evaluated against experimentally determined structures. Amino acid sequence The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of protein function from sequence remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time. Homology In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although these proteins have markedly different amino acid sequences, their structures are virtually identical, which reflects their near-identical purposes and shared ancestry. Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling. Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure–activity relationship (QSAR) models and proteochemometric (PCM) models. Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand binding and in silico mutagenesis. In 2021, the deep-learning-based software AlphaFold, developed by Google's DeepMind, greatly outperformed all other prediction methods; predicted structures for hundreds of millions of proteins have since been released in the AlphaFold Protein Structure Database. Network and systems biology Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both. Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms. Molecular interaction networks Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR), and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions based only on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field. Other interactions encountered in the field include protein–ligand (including drugs) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions. Biodiversity informatics Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
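A minimal sketch of the network-analysis idea discussed above, in plain Python: it builds a small protein–protein interaction graph as an adjacency structure and ranks proteins by degree to pick out "hub" candidates. The protein names and edges are invented purely for illustration; real analyses would load curated interaction databases and typically use a dedicated graph library.

# Toy protein-protein interaction (PPI) network stored as an adjacency set.
from collections import defaultdict

interactions = [
    ("ProtA", "ProtB"), ("ProtA", "ProtC"), ("ProtA", "ProtD"),
    ("ProtB", "ProtC"), ("ProtD", "ProtE"),
]

graph = defaultdict(set)
for p, q in interactions:          # undirected edges
    graph[p].add(q)
    graph[q].add(p)

# Degree (number of interaction partners) is a crude measure of "hubness".
degrees = {protein: len(partners) for protein, partners in graph.items()}
for protein, degree in sorted(degrees.items(), key=lambda kv: kv[1], reverse=True):
    print(protein, degree)         # ProtA is the hub in this toy example (degree 3)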
Others Literature analysis The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example: abbreviation recognition – identifying the long form and abbreviation of biological terms; named-entity recognition – recognizing biological terms such as gene names; and protein–protein interaction extraction – identifying which proteins interact with which proteins from text. The area of research draws from statistics and computational linguistics. High-throughput image analysis Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are: high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, bioimage informatics); morphometrics; clinical image analysis and visualization; determining the real-time air-flow patterns in breathing lungs of living animals; quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury; making behavioral observations from extended video recordings of laboratory animals; infrared measurements for metabolic activity determination; and inferring clone overlaps in DNA mapping, e.g. the Sulston score. High-throughput single cell data analysis Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition. Ontologies and data integration Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis. The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes. Databases Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases differ in their formats and access mechanisms, and can be public or private.
Some of the most commonly used databases are listed below: used in biological sequence analysis: GenBank, UniProt; used in structure analysis: Protein Data Bank (PDB); used in finding protein families and motif finding: InterPro, Pfam; used for next-generation sequencing: Sequence Read Archive; used in network analysis: metabolic pathway databases (KEGG, BioCyc), interaction analysis databases, functional networks; used in the design of synthetic genetic circuits: GenoCAD. Software and tools Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions. Open-source bioinformatics software Many free and open-source software tools have been developed since the 1980s and continue to grow in number. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration. Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD. The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software. Web services in bioinformatics SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads. Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems. Bioinformatics workflow management systems A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to provide an easy-to-use environment for individual application scientists themselves to create their own workflows, provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time, simplify the process of sharing and reusing workflows between the scientists, and enable scientists to track the provenance of the workflow execution results and the workflow creation steps. Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
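As a small illustration of the REST style of access mentioned above, the Python sketch below retrieves a FASTA record over HTTP from NCBI's E-utilities service. Treat the endpoint parameters and the accession used here as an example rather than a definitive recipe, and consult the service's documentation (including its usage limits) before scripting against it.

# Fetch a nucleotide record in FASTA format from NCBI E-utilities (efetch).
# The accession below is used only as an example; check the E-utilities
# documentation for current parameters and usage policies.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "nucleotide",
    "id": "NM_000518",      # example accession (human beta-globin mRNA)
    "rettype": "fasta",
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:           # network access required
    fasta = response.read().decode("utf-8")

header, sequence = fasta.split("\n", 1)
print(header)                            # FASTA description line
print(len(sequence.replace("\n", "")), "bases downloaded")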
BioCompute and BioCompute Objects In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University. It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff. In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-formatted record to be shared among employees, collaborators, and regulators. Education platforms Bioinformatics is not only taught as an in-person master's degree at many universities; the computational nature of the field also lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (4273pi) project also offers open-source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system. MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University. Conferences There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
Biology and health sciences
Biology basics
Biology
4230
https://en.wikipedia.org/wiki/Cell%20%28biology%29
Cell (biology)
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane; many cells contain organelles, each with a specific function. The term comes from the Latin word meaning 'small room'. Most cells are only visible under a microscope. Cells emerged on Earth about 4 billion years ago. All cells are capable of replication, protein synthesis, and motility. Cells are broadly categorized into two types: eukaryotic cells, which possess a nucleus, and prokaryotic cells, which lack a nucleus but have a nucleoid region. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be either single-celled, such as amoebae, or multicellular, such as some algae, plants, animals, and fungi. Eukaryotic cells contain organelles including mitochondria, which provide energy for cell functions; chloroplasts, which create sugars by photosynthesis, in plants; and ribosomes, which synthesise proteins. Cells were discovered by Robert Hooke in 1665, who named them after their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cell types Cells are broadly categorized into two types: eukaryotic cells, which possess a nucleus, and prokaryotic cells, which lack a nucleus but have a nucleoid region. Prokaryotes are single-celled organisms, whereas eukaryotes can be either single-celled or multicellular. Prokaryotic cells Prokaryotes include bacteria and archaea, two of the three domains of life. Prokaryotic cells were the first form of life on Earth, characterized by having vital biological processes including cell signaling. They are simpler and smaller than eukaryotic cells, and lack a nucleus, and other membrane-bound organelles. The DNA of a prokaryotic cell consists of a single circular chromosome that is in direct contact with the cytoplasm. The nuclear region in the cytoplasm is called the nucleoid. Most prokaryotes are the smallest of all organisms, ranging from 0.5 to 2.0 μm in diameter. A prokaryotic cell has three regions: Enclosing the cell is the cell envelope, generally consisting of a plasma membrane covered by a cell wall which, for some bacteria, may be further covered by a third layer called a capsule. Though most prokaryotes have both a cell membrane and a cell wall, there are exceptions such as Mycoplasma (bacteria) and Thermoplasma (archaea) which only possess the cell membrane layer. The envelope gives rigidity to the cell and separates the interior of the cell from its environment, serving as a protective filter. The cell wall consists of peptidoglycan in bacteria and acts as an additional barrier against exterior forces. It also prevents the cell from expanding and bursting (cytolysis) from osmotic pressure due to a hypotonic environment. Some eukaryotic cells (plant cells and fungal cells) also have a cell wall. Inside the cell is the cytoplasmic region that contains the genome (DNA), ribosomes and various sorts of inclusions. The genetic material is freely found in the cytoplasm. Prokaryotes can carry extrachromosomal DNA elements called plasmids, which are usually circular. 
Linear bacterial plasmids have been identified in several species of spirochete bacteria, including members of the genus Borrelia, notably Borrelia burgdorferi, which causes Lyme disease. Though not forming a nucleus, the DNA is condensed in a nucleoid. Plasmids encode additional genes, such as antibiotic resistance genes. On the outside, some prokaryotes have flagella and pili that project from the cell's surface. These are structures made of proteins that facilitate movement and communication between cells. Eukaryotic cells Plants, animals, fungi, slime moulds, protozoa, and algae are all eukaryotic. These cells are about fifteen times wider than a typical prokaryote and can be as much as a thousand times greater in volume. The main distinguishing feature of eukaryotes as compared to prokaryotes is compartmentalization: the presence of membrane-bound organelles (compartments) in which specific activities take place. Most important among these is a cell nucleus, an organelle that houses the cell's DNA. This nucleus gives the eukaryote its name, which means "true kernel (nucleus)". Some of the other differences are: The plasma membrane resembles that of prokaryotes in function, with minor differences in the setup. Cell walls may or may not be present. The eukaryotic DNA is organized in one or more linear molecules, called chromosomes, which are associated with histone proteins. All chromosomal DNA is stored in the cell nucleus, separated from the cytoplasm by a membrane. Some eukaryotic organelles such as mitochondria also contain some DNA. Many eukaryotic cells are ciliated with primary cilia. Primary cilia play important roles in chemosensation, mechanosensation, and thermosensation. Each cilium may thus be "viewed as a sensory cellular antennae that coordinates a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation." Motile eukaryotes can move using motile cilia or flagella. Motile cells are absent in conifers and flowering plants. Eukaryotic flagella are more complex than those of prokaryotes. Many groups of eukaryotes are single-celled. Among the many-celled groups are animals and plants. The number of cells in these groups varies with species; it has been estimated that the human body contains around 37 trillion (3.72×10¹³) cells, and more recent studies put this number at around 30 trillion (~36 trillion cells in the male, ~28 trillion in the female). Subcellular components All cells, whether prokaryotic or eukaryotic, have a membrane that envelops the cell, regulates what moves in and out (selectively permeable), and maintains the electric potential of the cell. Inside the membrane, the cytoplasm takes up most of the cell's volume. Except for red blood cells, which lack a cell nucleus and most organelles to accommodate maximum space for hemoglobin, all cells possess DNA, the hereditary material of genes, and RNA, containing the information necessary to build various proteins such as enzymes, the cell's primary machinery. There are also other kinds of biomolecules in cells. This article lists these primary cellular components, then briefly describes their function. Cell membrane The cell membrane, or plasma membrane, is a selectively permeable biological membrane that surrounds the cytoplasm of a cell. In animals, the plasma membrane is the outer boundary of the cell, while in plants and prokaryotes it is usually covered by a cell wall.
This membrane serves to separate and protect a cell from its surrounding environment and is made mostly from a double layer of phospholipids, which are amphiphilic (partly hydrophobic and partly hydrophilic). Hence, the layer is called a phospholipid bilayer, or sometimes a fluid mosaic membrane. Embedded within this membrane is a macromolecular structure called the porosome, the universal secretory portal in cells, as well as a variety of protein molecules that act as channels and pumps that move different molecules into and out of the cell. The membrane is semi-permeable, or selectively permeable, in that it can either let a substance (molecule or ion) pass through freely, pass through to a limited extent, or not pass through at all. Cell surface membranes also contain receptor proteins that allow cells to detect external signaling molecules such as hormones. Cytoskeleton The cytoskeleton acts to organize and maintain the cell's shape; anchors organelles in place; helps during endocytosis, the uptake of external materials by a cell, and cytokinesis, the separation of daughter cells after cell division; and moves parts of the cell in processes of growth and mobility. The eukaryotic cytoskeleton is composed of microtubules, intermediate filaments and microfilaments. In the cytoskeleton of a neuron the intermediate filaments are known as neurofilaments. There are a great number of proteins associated with them, each controlling a cell's structure by directing, bundling, and aligning filaments. The prokaryotic cytoskeleton is less well-studied but is involved in the maintenance of cell shape, polarity and cytokinesis. The subunit protein of microfilaments is a small, monomeric protein called actin. The subunit of microtubules is a dimeric molecule called tubulin. Intermediate filaments are heteropolymers whose subunits vary among the cell types in different tissues. Some of the subunit proteins of intermediate filaments include vimentin, desmin, lamin (lamins A, B and C), keratin (multiple acidic and basic keratins), and neurofilament proteins (NF–L, NF–M). Genetic material Two different kinds of genetic material exist: deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Cells use DNA for their long-term information storage. The biological information contained in an organism is encoded in its DNA sequence. RNA is used for information transport (e.g., mRNA) and enzymatic functions (e.g., ribosomal RNA). Transfer RNA (tRNA) molecules are used to add amino acids during protein translation. Prokaryotic genetic material is organized in a simple circular bacterial chromosome in the nucleoid region of the cytoplasm. Eukaryotic genetic material is divided into different, linear molecules called chromosomes inside a discrete nucleus, usually with additional genetic material in some organelles like mitochondria and chloroplasts (see endosymbiotic theory). A human cell has genetic material contained in the cell nucleus (the nuclear genome) and in the mitochondria (the mitochondrial genome). In humans, the nuclear genome is divided into 46 linear DNA molecules called chromosomes, including 22 homologous chromosome pairs and a pair of sex chromosomes. The mitochondrial genome is a circular DNA molecule distinct from nuclear DNA. Although the mitochondrial DNA is very small compared to nuclear chromosomes, it codes for 13 proteins involved in mitochondrial energy production and specific tRNAs. Foreign genetic material (most commonly DNA) can also be artificially introduced into the cell by a process called transfection.
This can be transient, if the DNA is not inserted into the cell's genome, or stable, if it is. Certain viruses also insert their genetic material into the genome. Organelles Organelles are parts of the cell that are adapted and/or specialized for carrying out one or more vital functions, analogous to the organs of the human body (such as the heart, lung, and kidney, with each organ performing a different function). Both eukaryotic and prokaryotic cells have organelles, but prokaryotic organelles are generally simpler and are not membrane-bound. There are several types of organelles in a cell. Some (such as the nucleus and Golgi apparatus) are typically solitary, while others (such as mitochondria, chloroplasts, peroxisomes and lysosomes) can be numerous (hundreds to thousands). The cytosol is the gelatinous fluid that fills the cell and surrounds the organelles. Eukaryotic Cell nucleus: A cell's information center, the cell nucleus is the most conspicuous organelle found in a eukaryotic cell. It houses the cell's chromosomes, and is the place where almost all DNA replication and RNA synthesis (transcription) occur. The nucleus is spherical and separated from the cytoplasm by a double membrane called the nuclear envelope; the space between these two membranes is called the perinuclear space. The nuclear envelope isolates and protects a cell's DNA from various molecules that could accidentally damage its structure or interfere with its processing. During processing, DNA is transcribed, or copied into a special RNA, called messenger RNA (mRNA). This mRNA is then transported out of the nucleus to the cytoplasm, where it is translated into a specific protein molecule. The nucleolus is a specialized region within the nucleus where ribosome subunits are assembled. In prokaryotes, DNA processing takes place in the cytoplasm. Mitochondria and chloroplasts: generate energy for the cell. Mitochondria are self-replicating double membrane-bound organelles that occur in various numbers, shapes, and sizes in the cytoplasm of all eukaryotic cells. Respiration occurs in the cell mitochondria, which generate the cell's energy by oxidative phosphorylation, using oxygen to release energy stored in cellular nutrients (typically glucose) to generate ATP (aerobic respiration). Mitochondria multiply by binary fission, like prokaryotes. Chloroplasts can only be found in plants and algae, and they capture the sun's energy to make carbohydrates through photosynthesis. Endoplasmic reticulum: The endoplasmic reticulum (ER) is a transport network for molecules targeted for certain modifications and specific destinations, as compared to molecules that float freely in the cytoplasm. The ER has two forms: the rough ER, which has ribosomes on its surface that secrete proteins into the ER, and the smooth ER, which lacks ribosomes. The smooth ER plays a role in calcium sequestration and release and also helps in the synthesis of lipids. Golgi apparatus: The primary function of the Golgi apparatus is to process and package macromolecules such as proteins and lipids that are synthesized by the cell. Lysosomes and peroxisomes: Lysosomes contain digestive enzymes (acid hydrolases). They digest excess or worn-out organelles, food particles, and engulfed viruses or bacteria. Peroxisomes have enzymes that rid the cell of toxic peroxides. Lysosomes are optimally active in an acidic environment. The cell could not house these destructive enzymes if they were not contained in a membrane-bound system.
Centrosome: the cytoskeleton organizer: The centrosome produces the microtubules of a cell—a key component of the cytoskeleton. It directs the transport through the ER and the Golgi apparatus. Centrosomes are composed of two centrioles, which lie perpendicular to each other and each have a cartwheel-like organization; they separate during cell division and help in the formation of the mitotic spindle. A single centrosome is present in animal cells. Centrosomes are also found in some fungal and algal cells. Vacuoles: Vacuoles sequester waste products and in plant cells store water. They are often described as liquid-filled spaces and are surrounded by a membrane. Some cells, most notably Amoeba, have contractile vacuoles, which can pump water out of the cell if there is too much water. The vacuoles of plant cells and fungal cells are usually larger than those of animal cells. Vacuoles of plant cells are surrounded by a membrane which transports ions against concentration gradients. Eukaryotic and prokaryotic Ribosomes: The ribosome is a large complex of RNA and protein molecules. Each ribosome consists of two subunits, and acts as an assembly line where RNA from the nucleus is used to synthesise proteins from amino acids. Ribosomes can be found either floating freely or bound to a membrane (the rough endoplasmatic reticulum in eukaryotes, or the cell membrane in prokaryotes). Plastids: Plastids are membrane-bound organelles generally found in plant cells and euglenoids; they contain specific pigments, thus affecting the colour of the plant or organism. These pigments also help in food storage and the tapping of light energy. There are three types of plastids based upon their specific pigments. Chloroplasts contain chlorophyll and some carotenoid pigments, which help in the tapping of light energy during photosynthesis. Chromoplasts contain fat-soluble carotenoid pigments, such as orange carotene and yellow xanthophylls, which help in synthesis and storage. Leucoplasts are non-pigmented plastids that help in the storage of nutrients. Structures outside the cell membrane Many cells also have structures which exist wholly or partially outside the cell membrane. These structures are notable because they are not protected from the external environment by the cell membrane. In order to assemble these structures, their components must be carried across the cell membrane by export processes. Cell wall Many types of prokaryotic and eukaryotic cells have a cell wall. The cell wall acts to protect the cell mechanically and chemically from its environment, and is an additional layer of protection to the cell membrane. Different types of cells have cell walls made up of different materials; plant cell walls are primarily made up of cellulose, fungal cell walls are made up of chitin, and bacterial cell walls are made up of peptidoglycan. Prokaryotic Capsule A gelatinous capsule is present in some bacteria outside the cell membrane and cell wall. The capsule may be polysaccharide, as in pneumococci and meningococci; polypeptide, as in Bacillus anthracis; or hyaluronic acid, as in streptococci. Capsules are not stained by normal staining protocols but can be detected with India ink or methyl blue, which allow for higher contrast between the cells for observation. Flagella Flagella are organelles for cellular mobility. The bacterial flagellum stretches from the cytoplasm through the cell membrane(s) and extrudes through the cell wall. They are long and thick thread-like appendages, protein in nature.
A different type of flagellum is found in archaea and a different type is found in eukaryotes. Fimbriae A fimbria (plural fimbriae also known as a pilus, plural pili) is a short, thin, hair-like filament found on the surface of bacteria. Fimbriae are formed of a protein called pilin (antigenic) and are responsible for the attachment of bacteria to specific receptors on human cells (cell adhesion). There are special types of pili involved in bacterial conjugation. Cellular processes Replication Cell division involves a single cell (called a mother cell) dividing into two daughter cells. This leads to growth in multicellular organisms (the growth of tissue) and to procreation (vegetative reproduction) in unicellular organisms. Prokaryotic cells divide by binary fission, while eukaryotic cells usually undergo a process of nuclear division, called mitosis, followed by division of the cell, called cytokinesis. A diploid cell may also undergo meiosis to produce haploid cells, usually four. Haploid cells serve as gametes in multicellular organisms, fusing to form new diploid cells. DNA replication, or the process of duplicating a cell's genome, always happens when a cell divides through mitosis or binary fission. This occurs during the S phase of the cell cycle. In meiosis, the DNA is replicated only once, while the cell divides twice. DNA replication only occurs before meiosis I. DNA replication does not occur when the cells divide the second time, in meiosis II. Replication, like all cellular activities, requires specialized proteins for carrying out the job. DNA repair Cells of all organisms contain enzyme systems that scan their DNA for damage and carry out repair processes when it is detected. Diverse repair processes have evolved in organisms ranging from bacteria to humans. The widespread prevalence of these repair processes indicates the importance of maintaining cellular DNA in an undamaged state in order to avoid cell death or errors of replication due to damage that could lead to mutation. E. coli bacteria are a well-studied example of a cellular organism with diverse well-defined DNA repair processes. These include: nucleotide excision repair, DNA mismatch repair, non-homologous end joining of double-strand breaks, recombinational repair and light-dependent repair (photoreactivation). Growth and metabolism Between successive cell divisions, cells grow through the functioning of cellular metabolism. Cell metabolism is the process by which individual cells process nutrient molecules. Metabolism has two distinct divisions: catabolism, in which the cell breaks down complex molecules to produce energy and reducing power, and anabolism, in which the cell uses energy and reducing power to construct complex molecules and perform other biological functions. Complex sugars can be broken down into simpler sugar molecules called monosaccharides such as glucose. Once inside the cell, glucose is broken down to make adenosine triphosphate (ATP), a molecule that possesses readily available energy, through two different pathways. In plant cells, chloroplasts create sugars by photosynthesis, using the energy of light to join molecules of water and carbon dioxide. Protein synthesis Cells are capable of synthesizing new proteins, which are essential for the modulation and maintenance of cellular activities. This process involves the formation of new protein molecules from amino acid building blocks based on information encoded in DNA/RNA. 
Protein synthesis generally consists of two major steps: transcription and translation. Transcription is the process where genetic information in DNA is used to produce a complementary RNA strand. This RNA strand is then processed to give messenger RNA (mRNA), which is free to migrate through the cell. mRNA molecules bind to protein-RNA complexes called ribosomes located in the cytosol, where they are translated into polypeptide sequences. The ribosome mediates the formation of a polypeptide sequence based on the mRNA sequence. The mRNA sequence directly relates to the polypeptide sequence by binding to transfer RNA (tRNA) adapter molecules in binding pockets within the ribosome. The new polypeptide then folds into a functional three-dimensional protein molecule. Motility Unicellular organisms can move in order to find food or escape predators. Common mechanisms of motion include flagella and cilia. In multicellular organisms, cells can move during processes such as wound healing, the immune response and cancer metastasis. For example, in wound healing in animals, white blood cells move to the wound site to kill the microorganisms that cause infection. Cell motility involves many receptors, crosslinking, bundling, binding, adhesion, motor and other proteins. The process is divided into three steps: protrusion of the leading edge of the cell, adhesion of the leading edge and de-adhesion at the cell body and rear, and cytoskeletal contraction to pull the cell forward. Each step is driven by physical forces generated by unique segments of the cytoskeleton. Navigation, control and communication In August 2020, scientists described one way cells—in particular cells of a slime mold and mouse pancreatic cancer-derived cells—are able to navigate efficiently through a body and identify the best routes through complex mazes: generating gradients after breaking down diffused chemoattractants which enable them to sense upcoming maze junctions before reaching them, including around corners. Multicellularity Cell specialization/differentiation Multicellular organisms are organisms that consist of more than one cell, in contrast to single-celled organisms. In complex multicellular organisms, cells specialize into different cell types that are adapted to particular functions. In mammals, major cell types include skin cells, muscle cells, neurons, blood cells, fibroblasts, stem cells, and others. Cell types differ both in appearance and function, yet are genetically identical. Cells are able to be of the same genotype but of different cell type due to the differential expression of the genes they contain. Most distinct cell types arise from a single totipotent cell, called a zygote, that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Origin of multicellularity Multicellularity has evolved independently at least 25 times, including in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, fungi, brown algae, red algae, green algae, and plants. It evolved repeatedly for plants (Chloroplastida), once or twice for animals, once for brown algae, and perhaps several times for fungi, slime molds, and red algae. 
Multicellularity may have evolved from colonies of interdependent organisms, from cellularization, or from organisms in symbiotic relationships. The first evidence of multicellularity is from cyanobacteria-like organisms that lived between 3 and 3.5 billion years ago. Other early fossils of multicellular organisms include the contested Grypania spiralis and the fossils of the black shales of the Palaeoproterozoic Francevillian Group Fossil B Formation in Gabon. The evolution of multicellularity from unicellular ancestors has been replicated in the laboratory, in evolution experiments using predation as the selective pressure. Origins The origin of cells has to do with the origin of life, which began the history of life on Earth. Origin of life Small molecules needed for life may have been carried to Earth on meteorites, created at deep-sea vents, or synthesized by lightning in a reducing atmosphere. There is little experimental data defining what the first self-replicating forms were. RNA may have been the earliest self-replicating molecule, as it can both store genetic information and catalyze chemical reactions. Cells emerged around 4 billion years ago. The first cells were most likely heterotrophs. The early cell membranes were probably simpler and more permeable than modern ones, with only a single fatty acid chain per lipid. Lipids spontaneously form bilayered vesicles in water, and could have preceded RNA. First eukaryotic cells Eukaryotic cells were created some 2.2 billion years ago in a process called eukaryogenesis. This is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor. This cell had a new level of complexity and capability, with a nucleus and facultatively aerobic mitochondria. It evolved some 2 billion years ago into a population of single-celled organisms that included the last eukaryotic common ancestor, gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. It featured at least one centriole and cilium, sex (meiosis and syngamy), peroxisomes, and a dormant cyst with a cell wall of chitin and/or cellulose. In turn, the last eukaryotic common ancestor gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms. The plants were created around 1.6 billion years ago with a second episode of symbiogenesis that added chloroplasts, derived from cyanobacteria. History of research In 1665, Robert Hooke examined a thin slice of cork under his microscope, and saw a structure of small enclosures. He wrote "I could exceeding plainly perceive it to be all perforated and porous, much like a Honey-comb, but that the pores of it were not regular". To further support his theory, Matthias Schleiden and Theodor Schwann both also studied cells of both animal and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were not only fundamental to plants, but animals as well. 1632–1723: Antonie van Leeuwenhoek taught himself to make lenses, constructed basic optical microscopes and drew protozoa, such as Vorticella from rain water, and bacteria from his own mouth. 1665: Robert Hooke discovered cells in cork, then in living plant tissue using an early compound microscope. He coined the term cell (from Latin cellula, meaning "small room") in his book Micrographia (1665). 
1839: Theodor Schwann and Matthias Jakob Schleiden elucidated the principle that plants and animals are made of cells, concluding that cells are a common unit of structure and development, and thus founding the cell theory. 1855: Rudolf Virchow stated that new cells come from pre-existing cells by cell division (omnis cellula ex cellula). 1931: Ernst Ruska built the first transmission electron microscope (TEM) at the University of Berlin. By 1935, he had built an EM with twice the resolution of a light microscope, revealing previously unresolvable organelles. 1981: Lynn Margulis published Symbiosis in Cell Evolution detailing how eukaryotic cells were created by symbiogenesis.
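The protein synthesis steps described earlier in this article can be illustrated with a minimal Python sketch; the codon table is deliberately partial, and the sequence, like the function names, is an invented example rather than anything drawn from a real gene.

# A toy sketch of the protein-synthesis steps described above: transcription of
# a DNA coding strand into mRNA, then translation of codons into amino acids.
# Only a handful of codons are included; the sequence below is invented.
CODON_TABLE = {  # partial standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """Transcription: the mRNA copy of the coding strand replaces T with U."""
    return dna_coding_strand.upper().replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Translation: read codons in triplets until a stop codon is reached."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

if __name__ == "__main__":
    dna = "ATGTTTGGCAAATAA"          # invented coding-strand sequence
    mrna = transcribe(dna)           # -> "AUGUUUGGCAAAUAA"
    print(translate(mrna))           # -> ['Met', 'Phe', 'Gly', 'Lys']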
Biology and health sciences
Science and medicine
null
4266
https://en.wikipedia.org/wiki/Binary%20search
Binary search
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Binary search runs in logarithmic time in the worst case, making comparisons, where is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array. There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search. Algorithm Binary search works on sorted arrays. Binary search begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration. Procedure Given an array of elements with values or records sorted such that , and target value , the following subroutine uses binary search to find the index of in . Set to and to . If , the search terminates as unsuccessful. Set (the position of the middle element) to the floor of , which is the greatest integer less than or equal to . If , set to and go to step 2. If , set to and go to step 2. Now , the search is done; return . This iterative procedure keeps track of the search boundaries with the two variables and . The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search. function binary_search(A, n, T) is L := 0 R := n − 1 while L ≤ R do m := floor((L + R) / 2) if A[m] < T then L := m + 1 else if A[m] > T then R := m − 1 else: return m return unsuccessful Alternatively, the algorithm may take the ceiling of . This may change the result if the target value appears more than once in the array. Alternative procedure In the above procedure, the algorithm checks whether the middle element () is equal to the target () in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when ). 
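Both the standard procedure and the variant that defers the equality check can be sketched in runnable Python; the names below are illustrative choices, not the article's pseudocode identifiers.

# The first version checks for equality in every iteration; the second defers
# that check until one element remains (Bottenbruch's variant).
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # in fixed-width languages use lo + (hi - lo) // 2
        if a[mid] < target:
            lo = mid + 1
        elif a[mid] > target:
            hi = mid - 1
        else:
            return mid
    return -1                          # "unsuccessful"

def binary_search_alternative(a, target):
    if not a:
        return -1
    lo, hi = 0, len(a) - 1
    while lo != hi:
        mid = (lo + hi + 1) // 2       # ceiling of the midpoint
        if a[mid] > target:
            hi = mid - 1
        else:
            lo = mid
    return lo if a[lo] == target else -1

data = [1, 3, 4, 4, 7, 9]
assert binary_search(data, 7) == 4
assert binary_search_alternative(data, 4) == 3   # rightmost duplicate
assert binary_search_alternative(data, 5) == -1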
This results in a faster comparison loop, as one comparison is eliminated per iteration, while it requires only one more iteration on average. Hermann Bottenbruch published the first implementation to leave out this check in 1962. Set to and to . While , Set (the position of the middle element) to the ceiling of , which is the least integer greater than or equal to . If , set to . Else, ; set to . Now , the search is done. If , return . Otherwise, the search terminates as unsuccessful. Where ceil is the ceiling function, the pseudocode for this version is: function binary_search_alternative(A, n, T) is L := 0 R := n − 1 while L != R do m := ceil((L + R) / 2) if A[m] > T then R := m − 1 else: L := m if A[L] = T then return L return unsuccessful Duplicate elements The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was and the target was , then it would be correct for the algorithm to either return the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists. Procedure for finding the leftmost element To find the leftmost element, the following procedure can be used: Set to and to . While , Set (the position of the middle element) to the floor of , which is the greatest integer less than or equal to . If , set to . Else, ; set to . Return . If and , then is the leftmost element that equals . Even if is not in the array, is the rank of in the array, or the number of elements in the array that are less than . Where floor is the floor function, the pseudocode for this version is: function binary_search_leftmost(A, n, T): L := 0 R := n while L < R: m := floor((L + R) / 2) if A[m] < T: L := m + 1 else: R := m return L Procedure for finding the rightmost element To find the rightmost element, the following procedure can be used: Set to and to . While , Set (the position of the middle element) to the floor of , which is the greatest integer less than or equal to . If , set to . Else, ; set to . Return . If and , then is the rightmost element that equals . Even if is not in the array, is the number of elements in the array that are greater than . Where floor is the floor function, the pseudocode for this version is: function binary_search_rightmost(A, n, T): L := 0 R := n while L < R: m := floor((L + R) / 2) if A[m] > T: R := m else: L := m + 1 return R - 1 Approximate matches The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries. 
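The leftmost and rightmost procedures just described can be written in Python as follows; the function names are illustrative, and the half-open bounds mirror the formulation above.

def binary_search_leftmost(a, target):
    """Return the number of elements < target (the rank of target); if target
    is present, this is the index of its leftmost occurrence."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

def binary_search_rightmost(a, target):
    """Return the index of the rightmost occurrence of target if present;
    in general, all elements at indices <= the returned value are <= target."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > target:
            hi = mid
        else:
            lo = mid + 1
    return hi - 1

data = [1, 2, 4, 4, 4, 5, 6, 7]
assert binary_search_leftmost(data, 4) == 2    # leftmost 4
assert binary_search_rightmost(data, 4) == 4   # rightmost 4
assert binary_search_leftmost(data, 3) == 2    # rank of the absent value 3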
Rank queries can be performed with the procedure for finding the leftmost element. The number of elements less than the target value is returned by the procedure. Predecessor queries can be performed with rank queries. If the rank of the target value is , its predecessor is . For successor queries, the procedure for finding the rightmost element can be used. If the result of running the procedure for the target value is , then the successor of the target value is . The nearest neighbor of the target value is either its predecessor or successor, whichever is closer. Range queries are also straightforward. Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints. Performance In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration. In the worst case, binary search makes iterations of the comparison loop, where the notation denotes the floor function that yields the greatest integer less than or equal to the argument, and is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always levels in the tree for any binary search. The worst case may also be reached when the target element is not in the array. If is one less than a power of two, then this is always the case. Otherwise, the search may perform iterations if the search reaches the deepest level of the tree. However, it may make iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree. On average, assuming that each element is equally likely to be searched, binary search makes iterations when the target element is in the array. This is approximately equal to iterations. When the target element is not in the array, binary search makes iterations on average, assuming that the range between and outside elements is equally likely to be searched. In the best case, where the target value is the middle element of the array, its position is returned after one iteration. In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible as every level above the lowest level of the tree is filled completely. Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over all elements is worse than binary search. 
By dividing the array in half, binary search ensures that the size of both subarrays are as similar as possible. Space complexity Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is in the word RAM model of computation. Derivation of average case The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. It will be assumed that each element is equally likely to be searched for successful searches. For unsuccessful searches, it will be assumed that the intervals between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by , the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the intervals. Successful searches In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an internal path. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length , is counting the initial iteration. The internal path length is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are elements, which is a positive integer, and the internal path length is , then the average number of iterations for a successful search , with the one iteration added to count the initial iteration. Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with nodes, which is equal to: For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is: The average number of iterations would be based on the equation for the average case. The sum for can be simplified to: Substituting the equation for into the equation for : For integer , this is equivalent to the equation for the average case on a successful search specified above. Unsuccessful searches Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are elements, which is a positive integer, and the external path length is , then the average number of iterations for an unsuccessful search , with the one iteration added to count the initial iteration. 
The external path length is divided by instead of because there are external paths, representing the intervals between and outside the elements of the array. This problem can similarly be reduced to determining the minimum external path length of all binary trees with nodes. For all binary trees, the external path length is equal to the internal path length plus . Substituting the equation for : Substituting the equation for into the equation for , the average case for unsuccessful searches can be determined: Performance of alternative procedure Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large . Running time and cache use In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increase. For example, comparing a pair of 64-bit unsigned integers would require comparing up to double the bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing floating-point values (the most common digital representation of real numbers) is often more expensive than comparing integers or short strings. On most computer architectures, the processor has a hardware cache separate from RAM. Since they are located within the processor itself, caches are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to it. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems. Binary search versus other schemes Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking time for each such operation. In addition, sorted arrays can complicate memory use especially when elements are often inserted into the array. There are other data structures that support much more efficient insertion and deletion. 
Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in time regardless of the type or structure of the values themselves. In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array. Linear search Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand. All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least comparisons in the worst case. Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array. Trees A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries. However, binary search is usually more efficient for searching as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching comparisons. Binary search trees take more space than sorted arrays. Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems. Hashing For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records. Most hash table implementations require only amortized constant time on average. However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record. Binary search is ideal for such matches, performing them in logarithmic time. Binary search also supports approximate matches. 
Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables. Set membership algorithms A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only time. The Judy1 type of Judy array handles 64-bit keys efficiently. For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with hash functions, membership queries require only time. However, Bloom filters suffer from false positives. Other data structures There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute. As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching. Variations Uniform binary search Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand. For example, if the array to be searched is , the middle element () would be . In this case, the middle element of the left subarray () is and the middle element of the right subarray () is . Uniform binary search would store the value of as both indices differ from by this same amount. To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers. Exponential search Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes iterations before binary search is started and at most iterations of the binary search, where is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array. 
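A minimal Python sketch of exponential search, assuming the doubling step described above followed by an ordinary binary search over the narrowed range; the function name and test data are illustrative.

def exponential_search(a, target):
    if not a:
        return -1
    bound = 1
    # Grow the bound until it passes the end of the list or an element >= target.
    while bound < len(a) and a[bound] < target:
        bound *= 2
    lo, hi = bound // 2, min(bound, len(a) - 1)
    # Ordinary binary search on the narrowed range [lo, hi].
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        elif a[mid] > target:
            hi = mid - 1
        else:
            return mid
    return -1

data = list(range(0, 100, 3))        # 0, 3, 6, ..., 99
assert exponential_search(data, 27) == 9
assert exponential_search(data, 28) == -1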
Interpolation search Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array. A common interpolation function is linear interpolation. If is the array, are the lower and upper bounds respectively, and is the target, then the target is estimated to be about of the way between and . When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes comparisons. In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays. Fractional cascading Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires time, where is the number of arrays. Fractional cascading reduces this to by storing specific information in each array about each element and its position in the other arrays. Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing. Generalization to graphs Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization—when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in queries in the worst case. Noisy binary search Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least comparisons on average, where is the binary entropy function and is the probability that the procedure yields the wrong position. The noisy binary search problem can be considered as a case of the Rényi-Ulam game, a variant of Twenty Questions where the answers may be wrong. Quantum binary search Classical computers are bounded to the worst case of exactly iterations when performing binary search. 
Quantum algorithms for binary search are still bounded to a proportion of queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least queries in the worst case, where is the natural logarithm. There is an exact quantum binary search procedure that runs in queries in the worst case. In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires queries. History The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to . The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters. In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing. In 1957, William Wesley Peterson published the first method for interpolation search. Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays. In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration. The uniform binary search was developed by A. K. Chandra of Stanford University in 1971. In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry. Implementation issues When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks. Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years. In a practical implementation, the variables used to represent the indices will often be of fixed size (integers), and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as , then the value of may exceed the range of integers of the data type used to store the midpoint, even if and are within the range. If and are nonnegative, this can be avoided by calculating the midpoint as . An infinite loop may occur if the exit conditions for the loop are not defined correctly. 
Once L exceeds R, the search has failed, and the implementation must convey the failure of the search. In addition, the loop must be exited when the target element is found, or, in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed must be in place at the end. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.
Library support
Many languages' standard libraries include binary search routines:
C provides the function bsearch() in its standard library, which is typically implemented via binary search, although the official standard does not require it.
C++'s standard library provides the functions binary_search(), lower_bound(), upper_bound() and equal_range().
D's standard library Phobos provides, in its std.range module, a type SortedRange (returned by the sort() and assumeSorted() functions) with methods contains(), equalRange(), lowerBound() and trisect(), which use binary search techniques by default for ranges that offer random access.
COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.
Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.
Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections in the standard java.util package for performing binary searches on Java arrays and on Lists, respectively.
Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes. An example would be System.Array's method BinarySearch<T>(T[] array, T value).
For Objective-C, the Cocoa framework provides a binary search method on sorted NSArray objects in Mac OS X 10.6+. Apple's Core Foundation C framework also contains a CFArrayBSearchValues() function.
Python provides the bisect module that keeps a list in sorted order without having to sort the list after each insertion.
Ruby's Array class includes a bsearch method with built-in approximate matching.
Rust's slice primitive provides binary_search(), binary_search_by(), binary_search_by_key(), and partition_point().
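As a brief illustration of the bisect module mentioned above, the following snippet keeps a list sorted and performs the rank, predecessor/successor and range-count queries discussed in the section on approximate matches; the variable names and data are arbitrary.

# bisect_left returns the rank of a value (and the leftmost insertion point),
# bisect_right the position just past any equal values.
import bisect

scores = [10, 20, 20, 30, 45]
bisect.insort(scores, 25)                       # list stays sorted: [10, 20, 20, 25, 30, 45]

rank = bisect.bisect_left(scores, 25)           # elements < 25 -> 3
predecessor = scores[rank - 1] if rank > 0 else None              # 20
succ_pos = bisect.bisect_right(scores, 25)
successor = scores[succ_pos] if succ_pos < len(scores) else None  # 30
in_range = bisect.bisect_right(scores, 30) - bisect.bisect_left(scores, 20)  # values in 20..30 -> 4

print(rank, predecessor, successor, in_range)   # 3 20 30 4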
Mathematics
Algorithms
null
4292
https://en.wikipedia.org/wiki/Base%20pair
Base pair
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes. Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code. The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3.2 billion base pairs long and to contain 20,000–25,000 distinct protein-coding genes. A kilobase (kb) is a unit of measurement in molecular biology equal to 1000 base pairs of DNA or RNA. The total number of DNA base pairs on Earth is estimated at 5.0 × 10^37, with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
Hydrogen bonding and stability
Hydrogen bonding is the chemical interaction that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably: a G–C base pair is held together by three hydrogen bonds, whereas an A–T base pair has two. DNA with high GC-content is more stable than DNA with low GC-content. Crucially, however, stacking interactions are primarily responsible for stabilising the double-helical structure; Watson–Crick base pairing's contribution to global structural stability is minimal, but its role in the specificity underlying complementarity is, by contrast, of maximal importance, as this underlies the template-dependent processes of the central dogma (e.g. DNA replication).
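The Watson–Crick pairing rules and hydrogen-bond counts described here can be illustrated with a small Python sketch; the example sequence is invented and the helper names are not standard nomenclature.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
HYDROGEN_BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}   # bonds in the pair each base forms

def reverse_complement(seq: str) -> str:
    """Complementary strand, written 5' to 3' (hence reversed)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

def gc_content(seq: str) -> float:
    """Fraction of G and C bases; higher GC content means a more stable duplex."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

seq = "ATGCGC"                                # invented example
print(reverse_complement(seq))                # GCGCAT
print(gc_content(seq))                        # 0.666...
print(sum(HYDROGEN_BONDS[b] for b in seq))    # hydrogen bonds across the duplex: 16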
The bigger nucleobases, adenine and guanine, are members of a class of double-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of single-ringed chemical structures called pyrimidines. Purines are complementary only with pyrimidines: pyrimidine–pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine–purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. Purine–pyrimidine base-pairing of AT or GC or UA (in RNA) results in proper duplex structure. The only other purine–pyrimidine pairings would be AC and GT and UG (in RNA); these pairings are mismatches because the patterns of hydrogen donors and acceptors do not correspond. The GU pairing, with two hydrogen bonds, does occur fairly often in RNA (see wobble base pair). Paired DNA and RNA molecules are comparatively stable at room temperature, but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is, therefore, unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. On the converse, regions of a genome that need to separate frequently — for example, the promoter regions for often-transcribed genes — are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions. Examples The following DNA sequences illustrate pair double-stranded patterns. By convention, the top strand is written from the 5′-end to the 3′-end; thus, the bottom strand is written 3′ to 5′. A base-paired DNA sequence: The corresponding RNA sequence, in which uracil is substituted for thymine in the RNA strand: Base analogs and intercalators Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. This is due to their isosteric chemistry. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form. Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine. Mismatch repair Mismatched base pairs can be generated by errors of DNA replication and as intermediates during homologous recombination. The process of mismatch repair ordinarily must recognize and correctly repair a small number of base mispairs within a long sequence of normal DNA base pairs. To repair mismatches formed during DNA replication, several distinctive repair processes have evolved to distinguish between the template strand and the newly formed strand so that only the newly inserted incorrect nucleotide is removed (in order to avoid generating a mutation). The proteins employed in mismatch repair during DNA replication, and the clinical significance of defects in this process are described in the article DNA mismatch repair. 
The process of mispair correction during recombination is described in the article gene conversion. Length measurements The following abbreviations are commonly used to describe the length of a D/RNA molecule: bp = base pair—one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, and to roughly 618 or 643 daltons for DNA and RNA respectively. kb (= kbp) = kilo–base-pair = 1,000 bp Mb (= Mbp) = mega–base-pair = 1,000,000 bp Gb (= Gbp) = giga–base-pair = 1,000,000,000 bp For single-stranded DNA/RNA, units of nucleotides are used—abbreviated nt (or knt, Mnt, Gnt)—as they are not paired. To distinguish between units of computer storage and bases, kbp, Mbp, Gbp, etc. may be used for base pairs. The centimorgan is also often used to imply distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, the centimorgan is about 1 million base pairs. Unnatural base pair (UBP) An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. DNA sequences have been described which use newly created nucleobases to form a third base pair, in addition to the two base pairs found in nature, A-T (adenine – thymine) and G-C (guanine – cytosine). A few research groups have been searching for a third base pair for DNA, including teams led by Steven A. Benner, Philippe Marliere, Floyd E. Romesberg and Ichiro Hirao. Some new base pairs based on alternative hydrogen bonding, hydrophobic interactions and metal coordination have been reported. In 1989 Steven Benner (then working at the Swiss Federal Institute of Technology in Zurich) and his team led with modified forms of cytosine and guanine into DNA molecules in vitro. The nucleotides, which encoded RNA and proteins, were successfully replicated in vitro. Since then, Benner's team has been trying to engineer cells that can make foreign bases from scratch, obviating the need for a feedstock. In 2002, Ichiro Hirao's group in Japan developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) was discovered as a high fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated the genetic alphabet expansion significantly augment DNA aptamer affinities to target proteins. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, published that his team designed an unnatural base pair (UBP). The two new artificial nucleotides or Unnatural Base Pair (UBP) were named d5SICS and dNaM. More technically, these artificial nucleotides bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. His team designed a variety of in vitro or "test tube" templates containing the unnatural base pair and they confirmed that it was efficiently replicated with high fidelity in virtually all sequence contexts using the modern standard in vitro techniques, namely PCR amplification of DNA and PCR-based applications. 
Their results show that for PCR and PCR-based applications, the d5SICS–dNaM unnatural base pair is functionally equivalent to a natural base pair, and when combined with the other two natural base pairs used by all organisms, A–T and G–C, they provide a fully functional and expanded six-letter "genetic alphabet". In 2014 the same team from the Scripps Research Institute reported that they synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed and inserted it into cells of the common bacterium E. coli that successfully replicated the unnatural base pairs through multiple generations. The transfection did not hamper the growth of the E. coli cells and showed no sign of losing its unnatural base pairs to its natural DNA repair mechanisms. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Romesberg said he and his colleagues created 300 variants to refine the design of nucleotides that would be stable enough and would be replicated as easily as the natural ones when the cells divide. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate a plasmid containing d5SICS–dNaM. Other researchers were surprised that the bacteria replicated these human-made DNA subunits. The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Experts said the synthetic DNA incorporating the unnatural base pair raises the possibility of life forms based on a different DNA code. Non-canonical base pairing In addition to the canonical pairing, some conditions can also favour base-pairing with alternative base orientation, and number and geometry of hydrogen bonds. These pairings are accompanied by alterations to the local backbone shape. The most common of these is the wobble base pairing that occurs between tRNAs and mRNAs at the third base position of many codons during transcription and during the charging of tRNAs by some tRNA synthetases. They have also been observed in the secondary structures of some RNA sequences. Additionally, Hoogsteen base pairing (typically written as A•U/T and G•C) can exist in some DNA sequences (e.g. CA and TA dinucleotides) in dynamic equilibrium with standard Watson–Crick pairing. They have also been observed in some protein–DNA complexes. In addition to these alternative base pairings, a wide range of base-base hydrogen bonding is observed in RNA secondary and tertiary structure. These bonds are often necessary for the precise, complex shape of an RNA, as well as its binding to interaction partners.
Biology and health sciences
Nucleic acids
Biology
4320
https://en.wikipedia.org/wiki/Binary%20search%20tree
Binary search tree
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure in which the key of each internal node is greater than all the keys in that node's left subtree and less than the keys in its right subtree. The time complexity of operations on a binary search tree is linear with respect to the height of the tree. Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, lookup performance is proportional to the binary logarithm of the number of items. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler. The performance of a binary search tree depends on the order in which nodes are inserted, since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time. The complexity analysis of BSTs shows that, on average, insert, delete and search take O(log n) time for n nodes. In the worst case, they degrade to that of a singly linked list: O(n). To address the boundless increase of the tree height with arbitrary insertions and deletions, self-balancing variants of BSTs were introduced to bound the worst-case lookup complexity to that of the binary logarithm. AVL trees were the first self-balancing binary search trees, invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis. Binary search trees can be used to implement abstract data types such as dynamic sets, lookup tables and priority queues, and are used in sorting algorithms such as tree sort.
History
The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, and Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data on magnetic tapes in 1960. One of the earliest and most popular binary search tree algorithms is that of Hibbard. The time complexity of a binary search tree increases boundlessly with the tree height if nodes are inserted in an arbitrary order, so self-balancing binary search trees were introduced to bound the height of the tree to O(log n). Various height-balanced binary search trees were introduced to confine the tree height, such as AVL trees, treaps, and red–black trees. The AVL tree was invented by Georgy Adelson-Velsky and Evgenii Landis in 1962 for the efficient organization of information, and was the first self-balancing binary search tree to be invented.
Overview
A binary search tree is a rooted binary tree in which the nodes are arranged in a strict total order: the nodes with keys greater than any particular node A are stored in the right subtree of A, and the nodes with keys equal to or less than A's key are stored in the left subtree of A, satisfying the binary search property. Binary search trees are also effective in sorting and search algorithms.
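A minimal, non-balancing Python sketch of such a tree is given below; the class and function names are illustrative, and sending equal keys to the left simply mirrors the ordering convention stated above.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, returning the (possibly new) root; duplicates go left."""
    if root is None:
        return Node(key)
    if key <= root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Return the node holding key, or None if key is absent."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = insert(root, k)
assert search(root, 6) is not None
assert search(root, 7) is None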
However, the search complexity of a BST depends upon the order in which the nodes are inserted and deleted; since in worst case, successive operations in the binary search tree may lead to degeneracy and form a singly linked list (or "unbalanced tree") like structure, thus has the same worst-case complexity as a linked list. Binary search trees are also a fundamental data structure used in construction of abstract data structures such as sets, multisets, and associative arrays. Operations Searching Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node. If the tree is , the key being searched for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful and the node is returned. If the key is less than that of the root, the search proceeds by examining the left subtree. Similarly, if the key is greater than that of the root, the search proceeds by examining the right subtree. This process is repeated until the key is found or the remaining subtree is . If the searched key is not found after a subtree is reached, then the key is not present in the tree. Recursive search The following pseudocode implements the BST search procedure through recursion. The recursive procedure continues until a or the being searched for are encountered. Iterative search The recursive version of the search can be "unrolled" into a while loop. On most machines, the iterative version is found to be more efficient. Since the search may proceed till some leaf node, the running time complexity of BST search is where is the height of the tree. However, the worst case for BST search is where is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is . Successor and predecessor For certain operations, given a node , finding the successor or predecessor of is crucial. Assuming all the keys of a BST are distinct, the successor of a node in a BST is the node with the smallest key greater than 's key. On the other hand, the predecessor of a node in a BST is the node with the largest key smaller than 's key. The following pseudocode finds the successor and predecessor of a node in a BST. Operations such as finding a node in a BST whose key is the maximum or minimum are critical in certain operations, such as determining the successor and predecessor of nodes. Following is the pseudocode for the operations. Insertion Operations such as insertion and deletion cause the BST representation to change dynamically. The data structure must be modified in such a way that the properties of BST continue to hold. New nodes are inserted as leaf nodes in the BST. Following is an iterative implementation of the insertion operation. The procedure maintains a "trailing pointer" as a parent of . After initialization on line 2, the while loop along lines 4-11 causes the pointers to be updated. If is , the BST is empty, thus is inserted as the root node of the binary search tree , if it is not , insertion proceeds by comparing the keys to that of on the lines 15-19 and the node is inserted accordingly. Deletion The deletion of a node, say , from the binary search tree has three cases: If is a leaf node, the parent node of gets replaced by and consequently is removed from the , as shown in (a). 
If has only one child, the child node of gets elevated by modifying the parent node of to point to the child node, consequently taking 's position in the tree, as shown in (b) and (c). If has both left and right children, the successor of , say , displaces by following the two cases: If is 's right child, as shown in (d), displaces and 's right child remain unchanged. If lies within 's right subtree but is not 's right child, as shown in (e), first gets replaced by its own right child, and then it displaces 's position in the tree. The following pseudocode implements the deletion operation in a binary search tree. The procedure deals with the 3 special cases mentioned above. Lines 2-3 deal with case 1; lines 4-5 deal with case 2 and lines 6-16 for case 3. The helper function is used within the deletion algorithm for the purpose of replacing the node with in the binary search tree . This procedure handles the deletion (and substitution) of from . Traversal A BST can be traversed through three basic algorithms: inorder, preorder, and postorder tree walks. Inorder tree walk: Nodes from the left subtree get visited first, followed by the root node and right subtree. Such a traversal visits all the nodes in the order of non-decreasing key sequence. Preorder tree walk: The root node gets visited first, followed by left and right subtrees. Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root. Following is a recursive implementation of the tree walks. Balanced binary search trees Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a height of the tree (where is number of items in a tree), so that the lookup performance is deteriorated to that of a linear search. Keeping the search tree balanced and height bounded by is a key to the usefulness of the binary search tree. This can be achieved by "self-balancing" mechanisms during the updation operations to the tree designed to maintain the tree height to the binary logarithmic complexity. Height-balanced trees A tree is height-balanced if the heights of the left sub-tree and right sub-tree are guaranteed to be related by a constant factor. This property was introduced by the AVL tree and continued by the red–black tree. The heights of all the nodes on the path from the root to the modified leaf node have to be observed and possibly corrected on every insert and delete operation to the tree. Weight-balanced trees In a weight-balanced tree, the criterion of a balanced tree is the number of leaves of the subtrees. The weights of the left and right subtrees differ at most by . However, the difference is bound by a ratio of the weights, since a strong balance condition of cannot be maintained with rebalancing work during insert and delete operations. The -weight-balanced trees gives an entire family of balance conditions, where each left and right subtrees have each at least a fraction of of the total weight of the subtree. Types There are several self-balanced binary search trees, including T-tree, treap, red-black tree, B-tree, 2–3 tree, and Splay tree. Examples of applications Sort Binary search trees are used in sorting algorithms such as tree sort, where all the elements are inserted at once and the tree is traversed at an in-order fashion. BSTs are also used in quicksort. Priority queue operations Binary search trees are used in implementing priority queues, using the node's key as priorities. 
Adding new elements to the queue follows the regular BST insertion operation, but removal depends on the type of priority queue: in an ascending-order priority queue, the element with the lowest priority is removed by traversing leftward to the minimum of the BST; in a descending-order priority queue, the element with the highest priority is removed by traversing rightward to the maximum.
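The operations described above translate directly into code. The following is a minimal, self-contained Python sketch rather than the article's original pseudocode (which is omitted from this text); the names Node, search, insert, minimum, delete and inorder are illustrative. The deletion shown copies the successor's key into the deleted node instead of splicing nodes, a common simplification of the three-case analysis described above.

class Node:
    """A single BST node holding a key and links to its children."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def search(root, key):
    """Return the node containing key, or None if key is absent (O(h) time)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

def insert(root, key):
    """Insert key as a new leaf and return the (possibly new) root."""
    node = Node(key)
    parent, cur = None, root              # 'parent' is the trailing pointer
    while cur is not None:
        parent = cur
        cur = cur.left if key < cur.key else cur.right
    if parent is None:                    # empty tree: new node becomes the root
        return node
    if key < parent.key:
        parent.left = node
    else:
        parent.right = node
    return root

def minimum(root):
    """Return the node with the smallest key in the subtree rooted at root."""
    while root.left is not None:
        root = root.left
    return root

def delete(root, key):
    """Delete key from the subtree rooted at root and return the new subtree root."""
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        # Cases 1 and 2: at most one child simply replaces the node.
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # Case 3: the in-order successor's key displaces the node's key.
        succ = minimum(root.right)
        root.key = succ.key
        root.right = delete(root.right, succ.key)
    return root

def inorder(root):
    """Yield keys in non-decreasing order (inorder tree walk)."""
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)

# Example: build a small tree, list its keys in sorted order, search and delete.
root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(list(inorder(root)))                # [1, 3, 6, 8, 10]
print(search(root, 6) is not None)        # True
root = delete(root, 3)
print(list(inorder(root)))                # [1, 6, 8, 10]

Because no rebalancing is performed, inserting keys in already-sorted order degrades this sketch to the linked-list behaviour discussed above; self-balancing variants such as AVL or red–black trees avoid that degeneration.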
Mathematics
Data structures and types
null
4359
https://en.wikipedia.org/wiki/Boomerang
Boomerang
A boomerang is a thrown tool typically constructed with airfoil sections and designed to spin about an axis perpendicular to the direction of its flight. A returning boomerang is designed to return to the thrower, while a non-returning boomerang is designed as a weapon to be thrown straight. Various boomerang-like designs are traditionally used by some Aboriginal Australians for hunting; before colonisation they were known by a multitude of names. Historically, boomerangs have been used for hunting, sport, and entertainment and are made in various shapes and sizes to suit different purposes. Although considered an Australian icon, ancient boomerangs have also been discovered in Egypt, the Americas, and Eurasia. Description A boomerang is a throwing stick with aerodynamic properties, traditionally made of wood, but also of bone, horn, tusks and even iron. Modern boomerangs used for sport can be made from plywood or plastics such as ABS, polypropylene, phenolic paper, or carbon fibre-reinforced plastics. Boomerangs come in many shapes and sizes depending on their geographic or tribal origins and intended function, including the traditional Australian type, the cross-stick, the pinwheel, the tumble-stick, the Boomabird, and other less common types. Returning boomerangs are among the earliest examples of heavier-than-air human-made flight. A returning boomerang has two or more aerofoil section wings arranged so that when spinning they create unbalanced aerodynamic forces that curve its path into an ellipse, returning to its point of origin when thrown correctly. Their typical L-shape makes them the most recognisable form of boomerang. Although used primarily for leisure or recreation, returning boomerangs are also used to decoy birds of prey, thrown above the long grass to frighten game birds into flight and into waiting nets. Non-traditional, modern, competition boomerangs come in many shapes, sizes and materials. Throwing sticks, valari, or kylies, are primarily used as weapons. They lack the aerofoil sections, are generally heavier and designed to travel as straight and forcefully as possible to the target to bring down game. The Tamil valari variant, of ancient origin and mentioned in the Tamil Sangam literature "Purananuru", was one of these. The usual form of the valari is two limbs set at an angle; one thin and tapering, the other rounded as a handle. Valaris come in many shapes and sizes. They are usually made of iron and cast from moulds. Some may have wooden limbs tipped with iron or sharpened edges. Etymology The origin of the term is uncertain. One source asserts that the term entered the language in 1827, adapted from an Aboriginal language of New South Wales, Australia, but mentions a variant, wo-mur-rang, which it dates to 1798. The first recorded encounter with a boomerang by Europeans was at Farm Cove (Port Jackson), in December 1804, when a weapon was witnessed during a tribal skirmish. David Collins listed "Wo-mur-rāng" as one of eight Aboriginal "Names of clubs" in 1798, but was probably referring to the woomera, which is actually a spear-thrower. An anonymous 1790 manuscript on Aboriginal languages of New South Wales reported "Boo-mer-rit" as "the Scimiter". In 1822, it was described in detail and recorded as a "bou-mar-rang" in the language of the Turuwal people (a sub-group of the Darug) of the Georges River near Port Jackson. The Turuwal used other words for their hunting sticks but used "boomerang" to refer to a returning throw-stick.
History Boomerangs were, historically, used as hunting weapons, percussive musical instruments, battle clubs, fire-starters, decoys for hunting waterfowl, and as recreational play toys. The smallest boomerang may be less than from tip to tip, and the largest over in length. Tribal boomerangs may be inscribed or painted with designs meaningful to their makers. Most boomerangs seen today are of the tourist or competition sort, and are almost invariably of the returning type. Depictions of boomerangs being thrown at animals, such as kangaroos, appear in some of the oldest rock art in the world, the Indigenous Australian rock art of the Kimberley region, which is potentially up to 50,000 years old. Stencils and paintings of boomerangs also appear in the rock art of West Papua, including on Bird's Head Peninsula and Kaimana, likely dating to the Last Glacial Maximum, when lower sea levels led to cultural continuity between Papua and Arnhem Land in Northern Australia. The oldest surviving Australian Aboriginal boomerang was found in a peat bog in the Wyrie Swamp of South Australia in 1973. It was dated to 10,000 BC and is held by the South Australian Museum in Adelaide. Although traditionally thought of as Australian, boomerangs have been found also in ancient Europe, Egypt, and North America. There is evidence of the use of non-returning boomerangs by the Native Americans of California and Arizona, and inhabitants of South India for killing birds and rabbits. Some boomerangs were not thrown at all, but were used in hand to hand combat by Indigenous Australians. Ancient Egyptian examples, however, have been recovered, and experiments have shown that they functioned as returning boomerangs. Hunting sticks discovered in Europe seem to have formed part of the Stone Age arsenal of weapons. One boomerang that was discovered in Obłazowa Cave in the Carpathian Mountains in Poland was made of mammoth's tusk and is believed, based on AMS dating of objects found with it, to be about 30,000 years old. In the Netherlands, boomerangs have been found in Vlaardingen and Velsen from the first century BC. King Tutankhamun owned a collection of boomerangs of both the straight flying (hunting) and returning variety. No one knows for sure how the returning boomerang was invented, but some modern boomerang makers speculate that it developed from the flattened throwing stick, still used by Aboriginal Australians and other indigenous peoples around the world, including the Navajo in North America. A hunting boomerang is delicately balanced and much harder to make than a returning one. The curving flight characteristic of returning boomerangs was probably first noticed by early hunters trying to "tune" their throwing sticks to fly straight. It is thought by some that the shape and elliptical flight path of the returning boomerang makes it useful for hunting birds and small animals, or that noise generated by the movement of the boomerang through the air, or, by a skilled thrower, lightly clipping leaves of a tree whose branches house birds, would help scare the birds towards the thrower. It is further supposed by some that this was used to frighten flocks or groups of birds into nets that were usually strung up between trees or thrown by hidden hunters. In southeastern Australia, it is claimed that boomerangs were made to hover over a flock of ducks; mistaking it for a hawk, the ducks would dive away, toward hunters armed with nets or clubs. 
Traditionally, most boomerangs used by Aboriginal groups in Australia were non-returning. These weapons, sometimes called "throwsticks" or "kylies", were used for hunting a variety of prey, from kangaroos to parrots; at a range of about , a non-returning boomerang could inflict mortal injury on a large animal. A throwstick thrown nearly horizontally may fly in a nearly straight path and could fell a kangaroo on impact to the legs or knees, while the long-necked emu could be killed by a blow to the neck. Hooked non-returning boomerangs, known as "beaked kylies", used in northern Central Australia, have been claimed to kill multiple birds when thrown into a dense flock. Throwsticks are used as multi-purpose tools by today's Aboriginal peoples, and besides throwing could be wielded as clubs, used for digging, used to start friction fires, and are sonorous when two are struck together. Recent evidence also suggests that boomerangs were used as war weapons. Modern use Today, boomerangs are mostly used for recreation. There are different types of throwing contests: accuracy of return; Aussie round; trick catch; maximum time aloft; fast catch; and endurance (see below). The modern sport boomerang (often referred to as a 'boom' or 'rang') is made of Finnish birch plywood, hardwood, plastic or composite materials and comes in many different shapes and colours. Most sport boomerangs typically weigh less than , with MTA boomerangs (boomerangs used for the maximum-time-aloft event) often under . Boomerangs have also been suggested as an alternative to clay pigeons in shotgun sports, where the flight of the boomerang better mimics the flight of a bird, offering a more challenging target. The modern boomerang is often designed using computer-aided design, with precision airfoils. The number of "wings" is often more than 2, as more lift is provided by 3 or 4 wings than by 2. Among the latest inventions is a round-shaped boomerang, which has a different look but uses the same returning principle as traditional boomerangs. This allows for a safer catch for players. In 1992, German astronaut Ulf Merbold performed an experiment aboard Spacelab that established that boomerangs function in zero gravity as they do on Earth. French astronaut Jean-François Clervoy aboard Mir repeated this in 1997. In 2008, Japanese astronaut Takao Doi again repeated the experiment on board the International Space Station. Beginning in the later part of the twentieth century, there has been a bloom in the independent creation of unusually designed art boomerangs. These often have little or no resemblance to the traditional historical ones, and on first sight some of these objects may not look like boomerangs at all. The use of modern thin plywoods and synthetic plastics has greatly contributed to their success. Designs are very diverse and can range from animal-inspired forms, humorous themes, and complex calligraphic and symbolic shapes, to the purely abstract. Painted surfaces are similarly richly diverse. Some boomerangs made primarily as art objects do not have the required aerodynamic properties to return.
The difference between right and left is subtle: the planform is the same, but the leading edges of the aerofoil sections are reversed. A right-handed boomerang makes a counter-clockwise, circular flight to the left while a left-handed boomerang flies clockwise to the right. Most sport boomerangs weigh between , have a wingspan, and a range. A falling boomerang starts spinning, and most then fall in a spiral. When thrown with high spin, a boomerang flies in a curved rather than a straight line. When thrown correctly, a boomerang returns to its starting point. As the wing rotates and the boomerang moves through the air, the airflow over the wings creates lift on both "wings". However, during one-half of each blade's rotation, it sees a higher airspeed, because the rotation tip speed and the forward speed add, and when it is in the other half of the rotation, the tip speed subtracts from the forward speed. Thus if thrown nearly upright, each blade generates more lift at the top than the bottom. While it might be expected that this would cause the boomerang to tilt around the axis of travel, because the boomerang has significant angular momentum, gyroscopic precession causes the plane of rotation to tilt about an axis that is 90 degrees to the direction of flight, causing it to turn. When thrown in the horizontal plane, as with a Frisbee, instead of in the vertical, the same gyroscopic precession will cause the boomerang to fly violently, straight up into the air and then crash. Fast Catch boomerangs usually have three or more symmetrical wings (seen from above), whereas a Long Distance boomerang is most often shaped similarly to a question mark. Maximum Time Aloft boomerangs mostly have one wing considerably longer than the other. This feature, along with carefully executed bends and twists in the wings, helps to set up an "auto-rotation" effect to maximise the boomerang's hover time in descending from the highest point in its flight. Some boomerangs have turbulators: bumps or pits on the top surface that act to increase the lift as boundary layer transition activators (to keep attached turbulent flow instead of laminar separation). Throwing technique Boomerangs are generally thrown in unobstructed, open spaces at least twice as large as the range of the boomerang. The flight direction to the left or right depends upon the design of the boomerang itself, not the thrower. A right-handed or left-handed boomerang can be thrown with either hand, but throwing a boomerang with the non-matching hand requires a throwing motion that many throwers find awkward. The following technique applies to a right-handed boomerang; the directions are mirrored for a left-handed boomerang. Different boomerang designs have different flight characteristics and are suitable for different conditions. The accuracy of the throw depends on understanding the weight and aerodynamics of that particular boomerang, and the strength, consistency and direction of the wind; from this, the thrower chooses the angle of tilt, the angle against the wind, the elevation of the trajectory, the degree of spin and the strength of the throw. A great deal of trial and error is required to perfect the throw over time. A properly thrown boomerang will travel out parallel to the ground, sometimes climbing gently, perform a graceful, anti-clockwise, circular or tear-drop shaped arc, flatten out and return in a hovering motion, coming in from the left or spiralling in from behind.
Ideally, the hover will allow a practiced catcher to clamp their hands shut horizontally on the boomerang from above and below, sandwiching the centre between their hands. The grip used depends on size and shape; smaller boomerangs are held between finger and thumb at one end, while larger, heavier or wider boomerangs need one or two fingers wrapped over the top edge in order to induce a spin. The aerofoil-shaped side must face in towards the thrower, with the flatter side facing outwards. It is usually inclined outwards, from a nearly vertical position to 20° or 30°; the stronger the wind, the closer to vertical. The elbow of the boomerang can point forwards or backwards, or it can be gripped for throwing; it just needs to start spinning on the required inclination, in the desired direction, with the right force. The boomerang is aimed to the right of the oncoming wind; the exact angle depends on the strength of the wind and the boomerang itself. Left-handed boomerangs are thrown to the left of the wind and will fly a clockwise flight path. The trajectory is either parallel to the ground or slightly upwards. The boomerang can return without the aid of any wind, but even very slight winds must be taken into account, however calm they might seem. Little or no wind is preferable for an accurate throw; light winds up to are manageable with skill. If the wind is strong enough to fly a kite, then it may be too strong unless a skilled thrower is using a boomerang designed for stability in stronger winds. Gusty days are a great challenge, and the thrower must be keenly aware of the ebb and flow of the wind strength, finding appropriate lulls in the gusts to launch their boomerang. Competitions and records A world record achievement was made on 3 June 2007 by Tim Lendrum in Aussie Round. Lendrum scored 96 out of 100, giving him a national record and equalling the world record, throwing an "AYR" made by expert boomerang maker Adam Carroll. In international competition, a world cup is held every second year. Teams from Germany and the United States have dominated international competition. The individual World Champion title was won in 2000, 2002, 2004, 2012, and 2016 by Swiss thrower Manuel Schütz. In 1992, 1998, 2006, and 2008 Fridolin Frost from Germany won the title. The team competitions of 2012 and 2014 were won by Boomergang (an international team). The team world champions were Germany in 2012 and, for the first time, Japan in 2014. Boomergang was formed by individuals from several countries, including the Colombian Alejandro Palacio. In 2016, the USA became team world champion. Competition disciplines Modern boomerang tournaments usually involve some or all of the events listed below. In all disciplines the boomerang must travel at least from the thrower. Throwing takes place individually. The thrower stands at the centre of concentric rings marked on an open field. Events include: Aussie Round: considered by many to be the ultimate test of boomeranging skills. The boomerang should ideally cross the circle and come right back to the centre. Each thrower has five attempts. Points are awarded for distance, accuracy and the catch. Accuracy: points are awarded according to how close the boomerang lands to the centre of the rings. The thrower must not touch the boomerang after it has been thrown. Each thrower has five attempts. In major competitions there are two accuracy disciplines: Accuracy 100 and Accuracy 50. Endurance: points are awarded for the number of catches achieved in 5 minutes.
Fast Catch: the time taken to throw and catch the boomerang five times. The winner has the fastest timed catches. Trick Catch/Doubling: points are awarded for trick catches behind the back, between the feet, and so on. In Doubling, the thrower has to throw two boomerangs at the same time and catch them in sequence in a special way. Consecutive Catch: points are awarded for the number of catches achieved before the boomerang is dropped. The event is not timed. MTA 100 (Maximal Time Aloft, ): points are awarded for the length of time spent by the boomerang in the air. The field is normally a circle measuring 100 m. An alternative to this discipline, without the 100 m restriction, is called MTA unlimited. Long Distance: the boomerang is thrown from the middle point of a baseline. The furthest distance travelled by the boomerang away from the baseline is measured. On returning, the boomerang must cross the baseline again but does not have to be caught. A special section is dedicated to LD below. Juggling: as with Consecutive Catch, only with two boomerangs. At any given time one boomerang must be in the air. World records Guinness World Record – Smallest Returning Boomerang Non-discipline record: Smallest Returning Boomerang: Sadir Kattan of Australia in 1997 with long and wide. This tiny boomerang flew the required , before returning to the accuracy circles on 22 March 1997 at the Australian National Championships. Guinness World Record – Longest Throw of Any Object by a Human A boomerang was used to set a Guinness World Record with a throw of by David Schummy on 15 March 2005 at Murarrie Recreation Ground, Australia. This broke the record set by Erin Hemmings, who threw an Aerobie on 14 July 2003 at Fort Funston, San Francisco. Long-distance versions Long-distance boomerang throwers aim to have the boomerang go the furthest possible distance while returning close to the throwing point. In competition the boomerang must intersect an imaginary surface defined as an infinite vertical projection of a line centred on the thrower. Outside of competitions, the definition is not so strict, and throwers may be happy simply not to walk too far to recover the boomerang. General properties Long-distance boomerangs are optimised to have minimal drag while still having enough lift to fly and return. For this reason, they have a very narrow throwing window, which discourages many beginners from continuing with this discipline. For the same reason, the quality of manufactured long-distance boomerangs is often difficult to determine. Almost all of today's long-distance boomerangs have an S or question-mark shape and have a beveled edge on both sides (the bevel on the bottom side is sometimes called an undercut). This is to minimise drag and lower the lift. Lift must be low because the boomerang is thrown with an almost total layover (flat). Long-distance boomerangs are most frequently made of composite material, mainly fibreglass epoxy composites. Flight path The projection of the flight path of a long-distance boomerang on the ground resembles a water drop. For older types of long-distance boomerangs (all types of so-called big hooks), the first and last third of the flight path are very low, while the middle third is a fast climb followed by a fast descent. Nowadays, boomerangs are made in such a way that their whole flight path is almost planar, with a constant climb during the first half of the trajectory and then a rather constant descent during the second half.
From a theoretical point of view, distance boomerangs are also interesting for the following reason: to achieve different behaviour during different flight phases, the ratio of the rotation frequency to the forward velocity follows a U-shaped curve, i.e., its derivative crosses zero. In practice, this means that the boomerang has a very low forward velocity at its furthest point; the kinetic energy of the forward motion is temporarily stored as potential energy. This is not true for other types of boomerangs, where the loss of kinetic energy is non-reversible (MTA boomerangs also store kinetic energy as potential energy during the first half of the flight, but the potential energy is then lost directly to drag). Related terms In Noongar language, kylie is a flat curved piece of wood similar in appearance to a boomerang that is thrown when hunting for birds and animals. "Kylie" is one of the Aboriginal words for the hunting stick used in warfare and for hunting animals. Instead of following curved flight paths, kylies fly in straight lines from the throwers. They are typically much larger than boomerangs, and can travel very long distances; due to their size and hook shapes, they can cripple or kill an animal or human opponent. The word is perhaps an English corruption of a word meaning "boomerang" taken from one of the Western Desert languages, for example, the Warlpiri word "karli". Cultural references Trademarks of Australian companies using the boomerang as a symbol, emblem or logo proliferate, usually removed from Aboriginal context and symbolising "returning" or distinguishing an Australian brand. Early examples included Bain's White Ant Exterminator (1896); Webendorfer Bros. explosives (1898); E. A. Adams Foods (1920); and the (still current) Boomerang Cigarette Papers Pty. Ltd. The use of "Aboriginalia", including the boomerang, as symbols of Australia dates from the late 1940s and early 1950s and was widespread among a largely European arts, crafts and design community. By the 1960s, the Australian tourism industry had extended it to the very branding of Australia, marketing it to overseas and domestic tourists as souvenirs and gifts and thereby as an emblem of Aboriginal culture. At the very time when Aboriginal people and culture were subject to policies that removed them from their traditional lands and sought to assimilate them (physiologically and culturally) into mainstream white Australian culture, causing the Stolen Generations, Aboriginalia found an ironically "nostalgic" entry point into Australian popular culture at important social locations: holiday resorts and Australian domestic interiors. In the 21st century, souvenir objects depicting Aboriginal peoples, symbolism and motifs including the boomerang, from the 1940s–1970s, regarded as kitsch and sold largely to tourists in the first instance, became highly sought after by both Aboriginal and non-Aboriginal collectors and captured the imagination of Aboriginal artists and cultural commentators.
Technology
Projectile weapons
null
4361
https://en.wikipedia.org/wiki/Biological%20warfare
Biological warfare
Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare. Biological warfare is subject to a forceful normative prohibition. Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC. Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential. Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses them clandestinely, their use may also be considered bioterrorism. Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the provisions of both the BWC and the Chemical Weapons Convention. Toxins and psychochemical weapons are often referred to as midspectrum agents. Unlike bioweapons, these midspectrum agents do not reproduce in their host and are typically characterized by shorter incubation periods. Overview A biological attack could conceivably result in large numbers of civilian casualties and cause severe disruption to economic and societal infrastructure. A nation or group that can pose a credible threat of mass casualties has the ability to alter the terms under which other nations or groups interact with it. When indexed to weapon mass and cost of development and storage, biological weapons possess a destructive potential and capacity for loss of life far in excess of nuclear, chemical or conventional weapons. Accordingly, biological agents are potentially useful as strategic deterrents, in addition to their utility as offensive weapons on the battlefield. As a tactical weapon for military use, a significant problem with biological warfare is that it would take days to be effective, and therefore might not immediately stop an opposing force. Some biological agents (smallpox, pneumonic plague) have the capability of person-to-person transmission via aerosolized respiratory droplets. This feature can be undesirable, as the agent(s) may be transmitted by this mechanism to unintended populations, including neutral or even friendly forces.
Worse still, such a weapon could "escape" the laboratory where it was developed, even if there was no intent to use it – for example by infecting a researcher who then transmits it to the outside world before realizing that they were infected. Several cases are known of researchers becoming infected and dying of Ebola, which they had been working with in the lab (though nobody else was infected in those cases) – while there is no evidence that their work was directed towards biological warfare, it demonstrates the potential for accidental infection even of careful researchers fully aware of the dangers. While containment of biological warfare is less of a concern for certain criminal or terrorist organizations, it remains a significant concern for the military and civilian populations of virtually all nations. History Antiquity and Middle Ages Rudimentary forms of biological warfare have been practiced since antiquity. The earliest documented incident of the intention to use biological weapons is recorded in Hittite texts of 1500–1200 BCE, in which victims of an unknown plague (possibly tularemia) were driven into enemy lands, causing an epidemic. The Assyrians poisoned enemy wells with the fungus ergot, though with unknown results. Scythian archers dipped their arrows, and Roman soldiers their swords, into excrement and cadavers; victims were commonly infected with tetanus as a result. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa. Specialists disagree about whether this operation was responsible for the spread of the Black Death into Europe, the Near East and North Africa, resulting in the deaths of approximately 25 million Europeans. Biological agents were extensively used in many parts of Africa from the sixteenth century AD, most of the time in the form of poisoned arrows, or powder spread on the war front, as well as the poisoning of horses and of the enemy forces' water supply. In Borgu, there were specific mixtures to kill, hypnotize, make the enemy bold, and to act as an antidote against the poison of the enemy as well. The creation of biologicals was reserved for a specific and professional class of medicine-men. 18th to 19th century During the French and Indian War, in June 1763 a group of Native Americans laid siege to British-held Fort Pitt. Following the instructions of his superior, Colonel Henry Bouquet, the commander of Fort Pitt, Swiss-born Captain Simeon Ecuyer, ordered his men to take smallpox-infested blankets from the infirmary and give them to a Lenape delegation during the siege. A reported outbreak that began the spring before left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear whether the smallpox was a result of the Fort Pitt incident or whether the virus was already present among the Delaware people, as outbreaks happened on their own every dozen or so years; the delegates were met again later and seemingly had not contracted smallpox. During the American Revolutionary War, Continental Army officer George Washington mentioned to the Continental Congress that he had heard a rumor from a sailor that his opponent during the Siege of Boston, General William Howe, had deliberately sent civilians out of the city in the hopes of spreading the ongoing smallpox epidemic to American lines; Washington, remaining unconvinced, wrote that he "could hardly give credit to" the claim.
Washington had already inoculated his soldiers, diminishing the effect of the epidemic. Some historians have claimed that a detachment of the Corps of Royal Marines stationed in New South Wales, Australia, deliberately used smallpox there in 1789. Dr Seth Carus states: "Ultimately, we have a strong circumstantial case supporting the theory that someone deliberately introduced smallpox in the Aboriginal population." World War I By 1900, the germ theory and advances in bacteriology brought a new level of sophistication to the techniques for possible use of bio-agents in war. Biological sabotage in the form of anthrax and glanders was undertaken on behalf of the Imperial German government during World War I (1914–1918), with indifferent results. The Geneva Protocol of 1925 prohibited the first use of chemical and biological weapons against enemy nationals in international armed conflicts. World War II With the onset of World War II, the Ministry of Supply in the United Kingdom established a biological warfare program at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill and soon tularemia, anthrax, brucellosis, and botulism toxins had been effectively weaponized. In particular, Gruinard Island in Scotland was contaminated with anthrax during a series of extensive tests and remained so for the next 56 years. Although the UK never offensively used the biological weapons it developed, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production. Other nations, notably France and Japan, had begun their own biological weapons programs. When the United States entered the war, Allied resources were pooled at the request of the British. The U.S. then established a large research program and industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck. The biological and chemical weapons developed during that period were tested at the Dugway Proving Grounds in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulism toxins, although the war was over before these weapons could be of much operational use. The most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This biological warfare research unit conducted often fatal human experiments on prisoners, and produced biological weapons for combat use. Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against Chinese soldiers and civilians in several military campaigns. In 1940, the Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague. Many of these operations were ineffective due to inefficient delivery systems, although up to 400,000 people may have died. During the Zhejiang-Jiangxi Campaign in 1942, around 1,700 Japanese troops died out of a total of 10,000 Japanese soldiers who fell ill with disease when their own biological weapons attack rebounded on their own forces. During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night.
The plan was set to launch on 22 September 1945, but it was not executed because of Japan's surrender on 15 August 1945. Cold War In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses, but the programme was unilaterally cancelled in 1956. The United States Army Biological Warfare Laboratories weaponized anthrax, tularemia, brucellosis, Q-fever and others. In 1969, US President Richard Nixon decided to unilaterally terminate the offensive biological weapons program of the US, allowing only scientific research for defensive measures. This decision increased the momentum of the negotiations for a ban on biological warfare, which took place from 1969 to 1972 in the United Nations Conference of the Committee on Disarmament in Geneva. These negotiations resulted in the Biological Weapons Convention, which was opened for signature on 10 April 1972 and entered into force on 26 March 1975 after its ratification by 22 states. Despite being a party and depositary to the BWC, the Soviet Union continued and expanded its massive offensive biological weapons program, under the leadership of the allegedly civilian institution Biopreparat. The Soviet Union attracted international suspicion after the 1979 Sverdlovsk anthrax leak killed approximately 65 to 100 people. 1948 Arab–Israeli War According to historians Benny Morris and Benjamin Kedar, Israel conducted a biological warfare operation codenamed Operation Cast Thy Bread during the 1948 Arab–Israeli War. The Haganah initially used typhoid bacteria to contaminate water wells in newly cleared Arab villages to prevent the population, including militiamen, from returning. Later, the biological warfare campaign expanded to include Jewish settlements that were in imminent danger of being captured by Arab troops and inhabited Arab towns not slated for capture. There were also plans to expand the biological warfare campaign into other Arab states, including Egypt, Lebanon and Syria, but they were not carried out. International law International restrictions on biological warfare began with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of biological and chemical weapons in international armed conflicts. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only. The 1972 Biological Weapons Convention (BWC) supplements the Geneva Protocol by prohibiting the development, production, acquisition, transfer, stockpiling and use of biological weapons. Having entered into force on 26 March 1975, the BWC was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. As of March 2021, 183 states have become party to the treaty. The BWC is considered to have established a strong global norm against biological weapons, which is reflected in the treaty's preamble, stating that the use of biological weapons would be "repugnant to the conscience of mankind". The BWC's effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance. In 1985, the Australia Group was established, a multilateral export control regime of 43 countries aiming to prevent the proliferation of chemical and biological weapons.
In 2004, the United Nations Security Council passed Resolution 1540, which obligates all UN Member States to develop and enforce appropriate legal and regulatory measures against the proliferation of chemical, biological, radiological, and nuclear weapons and their means of delivery, in particular to prevent the spread of weapons of mass destruction to non-state actors. Bioterrorism Biological weapons are difficult to detect, economical and easy to use, making them appealing to terrorists. The cost of a biological weapon is estimated to be about 0.05 percent of the cost of a conventional weapon producing similar numbers of mass casualties per square kilometer. Moreover, their production is relatively easy, as common technology, such as that used in the production of vaccines, foods, spray devices, beverages and antibiotics, can be used to produce biological warfare agents. A major factor of biological warfare that attracts terrorists is that attackers can easily escape before government or intelligence agencies have even started their investigation. This is because the potential organism has an incubation period of 3 to 7 days, after which the results begin to appear, thereby giving terrorists a head start. A gene-editing technique based on clustered regularly interspaced short palindromic repeats (CRISPR-Cas9) is now so cheap and widely available that scientists fear amateurs will start experimenting with it. In this technique, a DNA sequence is cut out and replaced with a new sequence, e.g. one that codes for a particular protein, with the intent of modifying an organism's traits. Concerns have emerged regarding do-it-yourself biology research organizations due to the associated risk that a rogue amateur DIY researcher could attempt to develop dangerous bioweapons using genome editing technology. In 2002, when CNN reviewed Al-Qaeda's (AQ's) experiments with crude poisons, it found that AQ had begun planning ricin and cyanide attacks with the help of a loose association of terrorist cells. The associates had infiltrated many countries, including Turkey, Italy, Spain, France and others. In 2015, to combat the threat of bioterrorism, a National Blueprint for Biodefense was issued by the Blue-Ribbon Study Panel on Biodefense. Also, 233 potential exposures to select biological agents outside the primary biocontainment barriers in the US were described by the annual report of the Federal Select Agent Program. Though a verification system can reduce bioterrorism, an employee or a lone terrorist with adequate knowledge of a biotechnology company's facilities can pose a danger by using that company's resources without proper oversight and supervision. Moreover, it has been found that about 95% of accidents attributable to low security have been caused by employees or by others holding a security clearance. Entomology Entomological warfare (EW) is a type of biological warfare that uses insects to attack the enemy. The concept has existed for centuries and research and development have continued into the modern era. EW has been used in battle by Japan, and several other nations have developed, and been accused of using, entomological warfare programs. EW may employ insects in a direct attack or as vectors to deliver a biological agent, such as plague. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas.
The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method uses uninfected insects, such as bees or wasps, to directly attack the enemy. Genetics Theoretically, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design novel types of biological warfare agents. Of particular concern are experiments that: Would demonstrate how to render a vaccine ineffective; Would confer resistance to therapeutically useful antibiotics or antiviral agents; Would enhance the virulence of a pathogen or render a nonpathogen virulent; Would increase the transmissibility of a pathogen; Would alter the host range of a pathogen; Would enable the evasion of diagnostic/detection tools; Would enable the weaponization of a biological agent or toxin. Most of the biosecurity concerns in synthetic biology are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. Recently, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space. By target Anti-personnel Ideal characteristics of a biological agent to be used as a weapon against humans are high infectivity, high virulence, non-availability of vaccines and availability of an effective and efficient delivery system. Stability of the weaponized agent (the ability of the agent to retain its infectivity and virulence after a prolonged period of storage) may also be desirable, particularly for military applications, and the ease of creating one is often considered. Control of the spread of the agent may be another desired characteristic. The primary difficulty is not the production of the biological agent, as many biological agents used in weapons can be manufactured relatively quickly, cheaply and easily. Rather, it is the weaponization, storage, and delivery in an effective vehicle to a vulnerable target that pose significant problems. For example, Bacillus anthracis is considered an effective agent for several reasons. First, it forms hardy spores, perfect for dispersal in aerosols. Second, this organism is not considered transmissible from person to person, and thus rarely if ever causes secondary infections. A pulmonary anthrax infection starts with ordinary influenza-like symptoms and progresses to a lethal hemorrhagic mediastinitis within 3–7 days, with a fatality rate that is 90% or higher in untreated patients. Finally, friendly personnel and civilians can be protected with suitable antibiotics. Agents considered for weaponization, or known to be weaponized, include bacteria such as Bacillus anthracis, Brucella spp., Burkholderia mallei, Burkholderia pseudomallei, Chlamydophila psittaci, Coxiella burnetii, Francisella tularensis, some of the Rickettsiaceae (especially Rickettsia prowazekii and Rickettsia rickettsii), Shigella spp., Vibrio cholerae, and Yersinia pestis.
Many viral agents have been studied and/or weaponized, including some of the Bunyaviridae (especially Rift Valley fever virus), Ebolavirus, many of the Flaviviridae (especially Japanese encephalitis virus), Machupo virus, coronaviruses, Marburg virus, Variola virus, and yellow fever virus. Fungal agents that have been studied include Coccidioides spp. Toxins that can be used as weapons include ricin, staphylococcal enterotoxin B, botulinum toxin, saxitoxin, and many mycotoxins. These toxins and the organisms that produce them are sometimes referred to as select agents. In the United States, their possession, use, and transfer are regulated by the Centers for Disease Control and Prevention's Select Agent Program. The former US biological warfare program categorized its weaponized anti-personnel bio-agents as either Lethal Agents (Bacillus anthracis, Francisella tularensis, botulinum toxin) or Incapacitating Agents (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, staphylococcal enterotoxin B). Anti-agriculture Anti-crop/anti-vegetation/anti-fisheries The United States developed an anti-crop capability during the Cold War that used plant diseases (bioherbicides, or mycoherbicides) for destroying enemy agriculture. Biological weapons also target fisheries as well as water-based vegetation. It was believed that the destruction of enemy agriculture on a strategic scale could thwart Sino-Soviet aggression in a general war. Diseases such as wheat blast and rice blast were weaponized in aerial spray tanks and cluster bombs for delivery to enemy watersheds in agricultural regions to initiate epiphytotics (epidemics among plants). On the other hand, some sources report that these agents were stockpiled but never weaponized. When the United States renounced its offensive biological warfare program in 1969 and 1970, the vast majority of its biological arsenal was composed of these plant diseases. Enterotoxins and mycotoxins were not affected by Nixon's order. Though herbicides are chemicals, they are often grouped with biological warfare and chemical warfare because they may work in a similar manner to biotoxins or bioregulators. The Army Biological Laboratory tested each agent and the Army's Technical Escort Unit was responsible for the transport of all chemical, biological, and radiological (nuclear) materials. Biological warfare can also specifically target plants to destroy crops or defoliate vegetation. The United States and Britain discovered plant growth regulators (i.e., herbicides) during the Second World War, which were then used by the UK in the counterinsurgency operations of the Malayan Emergency. Inspired by the British use in Malaya, the US military effort in the Vietnam War included a mass dispersal of a variety of herbicides, famously Agent Orange, with the aim of destroying farmland and defoliating forests used as cover by the Viet Cong. Sri Lanka deployed military defoliants in its prosecution of the Eelam War against Tamil insurgents. Anti-livestock During World War I, German saboteurs used anthrax and glanders to sicken cavalry horses in the U.S. and France, sheep in Romania, and livestock in Argentina intended for the Entente forces. One of these German saboteurs was Anton Dilger. Also, Germany itself became a victim of similar attacks – horses bound for Germany were infected with Burkholderia by French operatives in Switzerland. During World War II, the U.S. and Canada secretly investigated the use of rinderpest, a highly lethal disease of cattle, as a bioweapon.
In the 1980s, the Soviet Ministry of Agriculture successfully developed variants of foot-and-mouth disease and rinderpest for use against cows, African swine fever for pigs, and psittacosis for chickens. These agents were prepared to be sprayed from tanks attached to airplanes over hundreds of miles. The secret program was code-named "Ecology". During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle. Defensive operations Medical countermeasures In 2010, at the Meeting of the States Parties to the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and Their Destruction in Geneva, sanitary epidemiological reconnaissance was suggested as a well-tested means of enhancing the monitoring of infections and parasitic agents, in support of the practical implementation of the International Health Regulations (2005). The aim was to prevent and minimize the consequences of natural outbreaks of dangerous infectious diseases as well as the threat of alleged use of biological weapons against BTWC States Parties. Many countries require their active-duty military personnel to be vaccinated against certain diseases that could potentially be used as bioweapons, such as anthrax and smallpox, and to receive various other vaccines depending on the Area of Operations of the individual military units and commands. Public health and disease surveillance Most classical and modern biological weapon pathogens can be obtained from a plant or an animal that is naturally infected. In the largest known biological weapons accident, the 1979 anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union, sheep became ill with anthrax as far as 200 kilometers from the release point at a military facility in the southeastern portion of the city, an area still off-limits to visitors today (see Sverdlovsk anthrax leak). Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill. For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with compromised immune systems or who received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). The incubation period for humans is estimated to be about 11.8 days to 12.1 days. This estimate comes from the first model that is independently consistent with data from the largest known human outbreak. These projections refine previous estimates of the distribution of early-onset cases after a release and support a recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses of anthrax. By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease. Common epidemiological warnings From most specific to least specific: A single case of a certain disease caused by an uncommon agent, with a lack of an epidemiological explanation.
An unusual, rare, or genetically engineered strain of an agent. High morbidity and mortality rates among patients with the same or similar symptoms. Unusual presentation of the disease. Unusual geographic or seasonal distribution. A stable endemic disease, but with an unexplained increase in relevance. Rare transmission (aerosols, food, water). No illness in people who were or are not exposed to common ventilation systems (i.e., who have separate, closed ventilation systems), when illness is seen in persons in close proximity who share a common ventilation system. Different and unexplained diseases coexisting in the same patient without any other explanation. Rare illness that affects a large, disparate population (respiratory disease might suggest the pathogen or agent was inhaled). Illness that is unusual for the population or age group in which it occurs. Unusual trends of death and/or illness in animal populations, prior to or accompanying illness in humans. Many affected people seeking treatment at the same time. Similar genetic makeup of agents in affected individuals. Simultaneous clusters of similar illness in non-contiguous areas, domestic or foreign. An abundance of cases of unexplained diseases and deaths. Bioweapon identification The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries, private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets to provide layered defenses against biological weapon attacks. During the first Gulf War, the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians. The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive. The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires. In the Netherlands, the company TNO has designed Bioaerosol Single Particle Recognition eQuipment (BiosparQ). This system would be implemented into the national response plan for bioweapon attacks in the Netherlands. Researchers at Ben Gurion University in Israel are developing a different device called the BioPen, essentially a "Lab-in-a-Pen", which can detect known biological agents in under 20 minutes using an adaptation of the widely employed ELISA immunological technique that in this case incorporates fiber optics. List of programs, projects and sites by country United States Fort Detrick, Maryland U.S. Army Biological Warfare Laboratories (1943–69) Building 470 One-Million-Liter Test Sphere Operation Sea-Spray Operation Whitecoat (1954–73) U.S.
entomological warfare program Operation Big Itch Operation Big Buzz Operation Drop Kick Operation May Day Project Bacchus Project Clear Vision Project SHAD Project 112 Horn Island Testing Station Fort Terry Granite Peak Installation Vigo Ordnance Plant United Kingdom Porton Down Gruinard Island Nancekuke Operation Vegetarian (1942–1944) Open-air field tests: Operation Harness off Antigua, 1948–1950. Operation Cauldron off Stornoway, 1952. Operation Hesperus off Stornoway, 1953. Operation Ozone off Nassau, 1954. Operation Negation off Nassau, 1954–5. Soviet Union and Russia Biopreparat (18 labs and production centers) Stepnogorsk Scientific and Technical Institute for Microbiology, Stepnogorsk, northern Kazakhstan Institute of Ultra Pure Biochemical Preparations, Leningrad, a weaponized plague center Vector State Research Center of Virology and Biotechnology (VECTOR), a weaponized smallpox center Institute of Applied Biochemistry, Omutninsk Kirov bioweapons production facility, Kirov, Kirov Oblast Zagorsk smallpox production facility, Zagorsk Berdsk bioweapons production facility, Berdsk Bioweapons research facility, Obolensk Sverdlovsk bioweapons production facility (Military Compound 19), Sverdlovsk, a weaponized anthrax center Institute of Virus Preparations Poison laboratory of the Soviet secret services Vozrozhdeniya Project Bonfire Project Factor Japan Unit 731 Zhongma Fortress Kaimingjie germ weapon attack Khabarovsk War Crime Trials Epidemic Prevention and Water Purification Department Iraq Al Hakum Salman Pak facility Al Manal facility South Africa Project Coast Delta G Scientific Company Roodeplaat Research Laboratories Protechnik Rhodesia Canada Grosse Isle, Quebec, site (1939–45) of research into anthrax and other agents DRDC Suffield, Suffield, Alberta List of associated people Bioweaponeers: Includes scientists and administrators Shyh-Ching Lo Kanatjan Alibekov, known as Ken Alibek Ira Baldwin Wouter Basson Kurt Blome Eugen von Haagen Anton Dilger Paul Fildes Arthur Galston (unwittingly) Kurt Gutzeit Riley D. Housewright Shiro Ishii Elvin A. Kabat George W. Merck Frank Olson Vladimir Pasechnik William C. Patrick III Sergei Popov Theodor Rosebury Rihab Rashid Taha Prince Tsuneyoshi Takeda Huda Salih Mahdi Ammash Nassir al-Hindawi Erich Traub Auguste Trillat Baron Otto von Rosen Yujiro Wakamatsu Yazid Sufaat Writers and activists: Jack Trudel Daniel Barenblatt Leonard A. Cole Stephen Endicott Arthur Galston Jeanne Guillemin Edward Hagerman Sheldon H. Harris Nicholas D. Kristof Joshua Lederberg Matthew Meselson Toby Ord Richard Preston Ed Regis Mark Wheelis David Willman Aaron Henderson In popular culture
Technology
Weapons of mass destruction
null
4393
https://en.wikipedia.org/wiki/Bioterrorism
Bioterrorism
Bioterrorism is terrorism involving the intentional release or dissemination of biological agents. These agents include bacteria, viruses, insects, fungi, and/or their toxins, and may be in a naturally occurring or a human-modified form, in much the same way as in biological warfare. Further, modern agribusiness is vulnerable to anti-agricultural attacks by terrorists, and such attacks can seriously damage the economy as well as consumer confidence. The latter destructive activity is called agrobioterrorism and is a subtype of agro-terrorism. Definition Bioterrorism agents are typically found in nature, but could be mutated or altered to increase their ability to cause disease, make them resistant to current medicines, or to increase their ability to be spread into the environment. Biological agents can be spread through the air, water, or in food. Biological agents are attractive to terrorists because they are extremely difficult to detect and do not cause illness for several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread from person to person and some, like anthrax, cannot. Bioterrorism may be favored because biological agents are relatively easy and inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic beyond the actual physical damage. Military leaders, however, have learned that, as a military asset, bioterrorism has some important limitations; it is difficult to use a bioweapon in a way that only affects the enemy and not friendly forces. A biological weapon is useful to terrorists mainly as a method of creating mass panic and disruption to a state or a country. However, technologists such as Bill Joy have warned of the potential power which genetic engineering might place in the hands of future bio-terrorists. The use of agents that do not cause harm to humans but disrupt the economy has also been discussed. One such pathogen is the foot-and-mouth disease (FMD) virus, which is capable of causing widespread economic damage and public concern (as witnessed in the 2001 and 2007 FMD outbreaks in the UK), while having almost no capacity to infect humans. History By the time World War I began, attempts to use anthrax were directed at animal populations. This generally proved to be ineffective. Shortly after the start of World War I, Germany launched a biological sabotage campaign in the United States, Russia, Romania, and France. At that time, Anton Dilger lived in Germany, but in 1915 he was sent to the United States carrying cultures of glanders, a virulent disease of horses and mules. Dilger set up a laboratory in his home in Chevy Chase, Maryland. He used stevedores working the docks in Baltimore to infect horses with glanders while they were waiting to be shipped to Britain. Dilger was suspected of being a German agent, but was never arrested. Dilger eventually fled to Madrid, Spain, where he died during the Influenza Pandemic of 1918. In 1916, the Russians arrested a German agent with similar intentions. Germany and its allies infected French cavalry horses and many of Russia's mules and horses on the Eastern Front. These actions hindered artillery and troop movements, as well as supply convoys. In 1972, police in Chicago arrested two college students, Allen Schwander and Stephen Pera, who had planned to poison the city's water supply with typhoid and other bacteria. Schwander had founded a terrorist group, "R.I.S.E.", while Pera collected and grew cultures from the hospital where he worked.
The two men fled to Cuba after being released on bail. Schwander died of natural causes in 1974, while Pera returned to the U.S. in 1975 and was put on probation. In 1979, anthrax spores killed around 66 people after the spores were unintentionally released from a military lab near Sverdlovsk, Russia. This outbreak of inhalational anthrax provided much of what scientists now understand about clinical anthrax. Soviet officials and physicians claimed the epidemic was produced by the consumption of infected game meat, but further investigation proved that the source of infection was inhaled spores. There is continued discussion about the intentionality of the epidemic, and some speculate that the release was deliberate on the part of the Soviet government. In 1980, the World Health Organization (WHO) announced the eradication of smallpox, a highly contagious and incurable disease. Although the disease has been eliminated in the wild, frozen stocks of smallpox virus are still maintained by the governments of the United States and Russia. Disastrous consequences are feared if rogue politicians or terrorists were to get hold of the smallpox strains. Since vaccination programs have been terminated, the world population is more susceptible to smallpox than ever before. In Oregon in 1984, followers of the Bhagwan Shree Rajneesh attempted to control a local election by incapacitating the local population. They infected salad bars in 11 restaurants, produce in grocery stores, doorknobs, and other public domains with Salmonella typhimurium bacteria in the city of The Dalles, Oregon. The attack infected 751 people with severe food poisoning. There were no fatalities. This incident was the first known bioterrorist attack in the United States in the 20th century. It was also the single largest bioterrorism attack on U.S. soil. In June 1993, the religious group Aum Shinrikyo released anthrax in Tokyo. Eyewitnesses reported a foul odor. The attack was a failure; it did not infect a single person, because the group had used the vaccine strain of the bacterium. Spores recovered from the site of the attack were identical to an anthrax vaccine strain given to animals at the time. These vaccine strains are missing the genes that cause a symptomatic response. In September and October 2001, several cases of anthrax broke out in the United States, apparently deliberately caused. Letters laced with infectious anthrax were concurrently delivered to news media offices and the U.S. Congress. The letters killed five people. Scenarios There are several plausible scenarios for how terrorists might employ biological agents. In 2000, tests conducted by various US agencies showed that indoor attacks in densely populated spaces are much more serious than outdoor attacks. Such enclosed spaces include large buildings, trains, indoor arenas, theaters, malls, and tunnels. Countermeasures against such scenarios include building architecture and ventilation-system engineering. In 1993, sewage spilled into a river was subsequently drawn into the water system and affected 400,000 people in Milwaukee, Wisconsin. The disease-causing organism was Cryptosporidium parvum. This man-made disaster can serve as a template for a terrorist scenario. Nevertheless, terrorist scenarios are considered more likely near the points of delivery than at the water sources before treatment. Release of biological agents is more likely to target a single building or a neighborhood.
Countermeasures against this scenario include further limiting access to water supply systems, tunnels, and infrastructure. Agricultural crop-duster flights might be misused as delivery devices for biological agents as well. Countermeasures against this scenario include background checks of crop-dusting company employees and surveillance procedures. In the most common hoax scenario, no biological agents are employed at all; for instance, an envelope containing powder and a note that says, “You've just been exposed to anthrax.” Such hoaxes have been shown to have a large psychological impact on the population. Anti-agriculture attacks are considered to require relatively little expertise and technology. Biological agents that attack livestock, fish, vegetation, and crops are mostly not contagious to humans and are therefore easier for attackers to handle. Even a few cases of infection can disrupt a country's agricultural production and exports for months, as evidenced by FMD outbreaks. Types of agents Under current United States law, bio-agents which have been declared by the U.S. Department of Health and Human Services or the U.S. Department of Agriculture to have the "potential to pose a severe threat to public health and safety" are officially defined as "select agents." The CDC categorizes these agents (A, B or C) and administers the Select Agent Program, which regulates the laboratories which may possess, use, or transfer select agents within the United States. As with US attempts to categorize harmful recreational drugs, designer viruses are not yet categorized, and avian H5N1 has been shown to achieve high mortality and human communicability in a laboratory setting. Category A These high-priority agents pose a risk to national security, can be easily transmitted and disseminated, result in high mortality, have potential major public health impact, may cause public panic, or require special action for public health preparedness. SARS and COVID-19, though not as lethal as other diseases, were concerning to scientists and policymakers for their social and economic disruption potential. After the global containment of the pandemic, United States President George W. Bush stated "...A global influenza pandemic that infects millions and lasts from one to three years could be far worse." Tularemia or "rabbit fever": Tularemia has a very low fatality rate if treated, but can severely incapacitate. The disease is caused by the Francisella tularensis bacterium, and can be contracted through contact with fur, inhalation, ingestion of contaminated water or insect bites. Francisella tularensis is very infectious. A small number of organisms (10–50 or so) can cause disease. If F. tularensis were used as a weapon, the bacteria would likely be made airborne for exposure by inhalation. People who inhale an infectious aerosol would generally experience severe respiratory illness, including life-threatening pneumonia and systemic infection, if they are not treated. The bacteria that cause tularemia occur widely in nature and could be isolated and grown in quantity in a laboratory, although manufacturing an effective aerosol weapon would require considerable sophistication. Anthrax: Anthrax is a non-contagious disease caused by the spore-forming bacterium Bacillus anthracis. The small size of anthrax spores allows them to penetrate broken or porous skin, and exposure can produce abrupt symptoms within 24 hours.
Dispersal of this pathogen in densely populated areas is said to carry a mortality rate ranging from less than one percent for cutaneous exposure to ninety percent or higher for untreated inhalational infections. An anthrax vaccine does exist but requires many injections for stable use. When discovered early, anthrax can be cured by administering antibiotics (such as ciprofloxacin). Its first modern use in biological warfare was in 1916, when Scandinavian "freedom fighters" supplied by the German General Staff used anthrax, with unknown results, against the Imperial Russian Army in Finland. In 1993, Aum Shinrikyo used anthrax in an unsuccessful attack in Tokyo that caused no fatalities. Anthrax was used in a series of attacks by a microbiologist at the US Army Medical Research Institute of Infectious Diseases on the offices of several United States senators in late 2001. The anthrax was in powder form and was delivered by mail. This bioterrorist attack caused seven cases of cutaneous anthrax and eleven cases of inhalational anthrax, five of them fatal. Additionally, the preventive treatment supplied to over 30,000 individuals is estimated to have averted a further 10 to 26 cases. Anthrax is one of the few biological agents that federal employees have been vaccinated for. In the US, an anthrax vaccine, Anthrax Vaccine Adsorbed (AVA), exists and requires five injections for stable use. Other anthrax vaccines also exist. The strain used in the 2001 anthrax attacks was identical to the strain used by the USAMRIID. Smallpox: Smallpox is a highly contagious virus. It is transmitted easily through the atmosphere and has a high mortality rate (20–40%). Smallpox was eradicated in the world in the 1970s, thanks to a worldwide vaccination program. However, some virus samples are still available in Russian and American laboratories. Some believe that after the collapse of the Soviet Union, cultures of smallpox have become available in other countries. Although people born pre-1970 will have been vaccinated for smallpox under the WHO program, the effectiveness of vaccination is limited since the vaccine provides a high level of immunity for only 3 to 5 years. Revaccination's protection lasts longer. As a biological weapon, smallpox is dangerous because of the highly contagious nature of both the infected and their pox. Also, the infrequency with which vaccines are administered among the general population since the eradication of the disease would leave most people unprotected in the event of an outbreak. Smallpox occurs only in humans, and has no external hosts or vectors. Botulinum toxin: The neurotoxin Botulinum is the deadliest toxin known to man, and is produced by the bacterium Clostridium botulinum. Botulism causes death by respiratory failure and paralysis. Furthermore, the toxin is readily available worldwide due to its cosmetic applications in injections. Bubonic plague: Plague is a disease caused by the Yersinia pestis bacterium. Rodents are the normal host of plague, and the disease is transmitted to humans by flea bites and occasionally by aerosol in the form of pneumonic plague. The disease has a history of use in biological warfare dating back many centuries, and is considered a threat due to its ease of culture and ability to remain in circulation among local rodents for a long period of time. The weaponized threat comes mainly in the form of pneumonic plague (infection by inhalation). It was the disease that caused the Black Death in Medieval Europe.
Viral hemorrhagic fevers: This includes hemorrhagic fevers caused by members of the family Filoviridae (Marburg virus and Ebola virus), and by the family Arenaviridae (for example Lassa virus and Machupo virus). Ebola virus disease, in particular, has caused high fatality rates ranging from 25 to 90% with a 50% average. No cure currently exists, although vaccines are in development. The Soviet Union investigated the use of filoviruses for biological warfare, and the Aum Shinrikyo group unsuccessfully attempted to obtain cultures of Ebola virus. Death from Ebola virus disease is commonly due to multiple organ failure and hypovolemic shock. Marburg virus was first discovered in Marburg, Germany. No treatments currently exist aside from supportive care. The arenaviruses have a somewhat reduced case-fatality rate compared to disease caused by filoviruses, but are more widely distributed, chiefly in central Africa and South America. Category B Category B agents are moderately easy to disseminate and have low mortality rates. Brucellosis (Brucella species) Epsilon toxin of Clostridium perfringens Food safety threats (for example, Salmonella species, E coli O157:H7, Shigella, Staphylococcus aureus) Glanders (Burkholderia mallei) Melioidosis (Burkholderia pseudomallei) Psittacosis (Chlamydia psittaci) Q fever (Coxiella burnetii) Ricin toxin from Ricinus communis (castor beans) Abrin toxin from Abrus precatorius (Rosary peas) Staphylococcal enterotoxin B Typhus (Rickettsia prowazekii) Viral encephalitis (alphaviruses, for example,: Venezuelan equine encephalitis, eastern equine encephalitis, western equine encephalitis) Water supply threats (for example, Vibrio cholerae, Cryptosporidium parvum) Category C Category C agents are emerging pathogens that might be engineered for mass dissemination because of their availability, ease of production and dissemination, high mortality rate, or ability to cause a major health impact. Nipah virus Hantavirus Planning and monitoring Planning may involve the development of biological identification systems. Until recently in the United States, most biological defense strategies have been geared to protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to E. coli or Salmonella, could be of either natural or deliberate origin. Global defense strategies have also been put into place including the introduction of the Biological and Toxin Weapons Convention in 1975. A majority of countries across the globe participated in the conventions (144) but a handful chose not to take part in the defense. Many of the countries who opted out of the convention are located in the Middle East and former Soviet Union countries. Preparedness Export controls on biological agents are not applied uniformly, providing terrorists a route for acquisition. Laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and to facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas as well as assessing the area for the hazards associated with a biological attack are being established in major cities. In addition, forensic technologies are working on identifying biological agents, their geographical origins and/or their initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns. 
Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them. Aspects of protection against bioterrorism in the United States include: Detection and resilience strategies in combating bioterrorism. This occurs primarily through the efforts of the Office of Health Affairs (OHA), a part of the Department of Homeland Security (DHS), whose role is to prepare for an emergency situation that impacts the health of the American populace. Detection has two primary technological factors. First there is OHA's BioWatch program in which collection devices are disseminated to thirty high risk areas throughout the country to detect the presence of aerosolized biological agents before symptoms present in patients. This is significant primarily because it allows a more proactive response to a disease outbreak rather than the more passive treatment of the past. Implementation of the Generation-3 automated detection system. This advancement is significant simply because it enables action to be taken in four to six hours due to its automatic response system, whereas the previous system required aerosol detectors to be manually transported to laboratories. Resilience is a multifaceted issue as well, as addressed by OHA. One way in which this is ensured is through exercises that establish preparedness; programs like the Anthrax Response Exercise Series exist to ensure that, regardless of the incident, all emergency personnel will be aware of the role they must fill. Moreover, by providing information and education to public leaders, emergency medical services and all employees of the DHS, OHS suggests it can significantly decrease the impact of bioterrorism. Enhancing the technological capabilities of first responders is accomplished through numerous strategies. The first of these strategies was developed by the Science and Technology Directorate (S&T) of DHS to ensure that the danger of suspicious powders could be effectively assessed, (as many dangerous biological agents such as anthrax exist as a white powder). By testing the accuracy and specificity of commercially available systems used by first responders, the hope is that all biologically harmful powders can be rendered ineffective. Enhanced equipment for first responders. One recent advancement is the commercialization of a new form of Tyvex™ armor which protects first responders and patients from chemical and biological contaminants. There has also been a new generation of Self-Contained Breathing Apparatuses (SCBA) which has been recently made more robust against bioterrorism agents. All of these technologies combine to form what seems like a relatively strong deterrent to bioterrorism. However, New York City as an entity has numerous organizations and strategies that effectively serve to deter and respond to bioterrorism as it comes. From here the logical progression is into the realm of New York City's specific strategies to prevent bioterrorism. Excelsior Challenge. In the second week of September 2016, the state of New York held a large emergency response training exercise called the Excelsior Challenge, with over 100 emergency responders participating. 
According to WKTV, "This is the fourth year of the Excelsior Challenge, a training exercise designed for police and first responders to become familiar with techniques and practices should a real incident occur." The event was held over three days and hosted by the State Preparedness Training Center in Oriskany, New York. Participants included bomb squads, canine handlers, tactical team officers and emergency medical services. In an interview with Homeland Preparedness News, Bob Stallman, assistant director at the New York State Preparedness Training Center, said, "We're constantly seeing what's happening around the world and we tailor our training courses and events for those types of real-world events." For the first time, the 2016 training program implemented New York's new electronic system. The system, called NY Responds, electronically connects every county in New York to aid in disaster response and recovery. As a result, "counties have access to a new technology known as Mutualink, which improves interoperability by integrating telephone, radio, video, and file-sharing into one application to allow local emergency staff to share real-time information with the state and other counties." The State Preparedness Training Center in Oriskany was designed by the State Division of Homeland Security, and Emergency Services (DHSES) in 2006. It cost $42 million to construct on over 1100 acres and is available for training 360 days a year. Students from SUNY Albany's College of Emergency Preparedness, Homeland Security and Cybersecurity, were able to participate in this year's exercise and learn how "DHSES supports law enforcement specialty teams." Project BioShield. The accrual of vaccines and treatments for potential biological threats, also known as medical countermeasures has been an important aspect in preparing for a potential bioterrorist attack; this took the form of a program beginning in 2004, referred to as Project BioShield. The significance of this program should not be overlooked as “there is currently enough smallpox vaccine to inoculate every United States citizen and a variety of therapeutic drugs to treat the infected.” The Department of Defense also has a variety of laboratories currently working to increase the quantity and efficacy of countermeasures that comprise the national stockpile. Efforts have also been taken to ensure that these medical countermeasures can be disseminated effectively in the event of a bioterrorist attack. The National Association of Chain Drug Stores championed this cause by encouraging the participation of the private sector in improving the distribution of such countermeasures if required. On a CNN news broadcast in 2011, the CNN chief medical correspondent, Dr. Sanjay Gupta, weighed in on the American government's recent approach to bioterrorist threats. He explains how, even though the United States would be better fending off bioterrorist attacks now than they would be a decade ago, the amount of money available to fight bioterrorism over the last three years has begun to decrease. Looking at a detailed report that examined the funding decrease for bioterrorism in fifty-one American cities, Dr. Gupta stated that the cities "wouldn't be able to distribute vaccines as well" and "wouldn't be able to track viruses." He also said that film portrayals of global pandemics, such as Contagion, were actually quite possible and may occur in the United States under the right conditions. 
A news broadcast by MSNBC in 2010 also stressed the low levels of bioterrorism preparedness in the United States. The broadcast stated that a bipartisan report gave the Obama administration a failing grade for its efforts to respond to a bioterrorist attack. The news broadcast invited the former New York City police commissioner, Howard Safir, to explain how the government would fare in combating such an attack. He said that "biological and chemical weapons are probable and relatively easy to disperse." Furthermore, Safir thought that efficiency in bioterrorism preparedness is not necessarily a question of money, but is instead dependent on putting resources in the right places. The broadcast suggested that the nation was not ready for something more serious. In a September 2016 interview conducted by Homeland Preparedness News, Daniel Gerstein, a senior policy researcher for the RAND Corporation, stressed the importance of preparing for potential bioterrorist attacks on the nation. He implored the U.S. government to take the proper and necessary actions to implement a strategic plan of action to save as many lives as possible and to safeguard against potential chaos and confusion. He believes that because there have been no significant instances of bioterrorism since the anthrax attacks in 2001, the government has allowed itself to become complacent, making the country that much more vulnerable to unexpected attacks and further endangering the lives of U.S. citizens. Gerstein formerly served in the Science and Technology Directorate of the Department of Homeland Security from 2011 to 2014. He claims there has not been a serious plan of action since 2004, when President George W. Bush issued a Homeland Security directive delegating responsibilities among various federal agencies. He also stated that the blatant mishandling of the Ebola virus outbreak in 2014 attested to the government's lack of preparation. In May 2016, legislation that would create a national defense strategy was introduced in the Senate, coinciding with reports that ISIS-affiliated terrorist groups were getting closer to weaponizing biological agents. That same month, Kenyan officials apprehended two members of an Islamic extremist group who were planning to set off a biological bomb containing anthrax. Mohammed Abdi Ali, the group's suspected leader and a medical intern, was arrested along with his wife, a medical student. The two were caught just before carrying out their plan. The Blue Ribbon Study Panel on Biodefense, a group of national security experts and government officials to which Gerstein had previously testified, submitted its National Blueprint for Biodefense to Congress in October 2015, listing its recommendations for devising an effective plan. Bill Gates said in a February 18, 2017 Business Insider op-ed (published near the time of his Munich Security Conference speech) that it is possible for an airborne pathogen to kill at least 30 million people over the course of a year. In a New York Times report, the Gates Foundation predicted that a modern outbreak similar to the Spanish Influenza pandemic (which killed between 50 million and 100 million people) could end up killing more than 360 million people worldwide, even considering widespread availability of vaccines and other healthcare tools. The report cited increased globalization, rapid international air travel, and urbanization as additional reasons for concern. In a March 9, 2017, interview with CNBC, former U.S.
Senator Joe Lieberman, who was co-chair of the bipartisan Blue Ribbon Study Panel on Biodefense, said a worldwide pandemic could end the lives of more people than a nuclear war. Lieberman also expressed worry that a terrorist group like ISIS could develop a synthetic influenza strain and introduce it to the world to kill civilians. In July 2017, Robert C. Hutchinson, former agent at the Department of Homeland Security, called for a "whole-of-government" response to the next global health threat, which he described as including strict security procedures at our borders and proper execution of government preparedness plans. Also, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design new types of biological warfare agents. Special attention has to be laid on future experiments (of concern) that: Would demonstrate how to render a vaccine ineffective; Would confer resistance to therapeutically useful antibiotics or antiviral agents; Would enhance the virulence of a pathogen or render a nonpathogen virulent; Would increase transmissibility of a pathogen; Would alter the host range of a pathogen; Would enable the evasion of diagnostic/detection tools; Would enable the weaponization of a biological agent or toxin Most of the biosecurity concerns in synthetic biology, however, are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. The CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. However, due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space. Biosurveillance In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide. On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Cold War ballistic missile early warning system). The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the Handbook of Biosurveillance, edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. 
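To make the "signal detection" idea concrete, the following minimal Python sketch flags days on which a syndromic count (for example, over-the-counter cough-medicine sales or emergency department visits for respiratory complaints) rises far above a smoothed baseline. The data, the exponentially weighted moving average update, and the threshold are illustrative assumptions only; they are not the actual algorithms used by RODS or any other deployed biosurveillance system.

def detect_anomalies(daily_counts, alpha=0.3, threshold=3.0):
    # Flag days whose count exceeds the running baseline by more than
    # `threshold` estimated standard deviations.
    baseline = float(daily_counts[0])   # EWMA estimate of the normal level
    variance = 1.0                      # EWMA estimate of squared deviations
    alarms = []
    for day, count in enumerate(daily_counts[1:], start=1):
        deviation = count - baseline
        if deviation > threshold * variance ** 0.5:
            alarms.append(day)          # possible outbreak signal; freeze the baseline
        else:
            # quiet day: fold it into the baseline and spread estimates
            baseline = alpha * count + (1 - alpha) * baseline
            variance = alpha * deviation ** 2 + (1 - alpha) * variance
    return alarms

# Hypothetical daily counts; the jump starting on day 8 triggers alarms.
sales = [40, 42, 38, 41, 44, 39, 43, 40, 95, 110]
print(detect_anomalies(sales))  # [8, 9]

Real systems run many such detectors across data streams and locations and must balance sensitivity (catching an outbreak early) against the false-alarm rate.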
Its principles apply to both natural and man-made epidemics (bioterrorism). Data which potentially could assist in early detection of a bioterrorism event include many categories of information. Health-related data such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance recording, and physiologic monitors, among others. In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak. Researchers have experimented with devices to detect the existence of a threat: Tiny electronic chips that would contain living nerve cells to warn of the presence of bacterial toxins (identification of broad range toxins) Fiber-optic tubes lined with antibodies coupled to light-emitting molecules (identification of specific pathogens, such as anthrax, botulinum, ricin) Some research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008, issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters. The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System. Another powerful tool developed within New York City for use in countering bioterrorism is the development of the New York City Syndromic Surveillance System. This system is essentially a way of tracking disease progression throughout New York City, and was developed by the New York City Department of Health and Mental Hygiene (NYC DOHMH) in the wake of the 9/11 attacks. The system works by tracking the symptoms of those taken into the emergency department—based on the location of the hospital to which they are taken and their home address—and assessing any patterns in symptoms. These established trends can then be observed by medical epidemiologists to determine if there are any disease outbreaks in any particular locales; maps of disease prevalence can then be created rather easily. This is an obviously beneficial tool in fighting bioterrorism as it provides a means through which such attacks could be discovered in their nascence; assuming bioterrorist attacks result in similar symptoms across the board, this strategy allows New York City to respond immediately to any bioterrorist threats that they may face with some level of alacrity. Response to bioterrorism incident or threat Government agencies which would be called on to respond to a bioterrorism incident would include law enforcement, hazardous materials and decontamination units, and emergency medical units, if available. The US military has specialized units, which can respond to a bioterrorism event; among them are the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. 
Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents. US response would include the Centers for Disease Control. Historically, governments and authorities have relied on quarantines to protect their populations. International bodies such as the World Health Organization already devote some of their resources to monitoring epidemics and have served clearing-house roles in historical epidemics. Media attention toward the seriousness of biological attacks increased in 2013 to 2014. In July 2013, Forbes published an article with the title "Bioterrorism: A Dirty Little Threat With Huge Potential Consequences." In November 2013, Fox News reported on a new strain of botulism, saying that the Centers for Disease and Control lists botulism as one of two agents that have "the highest risks of mortality and morbidity", noting that there is no antidote for botulism. USA Today reported that the U.S. military in November was trying to develop a vaccine for troops from the bacteria that cause the disease Q fever, an agent the military once used as a biological weapon. In February 2014, the former special assistant and senior director for biodefense policy to President George W. Bush called the bioterrorism risk imminent and uncertain and Congressman Bill Pascrell called for increasing federal measures against bioterrorism as a "matter of life or death." The New York Times wrote a story saying the United States would spend $40 million to help certain low and middle-income countries deal with the threats of bioterrorism and infectious diseases. Bioterrorism can additionally harm the psychological aspect of victims and the general public. Victims exposed to biological weapons have shown an increased presence of clinical anxiety compared to the normal population. Bill Gates has warned that bioterrorism could kill more people than nuclear war. In February 2018, a CNN employee discovered on an airplane a "sensitive, top-secret document in the seatback pouch explaining how the Department of Homeland Security would respond to a bioterrorism attack at the Super Bowl." 2017 U.S. budget proposal affecting bioterrorism programs President Donald Trump promoted his first budget around keeping America safe. However, one aspect of defense would receive less money: "protecting the nation from deadly pathogens, man-made or natural," according to The New York Times. Agencies tasked with biosecurity get a decrease in funding under the Administration's budget proposal. For example: The Office of Public Health Preparedness and Response would be cut by $136 million, or 9.7 percent. The office tracks outbreaks of disease. The National Center for Emerging and Zoonotic Infectious Diseases would be cut by $65 million, or 11 percent. The center is a branch of the Centers for Disease Control and Prevention that fights threats like anthrax and the Ebola virus, and additionally towards research on HIV/AIDS vaccines. Within the National Institutes of Health, the National Institute of Allergy and Infectious Diseases (NIAID) would lose 18 percent of its budget. NIAID oversees responses to Zika, Ebola and HIV/AIDS vaccine research. "The next weapon of mass destruction may not be a bomb," Lawrence O. Gostin, the director of the World Health Organization's Collaborating Center on Public Health Law and Human Rights, told The New York Times. "It may be a tiny pathogen that you can't see, smell or taste, and by the time we discover it, it'll be too late." 
Lack of international standards on public health experiments Tom Inglesby, the CEO and director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health and an internationally recognized expert on public health preparedness, pandemics and emerging infectious disease, said in 2017 that the lack of an internationally standardized approval process to guide countries in conducting public health experiments that resurrect an already-eradicated disease increases the risk that the disease could be used for bioterrorism. This was in reference to the lab synthesis of horsepox in 2017 by researchers at the University of Alberta. The researchers recreated horsepox, an extinct cousin of the smallpox virus, in order to research new ways to treat cancer. In popular culture Incidents
Technology
Biotechnology
null
4396
https://en.wikipedia.org/wiki/Northrop%20B-2%20Spirit
Northrop B-2 Spirit
The Northrop B-2 Spirit, also known as the Stealth Bomber, is an American heavy strategic bomber, featuring low-observable stealth technology designed to penetrate dense anti-aircraft defenses. A subsonic flying wing with a crew of two, the plane was designed by Northrop (later Northrop Grumman) as the prime contractor, with Boeing, Hughes, and Vought as principal subcontractors, and was produced from 1987 to 2000. The bomber can drop conventional and thermonuclear weapons, such as up to eighty Mk 82 JDAM GPS-guided bombs, or sixteen B83 nuclear bombs. The B-2 is the only acknowledged in-service aircraft that can carry large air-to-surface standoff weapons in a stealth configuration. Development began under the Advanced Technology Bomber (ATB) project during the Carter administration, which cancelled the Mach 2-capable B-1A bomber in part because the ATB showed such promise. But development difficulties delayed progress and drove up costs. Ultimately, the program produced 21 B-2s at an average cost of $2.13 billion (~$ billion in ), including development, engineering, testing, production, and procurement. Building each aircraft cost an average of US$737 million, while total procurement costs (including production, spare parts, equipment, retrofitting, and software support) averaged $929 million (~$ in ) per plane. The project's considerable capital and operating costs made it controversial in the U.S. Congress even before the winding down of the Cold War dramatically reduced the desire for a stealth aircraft designed to strike deep in Soviet territory. Consequently, in the late 1980s and 1990s lawmakers shrank the planned purchase of 132 bombers to 21. The B-2 can perform attack missions at altitudes of up to ; it has an unrefueled range of more than and can fly more than with one midair refueling. It entered service in 1997 as the second aircraft designed with advanced stealth technology, after the Lockheed F-117 Nighthawk attack aircraft. Primarily designed as a nuclear bomber, the B-2 was first used in combat to drop conventional, non-nuclear ordnance in the Kosovo War in 1999. It was later used in Iraq, Afghanistan, Libya and Yemen. The United States Air Force has nineteen B-2s in service as of 2024; one was destroyed in a 2008 crash and another one damaged in a crash in 2022 was retired from service likely on account of the cost and duration of a potential repair. The Air Force plans to operate the B-2s until 2032, when the Northrop Grumman B-21 Raider is to replace them. Development Origins By the mid-1970s, military aircraft designers had learned of a new method to avoid missiles and interceptors, known today as "stealth". The concept was to build an aircraft with an airframe that deflected or absorbed radar signals so that little was reflected back to the radar unit. An aircraft having radar stealth characteristics would be able to fly nearly undetected and could be attacked only by weapons and systems not relying on radar. Although other detection measures existed, such as human observation, infrared scanners, and acoustic locators, their relatively short detection range or poorly developed technology allowed most aircraft to fly undetected, or at least untracked, especially at night. In 1974, DARPA requested information from U.S. aviation firms about the largest radar cross-section of an aircraft that would remain effectively invisible to radars. Initially, Northrop and McDonnell Douglas were selected for further development. 
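As background for why designers cared so much about radar cross-section, a standard textbook relationship (not a figure from the B-2 program itself) is the monostatic radar range equation, in which the maximum detection range grows only with the fourth root of the target's radar cross-section \(\sigma\):

\[ R_{\max} = \left( \frac{P_t \, G^2 \, \lambda^2 \, \sigma}{(4\pi)^3 \, S_{\min}} \right)^{1/4} \]

Here \(P_t\) is the transmitted power, \(G\) the antenna gain, \(\lambda\) the radar wavelength, and \(S_{\min}\) the smallest echo the receiver can detect. Because of the fourth-root dependence, reducing a target's radar cross-section by a factor of 10,000 shortens a given radar's detection range against it by only a factor of 10, all else being equal, which is why very large reductions in \(\sigma\) were needed before an aircraft could fly effectively undetected.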
Lockheed had experience in this field with the development of the Lockheed A-12 and SR-71, which included several stealthy features, notably its canted vertical stabilizers, the use of composite materials in key locations, and the overall surface finish in radar-absorbing paint. A key improvement was the introduction of computer models used to predict the radar reflections from flat surfaces where collected data drove the design of a "faceted" aircraft. Development of the first such designs started in 1975 with the Have Blue, a model Lockheed built to test the concept. Plans were well advanced by the summer of 1975, when DARPA started the Experimental Survivability Testbed project. Northrop and Lockheed were awarded contracts in the first round of testing. Lockheed received the sole award for the second test round in April 1976 leading to the Have Blue program and eventually the F-117 stealth attack aircraft. Northrop also had a classified technology demonstration aircraft, the Tacit Blue in development in 1979 at Area 51. It developed stealth technology, LO (low observables), fly-by-wire, curved surfaces, composite materials, electronic intelligence, and Battlefield Surveillance Aircraft Experimental. The stealth technology developed from the program was later incorporated into other operational aircraft designs, including the B-2 stealth bomber. ATB program By 1976, these programs had progressed to a position in which a long-range strategic stealth bomber appeared viable. President Jimmy Carter became aware of these developments during 1977, and it appears to have been one of the major reasons the B-1 was canceled. Further studies were ordered in early 1978, by which point the Have Blue platform had flown and proven the concepts. During the 1980 presidential election campaign in 1979, Ronald Reagan repeatedly stated that Carter was weak on defense and used the B-1 as a prime example. In response, on 22 August 1980 the Carter administration publicly disclosed that the United States Department of Defense was working to develop stealth aircraft, including a bomber. The Advanced Technology Bomber (ATB) program began in 1979. Full development of the black project followed, funded under the code name "Aurora". After the evaluations of the companies' proposals, the ATB competition was narrowed to the Northrop/Boeing and Lockheed/Rockwell teams with each receiving a study contract for further work. Both teams used flying wing designs. The Northrop proposal was code named "Senior Ice", and the Lockheed proposal code named "Senior Peg". Northrop had prior experience developing the YB-35 and YB-49 flying wing aircraft. The Northrop design was larger and had curved surfaces while the Lockheed design was faceted and included a small tail. In 1979, designer Hal Markarian produced a sketch of the aircraft that bore considerable similarities to the final design. The USAF originally planned to procure 165 ATB bombers. The Northrop team's ATB design was selected over the Lockheed/Rockwell design on 20 October 1981. The Northrop design received the designation B-2 and the name "Spirit". The bomber's design was changed in the mid-1980s when the mission profile was changed from high-altitude to low-altitude, terrain-following. The redesign delayed the B-2's first flight by two years and added about US$1 billion to the program's cost. An estimated US$23 billion was secretly spent for research and development on the B-2 by 1989. 
MIT engineers and scientists helped assess the mission effectiveness of the aircraft under a five-year classified contract during the 1980s. ATB technology was also fed into the Advanced Tactical Fighter program, which would result in the Lockheed YF-22 and Northrop YF-23, and later the Lockheed Martin F-22. Northrop was the B-2's prime contractor; major subcontractors included Boeing, Hughes Aircraft (now Raytheon), GE, and Vought Aircraft. Secrecy and espionage During its design and development, the Northrop B-2 program was a black project; all program personnel needed a secret clearance. Still, it was less closely held than the Lockheed F-117 program; more people in the federal government knew about the B-2, and more information about the project was available. Both during development and in service, considerable effort has been devoted to maintaining the security of the B-2's design and technologies. Staff working on the B-2 in most, if not all, capacities need a level of special-access clearance and undergo extensive background checks carried out by a special branch of the USAF. A former Ford automobile assembly plant in Pico Rivera, California, was acquired and heavily rebuilt; the plant's employees were sworn to secrecy. To avoid suspicion, components were typically purchased through front companies, military officials would visit out of uniform, and staff members were routinely subjected to polygraph examinations. Nearly all information on the program was kept from the Government Accountability Office (GAO) and members of Congress until the mid-1980s. The B-2 was first publicly displayed on 22 November 1988 at United States Air Force Plant 42 in Palmdale, California, where it was assembled. This viewing was heavily restricted, and guests were not allowed to see the rear of the B-2. However, Aviation Week editors found that there were no airspace restrictions above the presentation area and took aerial photographs of the aircraft's secret rear section with suppressed engine exhausts. The B-2's (s/n / AV-1) first public flight was on 17 July 1989 from Palmdale to Edwards Air Force Base. In 1984, Northrop employee Thomas Patrick Cavanagh was arrested for attempting to sell classified information from the Pico Rivera factory to the Soviet Union. Cavanagh was sentenced to life in prison in 1985 but released on parole in 2001. In October 2005, Noshir Gowadia, a design engineer who worked on the B-2's propulsion system, was arrested for selling classified information to China. Gowadia was convicted and sentenced to 32 years in prison. Program costs and procurement A procurement of 132 aircraft was planned in the mid-1980s but was later reduced to 75. By the early 1990s the Soviet Union dissolved, effectively eliminating the Spirit's primary Cold War mission. Under budgetary pressures and Congressional opposition, in his 1992 State of the Union address, President George H. W. Bush announced B-2 production would be limited to 20 aircraft. In 1996, however, the Clinton administration, though originally committed to ending production of the bombers at 20 aircraft, authorized the conversion of a 21st bomber, a prototype test model, to Block 30 fully operational status at a cost of nearly $500 million (~$ in ). In 1995, Northrop made a proposal to the USAF to build 20 additional aircraft with a flyaway cost of $566 million each. The program was the subject of public controversy for its cost to American taxpayers. 
In 1996, the GAO disclosed that the USAF's B-2 bombers "will be, by far, the costliest bombers to operate on a per aircraft basis", costing over three times as much as the B-1B (US$9.6 million annually) and over four times as much as the B-52H (US$6.8 million annually). In September 1997, each hour of B-2 flight necessitated 119 hours of maintenance. Comparable maintenance needs for the B-52 and the B-1B are 53 and 60 hours, respectively, for each hour of flight. A key reason for this cost is the provision of air-conditioned hangars large enough for the bomber's wingspan, which are needed to maintain the aircraft's stealth properties, particularly its "low-observable" stealth skins. Maintenance costs are about $3.4 million per month for each aircraft. An August 1995 GAO report disclosed that the B-2 had trouble operating in heavy rain, as rain could damage the aircraft's stealth coating, causing procurement delays until an adequate protective coating could be found. In addition, the B-2's terrain-following/terrain-avoidance radar had difficulty distinguishing rain from other obstacles, rendering the subsystem inoperable during rain. However a subsequent report in October 1996 noted that the USAF had made some progress in resolving the issues with the radar via software fixes and hoped to have these fixes undergoing tests by the spring of 1997. The total "military construction" cost related to the program was projected to be US$553.6 million in 1997 dollars. The cost to procure each B-2 was US$737 million in 1997 dollars (equivalent to US$ billion in 2021), based only on a fleet cost of US$15.48 billion. The procurement cost per aircraft, as detailed in GAO reports, which include spare parts and software support, was $929 million per aircraft in 1997 dollars. The total program cost projected through 2004 was US$44.75 billion in 1997 dollars (equivalent to US$ billion in 2021). This includes development, procurement, facilities, construction, and spare parts. The total program cost averaged US$2.13 billion per aircraft. The B-2 may cost up to $135,000 per flight hour to operate in 2010, which is about twice that of the B-52 and B-1. Opposition In its consideration of the fiscal year 1990 defense budget, the House Armed Services Committee trimmed $800 million from the B-2 research and development budget, while at the same time staving off a motion to end the project. Opposition in committee and in Congress was mostly broad and bipartisan, with Congressmen Ron Dellums (D-CA), John Kasich (R-OH), and John G. Rowland (R-CT) authorizing the motion to end the project—as well as others in the Senate, including Jim Exon (D-NE) and John McCain (R-AZ) also opposing the project. Dellums and Kasich, in particular, worked together from 1989 through the early 1990s to limit production to 21 aircraft and were ultimately successful. The escalating cost of the B-2 program and evidence of flaws in the aircraft's ability to elude detection by radar were among factors that drove opposition to continue the program. At the peak production period specified in 1989, the schedule called for spending US$7 billion to $8 billion per year in 1989 dollars, something Committee Chair Les Aspin (D-WI) said "won't fly financially". In 1990, the Department of Defense accused Northrop of using faulty components in the flight control system; it was also found that redesign work was required to reduce the risk of damage to engine fan blades by bird ingestion. 
In time, several prominent members of Congress began to oppose the program's expansion, including Senator John Kerry (D-MA), who cast votes against the B-2 in 1989, 1991, and 1992. By 1992, Bush had called for the cancellation of the B-2 and promised to cut military spending by 30% in the wake of the collapse of the Soviet Union. In October 1995, former Chief of Staff of the United States Air Force, General Mike Ryan, and former chairman of the Joint Chiefs of Staff, General John Shalikashvili, strongly recommended against Congressional action to fund the purchase of any additional B-2s, arguing that to do so would require unacceptable cuts in existing conventional and nuclear-capable aircraft, and that the military had greater priorities in spending a limited budget. Some B-2 advocates argued that procuring twenty additional aircraft would save money because B-2s would be able to deeply penetrate anti-aircraft defenses and use low-cost, short-range attack weapons rather than expensive standoff weapons. However, in 1995, the Congressional Budget Office (CBO) and its Director of National Security Analysis found that additional B-2s would reduce the cost of expended munitions by less than US$2 billion in 1995 dollars during the first two weeks of a conflict, in which the USAF predicted bombers would make their greatest contribution; this was a small fraction of the US$26.8 billion (in 1995 dollars) life cycle cost that the CBO projected for an additional 20 B-2s. In 1997, as Ranking Member of the House Armed Services Committee and National Security Committee, Congressman Ron Dellums (D-CA), a long-time opponent of the bomber, cited five independent studies and offered an amendment to that year's defense authorization bill to cap production of the bombers to the existing 21 aircraft; the amendment was narrowly defeated. Nonetheless, Congress did not approve funding for additional B-2s. Further developments Several upgrade packages have been applied to the B-2. In July 2008, the B-2's onboard computing architecture was extensively redesigned; it now incorporates a new integrated processing unit that communicates with systems throughout the aircraft via a newly installed fiber optic network; a new version of the operational flight program software was also developed, with legacy code converted from the JOVIAL programming language to standard C. Updates were also made to the weapon control systems to enable strikes upon moving targets, such as ground vehicles. On 29 December 2008, USAF officials awarded a US$468 million contract to Northrop Grumman to modernize the B-2 fleet's radars. Changing the radar's frequency was required as the United States Department of Commerce had sold that radio spectrum to another operator. In July 2009, it was reported that the B-2 had successfully passed a major USAF audit. In 2010, it was made public that the Air Force Research Laboratory had developed a new material to be used on the part of the wing trailing edge subject to engine exhaust, replacing existing material that quickly degraded. In July 2010, political analyst Rebecca Grant speculated that when the B-2 becomes unable to reliably penetrate enemy defenses, the Lockheed Martin F-35 Lightning II may take on its strike/interdiction mission, carrying B61 nuclear bombs as a tactical bomber. However, in March 2012, The Pentagon announced that a $2 billion, 10-year-long modernization of the B-2 fleet was to begin. The main area of improvement would be replacement of outdated avionics and equipment. 
Modernization efforts have likely continued in secret, as alluded to by a B-2 commander from Whiteman Air Force Base in April 2021, possibly indicating an offensive weapons capability against threatening air defenses and aircraft. It was reported in 2011 that the Pentagon was evaluating an unmanned stealth bomber, characterized as a "mini-B-2", as a potential replacement in the near future. In 2012, USAF Chief of Staff General Norton Schwartz stated that the B-2's 1980s-era stealth technologies would make it less survivable in future contested airspaces, so the USAF was to proceed with the Next-Generation Bomber despite overall budget cuts. In 2012 projections, the Next-Generation Bomber was estimated to have an overall cost of $55 billion. In 2013, the USAF contracted for the Defensive Management System Modernization (DMS-M) program to replace the antenna system and other electronics and increase the B-2's frequency awareness. The Common Very Low Frequency Receiver upgrade allows the B-2s to use the same very low frequency transmissions as the Ohio-class submarines, so as to continue in the nuclear mission until the Mobile User Objective System is fielded. In 2014, the USAF outlined a series of upgrades including nuclear warfighting, a new integrated processing unit, the ability to carry cruise missiles, and threat warning improvements. Due to ongoing software challenges, DMS-M was canceled by 2020, and the existing work was repurposed for cockpit upgrades. In 1998, a Congressional panel had advised the USAF to refocus resources away from continued B-2 production and instead begin development of a new bomber, either a new design or a variant of the B-2. In its 1999 bomber roadmap, the USAF eschewed the panel's recommendations, believing its current bomber fleet could be maintained until the 2030s; the service believed that development could begin in 2013, in time to replace aging B-2s, B-1s, and B-52s around 2037. Although the USAF previously planned to operate the B-2 until 2058, the FY 2019 budget moved up its retirement to "no later than 2032". It also moved the retirement of the B-1 to 2036 while extending the B-52's service life into the 2050s, because the B-52 has lower maintenance costs, a versatile conventional payload, and the ability to carry nuclear cruise missiles (which the B-1 is treaty-prohibited from doing). The decision to retire the B-2 early was made because the small fleet of 20 is considered too expensive per plane to retain, with its position as a stealth bomber being taken over by the B-21 Raider, whose introduction is to begin in the mid-2020s.

Design

Overview

The B-2 Spirit was developed to take over the USAF's vital penetration missions, allowing it to travel deep into enemy territory to deploy ordnance, which could include nuclear weapons. The B-2 is a flying wing aircraft, meaning that it has no fuselage or tail. It has significant advantages over previous bombers due to its blend of low-observable technologies with high aerodynamic efficiency and a large payload. Low observability provides greater freedom of action at high altitudes, thus increasing both range and field of view for onboard sensors. The USAF reports an intercontinental range. At cruising altitude, the B-2 refuels every six hours.
The development and construction of the B-2 required pioneering use of computer-aided design and manufacturing technologies due to its complex flight characteristics and design requirements to maintain very low visibility to multiple means of detection. The B-2 bears a resemblance to earlier Northrop aircraft; the YB-35 and YB-49 were both flying wing bombers that had been canceled in development in the early 1950s, allegedly for political reasons. The resemblance goes as far as B-2 and YB-49 having the same wingspan. The YB-49 also had a small radar cross-section. Approximately 80 pilots fly the B-2. Each aircraft has a crew of two, a pilot in the left seat and mission commander in the right, and has provisions for a third crew member if needed. For comparison, the B-1B has a crew of four and the B-52 has a crew of five. The B-2 is highly automated, and one crew member can sleep in a camp bed, use a toilet, or prepare a hot meal while the other monitors the aircraft, unlike most two-seat aircraft. Extensive sleep cycle and fatigue research was conducted to improve crew performance on long sorties. Advanced training is conducted at the USAF Weapons School. Armaments and equipment In the envisaged Cold War scenario, the B-2 was to perform deep-penetrating nuclear strike missions, making use of its stealthy capabilities to avoid detection and interception throughout the missions. There are two internal bomb bays in which munitions are stored either on a rotary launcher or two bomb-racks; the carriage of the weapons loadouts internally results in less radar visibility than external mounting of munitions. The B-2 is capable of carrying of ordnance. Nuclear ordnance includes the B61 and B83 nuclear bombs; the AGM-129 ACM cruise missile was also intended for use on the B-2 platform. In light of the dissolution of the Soviet Union, it was decided to equip the B-2 for conventional precision attacks as well as for the strategic role of nuclear-strike. The B-2 features a sophisticated GPS-Aided Targeting System (GATS) that uses the aircraft's APQ-181 synthetic aperture radar to map out targets prior to the deployment of GPS-aided bombs (GAMs), later superseded by the Joint Direct Attack Munition (JDAM). In the B-2's original configuration, up to 16 GAMs or JDAMs could be deployed; An upgrade program in 2004 raised the maximum carrier capacity to 80 JDAMs. The B-2 has various conventional weapons in its arsenal, including Mark 82 and Mark 84 bombs, CBU-87 Combined Effects Munitions, GATOR mines, and the CBU-97 Sensor Fuzed Weapon. In July 2009, Northrop Grumman reported the B-2 was compatible with the equipment necessary to deploy the Massive Ordnance Penetrator (MOP), which is intended to attack reinforced bunkers; up to two MOPs could be equipped in the B-2's bomb bays with one per bay, the B-2 is the only platform compatible with the MOP as of 2012. As of 2011, the AGM-158 JASSM cruise missile is an upcoming standoff munition to be deployed on the B-2 and other platforms. This is to be followed by the Long Range Standoff Weapon, which may give the B-2 standoff nuclear capability for the first time. Avionics and systems To make the B-2 more effective than previous bombers, many advanced and modern avionics systems were integrated into its design; these have been modified and improved following a switch to conventional warfare missions. 
Key systems include the low probability of intercept AN/APQ-181 multi-mode radar; a fully digital navigation system integrated with terrain-following radar, Global Positioning System (GPS) guidance, and the NAS-26 astro-inertial navigation system (the first such system tested on the Northrop SM-62 Snark cruise missile); and a Defensive Management System (DMS) to inform the flight crew of possible threats. The onboard DMS is capable of automatically assessing the detection capabilities of identified threats and indicated targets. The DMS was to be upgraded by 2021 to detect radar emissions from air defenses, allowing the auto-router's mission-planning information to be changed in flight so the aircraft can quickly plan a route that minimizes exposure to threats. For safety and fault-detection purposes, an on-board test system is linked with the majority of avionics on the B-2 to continuously monitor the performance and status of thousands of components and consumables; it also provides post-mission servicing instructions for ground crews. In 2008, many of the 136 standalone distributed computers on board the B-2, including the primary flight management computer, were being replaced by a single integrated system. The avionics are controlled by 13 EMP-resistant MIL-STD-1750A computers, which are interconnected through 26 MIL-STD-1553B buses; other system elements are connected via optical fiber. In addition to periodic software upgrades and the introduction of new radar-absorbent materials across the fleet, the B-2 has had several major upgrades to its avionics and combat systems. For battlefield communications, both Link-16 and a high-frequency satellite link have been installed, compatibility with various new munitions has been added, and the AN/APQ-181 radar's operational frequency was shifted to avoid interference with other operators' equipment. The radar's antenna arrays were entirely replaced, converting the AN/APQ-181 into an active electronically scanned array (AESA) radar. Due to the B-2's composite structure, the aircraft is required to stay away from thunderstorms to avoid static discharge and lightning strikes.

Flight controls

To address the inherent flight instability of a flying wing aircraft, the B-2 uses a complex quadruplex computer-controlled fly-by-wire flight control system that can automatically manipulate flight surfaces and settings without direct pilot inputs in order to maintain aircraft stability. The flight computer receives information on external conditions such as the aircraft's current airspeed and angle of attack via pitot-static sensing plates rather than traditional pitot tubes, which would impair the aircraft's stealth capabilities. The flight actuation system incorporates both hydraulic and electrically servoactuated components, and it was designed with a high level of redundancy and fault-diagnostic capability. Northrop investigated several means of applying directional control that would intrude on the aircraft's radar profile as little as possible, eventually settling on a combination of split brake-rudders and differential thrust. Engine thrust became a key element of the B-2's aerodynamic design process early on; thrust affects not only drag and lift but pitching and rolling motions as well. Four pairs of control surfaces are located along the wing's trailing edge; while most surfaces are used throughout the aircraft's flight envelope, the inner elevons are normally used only at slow speeds, such as during landing. To avoid potential contact damage during takeoff and to provide a nose-down pitching attitude, all of the elevons remain drooped during takeoff until a high enough airspeed has been attained.
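The quadruplex arrangement described above is a standard fault-tolerance pattern: several independent channels measure or compute the same quantity, a consensus value is selected, and channels that disagree are flagged. The sketch below is a minimal illustration of that general idea only; it assumes nothing about the B-2's actual flight software, and the function name, threshold, and sample values are invented for the example.

```python
# Illustrative sketch of quadruplex signal selection and miscompare monitoring.
# This is a generic fault-tolerance pattern, not the B-2's flight software.

def select_and_monitor(channels, miscompare_limit=2.0):
    """Pick a consensus value from four redundant channels and flag any
    channel that disagrees with that value by more than the limit."""
    if len(channels) != 4:
        raise ValueError("quadruplex voter expects exactly four channels")
    ordered = sorted(channels)
    # Mid-value selection: average the two middle readings, so a single
    # hard-over failure cannot drag the consensus value with it.
    consensus = (ordered[1] + ordered[2]) / 2.0
    suspect = [i for i, value in enumerate(channels)
               if abs(value - consensus) > miscompare_limit]
    return consensus, suspect

# Example: channel 3 has failed high; the voter ignores it and reports it.
reading, failed = select_and_monitor([101.2, 100.8, 101.0, 250.0])
print(reading, failed)  # ~101.1 [3]
```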
Stealth

The B-2's low-observable, or "stealth", characteristics enable it to penetrate sophisticated anti-aircraft defenses undetected and to attack even heavily defended targets. This stealth comes from a combination of reduced acoustic, infrared, visual, and radar signatures (multi-spectral camouflage), making it difficult for the various detection systems that could be used against an aircraft to detect, track, and direct attacks. The B-2's stealth reduces the number of supporting aircraft required to provide air cover, Suppression of Enemy Air Defenses, and electronic countermeasures, making the bomber a "force multiplier". There have been no known instances of a missile being launched at a B-2. To reduce optical visibility during daylight flights, the B-2 is painted in an anti-reflective paint. The undersides are dark because the aircraft flies at high altitudes, where a dark gray paint scheme blends well into the sky. It is speculated to have an upward-facing light sensor which alerts the pilot to increase or reduce altitude to match the changing illuminance of the sky. The original design had tanks for a contrail-inhibiting chemical, but this was replaced in production aircraft by a contrail sensor that alerts the crew when they should change altitude. The B-2 remains vulnerable to visual interception at sufficiently short range. The B-2 is stored in a $5 million specialized air-conditioned hangar to maintain its stealth coating. Every seven years, this coating is carefully washed away with crystallized wheat starch so that the B-2's surfaces can be inspected for any dents or scratches.

Radar

The B-2's clean, low-drag flying wing configuration not only provides exceptional range but also helps to reduce its radar profile. The B-2 is reported to have a very small radar cross-section (RCS). The bomber does not always fly stealthily; when nearing air defenses, pilots "stealth up" the B-2, a maneuver whose details are secret. The aircraft is stealthy, except briefly when the bomb bay opens. The flying wing design most closely resembles a so-called infinite flat plate, the perfect stealth shape, as it lacks angles to reflect radar waves back toward their source and avoids the vertical control surfaces that dramatically increase RCS (initially, the shape of the Northrop ATB concept was flatter; it gradually increased in volume according to specific military requirements). Without vertical surfaces to reflect radar laterally, the side-aspect radar cross-section is also reduced. Radars operating in a lower frequency band (S or L band) are able to detect and track certain stealth aircraft that have multiple control surfaces, like canards or vertical stabilizers, when the wavelength exceeds a certain threshold and causes a resonant effect. RCS reduction as a result of shape had already been observed on the Royal Air Force's Avro Vulcan strategic bomber and the USAF's F-117 Nighthawk. The F-117 used flat, faceted surfaces to control radar returns because, during its development in the early 1970s (see Lockheed Have Blue), technology only allowed the simulation of radar reflections on simple, flat surfaces; computing advances in the 1980s made it possible to simulate radar returns on more complex curved surfaces. The B-2 is composed of many curved and rounded surfaces across its exposed airframe to deflect radar beams. This technique, known as continuous curvature, was made possible by advances in computational fluid dynamics, and was first tested on the Northrop Tacit Blue.
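The practical value of RCS reduction follows from the standard radar range equation, a textbook relationship that is not specific to the B-2; the numerical factor below is an illustrative assumption rather than a published figure for the aircraft.

```latex
% Radar range equation: maximum detection range versus radar cross-section sigma.
% P_t: transmit power, G: antenna gain, lambda: wavelength, S_min: minimum detectable signal.
\[
R_{\max} = \left( \frac{P_t \, G^{2} \, \lambda^{2} \, \sigma}{(4\pi)^{3} \, S_{\min}} \right)^{1/4}
\qquad \Rightarrow \qquad
\frac{R_{2}}{R_{1}} = \left( \frac{\sigma_{2}}{\sigma_{1}} \right)^{1/4}
\]
% Illustrative example: cutting sigma by a factor of 10,000 relative to a
% conventional bomber shrinks the detection range to (1/10000)^(1/4) = 1/10
% of its former value, all else being equal.
```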
Infrared

Some analysts claim that infrared search and track systems (IRSTs) can be deployed against stealth aircraft, because any aircraft surface heats up due to air friction; with a two-channel IRST, detection is possible around the 4.3 μm absorption maximum by comparing the difference between the low and high channels. Burying the engines deep inside the fuselage also minimizes the thermal visibility, or infrared signature, of the exhaust. At the engine intake, cold air from the boundary layer below the main inlet enters the fuselage (boundary layer suction, first tested on the Northrop X-21) and is mixed with hot exhaust air just before the nozzles (similar to the Ryan AQM-91 Firefly). According to the Stefan–Boltzmann law, this results in less energy (thermal radiation in the infrared spectrum) being released and thus a reduced heat signature. The resulting cooler air is conducted over a surface composed of heat-resistant carbon-fiber-reinforced polymer and titanium alloy elements, which disperse the air laterally to accelerate its cooling. The B-2 lacks afterburners, as the hot exhaust would increase the infrared signature; breaking the sound barrier would produce an obvious sonic boom as well as aerodynamic heating of the aircraft skin, which would also increase the infrared signature.

Materials

According to the Huygens–Fresnel principle, even a very flat plate would still reflect radar waves, though much less than when a signal bounces off it at a right angle. Additional reduction in the radar signature was achieved by the use of various radar-absorbent materials (RAM) to absorb and neutralize radar beams. The majority of the B-2 is made of a carbon-graphite composite material that is stronger than steel, lighter than aluminum, and absorbs a significant amount of radar energy. The B-2 is assembled with unusually tight engineering tolerances to avoid gaps and leaks, as these could increase its radar signature. Innovations such as alternate high frequency material (AHFM) and automated material application methods were also incorporated to improve the aircraft's radar-absorbent properties and reduce maintenance requirements. In early 2004, Northrop Grumman began applying a newly developed AHFM to operational B-2s. To protect the operational integrity of its sophisticated radar-absorbent material and coatings, each B-2 is kept inside a climate-controlled hangar (Extra Large Deployable Aircraft Hangar System) large enough to accommodate its wingspan.

Shelter system

B-2s are supported by portable, environmentally controlled hangars called B-2 Shelter Systems (B2SS). The hangars are built by American Spaceframe Fabricators Inc. and cost approximately US$5 million apiece. The need for specialized hangars arose in 1998, when it was found that B-2s passing through Andersen Air Force Base did not have the climate-controlled environment required for maintenance operations. In 2003, the B2SS program was managed by the Combat Support System Program Office at Eglin Air Force Base. B2SS hangars are known to have been deployed to Naval Support Facility Diego Garcia and RAF Fairford.

Operational history

1990s

The first operational aircraft, christened Spirit of Missouri, was delivered to Whiteman Air Force Base, Missouri, where the fleet is based, on 17 December 1993. The B-2 reached initial operational capability (IOC) on 1 January 1997.
Depot maintenance for the B-2 is accomplished by USAF contractor support and managed at the Oklahoma City Air Logistics Center at Tinker Air Force Base. Although originally designed to deliver nuclear weapons, the B-2 has shifted in modern usage to a flexible role with both conventional and nuclear capability. The B-2's combat debut was in 1999, during the Kosovo War. It was responsible for destroying 33% of selected Serbian bombing targets in the first eight weeks of U.S. involvement in the war. During this war, six B-2s flew non-stop from their home base in Missouri to Yugoslavia and back, round trips totaling about 30 hours. Although the bombers accounted for only 50 of some 34,000 NATO sorties, they dropped 11 percent of all bombs. The B-2 was the first aircraft to deploy GPS satellite-guided JDAM "smart bombs" in combat, during the Kosovo campaign. The use of JDAMs and precision-guided munitions effectively replaced the controversial tactic of carpet-bombing, which had been harshly criticized for causing indiscriminate civilian casualties in prior conflicts, such as the 1991 Gulf War. On 7 May 1999, a B-2 dropped five JDAMs on the Chinese Embassy in Belgrade, killing several staff. By then, the B-2 had dropped 500 bombs in Yugoslavia.

2000s

The B-2 saw service in Afghanistan, striking ground targets in support of Operation Enduring Freedom. With aerial refueling support, the B-2 flew one of its longest missions to date, from Whiteman Air Force Base in Missouri to Afghanistan and back. B-2s were stationed in the Middle East as part of a US military buildup in the region from 2003. The B-2's combat use preceded a USAF declaration of "full operational capability" in December 2003. The Pentagon's Operational Test and Evaluation 2003 Annual Report noted that the B-2's serviceability for Fiscal Year 2003 was still inadequate, mainly due to the maintainability of the B-2's low-observable coatings. The evaluation also noted that the Defensive Avionics suite had shortcomings with "pop-up threats". During the Iraq War, B-2s operated from Diego Garcia and an undisclosed "forward operating location"; other sorties in Iraq launched from Whiteman AFB. The longest combat mission has been 44.3 hours. "Forward operating locations" have previously been designated as Andersen Air Force Base in Guam and RAF Fairford in the United Kingdom, where new climate-controlled hangars have been constructed. In 2003, B-2s conducted 27 sorties from Whiteman AFB and 22 sorties from a forward operating location, releasing munitions that included 583 JDAM "smart bombs".

2010s

In response to organizational issues and high-profile mistakes made within the USAF, all of the B-2s, along with the nuclear-capable B-52s and the USAF's intercontinental ballistic missiles (ICBMs), were transferred to the newly formed Air Force Global Strike Command on 1 February 2010. In March 2011, B-2s were the first U.S. aircraft into action in Operation Odyssey Dawn, the UN-mandated enforcement of the Libyan no-fly zone. Three B-2s dropped 40 bombs on a Libyan airfield in support of the no-fly zone. The B-2s flew directly from the U.S. mainland across the Atlantic Ocean to Libya; each B-2 was refueled by allied tanker aircraft four times during its round-trip mission. In August 2011, The New Yorker reported that prior to the May 2011 U.S. Special Operations raid into Abbottabad, Pakistan that resulted in the death of Osama bin Laden, U.S.
officials had considered an airstrike by one or more B-2s as an alternative; the use of a bunker-busting bomb was rejected due to potential damage to nearby civilian buildings. There were also concerns that an airstrike would make it difficult to positively identify bin Laden's remains, making it hard to confirm his death. On 28 March 2013, two B-2s flew a round trip from Whiteman Air Force Base in Missouri to South Korea, dropping dummy ordnance on the Jik Do target range. The mission, part of the annual South Korean–U.S. military exercises, was the first time that B-2s overflew the Korean Peninsula. Tensions between the Koreas were high; North Korea protested against the B-2's participation and made threats of retaliatory nuclear strikes against South Korea and the United States. On 18 January 2017, two B-2s attacked an ISIS training camp southwest of Sirte, Libya, killing around 85 militants. The B-2s together dropped 108 precision-guided Joint Direct Attack Munition (JDAM) bombs. These strikes were followed by an MQ-9 Reaper unmanned aerial vehicle firing Hellfire missiles. Each B-2 flew a 33-hour round-trip mission from Whiteman Air Force Base, Missouri, with four or five refuelings (accounts differ) during the trip.

2020s

On 16 October 2024, B-2As carried out strikes on weapons storage facilities in Yemen, including underground facilities owned by the Houthis. Five hardened underground weapons storage locations were struck as part of the campaign against the Houthis for attacking international shipping during the Red Sea crisis. It was believed the strikes also served as a warning to Iran, demonstrating the stealth bomber's ability to destroy targets buried underground. RAAF Base Tindal in the Northern Territory, Australia, was used as a staging ground for the strikes.

Operators

United States Air Force (19 aircraft in active inventory)
  Air Force Global Strike Command
    509th Bomb Wing – Whiteman Air Force Base, Missouri (18 B-2s)
      13th Bomb Squadron, 2005–present
      325th Bomb Squadron, 1998–2005
      393rd Bomb Squadron, 1993–present
      394th Combat Training Squadron, 1996–2018
  Air Combat Command
    53rd Wing – Eglin Air Force Base, Florida
      72nd Test and Evaluation Squadron (Whiteman AFB, Missouri), 1998–present
    57th Wing – Nellis AFB, Nevada
      325th Weapons Squadron – Whiteman AFB, Missouri, 2005–present
      715th Weapons Squadron, 2003–2005
  Air National Guard
    131st Bomb Wing (Associate) – Whiteman AFB, Missouri, 2009–present
      110th Bomb Squadron
  Air Force Materiel Command
    412th Test Wing – Edwards Air Force Base, California (has one B-2)
      419th Flight Test Squadron, 1997–present
      420th Flight Test Squadron, 1992–1997
  Air Force Systems Command
    6510th Test Wing – Edwards AFB, California, 1989–1992
      6520th Flight Test Squadron

Accidents and incidents

On 23 February 2008, B-2 "AV-12" Spirit of Kansas crashed on the runway shortly after takeoff from Andersen Air Force Base in Guam. Spirit of Kansas had been operated by the 393rd Bomb Squadron, 509th Bomb Wing, Whiteman Air Force Base, Missouri, and had logged 5,176 flight hours. The two-person crew ejected safely from the aircraft. The aircraft was destroyed, a hull loss valued at US$1.4 billion. After the accident, the USAF took the B-2 fleet off operational status for 53 days, returning it to service on 15 April 2008. The cause of the crash was later determined to be moisture in the aircraft's Port Transducer Units during air data calibration, which distorted the information being sent to the bomber's air data system.
As a result, the flight control computers calculated an inaccurate airspeed and a negative angle of attack, causing the aircraft to pitch up 30 degrees during takeoff. This was the first crash and loss of a B-2. In February 2010, a serious incident involving a B-2, AV-11 Spirit of Washington, occurred at Andersen Air Force Base in Guam. The aircraft was severely damaged by fire while on the ground and underwent 18 months of repairs to enable it to fly back to the mainland U.S. for more comprehensive repairs. Spirit of Washington was repaired and returned to service in December 2013. At the time of the accident, the USAF had no training to deal with tailpipe fires on the B-2s. On the night of 13–14 September 2021, B-2 Spirit of Georgia made an emergency landing at Whiteman AFB. The aircraft landed, went off the runway into the grass, and came to rest on its left side. The cause was later determined to be faulty landing gear springs and "microcracking" in hydraulic connections on the aircraft. The lock link springs in the left landing gear had likely not been replaced in at least a decade and produced about 11% less tension than specified, while the "microcracking" reduced hydraulic support to the landing gear. These problems allowed the landing gear to fold upon landing. The accident resulted in at least $10.1 million in repair costs, and the final figure was still being determined as of March 2022. On 10 December 2022, an in-flight malfunction aboard a B-2 forced an emergency landing at Whiteman AFB. No personnel, including the flight crew, sustained injuries during the incident; a post-crash fire was quickly put out. Subsequently, all B-2s were grounded. On 18 May 2023, Air Force officials lifted the grounding without disclosing any details about what caused the incident or what steps had been taken to return the aircraft to operation. In May 2024, the Air Force announced that this B-2 would be divested, as it had been deemed "uneconomical to repair." Although no cost estimate was provided, the decision was likely influenced by the coming introduction of the B-21 bomber. After the B-2 fire in 2010, it took almost four years and over $100 million to return that aircraft to service, an effort justified at the time by the need to avoid losing one of the few penetrating bombers in the inventory; the impending arrival of the B-21 and the planned retirement of the B-2 sometime after 2029 likely convinced USAF leaders that it would not be worth the expense to repair an aircraft that would soon be retired anyway.

Aircraft on display

No operational B-2s have been retired by the Air Force to be put on display. B-2s have made occasional appearances on ground display at various air shows. B-2 test article (s/n AT-1000), the second of two built without engines or instruments and used for static testing, was placed on display in 2004 at the National Museum of the United States Air Force near Dayton, Ohio. The test article passed all structural testing requirements before the airframe failed. The museum's restoration team spent over a year reassembling the fractured airframe. The display airframe is marked to resemble Spirit of Ohio (S/N 82-1070), the B-2 used to test the design's ability to withstand extreme heat and cold. The exhibit features Spirit of Ohio's nose wheel door, with its "Fire and Ice" artwork, which was painted and signed by the technicians who performed the temperature testing.
The restored test aircraft is on display in the museum's "Cold War Gallery".

Specifications (B-2A Block 30)

Individual aircraft

Notable appearances in media
Technology
Specific aircraft
null
4399
https://en.wikipedia.org/wiki/Beaver
Beaver
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two existing species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras. They have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges. Beavers build dams and lodges using tree branches, vegetation, rocks and mud; they chew down trees for building material. Dams restrict water flow, and lodges serve as shelters. Their infrastructure creates wetlands used by many other species, and because of their effect on other organisms in the ecosystem, beavers are considered a keystone species. Adult males and females live in monogamous pairs with their offspring. After their first year, the young help their parents repair dams and lodges; older siblings may also help raise newly born offspring. Beavers hold territories and mark them using scent mounds made of mud, debris, and castoreum, a liquid substance excreted through the beaver's urethra-based castor sacs. Beavers can also recognize their kin by their anal gland secretions and are more likely to tolerate them as neighbors. Historically, beavers have been hunted for their fur, meat, and castoreum. Castoreum has been used in medicine, perfume, and food flavoring; beaver pelts have been a major driver of the fur trade. Before protections began in the 19th and early 20th centuries, overhunting had nearly exterminated both species. Their populations have since rebounded, and they are listed as species of least concern by the IUCN Red List of mammals. In human culture, the beaver symbolizes industriousness, especially in connection with construction; it is the national animal of Canada.

Etymology

The English word beaver comes from the Old English beofor and is connected to the German Biber and the Dutch bever. The ultimate origin of the word is an Indo-European root meaning "brown". Cognates of beaver are the source of several European placenames, including those of Beverley, Bièvres, Biberbach, Biebrich, Bibra, Bibern, Bibrka, Bobr, Bober, Bóbrka, Bjurholm, Bjurälven, and Bjurum. The genus name Castor has its origin in the Greek word kastōr and translates as "beaver".

Taxonomy

Carl Linnaeus coined the genus name Castor in 1758, as well as the specific (species) epithet fiber for the Eurasian species. German zoologist Heinrich Kuhl coined C. canadensis in 1820. Many scientists considered the two names synonyms for a single species until the 1970s, when chromosomal evidence confirmed them as separate: the Eurasian beaver has 48 chromosomes, while the North American beaver has 40. The difference in chromosome numbers prevents them from interbreeding. Twenty-five subspecies have been classified for C. canadensis, and nine have been classified for C. fiber. There are two extant species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). The Eurasian beaver is slightly longer and has a more elongated skull, triangular nasal cavities (as opposed to the square ones of the North American species), a lighter fur color, and a narrower tail.
Evolution Beavers belong to the rodent suborder Castorimorpha, along with Heteromyidae (kangaroo rats and kangaroo mice), and the gophers. Modern beavers are the only extant members of the family Castoridae. They originated in North America in the late Eocene and colonized Eurasia via the Bering Land Bridge in the early Oligocene, coinciding with the Grande Coupure, a time of significant changes in animal species around 33million years ago (myr). The more basal castorids had several unique features: more complex occlusion between cheek teeth, parallel rows of upper teeth, premolars that were only slightly smaller than molars, the presence of a third set of premolars (P3), a hole in the stapes of the inner ear, a smooth palatine bone (with the palatine opening closer to the rear end of the bone), and a longer snout. More derived castorids have less complex occlusion, upper tooth rows that create a V-shape towards the back, larger second premolars compared to molars, absence of a third premolar set and stapes hole, a more grooved palatine (with the opening shifted towards the front), and reduced incisive foramen. Members of the subfamily Palaeocastorinae appeared in late-Oligocene North America. This group consisted primarily of smaller animals with relatively large front legs, a flattened skull, and a reduced tail—all features of a fossorial (burrowing) lifestyle. In the early Miocene (about 24 mya), castorids evolved a semiaquatic lifestyle. Members of the subfamily Castoroidinae are considered to be a sister group to modern beavers, and included giants like Castoroides of North America and Trogontherium of Eurasia. Castoroides is estimated to have had a length of and a weight of . Fossils of one genus in Castoroidinae, Dipoides, have been found near piles of chewed wood, though Dipoides appears to have been an inferior woodcutter compared to Castor. Researchers suggest that modern beavers and Castoroidinae shared a bark-eating common ancestor. Dam and lodge-building likely developed from bark-eating, and allowed beavers to survive in the harsh winters of the subarctic. There is no conclusive evidence for this behavior occurring in non-Castor species. The genus Castor likely originated in Eurasia. The earliest fossil remains appear to be C. neglectus, found in Germany and dated 12–10 mya. Mitochondrial DNA studies place the common ancestor of the two living species at around 8 mya. The ancestors of the North American beaver would have crossed the Bering Land Bridge around 7.5 mya. Castor may have competed with members of Castoroidinae, which led to niche differentiation. The fossil species C. praefiber was likely an ancestor of the Eurasian beaver. C. californicus from the Early Pleistocene of North America was similar to but larger than the extant North American beaver.' Characteristics Beavers are the second-largest living rodents, after capybaras. They have a head–body length of , with a tail, a shoulder height of , and generally weigh , but can be as heavy as . Males and females are almost identical externally. Their bodies are streamlined like marine mammals and their robust build allows them to pull heavy loads. A beaver coat has 12,000–23,000 hairs/cm2 (77,000–148,000 hairs/in2) and functions to keep the animal warm, to help it float in water, and to protect it against predators. Guard hairs are long and typically reddish brown, but can range from yellowish brown to nearly black. The underfur is long and dark gray. Beavers molt every summer. 
Beavers have large skulls with powerful chewing muscles. They have four chisel-shaped incisors that continue to grow throughout their lives. The incisors are covered in a thick enamel that is colored orange or reddish-brown by iron compounds. The lower incisors have roots that are almost as long as the entire lower jaw. Beavers have one premolar and three molars on all four sides of the jaws, adding up to 20 teeth. The molars have meandering ridges for grinding woody material. The eyes, ears and nostrils are arranged so that they can remain above water while the rest of the body is submerged. The nostrils and ears have valves that close underwater, while nictitating membranes cover the eyes. To protect the larynx and trachea from water flow, the epiglottis is contained within the nasal cavity instead of the throat. In addition, the back of the tongue can rise and create a waterproof seal. A beaver's lips can close behind the incisors, preventing water from entering their mouths as they cut and bite onto things while submerged. The beaver's front feet are dexterous, allowing them to grasp and manipulate objects and food, as well as dig. The hind feet are larger and have webbing between the toes, and the second innermost toe has a "double nail" used for grooming. Beavers can swim at ; only their webbed hind feet are used to swim, while the front feet fold under the chest. On the surface, the hind limbs thrust one after the other; while underwater, they move at the same time. Beavers are awkward on land but can move quickly when they feel threatened. They can carry objects while walking on their hind legs. The beaver's distinctive tail has a conical, muscular, hairy base; the remaining two-thirds of the appendage is flat and scaly. The tail has multiple functions: it provides support for the animal when it is upright (such as when chewing down a tree), acts as a rudder when it is swimming, and stores fat for winter. It also has a countercurrent blood vessel system which allows the animal to lose heat in warm temperatures and retain heat in cold temperatures. The beaver's sex organs are inside the body, and the male's penis has a cartilaginous baculum. They have only one opening, a cloaca, which is used for reproduction, scent-marking, defecation, and urination. The cloaca evolved secondarily, as most mammals have lost this feature, and may reduce the area vulnerable to infection in dirty water. The beaver's intestine is six times longer than its body, and the caecum is double the volume of its stomach. Microorganisms in the caecum allow them to process around 30 percent of the cellulose they eat. A beaver defecates in the water, leaving behind balls of sawdust. Female beavers have four mammary glands; these produce milk with 19 percent fat, a higher fat content than other rodents. Beavers have two pairs of glands: castor sacs, which are part of the urethra, and anal glands. The castor sacs secrete castoreum, a liquid substance used mainly for marking territory. Anal glands produce an oily substance which the beaver uses as a waterproof ointment for its coat. The substance plays a role in individual and family recognition. Anal secretions are darker in females than males among Eurasian beavers, while the reverse is true for the North American species. Compared to many other rodents, a beaver's brain has a hypothalamus that is much smaller than the cerebrum; this indicates a relatively advanced brain with higher intelligence. 
The cerebellum is large, allowing the animal to move within a three-dimensional space (such as underwater) in a manner similar to tree-climbing squirrels. The neocortex is devoted mainly to touch and hearing. Touch is more advanced in the lips and hands than in the whiskers and tail. Vision in the beaver is relatively poor; the beaver eye cannot see as well underwater as an otter's. Beavers have a good sense of smell, which they use for detecting land predators and for inspecting scent marks, food, and other individuals. Beavers can hold their breath for as long as 15 minutes but typically remain underwater for no more than five or six minutes. Dives typically last less than 30 seconds and are usually shallow. When diving, their heart rate decreases to 60 beats per minute, half its normal pace, and blood flow is directed more towards the brain. A beaver's body also has a high tolerance for carbon dioxide. When surfacing, the animal can replace 75 percent of the air in its lungs in one breath, compared to 15 percent for a human.

Distribution and status

The IUCN Red List of mammals lists both beaver species as least concern. The North American beaver is widespread throughout most of the United States and Canada and can be found in northern Mexico. The species was introduced to Finland in 1937 (from which it spread to northwestern Russia) and to Tierra del Fuego, Patagonia, in 1946. The introduced population of North American beavers in Finland has been moving closer to the habitat of the Eurasian beaver. Historically, the North American beaver was trapped and nearly extirpated because its fur was highly sought after. Protections have allowed the beaver population on the continent to rebound to an estimated 6–12 million by the late 20th century, still far lower than the estimated 60–400 million North American beavers present before the fur trade. The introduced population in Tierra del Fuego is estimated at 35,000–50,000 individuals. The Eurasian beaver's range historically included much of Eurasia, but was decimated by hunting by the early 20th century. In Europe, beavers were reduced to fragmented populations, with a combined population estimated at about 1,200 individuals across the Rhône in France, the Elbe in Germany, southern Norway, the Neman river and Dnieper Basin in Belarus, and the Voronezh river in Russia. The beaver has since recolonized parts of its former range, aided by conservation policies and reintroductions. Beaver populations now range across western, central, and eastern Europe, as well as western Russia and the Scandinavian Peninsula. Beginning in 2009, beavers have been successfully reintroduced to parts of Great Britain. The total Eurasian beaver population in Europe has been estimated at over one million. Small native populations are also present in Mongolia and northwestern China; their numbers were estimated at 150 and 700, respectively. Under New Zealand's Hazardous Substances and New Organisms Act 1996, beavers are classed as a "prohibited new organism", preventing them from being introduced into the country.

Ecology

Beavers live in freshwater ecosystems such as rivers, streams, lakes and ponds. Water is the most important component of beaver habitat; they swim and dive in it, and it provides them refuge from land predators. It also restricts access to their homes and allows them to move building materials more easily.
Beavers prefer slower-moving streams, typically with a gradient (steepness) of one percent, though they have been recorded using streams with gradients as high as 15 percent. Beavers are found in wider streams more often than in narrower ones. They also prefer areas with no regular flooding and may abandon a location for years after a significant flood. Beavers typically select flat landscapes with diverse vegetation close to the water. North American beavers prefer trees close to the water, but will roam several hundred meters to find more. Beavers have also been recorded in mountainous areas. Dispersing beavers will use certain habitats temporarily before finding their ideal home. These include small streams, temporary swamps, ditches, and backyards. These sites lack important resources, so the animals do not stay there permanently. Beavers have increasingly settled at or near human-made environments, including agricultural areas, suburbs, golf courses, and shopping malls. Beavers are herbivorous generalists. During the spring and summer, they mainly feed on herbaceous plant material such as leaves, roots, herbs, ferns, grasses, sedges, water lilies, water shields, rushes, and cattails. During the fall and winter, they eat more bark and cambium of woody plants; tree and shrub species consumed include aspen, birch, oak, dogwood, willow and alder. There is some disagreement about why beavers select specific woody plants; some research has shown that beavers more frequently select species which are more easily digested, while other research suggests beavers principally forage based on stem size. Beavers may cache their food for the winter, piling wood in the deepest part of their pond where it cannot be reached by other browsers. This cache is known as a "raft"; when the top becomes frozen, it creates a "cap". The beaver accesses the raft by swimming under the ice. Many populations of Eurasian beaver do not make rafts, but forage on land during winter. Beavers usually live up to 10 years. Felids, canids, and bears may prey upon them. Beavers are protected from predators when in their lodges, and prefer to stay near water. Parasites of the beaver include the bacterium Francisella tularensis, which causes tularemia; the protozoan Giardia duodenalis, which causes giardiasis (beaver fever); and the beaver beetle and mites of the genus Schizocarpus. They have also been recorded to be infected with the rabies virus.

Infrastructure

Beavers need trees and shrubs to use as building material for dams, which restrict flowing water to create a pond for them to live in, and for lodges, which act as shelters and refuges from predators and the elements. Without such material, beavers dig burrows into a bank to live in. Dam construction begins in late summer or early fall, and beavers repair their dams whenever needed. Beavers can cut down slender trees in less than 50 minutes, while thicker trees may not fall for hours. When chewing down a tree, beavers switch between biting with the left and right sides of the mouth. Tree branches are then cut and carried to their destination with the powerful jaw and neck muscles. Other building materials, like mud and rocks, are held by the forelimbs and tucked between the chin and chest. Beavers start building dams when they hear running water, and the sound of a leak in a dam triggers them to repair it. To build a dam, beavers stack up relatively long and thick logs between the banks, laid in opposite directions.
Heavy rocks keep them stable, and grass is packed between them. Beavers continue to pile on more material until the dam slopes in a direction facing upstream. Dams can range in height from to and can stretch from to several hundred meters long. Beaver dams are more effective in trapping and slowly leaking water than man-made concrete dams. Lake-dwelling beavers do not need to build dams. Beavers make two types of lodges: bank lodges and open-water lodges. Bank lodges are burrows dug along the shore and covered in sticks. The more complex freestanding, open-water lodges are built over a platform of piled-up sticks. The lodge is mostly sealed with mud, except for a hole at the top which acts as an air vent. Both types are accessed by underwater entrances. The above-water space inside the lodge is known as the "living chamber", and a "dining area" may exist close to the water entrance. Families routinely clean out old plant material and bring in new material. North American beavers build more open-water lodges than Eurasian beavers. Beaver lodges built by new settlers are typically small and sloppy. More experienced families can build structures with a height of and an above-water diameter of . A lodge sturdy enough to withstand the coming winter can be finished in just two nights. Both lodge types can be present at a beaver site. During the summer, beavers tend to use bank lodges to keep cool. They use open-water lodges during the winter. The air vent provides ventilation, and newly added carbon dioxide can be cleared in an hour. The lodge remains consistent in oxygen and carbon dioxide levels from season to season. Beavers in some areas will dig canals connected to their ponds. The canals fill with groundwater and give beavers access and easier transport of resources, as well as allow them to escape predators. These canals can stretch up to wide, deep, and over long. It has been hypothesized that beavers' canals are not only transportation routes but an extension of their "central place" around the lodge and/or food cache. As they drag wood across the land, beavers leave behind trails or "slides", which they reuse when moving new material. Environmental effects The beaver works as an ecosystem engineer and keystone species, as its activities can have a great impact on the landscape and biodiversity of an area. Aside from humans, few other extant animals appear to do more to shape their environment. When building dams, beavers alter the paths of streams and rivers, allowing for the creation of extensive wetland habitats. In one study, beavers were associated with large increases in open-water areas. When beavers returned to an area, 160% more open water was available during droughts than in previous years, when they were absent. Beaver dams also lead to higher water tables in mineral soil environments and in wetlands such as peatlands. In peatlands particularly, their dams stabilize the constantly changing water levels, leading to greater carbon storage. Beaver ponds, and the wetlands that succeed them, remove sediments and pollutants from waterways, and can stop the loss of important soils. These ponds can increase the productivity of freshwater ecosystems by accumulating nitrogen in sediments. Beaver activity can affect the temperature of the water; in northern latitudes, ice thaws earlier in the warmer beaver-dammed waters. Beavers may contribute to climate change. In Arctic areas, the floods they create can cause permafrost to thaw, releasing methane into the atmosphere. 
As wetlands are formed and riparian habitats are enlarged, aquatic plants colonize the newly available watery habitat. One study in the Adirondacks found that beaver engineering led to an increase of more than 33 percent in herbaceous plant diversity along the water's edge. Another study in semiarid eastern Oregon found that the width of riparian vegetation on stream banks increased several-fold as beaver dams watered previously dry terraces adjacent to the stream. Riparian ecosystems in arid areas appear to sustain more plant life when beaver dams are present. Beaver ponds act as a refuge for riverbank plants during wildfires, and provide them with enough moisture to resist such fires. Introduced beavers at Tierra del Fuego have been responsible for destroying the indigenous forest; unlike many trees in North America, many trees in South America cannot grow back after being cut down. Beaver activity affects communities of aquatic invertebrates. Damming typically leads to an increase in slow- or still-water species, like dragonflies, oligochaetes, snails, and mussels, to the detriment of fast-water species like black flies, stoneflies, and net-spinning caddisflies. Beaver flooding creates more dead trees, providing more habitat for terrestrial invertebrates like Drosophila flies and bark beetles, which live and breed in dead wood. The presence of beavers can increase wild salmon and trout populations, and the average size of these fishes. These species use beaver habitats for spawning, overwintering, feeding, and as havens from changes in water flow. The positive effects of beaver dams on fish appear to outweigh the negative effects, such as the blocking of migration. Beaver ponds have been shown to be beneficial to frog populations by protecting areas for larvae to mature in warm water. The stable waters of beaver ponds also provide ideal habitat for freshwater turtles. Beavers help waterfowl by creating larger areas of open water. The widening of the riparian zone associated with beaver dams has been shown to increase the abundance and diversity of birds favoring the water's edge, an impact that may be especially important in semi-arid climates. Fish-eating birds use beaver ponds for foraging, and in some areas, certain species appear more frequently at sites where beavers were active than at sites with no beaver activity. In a study of Wyoming streams and rivers, watercourses with beavers had 75 times as many ducks as those without. As trees are drowned by rising beaver impoundments, they become an ideal habitat for woodpeckers, which carve cavities that may later be used by other bird species. Beaver-caused ice thawing in northern latitudes allows Canada geese to nest earlier. Other semi-aquatic mammals, such as water voles, muskrats, minks, and otters, will shelter in beaver lodges. Beaver modifications to streams in Poland create habitats favorable to bat species that forage at the water surface and "prefer moderate vegetation clutter". Large herbivores, such as some deer species, benefit from beaver activity, as they can access vegetation from fallen trees and ponds.

Behavior

Beavers are mainly nocturnal and crepuscular, and spend the daytime in their shelters. In northern latitudes, beaver activity is decoupled from the 24-hour cycle during the winter, and their activity cycle may lengthen to as much as 29 hours. They do not hibernate during winter, and spend much of their time in their lodges.
Family life The core of beaver social organization is the family, which is composed of an adult male and an adult female in a monogamous pair and their offspring. Beaver families can have as many as ten members; groups about this size require multiple lodges. Mutual grooming and play fighting maintain bonds between family members, and aggression between them is uncommon. Adult beavers mate with their partners, though partner replacement appears to be common. A beaver that loses its partner will wait for another one to come by. Estrus cycles begin in late December and peak in mid-January. Females may have two to four estrus cycles per season, each lasting 12–24 hours. The pair typically mate in the water and to a lesser extent in the lodge, for half a minute to three minutes. Up to four young, or kits, are born in spring and summer, after a three or four-month gestation. Newborn beavers are precocial with a full fur coat, and can open their eyes within days of birth. Their mother is the primary caretaker, while their father maintains the territory. Older siblings from a previous litter also play a role. After they are born, the kits spend their first one to two months in the lodge. Kits suckle for as long as three months, but can eat solid food within their second week and rely on their parents and older siblings to bring it to them. Eventually, beaver kits explore outside the lodge and forage on their own, but may follow an older relative and hold onto their backs. After their first year, young beavers help their families with construction. Beavers sexually mature around 1.5–3 years. They become independent at two years old, but remain with their parents for an extra year or more during times of food shortage, high population density, or drought. Territories and spacing Beavers typically disperse from their parental colonies during the spring or when the winter snow melts. They often travel less than , but long-distance dispersals are not uncommon when previous colonizers have already exploited local resources. Beavers are able to travel greater distances when free-flowing water is available. Individuals may meet their mates during the dispersal stage, and the pair travel together. It may take them weeks or months to reach their final destination; longer distances may require several years. Beavers establish and defend territories along the banks of their ponds, which may be in length. Beavers mark their territories by constructing scent mounds made of mud and vegetation, scented with castoreum. Those with many territorial neighbors create more scent mounds. Scent marking increases in spring, during the dispersal of yearlings, to deter interlopers. Beavers are generally intolerant of intruders and fights may result in deep bites to the sides, rump, and tail. They exhibit a behavior known as the "dear enemy effect"; a territory-holder will investigate and become familiar with the scents of its neighbors and react more aggressively to the scents of strangers passing by. Beavers are also more tolerant of individuals that are their kin. They recognize them by using their keen sense of smell to detect differences in the composition of anal gland secretions. Anal gland secretion profiles are more similar among relatives than unrelated individuals. Communication Beavers within a family greet each other with whines. Kits will attract the attention of adults with mews, squeaks, and cries. Defensive beavers produce a hissing growl and gnash their teeth. 
Tail slaps, which involve an animal hitting the water surface with its tail, serve as alarm signals warning other beavers of a potential threat. An adult's tail slap is more successful in alerting others, who will escape into the lodge or deeper water. Juveniles have not yet learned the proper use of a tail slap, and hence are normally ignored. Eurasian beavers have been recorded using a territorial "stick display", which involves individuals holding up a stick and bouncing in shallow water. Interactions with humans Beavers sometimes come into conflict with humans over land use; individual beavers may be labeled as "nuisance beavers". Beavers can damage crops, timber stocks, roads, ditches, gardens, and pastures via gnawing, eating, digging, and flooding. They occasionally attack humans and domestic pets, particularly when infected with rabies, in defense of their territory, or when they feel threatened. Some of these attacks have been fatal, including at least one human death. Beavers can spread giardiasis ('beaver fever') by infecting surface waters, though outbreaks are more commonly caused by human activity. Flow devices, like beaver pipes, are used to manage beaver flooding, while fencing and hardware cloth protect trees and shrubs from beaver damage. If necessary, hand tools, heavy equipment, or explosives are used to remove dams. Hunting, trapping, and relocation may be permitted as forms of population control and for removal of individuals. The governments of Argentina and Chile have authorized the trapping of invasive beavers in hopes of eliminating them. The ecological importance of beavers has led to cities like Seattle designing their parks and green spaces to accommodate the animals. The Martinez beavers became famous in the mid-2000s for their role in improving the ecosystem of Alhambra Creek in Martinez, California. Zoos have displayed beavers since at least the 19th century, though not commonly. In captivity, beavers have been used for entertainment, fur harvesting, and for reintroduction into the wild. Captive beavers require access to water, substrate for digging, and artificial shelters. Archibald Stansfeld "Grey Owl" Belaney pioneered beaver conservation in the early 20th century. Belaney wrote several books, and was first to professionally film beavers in their environment. In 1931, he moved to a log cabin in Prince Albert National Park, where he was the "caretaker of park animals" and raised a beaver pair and their four offspring. Commercial use Beavers have been hunted, trapped, and exploited for their fur, meat, and castoreum. Since the animals typically stayed in one place, trappers could easily find them and could kill entire families in a lodge. Many pre-modern people mistakenly thought that castoreum was produced by the testicles or that the castor sacs of the beaver were its testicles, and females were hermaphrodites. Aesop's Fables describes beavers chewing off their testicles to preserve themselves from hunters, which is impossible because a beaver's testicles are internal. This myth persisted for centuries, and was corrected by French physician Guillaume Rondelet in the 1500s. Beavers have historically been hunted and captured using deadfalls, snares, nets, bows and arrows, spears, clubs, firearms, and leg-hold traps. Castoreum was used to lure the animals. Castoreum was used for a variety of medical purposes; Pliny the Elder promoted it as a treatment for stomach problems, flatulence, seizures, sciatica, vertigo, and epilepsy. 
He stated it could stop hiccups when mixed with vinegar, toothaches if mixed with oil (by administering into the ear opening on the same side as the tooth), and could be used as an antivenom. The substance has traditionally been prescribed to treat hysteria in women, which was believed to have been caused by a "toxic" womb. Castoreum's properties have been credited to the accumulation of salicylic acid from willow and aspen trees in the beaver's diet, and it has a physiological effect comparable to that of aspirin. Today, the medical use of castoreum has declined and is limited mainly to homeopathy. The substance is also used as an ingredient in perfumes and tinctures, and as a flavouring in food and drinks. Various Native American groups have historically hunted beavers for food; they preferred beaver meat over other red meats because of its higher calorie and fat content, and the animals remained plump in winter, when they were most hunted. The bones were used to make tools. In medieval Europe, the Catholic Church considered the beaver to be part mammal and part fish, and allowed followers to eat the scaly, fishlike tail on meatless Fridays during Lent. Beaver tails were thus highly prized in Europe; they were described by French naturalist Pierre Belon as tasting like a "nicely dressed eel". Beaver pelts were used to make hats; felters would remove the guard hairs. The number of pelts needed depended on the type of hat, with Cavalier and Puritan hats requiring more fur than top hats. In the late 16th century, Europeans began to deal in North American furs due to the lack of taxes or tariffs on the continent and the decline of fur-bearers at home. Beaver pelts caused or contributed to the Beaver Wars, King William's War, and the French and Indian War; the trade made John Jacob Astor and the owners of the North West Company very wealthy. For Europeans in North America, the fur trade was a driver of exploration and westward expansion on the continent and of contact with native peoples, who traded with them. The fur trade peaked between 1860 and 1870, when over 150,000 beaver pelts were purchased annually by the Hudson's Bay Company and fur companies in the United States. The contemporary global fur trade is not as profitable due to conservation, anti-fur and animal rights campaigns. In culture The beaver has been used to represent productivity, trade, tradition, masculinity, and respectability.
Biology and health sciences
Rodents
https://en.wikipedia.org/wiki/Bear
Bear
Bears are carnivoran mammals of the family Ursidae (). They are classified as caniforms, or doglike carnivorans. Although only eight species of bears are extant, they are widespread, appearing in a wide variety of habitats throughout most of the Northern Hemisphere and partially in the Southern Hemisphere. Bears are found on the continents of North America, South America, and Eurasia. Common characteristics of modern bears include large bodies with stocky legs, long snouts, small rounded ears, shaggy hair, plantigrade paws with five nonretractile claws, and short tails. While the polar bear is mostly carnivorous, and the giant panda is mostly herbivorous, the remaining six species are omnivorous with varying diets. With the exception of courting individuals and mothers with their young, bears are typically solitary animals. They may be diurnal or nocturnal and have an excellent sense of smell. Despite their heavy build and awkward gait, they are adept runners, climbers, and swimmers. Bears use shelters, such as caves and logs, as their dens; most species occupy their dens during the winter for a long period of hibernation, up to 100 days. Bears have been hunted since prehistoric times for their meat and fur; they have also been used for bear-baiting and other forms of entertainment, such as being made to dance. With their powerful physical presence, they play a prominent role in the arts, mythology, and other cultural aspects of various human societies. In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market. The IUCN lists six bear species as vulnerable or endangered, and even least concern species, such as the brown bear, are at risk of extirpation in certain countries. The poaching and international trade of these most threatened populations are prohibited, but still ongoing. Etymology The English word "bear" comes from Old English and belongs to a family of names for the bear in Germanic languages, such as Swedish , also used as a first name. This form is conventionally said to be related to a Proto-Indo-European word for "brown", so that "bear" would mean "the brown one". However, Ringe notes that while this etymology is semantically plausible, a word meaning "brown" of this form cannot be found in Proto-Indo-European. He suggests instead that "bear" is from the Proto-Indo-European word *ǵʰwḗr- ~ *ǵʰwér "wild animal". This terminology for the animal originated as a taboo avoidance term: proto-Germanic tribes replaced their original word for bear—arkto—with this euphemistic expression out of fear that speaking the animal's true name might cause it to appear. According to author Ralph Keyes, this is the oldest known euphemism. Bear taxon names such as Arctoidea and Helarctos come from the ancient Greek ἄρκτος (arktos), meaning bear, as do the names "arctic" and "antarctic", via the name of the constellation Ursa Major, the "Great Bear", prominent in the northern sky. Bear taxon names such as Ursidae and Ursus come from Latin Ursus/Ursa, he-bear/she-bear. The female first name "Ursula", originally derived from a Christian saint's name, means "little she-bear" (diminutive of Latin ursa). In Switzerland, the male first name "Urs" is especially popular, while the name of the canton and city of Bern is by legend derived from Bär, German for bear. The Germanic name Bernard (including Bernhardt and similar forms) means "bear-brave", "bear-hardy", or "bold bear". 
The Old English name Beowulf is a kenning, "bee-wolf", for bear, in turn meaning a brave warrior. Taxonomy The family Ursidae is one of nine families in the suborder Caniformia, or "doglike" carnivorans, within the order Carnivora. Bears' closest living relatives are the pinnipeds, canids, and musteloids. (Some scholars formerly argued that bears are directly derived from canids and should not be classified as a separate family.) Modern bears comprise eight species in three subfamilies: Ailuropodinae (monotypic with the giant panda), Tremarctinae (monotypic with the spectacled bear), and Ursinae (containing six species divided into one to three genera, depending on the authority). Nuclear chromosome analysis shows that the karyotype of the six ursine bears is nearly identical, each having 74 chromosomes (see Ursid hybrid), whereas the giant panda has 42 chromosomes and the spectacled bear 52. These smaller numbers can be explained by the fusing of some chromosomes, and the banding patterns on these match those of the ursine species, but differ from those of procyonids, which supports the inclusion of these two species in Ursidae rather than in Procyonidae, where they had been placed by some earlier authorities. Evolution The earliest members of Ursidae belong to the extinct subfamily Amphicynodontinae, including Parictis (late Eocene to early middle Miocene, 38–18 Mya) and the slightly younger Allocyon (early Oligocene, 34–30 Mya), both from North America. These animals looked very different from today's bears, being small and raccoon-like in overall appearance, with diets perhaps more similar to that of a badger. Parictis does not appear in Eurasia and Africa until the Miocene. It is unclear whether late-Eocene ursids were also present in Eurasia, although faunal exchange across the Bering land bridge may have been possible during a major sea level low stand as early as the late Eocene (about 37 Mya) and continuing into the early Oligocene. European genera morphologically very similar to Allocyon, and to the much younger American Kolponomos (about 18 Mya), are known from the Oligocene, including Amphicticeps and Amphicynodon. Various lines of morphological evidence link amphicynodontines with pinnipeds, as both groups were semi-aquatic, otter-like mammals. In addition to the support of the pinniped–amphicynodontine clade, other morphological and some molecular evidence supports bears being the closest living relatives to pinnipeds. The raccoon-sized, dog-like Cephalogale is the oldest-known member of the subfamily Hemicyoninae, which first appeared during the middle Oligocene in Eurasia about 30 Mya. The subfamily includes the younger genera Phoberocyon (20–15 Mya) and Plithocyon (15–7 Mya). A Cephalogale-like species gave rise to the genus Ursavus during the early Oligocene (30–28 Mya); this genus proliferated into many species in Asia and is ancestral to all living bears. Species of Ursavus subsequently entered North America, together with Amphicynodon and Cephalogale, during the early Miocene (21–18 Mya). Members of the living lineages of bears diverged from Ursavus between 15 and 20 Mya, likely via the species Ursavus elmensis. Based on genetic and morphological data, the Ailuropodinae (pandas) were the first to diverge from other living bears about 19 Mya, although no fossils of this group have been found before about 11 Mya. 
The New World short-faced bears (Tremarctinae) differentiated from Ursinae following a dispersal event into North America during the mid-Miocene (about 13 Mya). They invaded South America (≈2.5 or 1.2 Ma) following formation of the Isthmus of Panama. Their earliest fossil representative is Plionarctos in North America (c. 10–2 Ma). This genus is probably the direct ancestor to the North American short-faced bears (genus Arctodus), the South American short-faced bears (Arctotherium), and the spectacled bears, Tremarctos, represented by both an extinct North American species (T. floridanus), and the lone surviving representative of the Tremarctinae, the South American spectacled bear (T. ornatus). The subfamily Ursinae experienced a dramatic proliferation of taxa about 5.3–4.5 Mya, coincident with major environmental changes; the first members of the genus Ursus appeared around this time. The sloth bear is a modern survivor of one of the earliest lineages to diverge during this radiation event (5.3 Mya); it took on its peculiar morphology, related to its diet of termites and ants, no later than the early Pleistocene. By 3–4 Mya, the species Ursus minimus appears in the fossil record of Europe; apart from its size, it was nearly identical to today's Asian black bear. It is likely ancestral to all bears within Ursinae, perhaps aside from the sloth bear. Two lineages evolved from U. minimus: the black bears (including the sun bear, the Asian black bear, and the American black bear); and the brown bears (which include the polar bear). Modern brown bears evolved from U. minimus via Ursus etruscus, which itself is ancestral to the extinct Pleistocene cave bear. Species of Ursinae have migrated repeatedly into North America from Eurasia as early as 4 Mya during the early Pliocene. The polar bear is the most recently evolved species and descended from a population of brown bears that became isolated in northern latitudes by glaciation 400,000 years ago. Phylogeny The relationship of the bear family with other carnivorans is shown in the following phylogenetic tree, which is based on the molecular phylogenetic analysis of six genes in Flynn (2005) with the musteloids updated following the multigene analysis of Law et al. (2018). Note that although they are called "bears" in some languages, red pandas and raccoons and their close relatives are not bears, but rather musteloids. There are two phylogenetic hypotheses on the relationships among extant and fossil bear species. One is that all species of bears are classified in seven subfamilies, as adopted here and in related articles: Amphicynodontinae, Hemicyoninae, Ursavinae, Agriotheriinae, Ailuropodinae, Tremarctinae, and Ursinae. Below is a cladogram of the subfamilies of bears after McLellan and Reiner (1992) and Qiu et al. (2014): The second, alternative phylogenetic hypothesis was implemented by McKenna et al. (1997) to classify all the bear species into the superfamily Ursoidea, with Hemicyoninae and Agriotheriinae being classified in the family "Hemicyonidae". Amphicynodontinae under this classification were classified as stem-pinnipeds in the superfamily Phocoidea. In the McKenna and Bell classification, both bears and pinnipeds are in a parvorder of carnivoran mammals known as Ursida, along with the extinct bear dogs of the family Amphicyonidae. Below is the cladogram based on the McKenna and Bell (1997) classification: Physical characteristics Size The bear family includes the most massive extant terrestrial members of the order Carnivora. 
The polar bear is considered to be the largest extant species, with adult males weighing and measuring in total length. The smallest species is the sun bear, which ranges in weight and in length. Prehistoric North and South American short-faced bears were the largest species known to have lived. The latter is estimated to have weighed and stood tall. Body weight varies throughout the year in bears of temperate and arctic climates, as they build up fat reserves in the summer and autumn and lose weight during the winter. Morphology Bears are generally bulky and robust animals with short tails. They are sexually dimorphic with regard to size, with males typically being larger. Larger species tend to show increased levels of sexual dimorphism in comparison to smaller species. Relying as they do on strength rather than speed, bears have relatively short limbs with thick bones to support their bulk. The shoulder blades and the pelvis are correspondingly massive. The limbs are much straighter than those of the big cats as there is no need for them to flex in the same way due to the differences in their gait. The strong forelimbs are used to catch prey, excavate dens, dig out burrowing animals, turn over rocks and logs to locate prey, and club large creatures. Unlike most other land carnivorans, bears are plantigrade. They distribute their weight toward the hind feet, which makes them look lumbering when they walk. They are capable of bursts of speed but soon tire, and as a result mostly rely on ambush rather than the chase. Bears can stand on their hind feet and sit up straight with remarkable balance. Their front paws are flexible enough to grasp fruit and leaves. Bears' non-retractable claws are used for digging, climbing, tearing, and catching prey. The claws on the front feet are larger than those on the back and may be a hindrance when climbing trees; black bears are the most arboreal of the bears, and have the shortest claws. Pandas are unique in having a bony extension on the wrist of the front feet which acts as a thumb, and is used for gripping bamboo shoots as the animals feed. Most mammals have agouti hair, with each individual hair shaft having bands of color corresponding to two different types of melanin pigment. Bears, however, have a single type of melanin and the hairs have a single color throughout their length, apart from the tip which is sometimes a different shade. The coat consists of long guard hairs, which form a protective shaggy covering, and short dense hairs which form an insulating layer trapping air close to the skin. The shaggy coat helps maintain body heat during winter hibernation and is shed in the spring leaving a shorter summer coat. Polar bears have hollow, translucent guard hairs which gain heat from the sun and conduct it to the dark-colored skin below. They have a thick layer of blubber for extra insulation, and the soles of their feet have a dense pad of fur. While bears tend to be uniform in color, some species may have markings on the chest or face and the giant panda has a bold black-and-white pelage. Bears have small rounded ears so as to minimize heat loss, but neither their hearing nor their sight is particularly acute. Unlike many other carnivorans, they have color vision, perhaps to help them distinguish ripe nuts and fruits. They are unique among carnivorans in not having touch-sensitive whiskers on the muzzle; however, they have an excellent sense of smell, better than that of the dog, or possibly any other mammal. 
They use smell for signalling to each other (either to warn off rivals or detect mates) and for finding food. Smell is the principal sense used by bears to locate most of their food, and they have excellent memories which helps them to relocate places where they have found food before. The skulls of bears are massive, providing anchorage for the powerful masseter and temporal jaw muscles. The canine teeth are large but mostly used for display, and the molar teeth flat and crushing. Unlike most other members of the Carnivora, bears have relatively undeveloped carnassial teeth, and their teeth are adapted for a diet that includes a significant amount of vegetable matter. Considerable variation occurs in dental formula even within a given species. This may indicate bears are still in the process of evolving from a mainly meat-eating diet to a predominantly herbivorous one. Polar bears appear to have secondarily re-evolved carnassial-like cheek teeth, as their diets have switched back towards carnivory. Sloth bears lack lower central incisors and use their protrusible lips for sucking up the termites on which they feed. The general dental formula for living bears is: . The structure of the larynx of bears appears to be the most basal of the caniforms. They possess air pouches connected to the pharynx which may amplify their vocalizations. Bears have a fairly simple digestive system typical for carnivorans, with a single stomach, short undifferentiated intestines and no cecum. Even the herbivorous giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes. Its ability to digest cellulose is ascribed to the microbes in its gut. Bears must spend much of their time feeding in order to gain enough nutrition from foliage. The panda, in particular, spends 12–15 hours a day feeding. Distribution and habitat Extant bears are found in sixty countries primarily in the Northern Hemisphere and are concentrated in Asia, North America, and Europe. An exception is the spectacled bear; native to South America, it inhabits the Andean region. The sun bear's range extends below the equator in Southeast Asia. The Atlas bear, a subspecies of the brown bear was distributed in North Africa from Morocco to Libya, but it became extinct around the 1870s. The most widespread species is the brown bear, which occurs from Western Europe eastwards through Asia to the western areas of North America. The American black bear is restricted to North America, and the polar bear is restricted to the Arctic Ocean. All the remaining species of bear are Asian. They occur in a range of habitats which include tropical lowland rainforest, both coniferous and broadleaf forests, prairies, steppes, montane grassland, alpine scree slopes, Arctic tundra and in the case of the polar bear, ice floes. Bears may dig their dens in hillsides or use caves, hollow logs and dense vegetation for shelter. Behavior and ecology Brown and American black bears are generally diurnal, meaning that they are active for the most part during the day, though they may forage substantially by night. Other species may be nocturnal, active at night, though female sloth bears with cubs may feed more at daytime to avoid competition from conspecifics and nocturnal predators. Bears are overwhelmingly solitary and are considered to be the most asocial of all the Carnivora. The only times bears are encountered in groups are mothers with young or occasional seasonal bounties of rich food (such as salmon runs). 
Fights between males can occur and older individuals may have extensive scarring, which suggests that maintaining dominance can be intense. With their acute sense of smell, bears can locate carcasses from several kilometres away. They use olfaction to locate other foods, encounter mates, avoid rivals and recognize their cubs. Feeding Most bears are opportunistic omnivores and consume more plant than animal matter, and appear to have evolved from an ancestor which was a low-protein macronutrient omnivore. They eat anything from leaves, roots, and berries to insects, carrion, fresh meat, and fish, and have digestive systems and teeth adapted to such a diet. At the extremes are the almost entirely herbivorous giant panda and the mostly carnivorous polar bear. However, all bears feed on any food source that becomes seasonally available. For example, Asiatic black bears in Taiwan consume large numbers of acorns when these are most common, and switch to ungulates at other times of the year. When foraging for plants, bears choose to eat them at the stage when they are at their most nutritious and digestible, typically avoiding older grasses, sedges and leaves. Hence, in more northern temperate areas, browsing and grazing is more common early in spring and later becomes more restricted. Knowing when plants are ripe for eating is a learned behavior. Berries may be foraged in bushes or at the tops of trees, and bears try to maximize the number of berries consumed versus foliage. In autumn, some bear species forage large amounts of naturally fermented fruits, which affects their behavior. Smaller bears climb trees to obtain mast (edible reproductive parts, such as acorns). Such masts can be very important to the diets of these species, and mast failures may result in long-range movements by bears looking for alternative food sources. Brown bears, with their powerful digging abilities, commonly eat roots. The panda's diet is over 99% bamboo, of 30 different species. Its strong jaws are adapted for crushing the tough stems of these plants, though they prefer to eat the more nutritious leaves. Bromeliads can make up to 50% of the diet of the spectacled bear, which also has strong jaws to bite them open. The sloth bear is not as specialized as polar bears and the panda, has lost several front teeth usually seen in bears, and developed a long, suctioning tongue to feed on the ants, termites, and other burrowing insects. At certain times of the year, these insects can make up 90% of their diets. Some individuals become addicted to sweets in garbage inside towns where tourism-related waste is generated throughout the year. Some species may raid the nests of wasps and bees for the honey and immature insects, in spite of stinging from the adults. Sun bears use their long tongues to lick up both insects and honey. Fish are an important source of food for some species, and brown bears in particular gather in large numbers at salmon runs. Typically, a bear plunges into the water and seizes a fish with its jaws or front paws. The preferred parts to eat are the brain and eggs. Small burrowing mammals like rodents may be dug out and eaten. The brown bear and both species of black bears sometimes take large ungulates, such as deer and bovids, mostly the young and weak. These animals may be taken by a short rush and ambush, though hiding young may be sniffed out and pounced on. The polar bear mainly preys on seals, stalking them from the ice or breaking into their dens. 
They primarily eat the highly digestible blubber. Large mammalian prey is typically killed with raw strength, including bites and paw swipes, and bears do not display the specialized killing methods of felids and canids. Predatory behavior in bears is typically taught to the young by the mother. Bears are prolific scavengers and kleptoparasites, stealing food caches from rodents and carcasses from other predators. For hibernating species, weight gain is important as it provides nourishment during winter dormancy. A brown bear can eat of food and gain of fat a day prior to entering its den. Communication Bears produce a number of vocal and non-vocal sounds. Tongue-clicking, grunting or chuffing may be made in cordial situations, such as between mothers and cubs or courting couples, while moaning, huffing, snorting or blowing air is made when an individual is stressed. Barking is produced during times of alarm, excitement or to give away the animal's position. Warning sounds include jaw-clicking and lip-popping, while teeth-chatters, bellows, growls, roars and pulsing sounds are made in aggressive encounters. Cubs may squeal, bawl, bleat or scream when in distress and make motor-like humming when comfortable or nursing. Bears sometimes communicate with visual displays such as standing upright, which exaggerates the individual's size. The chest markings of some species may add to this intimidating display. Staring is an aggressive act and the facial markings of spectacled bears and giant pandas may help draw attention to the eyes during agonistic encounters. Individuals may approach each other by stiff-legged walking with the head lowered. Dominance between bears is asserted by making a frontal orientation, showing the canine teeth, muzzle twisting and neck stretching. A subordinate may respond with a lateral orientation, by turning away and dropping the head and by sitting or lying down. Bears also communicate with their scent by urinating on or rubbing against trees and other objects. This is usually accompanied by clawing and biting the object. Bark may be spread around to draw attention to the marking post. Pandas establish territories by marking objects with urine and a waxy substance from their anal glands. Polar bears leave behind their scent in their tracks, which allows individuals to keep track of one another in the vast Arctic wilderness. Reproduction and development The mating system of bears has variously been described as a form of polygyny, promiscuity and serial monogamy. During the breeding season, males take notice of females in their vicinity and females become more tolerant of males. A male bear may visit a female continuously over a period of several days or weeks, depending on the species, to test her reproductive state. During this time period, males try to prevent rivals from interacting with their mate. Courtship may be brief, although in some Asian species, courting pairs may engage in wrestling, hugging, mock fighting and vocalizing. Ovulation is induced by mating, which can last up to 30 minutes depending on the species. Gestation typically lasts six to nine months, including delayed implantation, and litter size numbers up to four cubs. Giant pandas may give birth to twins but they can only suckle one young and the other is left to die. In northern-living species, birth takes place during winter dormancy. Cubs are born blind and helpless with at most a thin layer of hair, relying on their mother for warmth. 
The milk of the female bear is rich in fat and antibodies and cubs may suckle for up to a year after they are born. By two to three months, cubs can follow their mother outside the den. They usually follow her on foot, but sloth bear cubs may ride on their mother's back. Male bears play no role in raising young. Infanticide, where an adult male kills the cubs of another, has been recorded in polar bears, brown bears and American black bears but not in other species. Males kill young to bring the female into estrus. Cubs may flee and the mother defends them even at the cost of her life. In some species, offspring may become independent around the next spring, though some may stay until the female successfully mates again. Bears reach sexual maturity shortly after they disperse; at around three to six years depending on the species. Male Alaskan brown bears and polar bears may continue to grow until they are 11 years old. Lifespan may also vary between species. The brown bear can live an average of 25 years. Hibernation Bears of northern regions, including the American black bear and the grizzly bear, hibernate in the winter. During hibernation, the bear's metabolism slows down, its body temperature decreases slightly, and its heart rate slows from a normal value of 55 to just 9 beats per minute. Bears normally do not wake during their hibernation, and can go the entire period without eating, drinking, urinating, or defecating. A fecal plug is formed in the colon, and is expelled when the bear wakes in the spring. If they have stored enough body fat, their muscles remain in good condition, and their protein maintenance requirements are met from recycling waste urea. Female bears give birth during the hibernation period, and are roused when doing so. Mortality Bears do not have many predators. The most important are humans, and as they started cultivating crops, they increasingly came in conflict with the bears that raided them. Since the invention of firearms, people have been able to kill bears with greater ease. Felids like the tiger may also prey on bears, particularly cubs, which may also be threatened by canids. Bears are parasitized by eighty species of parasites, including single-celled protozoans and gastro-intestinal worms, and nematodes and flukes in their heart, liver, lungs and bloodstream. Externally, they have ticks, fleas and lice. A study of American black bears found seventeen species of endoparasite including the protozoan Sarcocystis, the parasitic worm Diphyllobothrium mansonoides, and the nematodes Dirofilaria immitis, Capillaria aerophila, Physaloptera sp., Strongyloides sp. and others. Of these, D. mansonoides and adult C. aerophila were causing pathological symptoms. By contrast, polar bears have few parasites; many parasitic species need a secondary, usually terrestrial, host, and the polar bear's life style is such that few alternative hosts exist in their environment. The protozoan Toxoplasma gondii has been found in polar bears, and the nematode Trichinella nativa can cause a serious infection and decline in older polar bears. Bears in North America are sometimes infected by a Morbillivirus similar to the canine distemper virus. They are susceptible to infectious canine hepatitis (CAV-1), with free-living black bears dying rapidly of encephalitis and hepatitis. 
Relationship with humans Conservation In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market, though hunting is now banned, largely replaced by farming. The IUCN lists six bear species as vulnerable; even the two least concern species, the brown bear and the American black bear, are at risk of extirpation in certain areas. In general, these two species inhabit remote areas with little interaction with humans, and the main non-natural causes of mortality are hunting, trapping, road-kill and depredation. Laws have been passed in many areas of the world to protect bears from habitat destruction. Public perception of bears is often positive, as people identify with bears due to their omnivorous diets, their ability to stand on two legs, and their symbolic importance. Support for bear protection is widespread, at least in more affluent societies. The giant panda has become a worldwide symbol of conservation. The Sichuan Giant Panda Sanctuaries, which are home to around 30% of the wild panda population, gained a UNESCO World Heritage Site designation in 2006. Where bears raid crops or attack livestock, they may come into conflict with humans. In poorer rural regions, attitudes may be more shaped by the dangers posed by bears, and the economic costs they cause to farmers and ranchers. Attacks Several bear species are dangerous to humans, especially in areas where they have become used to people; elsewhere, they generally avoid humans. Injuries caused by bears are rare, but are widely reported. Bears may attack humans in response to being startled, in defense of young or food, or even for predatory reasons. Entertainment, hunting, food and folk medicine Bears in captivity have for centuries been used for entertainment. They have been trained to dance, and were kept for baiting in Europe from at least the 16th century. There were five bear-baiting gardens in Southwark, London, at that time; archaeological remains of three of these have survived. Across Europe, nomadic Romani bear handlers called Ursari lived by busking with their bears from the 12th century. Bears have been hunted for sport, food, and folk medicine. Their meat is dark and stringy, like a tough cut of beef. In Cantonese cuisine, bear paws are considered a delicacy. Bear meat should be cooked thoroughly, as it can be infected with the parasite Trichinella spiralis. The peoples of eastern Asia use bears' body parts and secretions (notably their gallbladders and bile) as part of traditional Chinese medicine. More than 12,000 bears are thought to be kept on farms in China, Vietnam, and South Korea for the production of bile. Trade in bear products is prohibited under CITES, but bear bile has been detected in shampoos, wine and herbal medicines sold in Canada, the United States and Australia. Cultural depictions Bears have been popular subjects in art, literature, folklore and mythology. The image of the mother bear was prevalent throughout societies in North America and Eurasia, based on the female's devotion and protection of her cubs. In many Native American cultures, the bear is a symbol of rebirth because of its hibernation and re-emergence. A widespread belief among cultures of North America and northern Asia associated bears with shaman; this may be based on the solitary nature of both. Bears have thus been thought to predict the future and shaman were believed to have been capable of transforming into bears. 
There is evidence of prehistoric bear worship, though this is disputed by archaeologists. It is possible that bear worship existed in early Chinese and Ainu cultures. The prehistoric Finns, Siberian peoples and more recently Koreans considered the bear as the spirit of their forefathers. Artio (Dea Artio in the Gallo-Roman religion) was a Celtic bear goddess. Evidence of her worship has notably been found at Bern, itself named for the bear. Her name is derived from the Celtic word for "bear", artos. In ancient Greece, the archaic cult of Artemis in bear form survived into Classical times at Brauron, where young Athenian girls passed an initiation rite as arktoi "she bears". The constellations of Ursa Major and Ursa Minor, the great and little bears, are named for their supposed resemblance to bears, from the time of Ptolemy. The nearby star Arcturus means "guardian of the bear", as if it were watching the two constellations. Ursa Major has been associated with a bear for as much as 13,000 years since Paleolithic times, in the widespread Cosmic Hunt myths. These are found on both sides of the Bering land bridge, which was lost to the sea some 11,000 years ago. Bears are popular in children's stories, including Winnie the Pooh, Paddington Bear, Gentle Ben and "The Brown Bear of Norway". An early version of "Goldilocks and the Three Bears", was published as "The Three Bears" in 1837 by Robert Southey, many times retold, and illustrated in 1918 by Arthur Rackham. The Hanna-Barbera character Yogi Bear has appeared in numerous comic books, animated television shows and films. The Care Bears began as greeting cards in 1982, and were featured as toys, on clothing and in film. Around the world, many children—and some adults—have teddy bears, stuffed toys in the form of bears, named after the American statesman Theodore Roosevelt when in 1902 he had refused to shoot an American black bear tied to a tree. Bears, like other animals, may symbolize nations. The Russian Bear has been a common national personification for Russia from the 16th century onward. Smokey Bear has become a part of American culture since his introduction in 1944, with his message "Only you can prevent forest fires". Organizations The International Association for Bear Research & Management, also known as the International Bear Association, and the Bear Specialist Group of the Species Survival Commission, a part of the International Union for Conservation of Nature focus on the natural history, management, and conservation of bears. Bear Trust International works for wild bears and other wildlife through four core program initiatives, namely Conservation Education, Wild Bear Research, Wild Bear Management, and Habitat Conservation. Specialty organizations for each of the eight species of bears worldwide include: Vital Ground, for the brown bear Moon Bears, for the Asiatic black bear Black Bear Conservation Coalition, for the North American black bear Polar Bears International, for the polar bear Bornean Sun Bear Conservation Centre, for the sun bear Wildlife SOS, for the sloth bear Andean Bear Conservation Project, for the Andean bear Chengdu Research Base of Giant Panda Breeding, for the giant panda
Biology and health sciences
Carnivora
https://en.wikipedia.org/wiki/Bald%20eagle
Bald eagle
The bald eagle (Haliaeetus leucocephalus) is a bird of prey found in North America. A sea eagle, it has two known subspecies and forms a species pair with the white-tailed eagle (Haliaeetus albicilla), which occupies the same niche as the bald eagle in the Palearctic. Its range includes most of Canada and Alaska, all of the contiguous United States, and northern Mexico. It is found near large bodies of open water with an abundant food supply and old-growth trees for nesting. The bald eagle is an opportunistic feeder which subsists mainly on fish, which it swoops down upon and snatches from the water with its talons. It builds the largest nest of any North American bird and the largest tree nests ever recorded for any animal species, up to deep, wide, and in weight. Sexual maturity is attained at the age of four to five years. Bald eagles are not bald; the name derives from an older meaning of the word, "white headed". The adult is mainly brown with a white head and tail. The sexes are identical in plumage, but females are about 25 percent larger than males. The yellow beak is large and hooked. The plumage of the immature is brown. The bald eagle is the national symbol of the United States and appears on its seal. In the late 20th century it was on the brink of extirpation in the contiguous United States. Populations have since recovered, and the species' status was upgraded from "endangered" to "threatened" in 1995 and removed from the list altogether in 2007. Taxonomy The bald eagle is placed in the genus Haliaeetus (sea eagles), and gets both its common and specific scientific names from the distinctive appearance of the adult's head. Bald in the English name is from an older usage meaning "having white on the face or head" rather than "hairless", referring to the white head feathers contrasting with the darker body. The genus name is Neo-Latin: Haliaeetus (from the ), and the specific name, leucocephalus, is Latinized () and (). The bald eagle was one of the many species originally described by Carl Linnaeus in his 18th-century work Systema Naturae, under the name Falco leucocephalus. There are two recognized subspecies of bald eagle: H. l. leucocephalus (Linnaeus, 1766) is the nominate subspecies. It is found in the southern United States and Baja California Peninsula. H. l. washingtoniensis (Audubon, 1827), synonym H. l. alascanus (Townsend, 1897), the northern subspecies, is larger than southern nominate leucocephalus. It is found in the northern United States, Canada and Alaska. The bald eagle forms a species pair with the white-tailed eagle of Eurasia. This species pair consists of a white-headed and a tan-headed species of roughly equal size; the white-tailed eagle also has overall somewhat paler brown body plumage. The two species fill the same ecological niche in their respective ranges. The pair diverged from other sea eagles at the beginning of the Early Miocene (c. 10 Ma BP) at the latest, but possibly as early as the Early/Middle Oligocene, 28 Ma BP, if the most ancient fossil record is correctly assigned to this genus. Description The plumage of an adult bald eagle is evenly dark brown with a white head and tail. The tail is moderately long and slightly wedge-shaped. Males and females are identical in plumage coloration, but sexual dimorphism is evident in the species, in that females are 25% larger than males. The beak, feet and irises are bright yellow. The legs are feather-free, and the toes are short and powerful with large talons. 
The highly developed talon of the hind toe is used to pierce the vital areas of prey while it is held immobile by the front toes. The beak is large and hooked, with a yellow cere. The adult bald eagle is unmistakable in its native range. The closely related African fish eagle (Haliaeetus vocifer) (from far outside the bald eagle's range) also has a brown body (albeit of somewhat more rufous hue), white head and tail, but differs from the bald eagle in having a white chest and black tip to the bill. The plumage of the immature is a dark brown overlaid with messy white streaking until the fifth (rarely fourth, very rarely third) year, when it reaches sexual maturity. Immature bald eagles are distinguishable from the golden eagle (Aquila chrysaetos), the only other very large, non-vulturine raptorial bird in North America, in that the former has a larger, more protruding head with a larger beak, straighter edged wings which are held flat (not slightly raised) and with a stiffer wing beat and feathers which do not completely cover the legs. When seen well, the golden eagle is distinctive in plumage with a more solid warm brown color than an immature bald eagle, with a reddish-golden patch to its nape and (in immature birds) a highly contrasting set of white squares on the wing. The bald eagle has sometimes been considered the largest true raptor (accipitrid) in North America. The only larger species of raptor-like bird is the California condor (Gymnogyps californianus), a New World vulture which today is not generally considered a taxonomic ally of true accipitrids. However, the golden eagle, averaging and in wing chord length in its American race (Aquila chrysaetos canadensis), is merely lighter in mean body mass and exceeds the bald eagle in mean wing chord length by around . Additionally, the bald eagle's close cousins, the relatively longer-winged but shorter-tailed white-tailed eagle and the overall larger Steller's sea eagle (Haliaeetus pelagicus), may, rarely, wander to coastal Alaska from Asia. The bald eagle has a body length of . Typical wingspan is between and mass is normally between . Females are about 25% larger than males, averaging as much as , and against the males' average weight of . The size of the bird varies by location and generally corresponds with Bergmann's rule: the species increases in size further away from the equator and the tropics. For example, eagles from South Carolina average in mass and in wingspan, smaller than their northern counterparts. One field guide in Florida listed similarly small sizes for bald eagles there, at about . Of intermediate size, 117 migrant bald eagles in Glacier National Park were found to average but this was mostly (possibly post-dispersal) juvenile eagles, with 6 adults here averaging . Wintering eagles in Arizona (winter weights are usually the highest of the year since, like many raptors, they spend the highest percentage of time foraging during winter) were found to average . The largest eagles are from Alaska, where large females may weigh more than and span across the wings. A survey of adult weights in Alaska showed that females there weighed on average , respectively, and males weighed against immatures which averaged and in the two sexes. An Alaskan adult female eagle that was considered outsized weighed some . R.S. Palmer listed a record from 1876 in Wyoming County, New York of an enormous adult bald eagle that was shot and reportedly scaled . 
Among standard linear measurements, the wing chord is , the tail is long, and the tarsus is . The culmen reportedly ranges from , while the measurement from the gape to the tip of the bill is . The bill size is unusually variable: Alaskan eagles can have up to twice the bill length of birds from the southern United States (Georgia, Louisiana, Florida), with means including both sexes of and in culmen length, respectively, from these two areas. The call consists of weak staccato, chirping whistles, kleek kik ik ik ik, somewhat similar in cadence to a gull's call. The calls of young birds tend to be more harsh and shrill than those of adults. Range The bald eagle's natural range covers most of North America, including most of Canada, all of the continental United States, and northern Mexico. It is the only sea eagle endemic to North America. Occupying varied habitats from the bayous of Louisiana to the Sonoran Desert and the eastern deciduous forests of Quebec and New England, northern birds are migratory, while southern birds are resident, remaining on their breeding territory all year. At minimum population, in the 1950s, it was largely restricted to Alaska, the Aleutian Islands, northern and eastern Canada, and Florida. From 1966 to 2015 bald eagle numbers increased substantially throughout the species' winter and breeding ranges, and as of 2018 the species nests in every continental state and province in the United States and Canada. In March 2024, bald eagles were found nesting in Toronto for the first time. The majority of bald eagles in Canada are found along the British Columbia coast while large populations are found in the forests of Alberta, Saskatchewan, Manitoba and Ontario. Bald eagles also congregate in certain locations in winter. From November until February, one to two thousand birds winter in Squamish, British Columbia, about halfway between Vancouver and Whistler. The birds primarily gather along the Squamish and Cheakamus Rivers, attracted by the salmon spawning in the area. Similar congregations of wintering bald eagles at open lakes and rivers, wherein fish are readily available for hunting or scavenging, are observed in the northern United States. It has occurred as a vagrant twice in Ireland; a juvenile was shot illegally in Fermanagh on January 11, 1973 (misidentified at first as a white-tailed eagle), and an exhausted juvenile was captured near Castleisland, County Kerry on November 15, 1987. There is also a record of it from Llyn Coron, Anglesey, in the United Kingdom, from October 17, 1978; the provenance of this individual eagle has remained in dispute. 
If nesting trees are in standing water such as in a mangrove swamp, the nest can be located fairly low, at as low as above the ground. In a more typical tree standing on dry ground, nests may be located from in height. In Chesapeake Bay, nesting trees averaged in diameter and in total height, while in Florida, the average nesting tree stands high and is in diameter. Trees used for nesting in the Greater Yellowstone area average high. Trees or forests used for nesting should have a canopy cover of no more than 60% and no less than 20%, and be in close proximity to water. Most nests have been found within of open water. The greatest distance from open water recorded for a bald eagle nest was over , in Florida. Bald eagle nests are often very large in order to compensate for the size of the birds. The largest recorded nest was found in Florida in 1963, and was measured at nearly 10 feet wide and 20 feet deep. In Florida, nesting habitats often consist of mangrove swamps, the shorelines of lakes and rivers, pinelands, seasonally flooded flatwoods, hardwood swamps, and open prairies and pastureland with scattered tall trees. Favored nesting trees in Florida are slash pines (Pinus elliottii), longleaf pines (P. palustris), loblolly pines (P. taeda) and cypress trees, except in southern coastal areas, where mangroves are usually used. In Wyoming, groves of mature cottonwoods or tall pines found along streams and rivers are typical bald eagle nesting habitats. Wyoming eagles may inhabit habitat types ranging from large, old-growth stands of ponderosa pines (Pinus ponderosa) to narrow strips of riparian trees surrounded by rangeland. In Southeast Alaska, Sitka spruce (Picea sitchensis) provided 78% of the nesting trees used by eagles, followed by hemlocks (Tsuga) at 20%. Increasingly, eagles nest in human-made reservoirs stocked with fish. The bald eagle is usually quite sensitive to human activity while nesting, and is found most commonly in areas with minimal human disturbance. It chooses sites more than from low-density human disturbance and more than from medium- to high-density human disturbance. However, bald eagles will occasionally nest in large estuaries or secluded groves within major cities, such as Hardtack Island on the Willamette River in Portland, Oregon, or John Heinz National Wildlife Refuge at Tinicum in Philadelphia, Pennsylvania, which are surrounded by a great quantity of human activity. Even more contrary to the usual sensitivity to disturbance, a family of bald eagles moved to the Harlem neighborhood in New York City in 2010. While wintering, bald eagles tend to be less habitat and disturbance sensitive. They will commonly congregate at spots with plentiful perches and waters with plentiful prey and (in northern climes) partially unfrozen waters. Alternately, non-breeding or wintering bald eagles, particularly in areas with a lack of human disturbance, spend their time in various upland, terrestrial habitats sometimes quite far away from waterways. In the northern half of North America (especially the interior portion), this terrestrial inhabitance by bald eagles tends to be especially prevalent because unfrozen water may not be accessible. Upland wintering habitats often consist of open habitats with concentrations of medium-sized mammals, such as prairies, meadows or tundra, or open forests with regular carrion access. Behavior The bald eagle is a powerful flier, and soars on thermal convection currents. 
It reaches speeds of when gliding and flapping, and about while carrying fish. Its dive speed is between , though it seldom dives vertically. Despite being morphologically less well adapted to fast flight than golden eagles (especially during dives), the bald eagle is considered surprisingly maneuverable in flight. Bald eagles have also been recorded catching up to and then swooping under geese in flight, turning over and thrusting their talons into the other bird's breast. It is partially migratory, depending on location. If its territory has access to open water, it remains there year-round, but if the body of water freezes during the winter, making it impossible to obtain food, it migrates to the south or to the coast. A number of populations are subject to post-breeding dispersal, mainly in juveniles; Florida eagles, for example, will disperse northwards in the summer. The bald eagle selects migration routes which take advantage of thermals, updrafts, and food resources. During migration, it may ascend in a thermal and then glide down, or may ascend in updrafts created by the wind against a cliff or other terrain. Migration generally takes place during the daytime, usually between the local hours of 8:00 a.m. and 6:00 p.m., when thermals are produced by the sun. Diet and feeding The bald eagle is an opportunistic carnivore with the capacity to consume a great variety of prey. Fish often comprise most of the eagle's diet throughout its range. In 20 food habit studies across the species' range, fish comprised 56% of the diet of nesting eagles, birds 28%, mammals 14% and other prey 2%. More than 400 species are known to be included in the bald eagle's prey spectrum, far more than its ecological equivalent in the Old World, the white-tailed eagle, is known to take. Despite its considerably lower population, the bald eagle may come in second amongst all North American accipitrids, slightly behind only the red-tailed hawk, in number of prey species recorded. Behavior To hunt fish, the eagle swoops down over the water and snatches the fish out of the water with its talons. They eat by holding the fish in one claw and tearing the flesh with the other. Eagles have structures on their toes called spicules that allow them to grasp fish. Ospreys also have this adaptation. Bird prey may occasionally be attacked in flight, with prey up to the size of Canada geese attacked and killed in mid-air. It has been estimated that the bald eagle's gripping power (pounds per square inch) is ten times greater than that of a human. Bald eagles can fly with fish at least equal to their own weight, but if the fish is too heavy to lift, the eagle may be dragged into the water. Bald eagles can swim, but in some cases, they drag their catch ashore with their talons. Still, some eagles drown or succumb to hypothermia. Many sources claim that bald eagles, like all large eagles, cannot normally take flight carrying prey more than half of their own weight unless aided by favorable wind conditions. On numerous occasions, when large prey such as mature salmon or geese is attacked, eagles have been seen to make contact and then drag the prey in a strenuously labored, low flight over the water to a bank, where they then finish off and dismember the prey. When food is abundant, an eagle can gorge itself by storing up to of food in a pouch in the throat called a crop. Gorging allows the bird to fast for several days if food becomes unavailable. 
Occasionally, bald eagles may hunt cooperatively when confronting prey, especially relatively large prey such as jackrabbits or herons, with one bird distracting potential prey, while the other comes behind it in order to ambush it. While hunting waterfowl, bald eagles repeatedly fly at a target and cause it to dive repeatedly, hoping to exhaust the victim so it can be caught (white-tailed eagles have been recorded hunting waterfowl in the same way). When hunting concentrated prey, a successful catch often results in the hunting eagle being pursued by other eagles and needing to find an isolated perch for consumption if it is able to carry it away successfully. They obtain much of their food as carrion or via a practice known as kleptoparasitism, by which they steal prey away from other predators. Due to their dietary habits, bald eagles are frequently viewed in a negative light by humans. Thanks to their superior foraging ability and experience, adults are generally more likely to hunt live prey than immature eagles, which often obtain their food from scavenging. They are not very selective about the condition or origin of a carcass, whether it was provided by humans, other animals, auto accidents or natural causes, but will avoid eating carrion where disturbances from humans are a regular occurrence. They will scavenge carcasses up to the size of whales, though carcasses of ungulates and large fish are seemingly preferred. Carcasses of congregated wintering waterfowl are frequently exploited by immature eagles in harsh winter weather. Bald eagles also may sometimes feed on material scavenged or stolen from campsites and picnics, as well as garbage dumps (dump usage is habitual mainly in Alaska) and fish-processing plants. Fish In Southeast Alaska, fish comprise approximately 66% of the year-round diet of bald eagles and 78% of the prey brought to the nest by the parents. Eagles living in the Columbia River Estuary in Oregon were found to rely on fish for 90% of their dietary intake. At least 100 species of fish have been recorded in the bald eagle's diet. From observation in the Columbia River, 58% of the fish were caught alive by the eagle, 24% were scavenged as carcasses and 18% were pirated away from other animals. In the Pacific Northwest, spawning trout and salmon provide most of the bald eagles' diet from late summer throughout fall. Though bald eagles occasionally catch live salmon, they usually scavenge spawned salmon carcasses. Southeast Alaskan eagles largely prey on pink salmon (Oncorhynchus gorbuscha), coho salmon (O. kisutch) and, more locally, sockeye salmon (O. nerka). Chinook salmon (O. tshawytscha), due to their large size ( average adult size), are probably taken only as carrion, and a single carcass can attract several eagles. Also important in the estuaries and shallow coastlines of southern Alaska are Pacific herring (Clupea pallasii), Pacific sand lance (Ammodytes hexapterus) and eulachon (Thaleichthys pacificus). In Oregon's Columbia River Estuary, the most significant prey species were largescale suckers (Catostomus macrocheilus) (17.3% of the prey selected there), American shad (Alosa sapidissima; 13%) and common carp (Cyprinus carpio; 10.8%). Eagles living in the Chesapeake Bay in Maryland were found to subsist largely on American gizzard shad (Dorosoma cepedianum), threadfin shad (Dorosoma petenense) and white bass (Morone chrysops). 
Floridian eagles have been reported to prey on catfish, most prevalently the brown bullhead (Ameiurus nebulosus) and any species in the genus Ictalurus, as well as mullet, trout, needlefish, and eels. Chain pickerels (Esox niger) and white suckers (Catostomus commersonii) are frequently taken in interior Maine. Wintering eagles on the Platte River in Nebraska preyed mainly on American gizzard shads and common carp. Bald eagles are also known to eat the following fish species: rainbow trout (Oncorhynchus mykiss), white catfish (Ameiurus catus), rock greenling (Hexagrammos lagocephalus), Pacific cod (Gadus macrocephalus), Atka mackerel (Pleurogrammus monopterygius), largemouth bass (Micropterus salmoides), northern pike (Esox lucius), striped bass (Morone saxatilis), dogfish sharks (Squalidae sp.) and blue walleye (Sander vitreus). Fish taken by bald eagles vary in size, but bald eagles take larger fish than other piscivorous birds in North America, typically ranging from , with a preference for fish. When experimenters offered fish of different sizes in the breeding season around Lake Britton in California, fish measuring were taken 71.8% of the time by parent eagles while fish measuring were chosen only 25% of the time. At nests around Lake Superior, the remains of fish (mostly suckers) were found to average in total length. In the Columbia River estuary, most fish preyed on by eagles were estimated to measure less than , but larger fish between or even exceeding in length were also taken, especially during the non-breeding seasons. They can take fish weighing up to at least twice their own weight, such as large mature salmon, carp, or even muskellunge (Esox masquinongy), by gripping the catch in their talons and dragging it ashore. Much larger marine fish such as Pacific halibut (Hippoglossus stenolepis) and lemon sharks (Negaprion brevirostris) have been recorded among bald eagle prey, though they are probably only taken as young, as small newly mature fish, or as carrion. Benthic fishes such as catfish are usually consumed after they die and float to the surface, though, while temporarily swimming in the open, they may be more vulnerable to predation than most fish since their eyes focus downwards. Bald eagles also regularly exploit water turbines, which produce battered, stunned or dead fish that are easily consumed. Predators that leave behind scraps of the fish they kill, such as brown bears (Ursus arctos), gray wolves (Canis lupus) and red foxes (Vulpes vulpes), may be habitually followed so that their kills can be scavenged. Once North Pacific salmon die off after spawning, local bald eagles usually eat salmon carcasses almost exclusively. Eagles in Washington need to consume of fish each day for survival, with adults generally consuming more than juveniles and thus reducing potential energy deficiency and increasing survival during winter.
Birds
Behind fish, the next most significant prey base for bald eagles is other waterbirds. The contribution of such birds to the eagle's diet is variable, depending on the quantity and availability of fish near the water's surface. Waterbirds can seasonally comprise from 7% to 80% of the prey selection for eagles in certain localities. Overall, birds are the most diverse group in the bald eagle's prey spectrum, with 200 prey species recorded. 
Bird species most preferred as prey by eagles tend to be medium-sized, such as western grebes (Aechmophorus occidentalis), mallards (Anas platyrhynchos), and American coots (Fulica americana), as such prey is relatively easy for the much larger eagles to catch and fly with. American herring gulls (Larus smithsonianus) are the favored avian prey species for eagles living around Lake Superior. Black ducks (Anas rubripes), common eiders (Somateria mollissima), and double-crested cormorants (Phalacrocorax auritus) are also frequently taken in coastal Maine, and the velvet scoter (Melanitta fusca) was the dominant prey on San Miguel Island. Due to easy accessibility and the lack of formidable nest defense against eagles by such species, bald eagles are capable of preying on such seabirds at all ages, from eggs to mature adults, and they can effectively cull large portions of a colony. Along some portions of the North Pacific coastline, bald eagles, which had historically preyed mainly on kelp-dwelling fish and supplementally on sea otter (Enhydra lutris) pups, are now preying mainly on seabird colonies, since both the fish (possibly due to overfishing) and the otters (cause unknown) have had steep population declines, causing concern for seabird conservation. Because of this more extensive predation, some biologists have expressed concern that murres are heading for a "conservation collision" due to heavy eagle predation. Eagles have been confirmed to attack nocturnally active, burrow-nesting seabird species such as storm petrels and shearwaters by digging out their burrows and feeding on all animals they find inside. If a bald eagle flies close by, waterbirds will often fly away en masse, though they may seemingly ignore a perched eagle in other cases. When the birds fly away from a colony, their unprotected eggs and nestlings are exposed to scavengers such as gulls. While they usually target small to medium-sized seabirds, larger seabirds such as great black-backed gulls (Larus marinus), northern gannets (Morus bassanus) and brown pelicans (Pelecanus occidentalis) of all ages can be taken successfully by bald eagles. Similarly, large waterbirds are occasionally killed. Geese such as wintering emperor geese (Chen canagica) and snow geese (C. caerulescens), which gather in large groups, sometimes become regular prey. Smaller Ross's geese (Anser rossii) are also taken, as well as large-sized Canada geese (Branta canadensis). Predation on the largest subspecies (Branta canadensis maxima) has been reported. Other large waterbird prey includes common loons (Gavia immer) of all ages. Large wading birds can also fall prey to bald eagles. For great blue herons (Ardea herodias), bald eagles are their only serious enemy at any age. Slightly larger sandhill cranes (Grus canadensis) can be taken as well. While adult whooping cranes (Grus americana) are too large and formidable, their chicks can fall prey to bald eagles. They even occasionally prey on adult tundra swans (Cygnus columbianus). Young trumpeter swans (Cygnus buccinator) are also taken, and an unsuccessful attack on an adult swan has been photographed. Bald eagles have occasionally been recorded killing other raptors. In some cases, these may be attacks of competition or kleptoparasitism on rival species that end with the consumption of the dead victim. Nine species of other accipitrids and owls are known to have been preyed upon by bald eagles. 
Owl prey species have ranged in size from western screech-owls (Megascops kennicottii) to snowy owls (Bubo scandiacus). Larger diurnal raptors known to have fallen victim to bald eagles have included red-tailed hawks (Buteo jamaicensis), peregrine falcons (Falco peregrinus), northern goshawks (Accipiter gentilis), ospreys (Pandion haliaetus) and black (Coragyps atratus) and turkey vultures (Cathartes aura).
Mammals
Mammalian prey is generally taken less frequently than fish or avian prey. However, in some regions, such as landlocked areas of North America, wintering bald eagles may become habitual predators of medium-sized mammals that occur in colonies or local concentrations, such as prairie dogs (Cynomys sp.) and jackrabbits (Lepus sp.). Bald eagles in Seedskadee National Wildlife Refuge often hunt in pairs to catch cottontails, jackrabbits and prairie dogs. They can attack and prey on rabbits and hares of nearly any size, from marsh rabbits (Sylvilagus palustris) to black-tailed and white-tailed jackrabbits (Lepus californicus and L. townsendii) and Arctic hares (Lepus arcticus). In the San Luis Valley, white-tailed jackrabbits can be important prey. Additionally, rodents such as montane voles (Microtus montanus), brown rats (Rattus norvegicus), and various squirrels are taken as supplementary prey. Larger rodents such as muskrats (Ondatra zibethicus), young or small adult nutrias (Myocastor coypus) and groundhogs (Marmota monax) are also preyed upon. Even American porcupines (Erethizon dorsatum) are reportedly attacked and killed. Where available, seal colonies can provide a great deal of food. On Protection Island, Washington, they commonly feed on harbor seal (Phoca vitulina) afterbirths, stillborn pups and sickly seal pups. Similarly, bald eagles in Alaska readily prey on sea otter (Enhydra lutris) pups. Small to medium-sized terrestrial mammalian carnivores are occasionally taken. Mustelids, including American martens (Martes americana), American minks (Neogale vison), and the larger fisher (Pekania pennanti), are known to be hunted. Foxes are also taken, including island foxes (Urocyon littoralis), Arctic foxes (Vulpes lagopus), and gray foxes (Urocyon cinereoargenteus). Although fox farmers have claimed that bald eagles prey heavily on young and adult free-range Arctic foxes, such predation events are sporadic. In one instance, two bald eagles fed upon a red fox (Vulpes vulpes) that had tried to cross a frozen Delaware Lake. Other medium-sized carnivorans such as striped skunks (Mephitis mephitis), American hog-nosed skunks (Conepatus leuconotus), and common raccoons (Procyon lotor) are taken, as well as domestic cats (Felis catus) and dogs (Canis familiaris). Other wild mammalian prey includes deer fawns, such as those of white-tailed deer (Odocoileus virginianus) and Sitka deer (Odocoileus hemionus sitkensis), which weigh around and can be taken alive by bald eagles. In one instance, a bald eagle was observed carrying a mule deer (Odocoileus hemionus) fawn. Additionally, Virginia opossums (Didelphis virginiana) can be preyed upon, though such events are rare due to the opossums' nocturnal habits. Together with the golden eagle, bald eagles are occasionally accused of preying on livestock, especially sheep (Ovis aries). There are a handful of proven cases of lamb predation by bald eagles, with some specimens weighing up to . Still, they are much less likely to attack a healthy lamb than a golden eagle is. Both species prefer native, wild prey and are unlikely to cause any extensive detriment to human livelihoods. 
There is one case of a bald eagle killing and feeding on an adult, pregnant ewe (it was then joined in eating the kill by at least three other eagles), which, weighing on average over , is much larger than any other known prey taken by this species.
Reptiles and other prey
Supplemental prey is readily taken given the opportunity. In some areas, reptiles may become regular prey, especially in warm areas such as Florida, where reptile diversity is high. Turtles are perhaps the most regularly hunted type of reptile. In coastal New Jersey, 14 of 20 studied eagle nests included remains of turtles. The main species found were common musk turtles (Sternotherus odoratus), diamondback terrapins (Malaclemys terrapin) and juvenile common snapping turtles (Chelydra serpentina). In these New Jersey nests, mainly subadults and small adults were taken, ranging in carapace length from . Similarly, many turtles were recorded in the diet in the Chesapeake Bay. In Texas, softshell turtles are the most frequently taken prey, and a large number of Barbour's map turtles are taken in Torreya State Park. Other reptilian and amphibian prey includes southern alligator lizards (Elgaria multicarinata), snakes such as garter snakes and rattlesnakes, and greater sirens (Siren lacertina). Invertebrates are occasionally taken. In Alaska, eagles feed on sea urchins (Strongylocentrotus sp.), chitons, mussels, and crabs. Various other invertebrates, such as land snails, abalones, bivalves, periwinkles, blue mussels, squid, and starfish, are taken as well.
Interspecific predatory relationships
When competing for food, eagles will usually dominate other fish-eaters and scavengers, aggressively displacing mammals such as coyotes (Canis latrans) and foxes, and birds such as corvids, gulls, vultures and other raptors. Occasionally, coyotes, bobcats (Lynx rufus) and domestic dogs (Canis familiaris) can displace eagles from carrion, usually the less confident immature birds, as has been recorded in Maine. Bald eagles are less active, bold predators than golden eagles and get relatively more of their food as carrion and from kleptoparasitism (although it is now generally thought that golden eagles eat more carrion than was previously assumed). However, the two species are roughly equal in size, aggressiveness and physical strength, and so competitions can go either way. Neither species is known to be dominant, and the outcome depends on the size and disposition of the individual eagles involved. Wintering bald and golden eagles in Utah both sometimes won conflicts, though in one recorded instance a single bald eagle successfully displaced two consecutive golden eagles from a kill. Though bald eagles face few natural threats, an unusual attacker comes in the form of the common loon (G. immer), which is also taken by eagles as prey. While common loons normally avoid conflict, they are highly territorial and will attack predators and competitors by stabbing at them with their knife-like bill; as the range of the bald eagle has increased following conservation efforts, these interactions have been observed on several occasions, including the death of a bald eagle in Maine that is presumed to have attacked a loon nest and then received a fatal puncture wound from one or both loon parents. The bald eagle is thought to be much more numerous in North America than the golden eagle, with the bald species estimated to number at least 150,000 individuals, about twice as many as the golden eagles estimated to live in North America. 
Due to this, bald eagles often outnumber golden eagles at attractive food sources. Despite the potential for contention between these animals, in New Jersey during winter, a golden eagle and numerous bald eagles were observed to hunt snow geese alongside each other without conflict. Similarly, both eagle species have been recorded, via video-monitoring, to feed on gut piles and carcasses of white-tailed deer (Odocoileus virginianus) in remote forest clearings in the eastern Appalachian Mountains without apparent conflict. Bald eagles are frequently mobbed by smaller raptors, due to their infrequent but unpredictable tendency to hunt other birds of prey. Many bald eagles are habitual kleptoparasites, especially in winters when fish are harder to come by. They have been recorded stealing fish from other predators such as ospreys, herons and even otters. They have also been recorded opportunistically pirating birds from peregrine falcons (Falco peregrinus), prairie dogs from ferruginous hawks (Buteo regalis) and even jackrabbits from golden eagles. When they approach scavengers such as dogs, gulls or vultures at carrion sites, they often attack them in an attempt to force them to disgorge their food. Healthy adult bald eagles are not preyed upon in the wild and are thus considered apex predators.
Reproduction
Bald eagles are sexually mature at four or five years of age. When they are old enough to breed, they often return to the area where they were born. Bald eagles have high mate fidelity and generally mate for life. However, if one pair member dies or disappears, the survivor will choose a new mate. A pair that has repeatedly failed in breeding attempts may split and look for new mates. Bald eagle courtship involves elaborate, spectacular calls and flight displays by the males. The flight includes swoops, chases, and cartwheels, in which they fly high, lock talons, and free-fall, separating just before hitting the ground. Usually, a territory defended by a mature pair will be of waterside habitat. Compared to most other raptors, which mostly nest in April or May, bald eagles are early breeders: nest building or reinforcing is often by mid-February, egg laying is often late February (sometimes during deep snow in the North), and incubation usually runs from mid-March to early May. Eggs hatch from mid-April to early May, and the young fledge from late June to early July. The nest is the largest of any bird in North America; it is used repeatedly over many years and, with new material added each year, may eventually be as large as deep, across and weigh . One nest in Florida was found to be deep, across, and to weigh . This nest is on record as the largest tree nest ever recorded for any animal. Usually nests are used for under five years, as they either collapse in storms or break the branches supporting them by their sheer weight. However, one nest in the Midwest was occupied continuously for at least 34 years. The nest is built of branches, usually in large trees found near water. When breeding where there are no trees, the bald eagle will nest on the ground, as has been recorded largely in areas isolated from terrestrial predators, such as Amchitka Island in Alaska. In Sonora, Mexico, eagles have been observed nesting on top of hecho cacti (Pachycereus pecten-aboriginum). Nests located on cliffs and rock pinnacles have been reported historically in California, Kansas, Nevada, New Mexico and Utah, but are currently verified to occur only in Alaska and Arizona. 
The eggs average about long, ranging from , and have a breadth of , ranging from . Eggs in Alaska averaged in mass, while in Saskatchewan they averaged . As with their ultimate body size, egg size tends to increase with distance from the equator. Eagles produce between one and three eggs per year, two being typical. Rarely, four eggs have been found in nests, but these may be exceptional cases of polygyny. Eagles in captivity have been capable of producing up to seven eggs. It is rare for all three chicks to successfully reach the fledgling stage. The oldest chick often bears the advantage of a larger size and louder voice, which tends to draw the parents' attention towards it. Occasionally, as is recorded in many large raptorial birds, the oldest sibling attacks and kills its younger sibling(s), especially early in the nesting period when their sizes differ the most. However, nearly half of known bald eagle nests produce two fledglings (more rarely three), unlike some other "eagle" species, such as some in the genus Aquila, in which a second fledgling is typically observed in less than 20% of nests despite two eggs typically being laid. Both the male and female take turns incubating the eggs, but the female does most of the sitting. The parent not incubating will hunt for food or look for nesting material during this stage. For the first two to three weeks of the nestling period, at least one adult is at the nest almost 100% of the time. After five to six weeks, the attendance of parents usually drops off considerably (with the parents often perching in trees nearby). A young eaglet can gain up to a day, the fastest growth rate of any North American bird. The young eaglets pick up and manipulate sticks, play tug of war with each other, practice holding things in their talons, and stretch and flap their wings. By eight weeks, the eaglets are strong enough to flap their wings, lift their feet off the nest platform, and rise in the air. The young fledge at anywhere from 8 to 14 weeks of age, though they will remain close to the nest and be attended to by their parents for a further 6 weeks. Juvenile eagles first start dispersing away from their parents about 8 weeks after they fledge. Variability in departure date is related to the effects of sex and hatching order on growth and development. For the next four years, immature eagles wander widely in search of food until they attain adult plumage and are eligible to reproduce. Male eagles have been observed killing and cannibalizing their chicks. In 2024, at the National Conservation Training Center in West Virginia, the NCTC's Eagle Cam recorded two bald eagle chicks being attacked and devoured by their father as soon as the mother departed from the nest. The NCTC noted in its statement on the incident that such behavior "has been observed in other nests and is not uncommon in birds of prey." On rare occasions, bald eagles have been recorded to adopt other raptor fledglings into their nests, as seen in 2017 with a pair of eagles in Shoal Harbor Migratory Bird Sanctuary near Sidney, British Columbia. The pair of eagles in question are believed to have carried a juvenile red-tailed hawk back to their nest, presumably as prey, whereupon the chick was accepted into the family by both the parents and the eagles' three nestlings. The hawk, nicknamed "Spunky" by biologists monitoring the nest, fledged successfully. 
Longevity and mortality
The average lifespan of bald eagles in the wild is around 20 years, with the oldest confirmed one having been 38 years of age. In captivity, they often live somewhat longer. In one instance, a captive individual in New York lived for nearly 50 years. As with size, the average lifespan of an eagle population appears to be influenced by its location and access to prey. As they are no longer heavily persecuted, adult mortality is quite low. In one study of Florida eagles, adult bald eagles reportedly had a 100% annual survival rate. In Prince William Sound in Alaska, adults had an annual survival rate of 88% even after the Exxon Valdez oil spill adversely affected eagles in the area. Of 1,428 individuals from across the range necropsied by the National Wildlife Health Center from 1963 to 1984, 329 (23%) eagles died from trauma, primarily impact with wires and vehicles; 309 (22%) died from gunshot; 158 (11%) died from poisoning; 130 (9%) died from electrocution; 68 (5%) died from trapping; 110 (8%) from emaciation; and 31 (2%) from disease; cause of death was undetermined in 293 (20%) of cases. In this study, 68% of mortality was human-caused. Today, eagle-shooting is believed to be considerably reduced due to the species' protected status. A U.S. Fish and Wildlife Service study of 1,490 bald eagle deaths from 1986 through 2017 in Michigan found that 532 (36%) died due to being struck by cars while scavenging roadkill and 176 (12%) died due to lead poisoning from ingesting fragments of lead ammunition and fishing gear present in carrion, with the proportion of both causes of death increasing significantly towards the end of the study period. Most non-human-related mortality involves nestlings or eggs. Around 50% of eagles survive their first year. However, in the Chesapeake Bay area, 100% of 39 radio-tagged nestlings survived to their first year. Nestling or egg fatalities may be due to nest collapses, starvation, sibling aggression or inclement weather. Another significant cause of egg and nestling mortality is predation. Nest predators include large gulls, corvids (including ravens, crows and magpies), wolverines (Gulo gulo), fishers (Pekania pennanti), red-tailed hawks, owls, other eagles, bobcats, American black bears (Ursus americanus) and raccoons. If food access is low, parental attendance at the nest may be lower because both parents may have to forage, thus resulting in less protection. Nestlings are usually exempt from predation by terrestrial carnivores that are poor tree-climbers, but Arctic foxes (Vulpes lagopus) occasionally snatched nestlings from ground nests on Amchitka Island in Alaska before they were extirpated from the island. The bald eagle will defend its nest fiercely from all comers and has even repelled attacks from bears, having been recorded knocking a black bear out of a tree when the latter tried to climb a tree holding nestlings.
Relationship with humans
Population decline and recovery
Once a common sight in much of the continent, the bald eagle was severely affected in the mid-20th century by a variety of factors, among them the thinning of egg shells attributed to use of the pesticide DDT. Bald eagles, like many birds of prey, were especially affected by DDT due to biomagnification. 
DDT itself was not lethal to the adult bird, but it interfered with their calcium metabolism, making them either sterile or unable to lay healthy eggs; many of their eggs were too brittle to withstand the weight of a brooding adult, making it nearly impossible for them to hatch. It is estimated that in the early 18th century the bald eagle population was 300,000–500,000, but by the 1950s there were only 412 nesting pairs in the 48 contiguous states of the US. Other factors in bald eagle population reductions were a widespread loss of suitable habitat, as well as both legal and illegal shooting. In 1930 a New York City ornithologist wrote that in the territory of Alaska in the previous 12 years approximately 70,000 bald eagles had been shot. Many of the hunters killed the bald eagles under the long-held beliefs that bald eagles grabbed young lambs and even children with their talons, yet the birds were innocent of most of these alleged acts of predation (lamb predation is rare, human predation is thought to be non-existent). Illegal shooting was described as "the leading cause of direct mortality in both adult and immature bald eagles" by the U.S. Fish and Wildlife Service in 1978. Leading causes of death in bald eagles include lead pollution, poisoning, collision with motor vehicles, and power-line electrocution. A study published in 2022 in the journal Science found that more than half of adult eagles across 38 US states suffered from lead poisoning. The primary cause is when eagles scavenge carcasses of animals shot by hunters. These are often tainted with lead shotgun pellets, rifle rounds, or fishing tackle. The species was first protected in the U.S. and Canada by the 1918 Migratory Bird Treaty, later extended to all of North America. The Bald and Golden Eagle Protection Act, approved by the U.S. Congress in 1940, protected the bald eagle and the golden eagle, prohibiting commercial trapping and killing of the birds. The bald eagle was declared an endangered species in the U.S. in 1967, and amendments to the 1940 act between 1962 and 1972 further restricted commercial uses and increased penalties for violators. Perhaps most significant in the species' recovery, in 1972, DDT was banned from usage in the United States due to the fact that it inhibited the reproduction of many birds. DDT was completely banned in Canada in 1989, though its use had been highly restricted since the late 1970s. With regulations in place and DDT banned, the eagle population rebounded. The bald eagle can be found in growing concentrations throughout the United States and Canada, particularly near large bodies of water. In the early 1980s, the estimated total population was 100,000 individuals, with 110,000–115,000 by 1992; the U.S. state with the largest resident population is Alaska, with about 40,000–50,000, with the next highest population the Canadian province of British Columbia with 20,000–30,000 in 1992. Obtaining a precise count of the bald eagle population is extremely difficult. The most recent data submitted by individual states was in 2006, when 9789 breeding pairs were reported. For some time, the stronghold breeding population of bald eagles in the lower 48 states was in Florida, where over a thousand pairs have held on while populations in other states were significantly reduced by DDT use. Today, the contiguous state with the largest number of breeding pairs of eagles is Minnesota with an estimated 1,312 pairs, surpassing Florida's most recent count of 1,166 pairs. 
Twenty-three of the 48 contiguous states, nearly half, now have at least 100 breeding pairs of bald eagles. In Washington State, there were only 105 occupied nests in 1980. That number increased by about 30 per year, so that by 2005 there were 840 occupied nests. That year was the last in which the Washington Department of Fish and Wildlife counted occupied nests. Further population increases in Washington may be limited by the availability of late winter food, particularly salmon. The bald eagle was officially removed from the U.S. federal government's list of endangered species on July 12, 1995, by the U.S. Fish & Wildlife Service, when it was reclassified from "endangered" to "threatened". On July 6, 1999, a proposal was initiated "To Remove the Bald Eagle in the Lower 48 States From the List of Endangered and Threatened Wildlife". It was de-listed on June 28, 2007. It has also been assessed as a species of least concern on the IUCN Red List. In the Exxon Valdez oil spill of 1989, an estimated 247 bald eagles were killed in Prince William Sound, though the local population returned to its pre-spill level by 1995. In some areas, the increase in eagles has led to decreases in other bird populations, and the eagles may be considered a pest.
Killing permits
In December 2016, the U.S. Fish and Wildlife Service proposed extending the permits issued to wind generation companies to allow them to kill up to 4,200 bald eagles per year without facing a penalty, four times the previous number. The permits would last 30 years, six times the previous 5-year term.
In captivity
Permits are required to keep bald eagles in captivity in the United States. Permits are primarily issued to public educational institutions, and the eagles that they show are permanently injured individuals that cannot be released to the wild. The facilities where eagles are kept must be equipped with adequate caging and staffed with workers experienced in the handling and care of eagles. The bald eagle can be long-lived in captivity if well cared for, but does not breed well even under the best conditions. In Canada and in England a license is required to keep bald eagles for falconry. Bald eagles cannot legally be kept for falconry in the United States, but a license may be issued in some jurisdictions to allow use of such eagles in birds-of-prey flight shows.
Cultural significance
The bald eagle is important in various Native American cultures and, as the national symbol of the United States, is prominent in seals and logos, coinage, postage stamps, and other items relating to the U.S. federal government.
Role in Native American culture
The bald eagle is a sacred bird in some North American cultures, and its feathers, like those of the golden eagle, are central to many religious and spiritual customs among Native Americans. Eagles are considered spiritual messengers between gods and humans by some cultures. Many pow wow dancers use the eagle claw as part of their regalia as well. Eagle feathers are often used in traditional ceremonies, particularly in the construction of regalia and as a part of fans, bustles and headdresses. In the Navajo tradition an eagle feather is represented as a protector; along with the feathers, Navajo medicine men use the leg and wing bones for ceremonial whistles. The Lakota, for instance, give an eagle feather as a symbol of honor to a person who achieves a task. In modern times, it may be given at an event such as a graduation from college. 
The Pawnee consider eagles symbols of fertility because their nests are built high off the ground and because they fiercely protect their young. The Choctaw consider the bald eagle, which has direct contact with the upper world of the sun, a symbol of peace. During the Sun Dance, which is practiced by many Plains Indian tribes, the eagle is represented in several ways. The eagle nest is represented by the fork of the lodge where the dance is held. A whistle made from the wing bone of an eagle is used during the course of the dance. Also during the dance, a medicine man may direct his fan, which is made of eagle feathers, to people who seek to be healed. The medicine man touches the fan to the center pole and then to the patient, in order to transmit power from the pole to the patient. The fan is then held up toward the sky, so that the eagle may carry the prayers for the sick to the Creator. Current eagle feather law stipulates that only individuals of certifiable Native American ancestry enrolled in a federally recognized tribe are legally authorized to obtain or possess bald or golden eagle feathers for religious or spiritual use. The constitutionality of these laws has been questioned by Native American groups on the basis that they violate the First Amendment by affecting their ability to practice their religion freely. The National Eagle Repository, a division of the FWS, exists as a means to receive, process, and store bald and golden eagles which are found dead and to distribute the eagles, their parts and feathers to federally recognized Native American tribes for use in religious ceremonies.
National symbol of the United States
The bald eagle is the national symbol of the United States. It was adopted as a national emblem in 1782, but not designated the "national bird" until an act of Congress in December 2024. The founders of the United States were fond of comparing their new republic with the Roman Republic, in which eagle imagery (usually involving the golden eagle) was prominent. On June 20, 1782, the Continental Congress adopted the design for the Great Seal of the United States, depicting a bald eagle grasping thirteen arrows and an olive branch with thirteen leaves in its talons. The bald eagle appears on most official seals of the U.S. government, including the presidential seal, the presidential flag, and the logos of many U.S. federal agencies. Between 1916 and 1945, the presidential flag (but not the seal) showed an eagle facing to its left (the viewer's right), which gave rise to the urban legend that the flag is changed to have the eagle face towards the olive branch in peace, and towards the arrows in wartime. Contrary to popular legend, there is no evidence that Benjamin Franklin ever publicly supported the wild turkey (Meleagris gallopavo), rather than the bald eagle, as a symbol of the United States. However, in a letter written to his daughter in 1784 from Paris, criticizing the Society of the Cincinnati, he stated his personal distaste for the bald eagle's behavior. In the letter Franklin states: Franklin opposed the creation of the Society because he viewed it, with its hereditary membership, as a noble order unwelcome in the newly independent Republic, contrary to the ideals of Lucius Quinctius Cincinnatus, for whom the Society was named. His reference to the two kinds of birds is interpreted as a satirical comparison between the Society of the Cincinnati and Cincinnatus. 
Popular culture
Largely because of its role as a symbol of the United States, but also because it is a large predator, the bald eagle has many representations in popular culture. In film and television depictions, the call of the red-tailed hawk, which is much louder and more powerful, is often substituted for that of the bald eagle.
Biology and health sciences
Accipitrimorphae
Animals
4402
https://en.wikipedia.org/wiki/Brown%20bear
Brown bear
The brown bear (Ursus arctos) is a large bear native to Eurasia and North America. Of the land carnivorans, it is rivaled in size only by its closest relative, the polar bear, which is much less variable in size and slightly bigger on average. The brown bear is a sexually dimorphic species, as adult males are larger and more compactly built than females. The fur ranges in color from cream to reddish to dark brown. It has evolved large hump muscles, unique among bears, and paws up to wide and long, to effectively dig through dirt. Its teeth are similar to those of other bears and reflect its dietary plasticity. Throughout the brown bear's range, it inhabits mainly forested habitats in elevations of up to . It is omnivorous, and consumes a variety of plant and animal species. Contrary to popular belief, the brown bear derives 90% of its diet from plants. When hunting, it will target animals as small as insects and rodents to those as large as moose or muskoxen. In parts of coastal Alaska, brown bears predominantly feed on spawning salmon that come near shore to lay their eggs. For most of the year, it is a usually solitary animal that associates only when mating or raising cubs. Females give birth to an average of one to three cubs that remain with their mother for 1.5 to 4.5 years. It is a long-lived animal, with an average lifespan of 25 years in the wild. Relative to its body size, the brown bear has an exceptionally large brain. This large brain allows for high cognitive abilities, such as tool use. Attacks on humans, though widely reported, are generally rare. While the brown bear's range has shrunk, and it has faced local extinctions across its wide range, it remains listed as a least concern species by the International Union for Conservation of Nature (IUCN) with a total estimated population in 2017 of 110,000. Populations that were hunted to extinction in the 19th and 20th centuries are the Atlas bear of North Africa and the Californian, Ungavan and Mexican populations of the grizzly bear of North America. Many of the populations in the southern parts of Eurasia are highly endangered as well. One of the smaller-bodied forms, the Himalayan brown bear, is critically endangered: it occupies only 2% of its former range and is threatened by uncontrolled poaching for its body parts. The Marsican brown bear of central Italy is one of several currently isolated populations of the Eurasian brown bear and is believed to have a population of only about 50 bears. The brown bear is considered to be one of the most popular of the world's charismatic megafauna. It has been kept in zoos since ancient times, and has been tamed and trained to perform in circuses and other acts. For thousands of years, the brown bear has had a role in human culture, and is often featured in literature, art, folklore, and mythology. Etymology The brown bear is sometimes referred to as the , from Middle English. This name originated in the fable History of Reynard the Fox, translated by William Caxton, from the Middle Dutch word or , meaning "brown". In the mid-19th-century United States, the brown bear was given the nicknames "Old Ephraim" and "Moccasin Joe". The scientific name of the brown bear, Ursus arctos, comes from the Latin , meaning "bear", and the Greek /, also meaning "bear". Evolution and taxonomy Taxonomy and subspecies Carl Linnaeus scientifically described the species under the name Ursus arctos in the 1758 edition of Systema Naturae. 
Brown bear taxonomy and subspecies classification has been described as "formidable and confusing", with few authorities listing the same set of subspecies. There are hundreds of obsolete brown bear subspecies names, and as many as 90 subspecies have been proposed. A 2008 DNA analysis identified as few as five main clades, which comprise all extant brown bears, while a 2017 phylogenetic study revealed nine clades, including one representing polar bears. , 15 extant or recently extinct subspecies were recognized by the general scientific community. DNA analysis shows that, apart from recent, human-caused population fragmentation, brown bears in North America are generally part of a single interconnected population system, with the exception of the population (or subspecies) in the Kodiak Archipelago, which has probably been isolated since the end of the last Ice Age. These data demonstrate that U. a. gyas, U. a. horribilis, U. a. sitkensis, and U. a. stikeenensis are not distinct or cohesive groups, and would more accurately be described as ecotypes. For example, brown bears in any particular region of the Alaska coast are more closely related to adjacent grizzly bears than to distant populations of brown bears. The history of the bears of the Alexander Archipelago is unusual in that these island populations carry polar bear DNA, presumably originating from a population of polar bears that was left behind at the end of the Pleistocene, but have since been connected with adjacent mainland populations through the movement of males, to the point where their nuclear genomes indicate more than 90% brown bear ancestry. MtDNA analysis revealed that brown bears are apparently divided into five different clades, some of which coexist or co-occur in different regions.
Evolution
The brown bear is one of eight extant species in the bear family Ursidae and one of six extant species in the subfamily Ursinae. The brown bear is thought to have evolved from the Etruscan bear (Ursus etruscus) in Asia during the early Pliocene. A genetic analysis indicated that the brown bear lineage diverged from the cave bear species-complex approximately 1.2–1.4 million years ago, but did not clarify if U. savini persisted as a paraspecies for the brown bear before perishing. The oldest brown bear fossils occur in Asia from about 500,000 to 300,000 years ago. They entered Europe 250,000 years ago and North Africa shortly after. Brown bear remains from the Pleistocene period are common in the British Isles, where, amongst other factors, they may have contributed to the extinction of cave bears (Ursus spelaeus). Brown bears first migrated to North America from Eurasia via Beringia during the Illinoian Glaciation. Genetic evidence suggests that several brown bear populations migrated into North America, aligning with the glacial cycles of the Pleistocene. The founding population of most North American brown bears arrived first, with the genetic lineage developing around ~177,000 BP. Genetic divergences suggest that brown bears first migrated south during MIS-5 (~92,000–83,000 BP), upon the opening of the ice-free corridor. After a local extinction in Beringia ~33,000 BP, two new but closely related lineages repopulated Alaska and northern Canada from Eurasia after the Last Glacial Maximum (>25,000 BP). Brown bear fossils discovered in Ontario, Ohio, Kentucky, and Labrador show that the species occurred farther east than indicated in historic records. 
In North America, two types of the subspecies Ursus arctos horribilis are generally recognized—the coastal brown bear and the inland grizzly bear. Hybrids A grizzly–polar bear hybrid is a rare ursid hybrid resulting from a crossbreeding of a brown bear with a polar bear. It has occurred both in captivity and in the wild. In 2006, the occurrence of this hybrid was confirmed by testing the DNA of a strange-looking bear that had been shot in the Canadian Arctic, and seven more hybrids have since been confirmed in the same region, all descended from a single female polar bear. Previously, the hybrid had been produced in zoos and was considered a "cryptid" (a hypothesized animal for which there is no scientific proof of existence in the wild). Analyses of the genomes of bears have shown that introgression between species was widespread during the evolution of the genus Ursus, including the introgression of polar-bear DNA introduced to brown bears during the Pleistocene. Description Size The brown bear is the most variable in size of modern bears. The typical size depends upon which population it is from, as most accepted subtypes vary widely in size. This is in part due to sexual dimorphism, as male brown bears average at least 30% larger than females in most subtypes. Individual bears vary in size seasonally, weighing the least in spring due to lack of foraging during hibernation, and the most in late fall, after a period of hyperphagia to put on additional weight to prepare for hibernation. Brown bears generally weigh , with males outweighing females. They have a head-and-body length of and a shoulder height of . The tail is relatively short, as in all bears, ranging from in length. The smallest brown bears, females during spring among barren-ground populations, can weigh so little as to roughly match the body mass of males of the smallest living bear species, the sun bear (Helarctos malayanus), while the largest coastal populations attain sizes broadly similar to those of the largest living bear species, the polar bear. Brown bears of the interior are generally smaller, being around the same weight as an average lion, at an average of in males and in females, whereas adults of the coastal populations weigh about twice as much. The average weight of adult male bears, from 19 populations, was found to be while adult females from 24 populations were found to average . Coloration Brown bears are often not fully brown. They have long, thick fur, with a moderately elongated mane at the back of the neck which varies somewhat across bear types. In India, brown bears can be reddish with silver-tipped hairs, while in China, brown bears are bicolored, with a yellowish-brown or whitish collar across the neck, chest, and shoulders. Even within well-defined subspecies, individuals may show highly variable hues of brown. North American grizzlies can be from dark brown (almost black) to cream (almost white) or yellowish-brown and often have darker-colored legs. The common name "grizzly" stems from their typical coloration, with the hairs on their back usually being brownish-black at the base and whitish-cream at the tips, giving them their distinctive "grizzled" color. Apart from the cinnamon subspecies of the American black bear (U. americanus cinnamonum), the brown bear is the only modern bear species to typically appear truly brown. The brown bear's winter fur is very thick and long, especially in northern subspecies, and can reach at the withers. The winter hairs are thin, yet rough to the touch. 
The summer fur is much shorter and sparser, with its length and density varying among geographic ranges.
Cranial morphology and size
Adults have massive, heavily built, concave skulls, which are large in proportion to the body. The projections of the skull are well developed. Skull lengths of Russian brown bears tend to be for males, and for females. Brown bears have the broadest skull of any extant ursine bear. The width of the zygomatic arches in males is , and in females. Brown bears have strong jaws: the incisors and canine teeth are large, with the lower canines being strongly curved. The first three molars of the upper jaw are underdeveloped and single-crowned with one root. The second upper molar is smaller than the others, and is usually absent in adults. It is usually lost at an early age, leaving no trace of its alveolus in the jaw. The first three molars of the lower jaw are very weak, and are often lost at an early age. The teeth of brown bears reflect their dietary plasticity and are broadly similar to those of other bears. They are reliably larger than the teeth of American black bears, but average smaller in molar length than those of polar bears.
Claws and feet
Brown bears have large, curved claws, with the front ones being larger than the back. They may reach and measure along the curve. Compared with the American black bear (Ursus americanus), the brown bear has longer and stronger claws, with a blunt curve. Due to their claw structure, in addition to their excessive weight, adult brown bears are not able to climb trees as well as black bears. In rare cases, adult female brown bears have been seen scaling trees. The claws of a polar bear are quite different, being notably shorter but broader with a strong curve and sharper point. The species has large paws; the rear feet measure long, while the forefeet tend to measure 40% less. Brown bears are the only extant bears with a hump at the top of their shoulder, which is made entirely of muscle. This feature developed presumably to impart more force in digging, which helps during foraging and facilitates den construction prior to hibernation.
Distribution and habitat
Brown bears inhabit the broadest range of habitats of any living bear species. They seem to have no altitudinal preferences and have been recorded from sea level to an elevation of in the Himalayas. In most of their range, brown bears seem to prefer semi-open country with a scattering of vegetation that can allow them a resting spot during the day. However, they have been recorded as inhabiting every variety of northern temperate forest known to occur. This species was once native to Europe, much of Asia, the Atlas Mountains of Africa, and North America, but is now extirpated in some areas, and its populations have greatly decreased in others. There are approximately 200,000 brown bears left in the world. The largest populations are in Russia with 130,000, the United States with 32,500, and Canada with around 25,000. Brown bears live in Alaska, east through the Yukon and Northwest Territories, south through British Columbia, and through the western half of Alberta. The Alaskan population is estimated at a healthy 30,000 individuals. In the lower 48 states, they are repopulating slowly but steadily along the Rockies and the western Great Plains. 
In Europe, in 2010, there were 14,000 brown bears in ten fragmented populations, from Spain (estimated at only 20–25 animals in the Pyrenees in 2010, in a range shared between Spain, France, and Andorra, and some 210 animals in Asturias, Cantabria, Galicia, and León, in the Picos de Europa and adjacent areas in 2013) in the west, to Russia in the east, and from Sweden and Finland in the north to Romania (5,000–6,000), Bulgaria (900–1,200), Slovakia (with about 600–800 animals), Slovenia (500–700 animals), and Greece (with Karamanlidis et al. 2015 estimating >450 animals) in the south. In Asia, brown bears are found primarily throughout Russia, thence more spottily southwest to parts of the Middle East, including the Eastern Black Sea Region of Turkey, which has 5,432 individuals, to as far south as southwestern Iran, and to the southeast in Northeast China. Brown bears are also found in Western China, Kyrgyzstan, North Korea, Pakistan, Afghanistan, and India. A population of brown bears can be found on the Japanese island of Hokkaidō, which holds the largest number of non-Russian brown bears in eastern Asia, with about 2,000–3,000 animals.
Conservation status
While the brown bear's range has shrunk and it has faced local extinctions, it remains listed as a least-concern species by the IUCN, with a total population of approximately 200,000. , the brown bear and the American black bear are the only bear species not classified as threatened by the IUCN. However, the California grizzly bear, Ungava brown bear, Atlas bear, and Mexican grizzly bear, as well as brown bear populations in the Pacific Northwest, were hunted to extinction in the 19th and early 20th centuries, and many of the southern Asian subspecies are highly endangered. The Syrian brown bear (U. a. syriacus) is very rare and has been extirpated from more than half of its historic range. One of the smallest-bodied subspecies, the Himalayan brown bear (U. a. isabellinus), is critically endangered: it occupies only 2% of its former range and is threatened by uncontrolled poaching for its body parts. The Marsican brown bear in central Italy is believed to have a population of just 50 bears. The smallest populations are most vulnerable to habitat loss and fragmentation, whereas the largest are primarily threatened by overhunting. The use of land for agriculture may negatively affect brown bears. Additionally, roads and railway tracks could pose a serious threat, as oncoming vehicles may collide with crossing animals. Poaching has been cited as another mortality factor. In one instance, a 3-year-long survey in the Russian Far East detected the illegal shipping of brown bear gallbladders to Southeast Asian countries. The purpose and motive behind the trade are unknown. An action plan in 2000 aimed to conserve brown bears in Europe by mitigating human–wildlife conflict, educating farm owners as to sustainable practices, and preserving and expanding remaining forests. Compensation was given to people who suffered losses of livestock, food supplies, or shelter. Growing bear populations were recorded in some countries, such as Sweden, where an increase of 1.5% per annum occurred between the 1940s and 1990s. Brown bears in Central Asia are primarily threatened by climate change. In response to this, conservationists plan on building wildlife corridors to promote easy access from one brown bear population to another. In Himalayan Nepal, farmers may kill brown bears in revenge for livestock predation. 
Behavior and life history
A 2014 study revealed that brown bears peak in activity around the morning and early evening hours. Although activity can happen day or night, bears that live in locations where they are apt to interact with humans are more likely to be fully nocturnal. In areas with little interaction, many adult bears are primarily crepuscular, while yearlings and newly independent bears appear to be most active throughout the day. From summer through autumn, a brown bear can double its weight from what it was in the spring, gaining up to of fat, on which it relies to make it through winter, when it becomes lethargic. Although they are not full hibernators and can be woken easily, both sexes prefer to den in a protected spot during the winter months. Hibernation dens may be located at any spot that provides cover from the elements and that can accommodate their bodies, such as a cave, crevice, cavernous tree roots, or hollow logs. Brown bears have one of the largest brains of any extant carnivoran relative to their body size and have been shown to engage in tool use, which requires advanced cognitive abilities. This species is mostly solitary, although bears may gather in large numbers at major food sources (e.g., open garbage dumps or rivers containing spawning salmon) and form social hierarchies based on age and size. Adult male bears are particularly aggressive and are avoided by adolescent and subadult males, both at concentrated feeding opportunities and in chance encounters. Females with cubs rival adult males in aggression and are much more intolerant of other bears than single females. Young adolescent males tend to be the least aggressive and have been observed in nonantagonistic interactions with each other. Dominance between bears is asserted by making a frontal orientation, showing off canine teeth, muzzle twisting, and neck stretching, to which a subordinate will respond with a lateral orientation, by turning away and dropping the head, and by sitting or lying down. During combat, bears use their paws to strike their opponents in the chest or shoulders and bite the head or neck.
Communication
Several different facial expressions have been documented in brown bears. The "relaxed face" is made during everyday activities, with the ears pointed to the sides and the mouth closed or slackly open. During social play, bears make a "relaxed open-mouth face", in which the mouth is open, with a curled upper lip and hanging lower lip, and the ears alert and shifting. When looking at another animal at a distance, the bear makes an "alert face", with the ears cocked, the eyes wide open, and the mouth closed or only slightly open. The "tense closed mouth face" is made with the ears laid back and the mouth closed, and occurs when the bear feels threatened. When approached by another individual, the animal makes a "puckered-lip face", with a protruding upper lip and ears that go from cocked and alert when at a certain distance to laid back when closer or when retreating. The "jaw gape face" consists of an open mouth with visible lower canines and hanging lips, while the "biting face" is similar to the "relaxed open-mouth face" except that the ears are flattened and the eyes are wide enough to expose the sclera. Both the "jaw gape face" and the "biting face" are made when the bear is aggressive, and it can quickly switch between them. Brown bears also produce various vocalizations. Huffing occurs when the animal is tense, while woofing is made when alarmed. 
Both sounds are produced by exhalations, though huffing is harsher and is made continuously (approximately twice per second). Growls and roars are made when aggressive. Growling is "harsh" and "guttural" and can range from a simple grrr to a rumble. A rumbling growl can escalate to a roar when the bear is charging. Roaring is described as "thunderous" and can travel . Mothers and cubs wanting physical contact will bawl, which is heard as waugh!, waugh!.
Home ranges
Brown bears usually inhabit vast home ranges; however, they are not highly territorial. Several adult bears roam freely over the same vicinity without contention, unless rights to a fertile female or food sources are being contested. Despite their lack of traditional territorial behavior, adult males seem to have a "personal zone" within which other bears are not tolerated if they are seen. Males always wander further than females, as such behavior gives them increased access to both females and food sources. Females have the advantage of inhabiting smaller territories, which decreases the likelihood of encounters with male bears who may endanger their cubs. In areas where food is abundant, such as coastal Alaska, home ranges for females and males are up to and , respectively. Similarly, in British Columbia, bears of the two sexes travel in relatively compact home ranges of . In Yellowstone National Park, home ranges for females are up to and up to for males. In Romania, the largest home range was recorded for adult males (). In the central Arctic of Canada, where food sources are quite scarce, home ranges extend up to for females and for males.
Reproduction
The mating season occurs from mid-May to early July, shifting to later in the year the farther north the bears are found. Brown bears are polygynandrous, remaining with the same mate for a couple of days to a couple of weeks and mating with multiple partners during the mating season. Outside of this narrow time frame, adult male and female brown bears show no sexual interest in each other. Females mature sexually between the ages of four and eight. Males first mate about a year later, when they are large and strong enough to compete with other males for mating rights. Males will try to mate with as many females as they can; usually a successful male will mate with two females in a span of one to three weeks. Similarly, adult female brown bears can mate with up to four, sometimes even eight, males while in oestrus (heat), potentially mating with two in a single day. Females come into oestrus every three to four years, with an outside range of 2.4 to 5.7 years. The urine markings of a female in oestrus can attract several males via scent. Dominant males may try to sequester a female for her entire oestrus period of approximately two weeks, but usually are unable to retain her for the entire time. Copulation is prolonged and lasts for over 20 minutes. Males take no part in raising cubs – parenting is left entirely to the females. Through the process of delayed implantation, a female's fertilized egg divides and floats freely in the uterus for six months. During winter dormancy, the fetus attaches to the uterine wall. The cubs are born eight weeks later, while the mother sleeps. If the mother does not gain enough weight to survive through the winter while gestating, the embryo does not implant and is reabsorbed into the body. Litters consist of as many as six cubs, though litters of one to three are more typical. 
The size of a litter depends on factors such as geographic location and food supply. At birth, cubs are blind, toothless and hairless and may weigh . There are records of females sometimes adopting stray cubs or even trading or kidnapping cubs when they emerge from hibernation (a larger female may claim cubs from a smaller one). Older and larger females within a population tend to give birth to larger litters. The cubs feed on their mother's milk until spring or early summer, depending on climate conditions. At this time, the cubs weigh and have developed enough to follow and forage for solid food with their mother over long distances. The cubs are dependent on the mother and a close bond is formed. During the dependency stage, the cubs learn (rather than inherit as instincts from birth) survival techniques, such as which foods have the highest nutritional value and where to obtain them; how to hunt, fish, and defend themselves; and where to den. Increased brain size in large carnivores has been positively linked to whether a given species is solitary, as is the brown bear, or raises offspring communally. Thus, the relatively large, well-developed brain of a female brown bear is presumably key in teaching behavior. The cubs learn by following and imitating their mother's actions during the period they are with her. Cubs remain with their mother for an average of 2.5 years in North America, and gain independence from as early as 1.5 years of age to as late as 4.5 years. The stage at which independence is attained may generally be earlier in some parts of Eurasia; the latest recorded age at which mother and cubs were still together was 2.3 years. Most families separated in under two years in a study in Hokkaido, and in Sweden most yearlings were on their own. Brown bears practice infanticide, as an adult male bear may kill the cubs of another. When an adult male brown bear kills a cub, it is usually because he is trying to bring the female into oestrus, as she will enter that state within two to four days after the death of her cubs. Cubs may flee up a tree when they see a strange male bear approaching. The mother often successfully defends them, even though the male may be twice as heavy as she is. However, females have been known to die in such confrontations. Dietary habits The brown bear is one of the most omnivorous animals and has been recorded as consuming the greatest variety of foods of any bear. Despite their reputation, most brown bears are not highly carnivorous, as they derive up to 90% of their dietary food energy from vegetable matter. They often feed on a variety of plant life, including berries, grasses, flowers, acorns, and pine cones, as well as fungi such as mushrooms. Among all bears, brown bears are uniquely equipped to dig for tough foods such as roots, bulbs, and shoots. They use their long, strong claws to dig out earth to reach roots and their powerful jaws to bite through them. In spring, winter-provided carrion, grasses, shoots, sedges, moss, and forbs are the dietary mainstays for brown bears internationally. Fruits, including berries, become increasingly important during summer and early autumn. Roots and bulbs become critical in autumn for some inland bear populations if fruit crops are poor. They will also commonly consume animal matter, which in summer and autumn may regularly be in the form of insects, larvae, and grubs, including beehives. 
Bears in Yellowstone eat an enormous number of moths during the summer, sometimes as many as 40,000 army cutworm moths in a single day, and may derive up to half of their annual food energy from these insects. Brown bears living near coastal regions will regularly eat crabs and clams. In Alaska, bears along the beaches of estuaries regularly dig through the sand for clams. This species may eat birds and their eggs, including almost entirely ground- or rock-nesting species. The diet may be supplemented by rodents or similar small mammals, including marmots, ground squirrels, mice, rats, lemmings, and voles. With particular regularity, bears in Denali National Park will wait at burrows of Arctic ground squirrels, hoping to pick off a few of those rodents. In the Kamchatka peninsula and several parts of coastal Alaska, brown bears feed mostly on spawning salmon, whose nutrition and abundance explain the enormous size of the bears in those areas. The fishing techniques of bears are well-documented. They often congregate around falls when the salmon are forced to breach the water, at which point the bears will try to catch the fish in mid-air (often with their mouths). They will also wade into shallow water, hoping to pin a slippery salmon with their claws. While they may eat almost all the parts of the fish, bears at the peak of salmon spawning, when there is usually a glut of fish to feed on, may eat only the most nutrious parts of the salmon (including the eggs and head) and then indifferently leave the rest of the carcass to scavengers, which can include red foxes, bald eagles, common ravens, and gulls. Despite their normally solitary habits, brown bears will gather closely in numbers at good spawning sites. The largest and most powerful males claim the most fruitful fishing spots and will sometimes fight over the rights to them. Beyond the regular predation of salmon, most brown bears are not particularly active predators. While perhaps a majority of bears of the species will charge at large prey at one point in their lives, many predation attempts start with the bear clumsily and half-heartedly pursuing the prey and end with the prey escaping alive. On the other hand, some brown bears are quite self-assured predators who habitually pursue and catch large prey. Such bears are usually taught how to hunt by their mothers from an early age. Large mammals preyed on can include various ungulate species such as elk, moose, caribou, muskoxen, and wild boar. When brown bears attack these large animals, they usually target young or infirm ones, which are easier to catch. Typically when hunting (especially young prey), the bear pins its prey to the ground and then immediately tears at and eats it alive. It will also bite or swipe some prey to stun it enough to knock it over for consumption. In general, large mammalian prey is killed with raw strength and bears do not display the specialized killing methods of felids and canids. To pick out young or infirm individuals, bears will charge at herds so the more vulnerable, and thus slower-moving, individuals will become apparent. Brown bears may ambush young animals by finding them via scent. When emerging from hibernation, brown bears, whose broad paws allow them to walk over most ice and snow, may pursue large prey such as moose, whose hooves cannot support them on encrusted snow. Similarly, predatory attacks on large prey sometimes occur at riverbeds, when it is more difficult for the prey specimen to run away due to muddy or slippery soil. 
On rare occasions, while confronting fully-grown, dangerous prey, bears kill them by hitting with their powerful forearms, which can break the necks and backs of large creatures such as adult moose and adult bison. They feed on carrion, and use their size to intimidate other predators – such as wolves, cougars, tigers, and American black bears – away from their kills. Carrion is especially important in the early spring (when the bears are emerging from hibernation), much of it comprising winter-killed big game. Cannibalism is not unheard of, though predation is not normally believed to be the primary motivation when brown bears attack each other. When forced to live in close proximity with humans and their domesticated animals, bears may potentially predate any type of domestic animal. Among these, domestic cattle are sometimes exploited as prey. Cattle are bitten on the neck, back, or head, and then the abdominal cavity is opened for eating. Plants and fruit farmed by humans are readily consumed as well, including corn, wheat, sorghum, melons, and any form of berries. They may feed on domestic bee yards, readily consuming both honey and the brood (grubs and pupae) of the honey bee colony. Human foods and trash are eaten when possible. When an open garbage dump was kept in Yellowstone, brown bears were one of the most voracious and regular scavengers. The dump was closed after both brown and American black bears came to associate humans with food and lost their natural fear of them. Relations with other predators Adult bears are generally immune to predatory attacks except from large Siberian (Amur) tigers and other bears. Following a decrease of ungulate populations from 1944 to 1959, 32 cases of Siberian tigers attacking both Ussuri brown bears (Ursus arctos lasiotus) and Ussuri black bears (U. thibetanus ussuricus) were recorded in the Russian Far East, and bear hairs were found in several tiger scat samples. Tigers attack black bears less often than brown bears, since the brown bears live in more open habitats and are not able to climb trees. In the same time period, four cases of brown bears killing female tigers and young cubs were reported, both in disputes over prey and in self-defense. In rare cases, when Amur tigers prey on brown bears, they usually target young and sub-adult bears, besides small female adults taken outside their dens, generally when lethargic from hibernation. Predation by tigers on denned brown bears was not detected during a study carried out between 1993 and 2002. Ussuri brown bears, along with the smaller black bears constitute 2.1% of the Siberian tiger's annual diet, of which 1.4% are brown bears. Brown bears regularly intimidate wolves to drive them away from their kills. In Yellowstone National Park, bears pirate wolf kills so often, Yellowstone's Wolf Project director Doug Smith wrote, "It's not a matter of if the bears will come calling after a kill, but when." Despite the animosity between the two species, most confrontations at kill sites or large carcasses end without bloodshed on either side. Though conflict over carcasses is common, on rare occasions the two predators tolerate each other at the same kill. To date, there is a single recorded case of fully-grown wolves being killed by a grizzly bear. Given the opportunity, however, both species will prey on the other's cubs. In some areas, grizzly bears regularly displace cougars from their kills. 
Cougars kill small bear cubs on rare occasions, but there was only one report of a bear killing a cougar, of unknown age and condition, between 1993 and 1996. Brown bears usually dominate other bear species in areas where they coexist. Due to their smaller size, American black bears are at a competitive disadvantage to grizzly bears in open, unforested areas. Although displacement of black bears by grizzly bears has been documented, actual killing of black bears by grizzlies has only occasionally been reported. Confrontation is mostly avoided due to the black bear's diurnal habits and preference for heavily forested areas, as opposed to the grizzly's largely nocturnal habits and preference for open spaces. Brown bears may also kill Asian black bears, though the latter species probably largely avoids conflicts with the brown bear, due to similar habits and habitat preferences to the American black species. As of the 21st century, there has been an increase in interactions between brown bears and polar bears, theorized to be caused by climate change. Brown and grizzly bears have been seen moving increasingly northward into territories formerly claimed by polar bears. They tend to dominate polar bears in disputes over carcasses, and dead polar bear cubs have been found in brown bear dens. Longevity and mortality The brown bear has a naturally long life. Wild females have been observed reproducing at 28 years, which is the oldest known age for reproduction of any ursid in the wild. The peak reproductive age for females ranges from four to 20 years old. The lifespan of both sexes within minimally hunted populations is estimated at an average of 25 years. The oldest recorded wild individual was nearly 37 years old. In captivity, the oldest recorded female was around 40 years old, while males have been known to live up to 47 years. While male bears potentially live longer in captivity, female grizzly bears have a greater annual survival rate than males within wild populations, per a study done in the Greater Yellowstone Ecosystem. Annual mortality for bears of any age is estimated at 10% in most protected areas. Around 13% to 44% of cubs die within their first year. Beyond predation by large predators – including wolves, Siberian tigers, and other brown bears – starvation and accidents also claim the lives of cubs. Studies have indicated that the most prevalent cause of mortality for first-year cubs is malnutrition. Brown bears are susceptible to parasites such as flukes, ticks, tapeworms, roundworms, and biting lice. It is thought that brown bears may catch canine distemper virus (CDV) from other caniforms such as stray dogs and wolves. A captive individual allegedly succumbed to Aujeszky's disease. Hibernation physiology A study conducted by the Brown Bear Research Project did a proteomic analysis of the brown bear's blood, organs, and tissues to pinpoint proteins and peptides that either increased or decreased in expression in the winter and summer months. One major finding was that the presence of the plasma protein sex hormone-binding globulin (SHBG) increased by 45 times during the brown bear's hibernation period. Although scientists do not yet understand the role of SHBG in maintaining the brown bear's health, some believe these findings could potentially help in understanding and preventing human diseases that come from a sedentary lifestyle. Relations with humans Attacks on humans Brown bears usually avoid areas where extensive development or urbanization has occurred. 
They usually avoid people and rarely attack on sight. They are, however, unpredictable in temperament, and may attack if threatened or surprised. Mothers defending cubs are the most prone to attacking, being responsible for 70% of brown bear-caused human fatalities in North America. Attacks tend to result in serious injury and, in some cases, death. Due to the bears' enormous physical strength, a single bite or swipe can be deadly. Violent encounters with brown bears usually last a few minutes, though they can be prolonged if the victims fight back. A study conducted in 2019 found that 664 bear attacks were reported during a 15-year period (2000–2015) throughout North America and Eurasia. There were 568 injuries and 95 fatalities. Around 10 people a year are killed by brown bears in Russia, more than in all the other parts of the brown bear's range combined. In Japan, a large brown bear nicknamed Kesagake ("kesa-style slasher") caused the worst brown bear attack in Japanese history in Tomamae, Hokkaidō, in a series of encounters during December 1915. It killed seven people and wounded three others before being gunned down during a large-scale beast-hunt. A study by U.S. and Canadian researchers found bear spray to be more effective at stopping aggressive bear behavior than guns, working in 92% of studied incidents, versus 67% for guns. Bear hunting Humans have been recorded hunting brown bears since at least 9,300–10,300 years ago. Bears were hunted throughout their range in Europe, Asia and North America by both Native Americans and Europeans. The former usually killed bears out of survival needs, while the latter did so for sport or population control. In Europe, between the 17th and 18th centuries, humans sought to control brown bear numbers by rewarding those who managed to kill one. This bounty scheme pushed the brown bear population to the brink of extinction before comprehensive protection was introduced in the 1900s. Despite this, a 2018 study found hunting to be one of the contributing factors to the drop in brown bear numbers in northern Europe. The earliest known case of a European killing a grizzly bear dates back to 1691. Their arrival in the western United States led to the extirpation of local brown bear populations in the 19th and early 20th centuries. During the early years of European settlement in North America, bears were usually killed with a spear or lasso rope. The introduction of rifles in the mid-19th century made bear hunts far easier, and the number of bears killed rose accordingly. Bears were also pitted in fights against male cattle, often ending with either animal grievously injured or dead. The last two decades of the 19th century saw an increase in bounties. Conflicts with farmers also contributed to the species' rapid decline. It was not until the 1920s that grizzly bears received some form of protection from the US government. Today, brown bears are legally hunted in some American states, such as Alaska. However, a hunting license is required, and killing nurturing females or cubs can result in a prison sentence. Brown bear meat is sometimes consumed and used in recipes such as dumplings, hams and stews. The Indigenous Eastern Cree people of James Bay use bear flesh in traditional dishes. In Asia and Romania, the paws are consumed as exotic delicacies; they have been a prevalent component of traditional Chinese cuisine since 500 BC. The total weight of commercially sold brown bear meat is estimated at 17 tons annually. 
In captivity Bears have been recorded in captivity as early as 1,500 BC. As of 2017, there are more than 700 brown bears in zoos and wildlife parks worldwide. Captive bears are largely lethargic and spend a considerable amount of time doing nothing. When active, captive bears may engage in repetitive back and forth motion, known widely as pacing. This behavior is most prevalent in bears kept in small, cramped cages often with no natural setting. Pacing is a way of coping with stress that comes with being trapped in unnaturally small spaces. These stereotyped behaviors have decreased due to better and larger enclosures being built, and more sustainable management from zoo staff. Starting from infancy, brown bears may also be exploited as dancing bears. Cubs, for example, are positioned on hot metal plates, causing them to "dance" to the sound of violin music running in the background. The process is repeated, resulting in bears being trained to "dance" when a violin is played. Similarly, brown bears are displayed in tiny enclosures near a restaurant, mainly for the purpose of luring customers. Privately-owned bears are also placed in insufficient environments and often suffer from malnutrition and obesity. Brown bears have been popular attractions at circuses and other acts since ancient times. Due to their large size and imposing demeanor, the Romans used brown bears in the execution of criminals, and pitted bears in fights with other animals. Gladiators would also fight bears, in what was essentially a fight to the death. Such events occurred in amphitheaters housing thousands of spectators. In later times, street performances became popular in the Middle Ages; acts included "dancing" and "sleeping on command". These performances became increasingly widespread, and from the 1700s to 1800s, traveling circuses would perform in the streets of many European and Asian countries. Such circuses made use of bears that wore special clothing, and were usually run by musicians. A short while later, modern circuses began utilizing bears around the second half of the 18th century. Brown bears were said to be the easiest bear species to train due to their intelligence, unique personalities, and exceptional stability. According to a 2009 analysis, the brown bear was the second most exploited circus animal after the tiger. Culture Bears have been popular subjects in art, literature, folklore, and mythology. The image of the mother bear was prevalent throughout societies in North America and Eurasia, based on the female's devotion and protection of her cubs. The earliest cave paintings of bears occurred in the Paleolithic, with over 100 recorded paintings. Brown bears often figure in the literature of Europe and North America as "cute and cuddly", in particular that which is written for children. "The Brown Bear of Norway" is a Scottish fairy tale telling of the adventures of a girl who married a prince magically turned into a bear and who managed to get him back into a human form by the force of her love after many trials and tribulations. With "Goldilocks and the Three Bears", a story from England, the Three Bears are usually depicted as brown bears. In German-speaking countries, children are often told the fairytale of "Snow White and Rose Red"; the handsome prince in this tale has been transfigured into a brown bear. In the United States, parents often read their preschool age children the book Brown Bear, Brown Bear, What Do You See? 
to teach them their colors and how they are associated with different animals. Smokey Bear, the famous mascot of the U.S. Forest Service, has since the 1940s been used to educate people on the dangers of human-caused wildfire. Brown bears have been extensively featured in the culture of Native Americans, and are considered sacred. To stop a bear's spirit from escaping after it was killed, the Denaa people severed all four of its paws. They delayed consuming brown-bear flesh, owing to the belief that the bear's spirit was overwhelming in fresh kills. In addition, community members who wore bear claw necklaces were highly respected, as wearing one was seen as a sign of bravery and honor. The clattering caused by repeatedly shaking these necklaces was believed to bring forth therapeutic powers. In Haida culture, one legend has it that a marriage between a woman and a grizzly bear commenced the lineage of the native people. This is thought to have allowed the Haida to thrive in bear country. There is evidence of prehistoric bear worship, though this is disputed by archaeologists. It is possible that bear worship existed in early Chinese and Ainu cultures. The Romans built small carved figures of bears that were used during the burials of infants. In Ancient Greek mythology, bears were considered similar to humans, mainly due to their ability to stand upright. In many Western stories and older fables, the portrayed attributes of bears are sluggishness, foolishness, and gullibility, which contradicts the actual behavior of the species. For example, bears have been reported tricking hunters by backtracking in the snow. In North America, the brown bear is considered charismatic megafauna and has long piqued people's interest. The death of Bear 148 at the hands of a trophy hunter in 2017 sparked media outrage and continued disapproval of trophy hunting. The Russian bear is a common national personification for Russia (as well as the former Soviet Union), despite the country having no officially designated national animal. The brown bear is Finland's national animal. The grizzly bear is the state animal of Montana. The California golden bear is the state animal of California, despite being extinct. The coat of arms of Madrid depicts a bear reaching up into a madroño or strawberry tree (Arbutus unedo) to eat some of its fruit. The coat of arms of the Swiss city of Bern depicts a bear, and the city's name is popularly thought to derive from the German word for bear. The brown bear is depicted on the reverse of the Croatian 5-kuna coin, minted since 1993.
Biology and health sciences
Carnivora
null
4436
https://en.wikipedia.org/wiki/Brownian%20motion
Brownian motion
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). This motion is named after the Scottish botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models of the statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, pure probabilistic class of models is the class of the stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem). History The Roman philosopher-poet Lucretius' scientific poem "On the Nature of Things" () has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms: Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example". While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. 
By repeating the experiment with particles of inorganic matter, he was able to rule out that the motion was life-related, although its origin was yet to be explained. The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian model of financial markets is often cited, but Benoit Mandelbrot rejected its applicability to stock price movements in part because these are discontinuous. Albert Einstein (in one of his 1905 papers) and Marian Smoluchowski (1906) brought the solution of the problem to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908. The instantaneous velocity of the Brownian motion can be defined as , when , where is the momentum relaxation time. In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle. Statistical mechanics theories Einstein's theory There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, the molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant. The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10¹⁴ collisions per second. He regarded the increment of particle positions in time in a one-dimensional (x) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable () with some probability density function (i.e., is the probability density for a jump of magnitude , i.e., the probability density of the particle incrementing its position from to in the time interval ). Further, assuming conservation of particle number, he expanded the number density (number of particles per unit volume around ) at time in a Taylor series, where the second equality is by definition of . The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e. first and other odd moments) vanish because of space symmetry. 
What is left gives rise to the following relation: Where the coefficient after the Laplacian, the second moment of probability of displacement , is interpreted as mass diffusivity D: Then the density of Brownian particles at point at time satisfies the diffusion equation: Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution This expression (which is a normal distribution with the mean and variance usually called Brownian motion ) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point. The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium. In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways. Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of , where is the mass of the particle, is the acceleration due to gravity, and is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius is , where is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution where is the difference in density of particles separated by a height difference, of , is the Boltzmann constant (the ratio of the universal gas constant, , to the Avogadro constant, ), and is the absolute temperature. Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law, where . Introducing the formula for , we find that In a state of dynamical equilibrium, this speed must also be equal to . Both expressions for are proportional to , reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge in a uniform electric field of magnitude , where is replaced with the electrostatic force . 
Equating these two expressions yields the Einstein relation for the diffusivity, independent of or or other such forces: Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as , and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant , the temperature , the viscosity , and the particle radius , the Avogadro constant can be determined. The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other". An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888 in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes for the diffusion coefficient , where is the osmotic pressure and is the ratio of the frictional force to the molecular viscosity which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's. The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path. At first, the predictions of Einstein's formula were seemingly refuted by a series of experiments by Svedberg in 1906 and 1907, which gave displacements of the particles as 4 to 6 times the predicted value, and by Henri in 1908 who found displacements 3 times greater than Einstein's formula predicted. But Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law. Smoluchowski model Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution for the displacement of a Brownian particle along the in time . He therefore gets the same expression for the mean squared displacement: However, when he relates it to a particle of mass moving at a velocity which is the result of a frictional force governed by Stokes's law, he finds where is the viscosity coefficient, and is the radius of the particle. Associating the kinetic energy with the thermal energy , the expression for the mean squared displacement is times that found by Einstein. 
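The relations above lend themselves to a short numerical illustration. The following Python sketch is not part of the original text; the micrometre-scale sphere, room temperature, and water viscosity are assumed, illustrative values. It applies the Stokes–Einstein result D = kT/(6πηr) and the mean squared displacement 2Dt discussed above, then inverts them, in the spirit of Perrin's experiments, to recover the Avogadro constant from a "measured" displacement:

# Minimal numerical sketch of the Stokes-Einstein relation discussed above.
# The particle radius, temperature and viscosity are illustrative values,
# not figures taken from the article.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
R   = 8.314462618       # universal gas constant, J/(mol K)
T   = 298.15            # absolute temperature, K (assumed room temperature)
eta = 8.9e-4            # dynamic viscosity of water, Pa*s (approximate)
r   = 0.5e-6            # radius of the suspended sphere, m (assumed)

# Stokes mobility and Einstein relation: D = mu * k_B * T = k_B T / (6 pi eta r)
mu = 1.0 / (6.0 * math.pi * eta * r)
D  = k_B * T * mu
print(f"Diffusion coefficient D = {D:.3e} m^2/s")

# Mean squared displacement along one axis after time t: <x^2> = 2 D t
t = 1.0                                   # observation time, s
msd = 2.0 * D * t
print(f"RMS displacement after {t} s = {math.sqrt(msd) * 1e6:.3f} micrometres")

# Inverting the relation as Perrin did: a measured <x^2> over time t,
# together with R, T, eta and r, yields the Avogadro constant,
# since D = R T / (N_A * 6 pi eta r).
measured_msd = msd                        # pretend this came from observation
N_A = (R * T * t) / (3.0 * math.pi * eta * r * measured_msd)
print(f"Recovered Avogadro constant = {N_A:.3e} 1/mol")

With these assumed values the sketch gives a diffusion coefficient of roughly 5 × 10⁻¹³ m²/s and an rms displacement of about one micrometre per second, a scale that an optical microscope of Perrin's era could resolve.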
The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt." Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal. If the probability of gains and losses follows a binomial distribution, with equal probabilities of 1/2, the mean total gain is If is large enough so that Stirling's approximation can be used in the form then the expected total gain will be showing that it increases as the square root of the total population. Suppose that a Brownian particle of mass is surrounded by lighter particles of mass which are traveling at a speed . Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be . This ratio is of the order of . But we also have to take into consideration that in a gas there will be more than 10¹⁶ collisions in a second, and even more in a liquid, where we expect 10²⁰ collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If there is a mean excess of one kind of collision or the other of the order of 10⁸ to 10¹⁰ collisions in one second, then the velocity of the Brownian particle may be anywhere between . Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts. These orders of magnitude are not exact because they do not take into consideration the velocity of the Brownian particle, , which depends on the collisions that tend to accelerate and decelerate it. The larger is, the greater will be the number of collisions that retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second kind. And since equipartition of energy applies, the kinetic energy of the Brownian particle, will be equal, on the average, to the kinetic energy of the surrounding fluid particle, In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions with where is the test particle's mass and the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same magnitude of . If is the number of collisions from the right and the number of collisions from the left, then after collisions the particle's velocity will have changed by . The multiplicity is then simply given by: and the total number of possible states is given by . Therefore, the probability of the particle being hit from the right times is: As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions do not apply. For example, the assumption that on average an equal number of collisions occurs from the right as from the left falls apart once the particle is in motion. 
Also, there would be a distribution of different possible s instead of always just one in a realistic situation. Langevin equation The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of the particle undergoing a Brownian movement under the physical definition. The approximation is valid on short timescales. The time evolution of the position of the Brownian particle over all time scales is described by the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. In Langevin dynamics and Brownian dynamics, the Langevin equation is used to efficiently simulate the dynamics of molecular systems that exhibit a strong Brownian component. Astrophysics: star motion within galaxies In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity of the massive object, of mass , is related to the rms velocity of the background stars by where is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both and . The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s⁻¹. Mathematics In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics and physics. The Wiener process is characterized by four facts: is almost surely continuous has independent increments denotes the normal distribution with expected value and variance . The condition that it has independent increments means that if then and are independent random variables. In addition, for some filtration is measurable for all An alternative characterisation of the Wiener process is the so-called Lévy characterisation, which says that the Wiener process is an almost surely continuous martingale with and quadratic variation A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem. The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often), whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant. A d-dimensional Gaussian free field has been described as "a d-dimensional-time analog of Brownian motion." Statistics The Brownian motion can be modeled by a random walk. In the general case, Brownian motion is a Markov process and is described by stochastic integral equations. Lévy characterisation The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous -valued stochastic process to actually be -dimensional Brownian motion. 
Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion. Let be a continuous stochastic process on a probability space taking values in . Then the following are equivalent: is a Brownian motion with respect to , i.e., the law of with respect to is the same as the law of an -dimensional Brownian motion, i.e., the push-forward measure is classical Wiener measure on . both is a martingale with respect to (and its own natural filtration); and for all , is a martingale with respect to (and its own natural filtration), where denotes the Kronecker delta. Spectral content The spectral content of a stochastic process can be found from the power spectral density, formally defined as where stands for the expected value. The power spectral density of Brownian motion is found to be where is the diffusion coefficient of . For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e., which for an individual realization of a Brownian motion trajectory, it is found to have expected value and variance For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the formally defined power spectral density but its coefficient of variation tends to This implies the distribution of is broad even in the infinite time limit. Riemannian manifold The infinitesimal generator (and hence characteristic operator) of a Brownian motion on is easily calculated to be , where denotes the Laplace operator. In image processing and computer vision, the Laplacian operator has been used for various tasks such as blob and edge detection. This observation is useful in defining Brownian motion on an -dimensional Riemannian manifold : a Brownian motion on is defined to be a diffusion on whose characteristic operator in local coordinates , , is given by , where is the Laplace–Beltrami operator given in local coordinates by where in the sense of the inverse of a square matrix. Narrow escape The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.
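As a complement to the formal definitions in the Mathematics discussion above, the random-walk construction behind Donsker's theorem is easy to simulate. The short Python sketch below is illustrative only; the step counts and sample sizes are arbitrary choices. It rescales a symmetric ±1 random walk by the square root of the number of steps and checks two defining properties of the Wiener process numerically: the variance at time t = 1 is close to 1, and the increment W(0.75) − W(0.25) has variance close to 0.5.

# Simulation sketch of the random-walk construction of the Wiener process
# (Donsker's theorem) mentioned above. Step counts and sample sizes are
# arbitrary illustrative choices.
import random
import statistics

def scaled_random_walk(n_steps):
    """Symmetric +-1 random walk rescaled by sqrt(n_steps), so that the
    value at 'time' k/n_steps approximates W(k/n_steps)."""
    position = 0.0
    path = [0.0]
    for _ in range(n_steps):
        position += random.choice((-1.0, 1.0))
        path.append(position / n_steps ** 0.5)
    return path

n_steps, n_samples = 1000, 2000

# Defining variance property Var[W(1)] = 1
endpoints = [scaled_random_walk(n_steps)[-1] for _ in range(n_samples)]
print("sample variance at t = 1:", statistics.pvariance(endpoints))        # ~1

# Stationary increments: W(0.75) - W(0.25) should have variance ~0.5
increments = []
for _ in range(n_samples):
    path = scaled_random_walk(n_steps)
    increments.append(path[750] - path[250])
print("sample variance of W(0.75) - W(0.25):", statistics.pvariance(increments))  # ~0.5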
Physical sciences
Thermodynamics
Physics
4473
https://en.wikipedia.org/wiki/BIOS
BIOS
In computing, BIOS (, ; Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup). The firmware comes pre-installed on the computer's motherboard. The name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS firmware was originally proprietary to the IBM PC; it was reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard. The BIOS in older PCs initializes and tests the system hardware components (power-on self-test or POST for short), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup. Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices especially system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so it can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard. Unified Extensible Firmware Interface (UEFI) is a successor to the legacy PC BIOS, aiming to address its technical limitations. UEFI firmware may include legacy BIOS compatibility to maintain compatibility with operating systems and option cards that do not support UEFI native operation. Since 2020, all PCs for Intel platforms no longer support Legacy BIOS. The last version of Microsoft Windows to officially support running on PCs which use legacy BIOS firmware is Windows 10 as Windows 11 requires a UEFI-compliant system (except for IoT Enterprise editions of Windows 11 since version 24H2). History The term BIOS (Basic Input/Output System) was created by Gary Kildall and first appeared in the CP/M operating system in 1975, describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware. (A CP/M machine usually has only a simple boot loader in its ROM.) Versions of MS-DOS, PC DOS or DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS"; this file is known as the "DOS BIOS" (also known as the "DOS I/O System") and contains the lower-level hardware-specific part of the operating system. Together with the underlying hardware-specific but operating system-independent "System BIOS", which resides in ROM, it represents the analogue to the "CP/M BIOS". The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. 
The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and therefore was named "CBIOS" (for "Compatibility BIOS"), whereas the "ABIOS" (for "Advanced BIOS") provided new interfaces specifically suited for multitasking operating systems such as OS/2. User interface The BIOS of the original IBM PC and XT had no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test (POST) had not proceeded to the point of successfully initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards. Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" (BCU) or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCsincluding the IBM ATheld configuration settings in battery-backed RAM and used a bootable configuration program on floppy disk, not in the ROM, to set the configuration options contained in this memory. The floppy disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility (ECU). A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; the user can configure hardware options using the keyboard and video display. The modern Wintel machine may store the BIOS configuration settings in flash ROM, perhaps the same flash ROM that holds the BIOS itself. Extensions (option ROMs) Peripheral cards such as hard disk drive host bus adapters and video cards have their own firmware, and BIOS extension option ROM code may be a part of the expansion card firmware; that code provides additional capabilities in the BIOS. Code in option ROMs runs before the BIOS boots the operating system from mass storage. These ROMs typically test and initialize hardware, add new BIOS services, or replace existing BIOS services with their own services. For example, a SCSI controller usually has a BIOS extension ROM that adds support for hard drives connected through that controller. An extension ROM could in principle contain operating system, or it could implement an entirely different boot process such as network booting. Operation of an IBM-compatible computer system can be completely changed by removing or inserting an adapter card (or a ROM chip) that contains a BIOS extension ROM. The motherboard BIOS typically contains code for initializing and bootstrapping integrated display and integrated storage. The initialization process can involve the execution of code related to the device being initialized, for locating the device, verifying the type of device, then establishing base registers, setting pointers, establishing interrupt vector tables, selecting paging modes which are ways for organizing available registers in devices, setting default values for accessing software routines related to interrupts, and setting the device's configuration using default values. 
In addition, plug-in adapter cards such as SCSI, RAID, network interface cards, and video cards often include their own BIOS (e.g. Video BIOS), complementing or replacing the system BIOS code for the given component. Even devices built into the motherboard can behave in this way; their option ROMs can be a part of the motherboard BIOS. An add-in card requires an option ROM if the card is not supported by the motherboard BIOS and the card needs to be initialized or made accessible through BIOS services before the operating system can be loaded (usually this means it is required in the boot process). An additional advantage of ROM on some early PC systems (notably including the IBM PCjr) was that ROM was faster than main system RAM. (On modern systems, the case is very much the reverse of this, and BIOS ROM code is usually copied ("shadowed") into RAM so it will run faster.) Physical placement Option ROMs normally reside on adapter cards. However, the original PC, and perhaps also the PC XT, have a spare ROM socket on the motherboard (the "system board" in IBM's terms) into which an option ROM can be inserted, and the four ROMs that contain the BASIC interpreter can also be removed and replaced with custom ROMs which can be option ROMs. The IBM PCjr is unique among PCs in having two ROM cartridge slots on the front. Cartridges in these slots map into the same region of the upper memory area used for option ROMs, and the cartridges can contain option ROM modules that the BIOS would recognize. The cartridges can also contain other types of ROM modules, such as BASIC programs, that are handled differently. One PCjr cartridge can contain several ROM modules of different types, possibly stored together in one ROM chip. Operation System startup The 8086 and 8088 start at physical address FFFF0h. The 80286 starts at physical address FFFFF0h. The 80386 and later x86 processors start at physical address FFFFFFF0h. When the system is initialized, the first instruction of the BIOS appears at that address. If the system has just been powered up or the reset button was pressed ("cold boot"), the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed ("warm boot"), a special flag value stored in nonvolatile BIOS memory ("CMOS") tested by the BIOS allows bypass of the lengthy POST and memory detection. The POST identifies, tests and initializes system devices such as the CPU, chipset, RAM, motherboard, video card, keyboard, mouse, hard disk drive, optical disc drive and other hardware, including integrated peripherals. Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it. This feature was intended for factory test or diagnostic purposes. After the motherboard BIOS completes its POST, most BIOS versions search for option ROM modules, also called BIOS extension ROMs, and execute them. The motherboard BIOS scans for extension ROMs in a portion of the "upper memory area" (the part of the x86 real-mode address space at and above address 0xA0000) and runs each ROM found, in order. To discover memory-mapped option ROMs, a BIOS implementation scans the real-mode address space from 0x0C0000 to 0x0F0000 on 2 KB (2,048 bytes) boundaries, looking for a two-byte ROM signature: 0x55 followed by 0xAA. In a valid expansion ROM, this signature is followed by a single byte indicating the number of 512-byte blocks the expansion ROM occupies in real memory, and the next byte is the option ROM's entry point (also known as its "entry offset"). 
If the ROM has a valid checksum, the BIOS transfers control to the entry address, which in a normal BIOS extension ROM should be the beginning of the extension's initialization routine. At this point, the extension ROM code takes over, typically testing and initializing the hardware it controls and registering interrupt vectors for use by post-boot applications. It may use BIOS services (including those provided by previously initialized option ROMs) to provide a user configuration interface, to display diagnostic information, or to do anything else that it requires. An option ROM should normally return to the BIOS after completing its initialization process. Once (and if) an option ROM returns, the BIOS continues searching for more option ROMs, calling each as it is found, until the entire option ROM area in the memory space has been scanned. It is possible that an option ROM will not return to BIOS, pre-empting the BIOS's boot sequence altogether. Boot process After the POST completes and, in a BIOS that supports option ROMs, after the option ROM scan is completed and all detected ROM modules with valid checksums have been called, the BIOS calls interrupt 19h to start boot processing. Post-boot, programs loaded can also call interrupt 19h to reboot the system, but they must be careful to disable interrupts and other asynchronous hardware processes that may interfere with the BIOS rebooting process, or else the system may hang or crash while it is rebooting. When interrupt 19h is called, the BIOS attempts to locate boot loader software on a "boot device", such as a hard disk, a floppy disk, CD, or DVD. It loads and executes the first boot software it finds, giving it control of the PC. The BIOS uses the boot devices set in Nonvolatile BIOS memory (CMOS), or, in the earliest PCs, DIP switches. The BIOS checks each device in order to see if it is bootable by attempting to load the first sector (boot sector). If the sector cannot be read, the BIOS proceeds to the next device. If the sector is read successfully, some BIOSes will also check for the boot sector signature 0x55 0xAA in the last two bytes of the sector (which is 512 bytes long), before accepting a boot sector and considering the device bootable. When a bootable device is found, the BIOS transfers control to the loaded sector. The BIOS does not interpret the contents of the boot sector other than to possibly check for the boot sector signature in the last two bytes. Interpretation of data structures like partition tables and BIOS Parameter Blocks is done by the boot program in the boot sector itself or by other programs loaded through the boot process. A non-disk device such as a network adapter attempts booting by a procedure that is defined by its option ROM or the equivalent integrated into the motherboard BIOS ROM. As such, option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM. With the El Torito optical media boot standard, the optical drive actually emulates a 3.5" high-density floppy disk to the BIOS for boot purposes. Reading the "first sector" of a CD-ROM or DVD-ROM is not a simply defined operation like it is on a floppy disk or a hard disk. Furthermore, the complexity of the medium makes it difficult to write a useful boot program in one sector. The bootable virtual floppy disk can contain software that provides access to the optical medium in its native format. 
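The per-device probe described above (read the first sector, check the 55h AAh signature in its last two bytes, otherwise move on) can be sketched as follows. This is an illustration only: read_first_sector is a hypothetical stand-in for the BIOS's own disk-read routine, and the device list stands in for the priority order held in nonvolatile BIOS memory.

#include <stdint.h>

#define SECTOR_SIZE 512

/* A sector looks bootable, to BIOSes that check, when its last two
 * bytes are 55h AAh.                                                 */
int sector_is_bootable(const uint8_t sector[SECTOR_SIZE])
{
    return sector[510] == 0x55 && sector[511] == 0xAA;
}

/* Stand-in for the BIOS's own INT 13h-style read of a device's first
 * sector; returns nonzero on success.                                */
typedef int (*read_fn)(int device, uint8_t out[SECTOR_SIZE]);

int pick_boot_device(const int *devices, int count, read_fn read_first_sector)
{
    uint8_t sector[SECTOR_SIZE];
    for (int i = 0; i < count; i++) {
        if (!read_first_sector(devices[i], sector))
            continue;              /* unreadable: try the next device    */
        if (sector_is_bootable(sector))
            return devices[i];     /* BIOS would now jump to the sector  */
    }
    return -1;                     /* no bootable device was found       */
}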
If an expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter) in a cooperative way, it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API. Also, if an expansion ROM wishes to change the way the system boots unilaterally, it can simply hook interrupt 19h or other interrupts normally called from interrupt 19h, such as interrupt 13h, the BIOS disk service, to intercept the BIOS boot process. Then it can replace the BIOS boot process with one of its own, or it can merely modify the boot sequence by inserting its own boot actions into it, by preventing the BIOS from detecting certain devices as bootable, or both. Before the BIOS Boot Specification was promulgated, this was the only way for expansion ROMs to implement boot capability for devices not supported for booting by the native BIOS of the motherboard. Boot priority The user can select the boot priority implemented by the BIOS. For example, most computers have a hard disk that is bootable, but sometimes there is a removable-media drive that has higher boot priority, so the user can cause a removable disk to be booted. In most modern BIOSes, the boot priority order can be configured by the user. In older BIOSes, limited boot priority options are selectable; in the earliest BIOSes, a fixed priority scheme was implemented, with floppy disk drives first, fixed disks (i.e., hard disks) second, and typically no other boot devices supported, subject to modification of these rules by installed option ROMs. The BIOS in an early PC also usually would only boot from the first floppy disk drive or the first hard disk drive, even if there were two drives installed. Boot failure On the original IBM PC and XT, if no bootable disk was found, the BIOS would try to start ROM BASIC with the interrupt call to interrupt 18h. Since few programs used BASIC in ROM, clone PC makers left it out; then a computer that failed to boot from a disk would display "No ROM BASIC" and halt (in response to interrupt 18h). Later computers would display a message like "No bootable disk found"; some would prompt for a disk to be inserted and a key to be pressed to retry the boot process. A modern BIOS may display nothing or may automatically enter the BIOS configuration utility when the boot process fails. Boot environment The environment for the boot program is very simple: the CPU is in real mode and the general-purpose and segment registers are undefined, except SS, SP, CS, and DL. CS:IP always points to physical address 0x07C00. What values CS and IP actually have is not well defined. Some BIOSes use a CS:IP of 0x0000:0x7C00 while others may use 0x07C0:0x0000. Because boot programs are always loaded at this fixed address, there is no need for a boot program to be relocatable. DL may contain the drive number, as used with interrupt 13h, of the boot device. SS:SP points to a valid stack that is presumably large enough to support hardware interrupts, but otherwise SS and SP are undefined. 
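The two CS:IP conventions mentioned above are equivalent because a real-mode physical address is the segment multiplied by 16 plus the offset. The following small, self-contained snippet simply works that arithmetic out.

#include <stdio.h>
#include <stdint.h>

/* Real-mode address translation: physical = segment * 16 + offset. */
static uint32_t phys(uint16_t seg, uint16_t off)
{
    return (uint32_t)seg * 16u + off;
}

int main(void)
{
    printf("0000:7C00 -> %05X\n", phys(0x0000, 0x7C00)); /* prints 07C00 */
    printf("07C0:0000 -> %05X\n", phys(0x07C0, 0x0000)); /* prints 07C00 */
    return 0;
}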
(A stack must already be set up in order for interrupts to be serviced, and interrupts must be enabled in order for the system timer-tick interrupt, which BIOS always uses at least to maintain the time-of-day count and which it initializes during POST, to be active and for the keyboard to work. The keyboard works even if the BIOS keyboard service is not called; keystrokes are received and placed in the 15-character type-ahead buffer maintained by BIOS.) The boot program must set up its own stack, because the size of the stack set up by BIOS is unknown and its location is likewise variable; although the boot program can investigate the default stack by examining SS:SP, it is easier and shorter to just unconditionally set up a new stack. At boot time, all BIOS services are available, and the memory below address 0x00400 contains the interrupt vector table. BIOS POST has initialized the system timers, interrupt controller(s), DMA controller(s), and other motherboard/chipset hardware as necessary to bring all BIOS services to ready status. DRAM refresh for all system DRAM in conventional memory and extended memory, but not necessarily expanded memory, has been set up and is running. The interrupt vectors corresponding to the BIOS interrupts have been set to point at the appropriate entry points in the BIOS, hardware interrupt vectors for devices initialized by the BIOS have been set to point to the BIOS-provided ISRs, and some other interrupts, including ones that BIOS generates for programs to hook, have been set to a default dummy ISR that immediately returns. The BIOS maintains a reserved block of system RAM at addresses 0x00400–0x004FF with various parameters initialized during the POST. All memory at and above address 0x00500 can be used by the boot program; it may even overwrite itself. Operating system services The BIOS ROM is customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to programs, including operating systems. For example, an IBM PC might have either a monochrome or a color display adapter (using different display memory addresses and hardware), but a single, standard, BIOS system call may be invoked to display a character at a specified position on the screen in text mode or graphics mode. The BIOS provides a small library of basic input/output functions to operate peripherals (such as the keyboard, rudimentary text and graphics display functions and so forth). When using MS-DOS, BIOS services could be accessed by an application program (or by MS-DOS) by executing an interrupt 13h instruction to access disk functions, or by executing one of a number of other documented BIOS interrupt calls to access video display, keyboard, cassette, and other device functions. Operating systems and executive software that are designed to supersede this basic firmware functionality provide replacement software interfaces to application software. Applications can also provide these services to themselves. This began even in the 1980s under MS-DOS, when programmers observed that using the BIOS video services for graphics display was very slow. To increase the speed of screen output, many programs bypassed the BIOS and programmed the video display hardware directly. 
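The interrupt vector table mentioned above holds one 4-byte entry per interrupt number at linear address 4*n: a 16-bit offset followed by a 16-bit segment. The sketch below shows how a vector is decoded; it is illustrative only and works on a caller-supplied copy of the first 1 KB of memory rather than on the live table.

#include <stdint.h>

/* One real-mode far pointer as stored in the interrupt vector table. */
typedef struct { uint16_t off, seg; } far_ptr;

/* Decode vector n from a copy of the IVT (first 1024 bytes of RAM):
 * little-endian offset word, then little-endian segment word.        */
far_ptr get_vector(const uint8_t ivt[1024], uint8_t intno)
{
    const uint8_t *e = ivt + 4 * intno;
    far_ptr v;
    v.off = (uint16_t)(e[0] | (e[1] << 8));
    v.seg = (uint16_t)(e[2] | (e[3] << 8));
    return v;
}

/* Hooking INT 13h, as an option ROM or resident program might, means
 * saving the old far pointer from entry 0x13 and writing a new one.  */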
Other graphics programmers, particularly but not exclusively in the demoscene, observed that there were technical capabilities of the PC display adapters that were not supported by the IBM BIOS and could not be taken advantage of without circumventing it. Since the AT-compatible BIOS ran in Intel real mode, operating systems that ran in protected mode on 286 and later processors required hardware device drivers compatible with protected mode operation to replace BIOS services. In modern PCs running modern operating systems (such as Windows and Linux) the BIOS interrupt calls are used only during booting and initial loading of operating systems. Before the operating system's first graphical screen is displayed, input and output are typically handled through BIOS. A boot menu such as the textual menu of Windows, which allows users to choose an operating system to boot, to boot into the safe mode, or to use the last known good configuration, is displayed through BIOS and receives keyboard input through BIOS. Many modern PCs can still boot and run legacy operating systems such as MS-DOS or DR-DOS that rely heavily on BIOS for their console and disk I/O, providing that the system has a BIOS, or a CSM-capable UEFI firmware. Processor microcode updates Intel processors have had reprogrammable microcode since the P6 microarchitecture. AMD processors have had reprogrammable microcode since the K7 microarchitecture. The BIOS contains patches to the processor microcode that fix errors in the initial processor microcode; microcode is loaded into the processor's SRAM, so the reprogramming is not persistent, and loading of microcode updates is therefore performed each time the system is powered up. Without reprogrammable microcode, an expensive processor swap would be required; for example, the Pentium FDIV bug became an expensive fiasco for Intel as it required a product recall because the original Pentium processor's defective microcode could not be reprogrammed. Operating systems can also update the main processor microcode. Identification Some BIOSes contain a software licensing description table (SLIC), a digital signature placed inside the BIOS by the original equipment manufacturer (OEM), for example Dell. The SLIC is inserted into the ACPI data table and contains no active code. Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows installation disk and system recovery disc containing Windows software. Systems with a SLIC can be preactivated with an OEM product key, and they verify an XML-formatted OEM certificate against the SLIC in the BIOS as a means of self-activating (see System Locked Preinstallation, SLP). If a user performs a fresh install of Windows, they will need to have possession of both the OEM key (either SLP or COA) and the digital certificate for their SLIC in order to bypass activation. This can be achieved if the user performs a restore using a pre-customised image provided by the OEM. Power users can copy the necessary certificate files from the OEM image, decode the SLP product key, then perform SLP activation manually. Overclocking Some BIOS implementations allow overclocking, an action in which the CPU is adjusted to a higher clock rate than its manufacturer's rating for guaranteed capability. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan. 
Overclocking, when incorrectly performed, may also cause components to overheat so quickly that they mechanically destroy themselves. Modern use Some older operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC. Calling real mode BIOS services directly is inefficient for protected mode (and long mode) operating systems. BIOS interrupt calls are not used by modern multitasking operating systems after they initially load. In the 1990s, BIOS provided some protected mode interfaces for Microsoft Windows and Unix-like operating systems, such as Advanced Power Management (APM), Plug and Play BIOS, Desktop Management Interface (DMI), VESA BIOS Extensions (VBE), e820 and MultiProcessor Specification (MPS). Starting from the year 2000, most BIOSes provide ACPI, SMBIOS, VBE and e820 interfaces for modern operating systems. After operating systems load, the System Management Mode code is still running in SMRAM. Since 2010, BIOS technology is in a transitional process toward UEFI. Configuration Setup utility Historically, the BIOS in the IBM PC and XT had no built-in user interface. The BIOS versions in earlier PCs (XT-class) were not software configurable; instead, users set the options via DIP switches on the motherboard. Later computers, including most IBM-compatibles with 80286 CPUs, had a battery-backed nonvolatile BIOS memory (CMOS RAM chip) that held BIOS settings. These settings, such as video-adapter type, memory size, and hard-disk parameters, could only be configured by running a configuration program from a disk, not built into the ROM. A special "reference diskette" was inserted in an IBM AT to configure settings such as memory size. Early BIOS versions did not have passwords or boot-device selection options. The BIOS was hard-coded to boot from the first floppy drive, or, if that failed, the first hard disk. Access control in early AT-class machines was by a physical keylock switch (which was not hard to defeat if the computer case could be opened). Anyone who could switch on the computer could boot it. Later, 386-class computers started integrating the BIOS setup utility in the ROM itself, alongside the BIOS code; these computers usually boot into the BIOS setup utility if a certain key or key combination is pressed, otherwise the BIOS POST and boot process are executed. A modern BIOS setup utility has a text user interface (TUI) or graphical user interface (GUI) accessed by pressing a certain key on the keyboard when the PC starts. Usually, the key is advertised for short time during the early startup, for example "Press DEL to enter Setup". The actual key depends on specific hardware. The settings key is most often Delete (Acer, ASRock, Asus PC, ECS, Gigabyte, MSI, Zotac) and F2 (Asus motherboard, Dell, Lenovo laptop, Origin PC, Samsung, Toshiba), but it can also be F1 (Lenovo desktop) and F10 (HP). Features present in the BIOS setup utility typically include: Configuring, enabling and disabling the hardware components Setting the system time Setting the boot order Setting various passwords, such as a password for securing access to the BIOS user interface and preventing malicious users from booting the system from unauthorized portable storage devices, or a password for booting the system Hardware monitoring A modern BIOS setup screen often features a PC Health Status or a Hardware Monitoring tab, which directly interfaces with a Hardware Monitor chip of the mainboard. 
This makes it possible to monitor CPU and chassis temperature, the voltage provided by the power supply unit, as well as monitor and control the speed of the fans connected to the motherboard. Once the system is booted, hardware monitoring and computer fan control is normally done directly by the Hardware Monitor chip itself, which can be a separate chip, interfaced through I²C or SMBus, or come as a part of a Super I/O solution, interfaced through Industry Standard Architecture (ISA) or Low Pin Count (LPC). Some operating systems, like NetBSD with envsys and OpenBSD with sysctl hw.sensors, feature integrated interfacing with hardware monitors. However, in some circumstances, the BIOS also provides the underlying information about hardware monitoring through ACPI, in which case, the operating system may be using ACPI to perform hardware monitoring. Reprogramming In modern PCs the BIOS is stored in rewritable EEPROM or NOR flash memory, allowing the contents to be replaced and modified. This rewriting of the contents is sometimes termed flashing. It can be done by a special program, usually provided by the system's manufacturer, or at POST, with a BIOS image in a hard drive or USB flash drive. A file containing such contents is sometimes termed "a BIOS image". A BIOS might be reflashed in order to upgrade to a newer version to fix bugs or provide improved performance or to support newer hardware. Some computers also support updating the BIOS via an update floppy disk or a special partition on the hard drive. Hardware The original IBM PC BIOS (and cassette BASIC) was stored on mask-programmed read-only memory (ROM) chips in sockets on the motherboard. ROMs could be replaced, but not altered, by users. To allow for updates, many compatible computers used re-programmable BIOS memory devices such as EPROM, EEPROM and later flash memory (usually NOR flash) devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, Flash BIOS chips became common around 1995 because the electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard ultraviolet erasable PROM (EPROM) chips. Flash chips are programmed (and re-programmed) in-circuit, while EPROM chips need to be removed from the motherboard for re-programming. BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes. Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, 31 December 1999. The first flash chips were attached to the ISA bus. Starting in 1998, the BIOS flash moved to the LPC bus, following a new standard implementation known as "firmware hub" (FWH). In 2005, the BIOS flash memory moved to the SPI bus. The size of the BIOS, and the capacity of the ROM, EEPROM, or other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 32 megabytes. For contrast, the original IBM PC BIOS was contained in an 8 KB mask ROM. Some modern motherboards are including even bigger NAND flash memory ICs on board which are capable of storing whole compact operating systems, such as some Linux distributions. 
For example, some ASUS notebooks included Splashtop OS embedded into their NAND flash memory ICs. However, the idea of including an operating system along with BIOS in the ROM of a PC is not new; in the 1980s, Microsoft offered a ROM option for MS-DOS, and it was included in the ROMs of some PC clones such as the Tandy 1000 HX. Another type of firmware chip was found on the IBM PC AT and early compatibles. In the AT, the keyboard interface was controlled by a microcontroller with its own programmable memory. On the IBM AT, that was a 40-pin socketed device, while some manufacturers used an EPROM version of this chip which resembled an EPROM. This controller was also assigned the A20 gate function to manage memory above the one-megabyte range; occasionally an upgrade of this "keyboard BIOS" was necessary to take advantage of software that could use upper memory. The BIOS may contain components such as the Memory Reference Code (MRC), which is responsible for the memory initialization (e.g. SPD and memory timings initialization). Modern BIOS includes Intel Management Engine or AMD Platform Security Processor firmware. Vendors and products IBM published the entire listings of the BIOS for its original PC, PC XT, PC AT, and other contemporary PC models, in an appendix of the IBM PC Technical Reference Manual for each machine type. The effect of the publication of the BIOS listings is that anyone can see exactly what a definitive BIOS does and how it does it. In May 1984, Phoenix Software Associates released its first ROM-BIOS. This BIOS enabled OEMs to build essentially fully compatible clones without having to reverse-engineer the IBM PC BIOS themselves, as Compaq had done for the Portable; it also helped fuel the growth in the PC-compatibles industry and sales of non-IBM versions of DOS. The first American Megatrends (AMI) BIOS was released in 1986. New standards grafted onto the BIOS are usually without complete public documentation or any BIOS listings. As a result, it is not as easy to learn the intimate details about the many non-IBM additions to BIOS as about the core BIOS services. Many PC motherboard suppliers licensed the BIOS "core" and toolkit from a commercial third party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customized this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer. Major IBVs included American Megatrends (AMI), Insyde Software, Phoenix Technologies, and Byosoft. Microid Research and Award Software were acquired by Phoenix Technologies in 1998; Phoenix later phased out the Award brand name (although Award Software is still credited in newer AwardBIOS versions and in UEFI firmwares). General Software, which was also acquired by Phoenix in 2007, sold BIOS for embedded systems based on Intel processors. SeaBIOS is an open-source BIOS implementation. Open-source BIOS replacements The open-source community increased their effort to develop a replacement for proprietary BIOSes and their future incarnations with an open-sourced counterparts. Open Firmware was an early attempt to make an open specification for boot firmware. It was initially endorsed by IEEE in its IEEE 1275-1994 standard but was withdrawn in 2005. Later examples include the OpenBIOS, coreboot and libreboot projects. AMD provided product specifications for some chipsets using coreboot, and Google is sponsoring the project. 
Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards. Security EEPROM and flash memory chips are advantageous because they can be easily updated by the user; it is customary for hardware manufacturers to issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage carried the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block": a portion of the BIOS which runs first and must be updated separately. This code verifies that the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB flash drive) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruptions. There are at least five known viruses that attack the BIOS, two of which were for demonstration purposes. The first one found in the wild was Mebromi, targeting Chinese users. The first BIOS virus was BIOS Meningitis, which, instead of erasing BIOS chips, infected them. BIOS Meningitis was relatively harmless, compared to a virus like CIH. The second BIOS virus was CIH, also known as the "Chernobyl Virus", which was able to erase flash ROM BIOS content on compatible chipsets. CIH appeared in mid-1998 and became active in April 1999. Often, infected computers could no longer boot, and people had to remove the flash ROM IC from the motherboard and reprogram it. CIH targeted the then-widespread Intel i430TX motherboard chipset and took advantage of the fact that the Windows 9x operating systems, also widespread at the time, allowed direct hardware access to all programs. Modern systems are not vulnerable to CIH because of the variety of chipsets now in use, which are incompatible with the Intel i430TX chipset, and because of other flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks which are protected from accidental overwrite, or dual- and quad-BIOS-equipped systems which may, in the event of a crash, use a backup BIOS. Also, all modern operating systems, such as FreeBSD, Linux, macOS, and Windows NT-based versions of Windows such as Windows 2000, Windows XP and newer, use a hardware abstraction layer and do not allow user-mode programs to have direct hardware access. As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering antivirus software. Other BIOS viruses remain possible, however; since most Windows home users without Windows Vista/7's UAC run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware without first using an exploit. The operating system OpenBSD prevents all users from having this access, and the grsecurity patch for the Linux kernel also prevents this direct hardware access by default, the difference being that an attacker would require a much more difficult kernel-level exploit or a reboot of the machine. The third BIOS virus was a technique presented by John Heasman, principal security consultant for UK-based Next-Generation Security Software. 
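The boot-block behaviour described above can be sketched as follows. This is only an illustration of the control flow: the function names are invented, and the simple 16-bit additive checksum stands in for the hash or signature check a real boot block would use.

#include <stdint.h>
#include <stddef.h>

/* Toy integrity check over the main BIOS image. */
static uint16_t sum16(const uint8_t *p, size_t n)
{
    uint16_t s = 0;
    while (n--) s = (uint16_t)(s + *p++);
    return s;
}

typedef void (*handler)(void);

/* Boot-block decision: run the main BIOS only if its image matches the
 * stored checksum; otherwise fall back to a recovery path that lets the
 * user reflash from removable media.                                    */
void boot_block(const uint8_t *main_bios, size_t len, uint16_t expected,
                handler run_main_bios, handler run_recovery)
{
    if (sum16(main_bios, len) == expected)
        run_main_bios();   /* image intact: continue the normal POST   */
    else
        run_recovery();    /* corrupt: prompt the user to reflash      */
}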
In 2006, at the Black Hat Security Conference, he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normal ACPI functions stored in flash memory. The fourth BIOS virus was a technique called "Persistent BIOS infection." It appeared in 2009 at the CanSecWest Security Conference in Vancouver, and at the SyScan Security Conference in Singapore. Researchers Anibal Sacco and Alfredo Ortega, from Core Security Technologies, demonstrated how to insert malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at start-up, even before the operating system is booted. The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine, or for the user to be root. Despite these requirements, Ortega underlined the profound implications of his and Sacco's discovery: "We can patch a driver to drop a fully working rootkit. We even have a little code that can remove or disable antivirus." Mebromi is a trojan which targets computers with AwardBIOS, Microsoft Windows, and antivirus software from two Chinese companies: Rising Antivirus and Jiangmin KV Antivirus. Mebromi installs a rootkit which infects the Master boot record. In a December 2013 interview with 60 Minutes, Deborah Plunkett, Information Assurance Director for the US National Security Agency claimed the NSA had uncovered and thwarted a possible BIOS attack by a foreign nation state, targeting the US financial system. The program cited anonymous sources alleging it was a Chinese plot. However follow-up articles in The Guardian, The Atlantic, Wired and The Register refuted the NSA's claims. Newer Intel platforms have Intel Boot Guard (IBG) technology enabled, this technology will check the BIOS digital signature at startup, and the IBG public key is fused into the PCH. End users can't disable this function. Alternatives and successors Unified Extensible Firmware Interface (UEFI) supplements the BIOS in many new machines. Initially written for the Intel Itanium architecture, UEFI is now available for x86 and Arm platforms; the specification development is driven by the Unified EFI Forum, an industry special interest group. EFI booting has been supported in only Microsoft Windows versions supporting GPT, the Linux kernel 2.6.1 and later, and macOS on Intel-based Macs. , new PC hardware predominantly ships with UEFI firmware. The architecture of the rootkit safeguard can also prevent the system from running the user's own software changes, which makes UEFI controversial as a legacy BIOS replacement in the open hardware community. Also, Windows 11 requires UEFI to boot, with the exception of IoT Enterprise editions of Windows 11. UEFI is required for devices shipping with Windows 8 and above. Other alternatives to the functionality of the "Legacy BIOS" in the x86 world include coreboot and libreboot. Some servers and workstations use a platform-independent Open Firmware (IEEE-1275) based on the Forth programming language; it is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC systems such as the CHRP motherboards, along with the x86-based OLPC XO-1. As of at least 2015, Apple has removed legacy BIOS support from the UEFI monitor in Intel-based Macs. As such, the BIOS utility no longer supports the legacy option, and prints "Legacy mode not supported on this system". 
In 2017, Intel announced that it would remove legacy BIOS support by 2020. Since 2019, new Intel platform OEM PCs no longer support the legacy option.
Technology
Computer hardware
null
4474
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensate
Bose–Einstein condensate
In condensed matter physics, a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero, i.e., 0 K (−273.15 °C). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point microscopic quantum-mechanical phenomena, particularly wavefunction interference, become apparent macroscopically. More generally, condensation refers to the appearance of macroscopic occupation of one or several states: for example, in BCS theory, a superconductor is a condensate of Cooper pairs. As such, condensation can be associated with a phase transition, and the macroscopic occupation of the state is the order parameter. Bose–Einstein condensate was first predicted, generally, in 1924–1925 by Albert Einstein, crediting a pioneering paper by Satyendra Nath Bose on the new field now known as quantum statistics. In 1995, the first Bose–Einstein condensate was created by Eric Cornell and Carl Wieman of the University of Colorado Boulder using rubidium atoms; later that year, Wolfgang Ketterle of MIT produced a BEC using sodium atoms. In 2001 Cornell, Wieman, and Ketterle shared the Nobel Prize in Physics "for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates". History Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.) Einstein then extended Bose's ideas to matter in two other papers. The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. Bosons include the photon, polaritons, magnons, some atoms and molecules (depending on the number of nucleons, see #Isotopes) such as atomic hydrogen, helium-4, lithium-7, rubidium-87 or strontium-84. In 1938, Fritz London proposed the BEC as a mechanism for superfluidity in liquid helium-4 and for superconductivity. The quest to produce a Bose–Einstein condensate in the laboratory was stimulated by a paper published in 1976 by two program directors at the National Science Foundation (William Stwalley and Lewis Nosanow), proposing to use spin-polarized atomic hydrogen to produce a gaseous BEC. This led to the immediate pursuit of the idea by four independent research groups; these were led by Isaac Silvera (University of Amsterdam), Walter Hardy (University of British Columbia), Thomas Greytak (Massachusetts Institute of Technology) and David Lee (Cornell University). However, cooling atomic hydrogen turned out to be technically difficult, and Bose–Einstein condensation of atomic hydrogen was only realized in 1998. 
On 5 June 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvins (nK). Shortly thereafter, Wolfgang Ketterle at MIT produced a Bose–Einstein Condensate in a gas of sodium atoms. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics. Bose-Einstein condensation of alkali gases is easier because they can be pre-cooled with laser cooling techniques, unlike atomic hydrogen at the time, which give a significant head start when performing the final forced evaporative cooling to cross the condensation threshold. These early studies founded the field of ultracold atoms, and hundreds of research groups around the world now routinely produce BECs of dilute atomic vapors in their labs. Since 1995, many other atomic species have been condensed (see #Isotopes), and BECs have also been realized using molecules, polaritons, other quasi-particles. BECs of photons can also be made, for example, in dye microcavites with wavelength-scale mirror separation, making a two-dimensional harmonically confined photon gas with tunable chemical potential. Critical temperature This transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas consisting of non-interacting particles with no apparent internal degrees of freedom is given by where: is the critical temperature, is the particle density, is the mass per boson, is the reduced Planck constant, is the Boltzmann constant, is the Riemann zeta function (). Interactions shift the value, and the corrections can be calculated by mean-field theory. This formula is derived from finding the gas degeneracy in the Bose gas using Bose–Einstein statistics. The critical temperature depends on the density. A more concise and experimentally relevant condition involves the phase-space density , where is the thermal de Broglie wavelength. It is a dimensionless quantity. The transition to BEC occurs when the phase-space density is greater than critical value: in 3D uniform space. This is equivalent to the above condition on the temperature. In a 3D harmonic potential, the critical value is instead where has to be understood as the peak density. Derivation Ideal Bose gas For an ideal Bose gas we have the equation of state where is the per-particle volume, is the thermal wavelength, is the fugacity, and It is noticeable that is a monotonically growing function of in , which are the only values for which the series converge. Recognizing that the second term on the right-hand side contains the expression for the average occupation number of the fundamental state , the equation of state can be rewritten as Because the left term on the second equation must always be positive, , and because , a stronger condition is which defines a transition between a gas phase and a condensed phase. On the critical region it is possible to define a critical temperature and thermal wavelength: recovering the value indicated on the previous section. The critical values are such that if or , we are in the presence of a Bose–Einstein condensate. Understanding what happens with the fraction of particles on the fundamental level is crucial. As so, write the equation of state for , obtaining and equivalently So, if , the fraction , and if , the fraction . At temperatures near to absolute 0, particles tend to condense in the fundamental state, which is the state with momentum . 
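For reference, the standard textbook expressions for the uniform, non-interacting 3D Bose gas discussed above can be written out as follows (ζ is the Riemann zeta function, ζ(3/2) ≈ 2.6124; these are the uniform-gas forms, not the harmonic-trap ones):

T_c \;=\; \frac{2\pi\hbar^{2}}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}

\lambda_T \;=\; \sqrt{\frac{2\pi\hbar^{2}}{m k_B T}}, \qquad
\text{condensation when } \; n\,\lambda_T^{3} \;\ge\; \zeta(3/2) \approx 2.612

\frac{N_0}{N} \;=\; 1-\left(\frac{T}{T_c}\right)^{3/2} \quad (T<T_c)

Here T_c is the critical temperature, n the particle density, m the mass per boson, ħ the reduced Planck constant, k_B the Boltzmann constant, λ_T the thermal de Broglie wavelength, and N_0/N the condensed fraction; the two criteria (temperature and phase-space density) are equivalent.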
Experimental observation Superfluid helium-4 In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase (at a much lower temperature) which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate). Dilute atomic gases The first "pure" Bose–Einstein condensate was created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They cooled a dilute vapor of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT condensed sodium-23. Ketterle's condensate had a hundred times more atoms, allowing important results such as the observation of quantum mechanical interference between two different condensates. Cornell, Wieman and Ketterle won the 2001 Nobel Prize in Physics for their achievements. A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work. Lithium has attractive interactions, causing the condensate to be unstable and collapse for all but a few atoms. Hulet's team subsequently showed the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed. Velocity-distribution data graph In the image accompanying this article, the velocity-distribution data indicates the formation of a Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: spatially confined atoms have a minimum width velocity distribution. This width is given by the curvature of the magnetic potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein. Quasiparticles Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin which means they are bosons that can form condensates. 
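The minimum velocity width mentioned above follows directly from the uncertainty relation; as a rough, standard estimate (assuming a harmonic trap of angular frequency ω in the direction considered):

\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}
\;\;\Rightarrow\;\;
\Delta v \;\gtrsim\; \frac{\hbar}{2m\,\Delta x},
\qquad
\Delta x_{\mathrm{ho}}=\sqrt{\frac{\hbar}{2m\omega}}
\;\;\Rightarrow\;\;
\Delta v_{\mathrm{ho}}=\sqrt{\frac{\hbar\omega}{2m}}

so the more tightly confined direction (larger ω, smaller Δx) has the larger velocity spread, which is the anisotropy of the condensate peak described above.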
Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. In 1999 condensation was demonstrated in antiferromagnetic , at temperatures as great as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature, with optical pumping. Excitons, electron-hole pairs, were predicted to condense at low temperature and high density by Boer et al., in 1961. Bilayer system experiments first demonstrated condensation in 2003, by Hall voltage disappearance. Fast optical exciton creation was used to form condensates in sub-kelvin in 2005 on. Polariton condensation was first detected for exciton-polaritons in a quantum well microcavity kept at 5 K. In zero gravity In June 2020, the Cold Atom Laboratory experiment on board the International Space Station successfully created a BEC of rubidium atoms and observed them for over a second in free-fall. Although initially just a proof of function, early results showed that, in the microgravity environment of the ISS, about half of the atoms formed into a magnetically insensitive halo-like cloud around the main body of the BEC. Models Bose Einstein's non-interacting gas Consider a collection of N non-interacting particles, which can each be in one of two quantum states, and . If the two states are equal in energy, each different configuration is equally likely. If we can tell which particle is which, there are different configurations, since each particle can be in or independently. In almost all of the configurations, about half the particles are in and the other half in . The balance is a statistical effect: the number of configurations is largest when the particles are divided equally. If the particles are indistinguishable, however, there are only different configurations. If there are particles in state , there are particles in state . Whether any particular particle is in state or in state cannot be determined, so each value of determines a unique quantum state for the whole system. Suppose now that the energy of state is slightly greater than the energy of state by an amount . At temperature , a particle will have a lesser probability to be in state by . In the distinguishable case, the particle distribution will be biased slightly towards state . But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most-likely outcome is that most of the particles will collapse into state . In the distinguishable case, for large N, the fraction in state can be computed. It is the same as flipping a coin with probability proportional to to land tails. In the indistinguishable case, each value of is a single state, which has its own separate Boltzmann probability. So the probability distribution is exponential: For large , the normalization constant is . The expected total number of particles not in the lowest energy state, in the limit that , is equal to It does not grow when N is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference. 
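The counting argument sketched above can be made explicit. Under the stated assumptions (N bosons, two levels split by an energy ΔE, indistinguishable particles), the omitted expressions take the standard forms:

\text{distinguishable: } 2^{N} \text{ configurations}; \qquad
\text{indistinguishable: } N+1 \text{ configurations } (K=0,1,\dots,N \text{ particles in the upper state})

P(K)\;\propto\;p^{K}, \qquad p=e^{-\Delta E/k_B T}<1, \qquad
\text{normalization } (1-p) \text{ for } N\to\infty

\langle K\rangle \;=\; \sum_{K\ge 0} K\,(1-p)\,p^{K} \;=\; \frac{p}{1-p}

The expected number of particles in the upper state therefore stays finite no matter how large N becomes, which is the statement that almost all of the bosons collapse into the lower state.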
Consider now a gas of particles, which can be in different momentum states labeled k. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit, the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point, more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state. To calculate the transition temperature at any density, integrate, over all momentum states, the expression for the maximum number of excited particles, N_excited = V ∫ d³k/(2π)³ · 1/(exp(ħ²k²/(2m k_B T)) − 1). When the integral (also known as the Bose–Einstein integral) is evaluated with factors of k_B and ħ restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential, μ ≈ 0. In the Bose–Einstein statistics distribution, μ is actually still nonzero for BECs; however, μ is less than the ground-state energy. Except when specifically talking about the ground state, μ can be approximated for most energy or momentum states as μ ≈ 0. Bogoliubov theory for weakly interacting gas Nikolay Bogoliubov considered perturbations on the limit of dilute gas, finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (at T = 0): P = g n²/2. The original interacting system can be converted to a system of non-interacting particles with a dispersion law. Gross–Pitaevskii equation In some of the simplest cases, the state of condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is actually limited to the case of ultracold temperatures, which fits well for most alkali-atom experiments. This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate, ψ(r). For a system of this nature, |ψ(r)|² is interpreted as the particle density, so the total number of atoms is N = ∫ |ψ(r)|² d³r. Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean-field theory, the energy (E) associated with the state ψ(r) is E = ∫ [ (ħ²/2m) |∇ψ(r)|² + V(r) |ψ(r)|² + (g/2) |ψ(r)|⁴ ] d³r. Minimizing this energy with respect to infinitesimal variations in ψ(r), and holding the number of atoms constant, yields the (time-independent) Gross–Pitaevskii equation (GPE) (also a non-linear Schrödinger equation): μ ψ(r) = [ −(ħ²/2m) ∇² + V(r) + g |ψ(r)|² ] ψ(r), where m is the mass of the bosons, V(r) is the external potential, g represents the inter-particle interactions (for a dilute gas, g = 4πħ²a_s/m, with a_s the s-wave scattering length), and μ is the chemical potential. In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for T ≈ 0): E(k) = sqrt( (ħ²k²/2m) (ħ²k²/2m + 2 g n₀) ), where n₀ is the condensate density. The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for temperatures close to zero. It is not applicable, for example, for the condensates of excitons, magnons and photons, where the critical temperature is comparable to room temperature. Numerical solution The Gross–Pitaevskii equation is a partial differential equation in space and time variables. 
Usually it does not have analytic solution and different numerical methods, such as split-step Crank–Nicolson and Fourier spectral methods, are used for its solution. There are different Fortran and C programs for its solution for contact interaction and long-range dipolar interaction which can be freely used. Weaknesses of Gross–Pitaevskii model The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy. These assumptions are suitable mostly for the dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires the terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the amount of such terms turns out to be infinite, therefore, the equation becomes essentially non-polynomial. The examples where this could happen are the Bose–Fermi composite condensates, effectively lower-dimensional condensates, and dense condensates and superfluid clusters and droplets. It is found that one has to go beyond the Gross-Pitaevskii equation. For example, the logarithmic term found in the Logarithmic Schrödinger equation must be added to the Gross-Pitaevskii equation along with a Ginzburg–Sobyanin contribution to correctly determine that the speed of sound scales as the cubic root of pressure for Helium-4 at very low temperatures in close agreement with experiment. Other However, it is clear that in a general case the behaviour of Bose–Einstein condensate can be described by coupled evolution equations for condensate density, superfluid velocity and distribution function of elementary excitations. This problem was solved in 1977 by Peletminskii et al. in microscopical approach. The Peletminskii equations are valid for any finite temperatures below the critical point. Years after, in 1985, Kirkpatrick and Dorfman obtained similar equations using another microscopical approach. The Peletminskii equations also reproduce Khalatnikov hydrodynamical equations for superfluid as a limiting case. Superfluidity of BEC and Landau criterion The phenomena of superfluidity of a Bose gas and superconductivity of a strongly-correlated Fermi gas (a gas of Cooper pairs) are tightly connected to Bose–Einstein condensation. Under corresponding conditions, below the temperature of phase transition, these phenomena were observed in helium-4 and different classes of superconductors. In this sense, the superconductivity is often called the superfluidity of Fermi gas. In the simplest form, the origin of superfluidity can be seen from the weakly interacting bosons model. Peculiar properties Quantized vortices As in many other systems, vortices can exist in BECs. Vortices can be created, for example, by "stirring" the condensate with lasers, rotating the confining trap, or by rapid cooling across the phase transition. The vortex created will be a quantum vortex with core shape determined by the interactions. Fluid circulation around any point is quantized due to the single-valued nature of the order BEC order parameter or wavefunction, that can be written in the form where and are as in the cylindrical coordinate system, and is the angular quantum number (a.k.a. the "charge" of the vortex). 
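The vortex order parameter referred to above, together with the quantization of circulation that follows from it and the healing length that sets the core size, can be written in the standard forms (assuming the dilute-gas relations used earlier, with n the density far from the core and a_s the s-wave scattering length):

\psi(\rho,z,\theta)\;=\;\phi(\rho,z)\,e^{i\ell\theta}, \qquad \ell\in\mathbb{Z}

\oint \mathbf{v}\cdot d\mathbf{l}
\;=\;\frac{\hbar}{m}\oint \nabla(\text{phase})\cdot d\mathbf{l}
\;=\;\frac{2\pi\hbar\,\ell}{m}
\;=\;\ell\,\frac{h}{m}

\xi \;=\; \frac{\hbar}{\sqrt{2 m g n}} \;=\; \frac{1}{\sqrt{8\pi n a_s}}

Because the superfluid velocity is proportional to the gradient of the single-valued phase, the circulation around the core can only change in units of h/m, which is why the vortex "charge" ℓ is an integer.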
Since the energy of a vortex is proportional to the square of its angular momentum, in trivial topology only vortices can exist in the steady state; Higher-charge vortices will have a tendency to split into vortices, if allowed by the topology of the geometry. An axially symmetric (for instance, harmonic) confining potential is commonly used for the study of vortices in BEC. To determine , the energy of must be minimized, according to the constraint . This is usually done computationally, however, in a uniform medium, the following analytic form demonstrates the correct behavior, and is a good approximation: Here, is the density far from the vortex and , where is the healing length of the condensate. A singly charged vortex () is in the ground state, with its energy given by where  is the farthest distance from the vortices considered.(To obtain an energy which is well defined it is necessary to include this boundary .) For multiply charged vortices () the energy is approximated by which is greater than that of singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated they are metastable states, so may have relatively long lifetimes. Closely related to the creation of vortices in BECs is the generation of so-called dark solitons in one-dimensional BECs. These topological objects feature a phase gradient across their nodal plane, which stabilizes their shape even in propagation and interaction. Although solitons carry no charge and are thus prone to decay, relatively long-lived dark solitons have been produced and studied extensively. Attractive interactions Experiments led by Randall Hulet at Rice University from 1995 through 2000 showed that lithium condensates with attractive interactions could stably exist up to a critical atom number. Quench cooling the gas, they observed the condensate to grow, then subsequently collapse as the attraction overwhelmed the zero-point energy of the confining potential, in a burst reminiscent of a supernova, with an explosion preceded by an implosion. Further work on attractive condensates was performed in 2000 by the JILA team, of Cornell, Wieman and coworkers. Their instrumentation now had better control so they used naturally attracting atoms of rubidium-85 (having negative atom–atom scattering length). Through Feshbach resonance involving a sweep of the magnetic field causing spin flip collisions, they lowered the characteristic, discrete energies at which rubidium bonds, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among wave-like condensate atoms. When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud. Carl Wieman explained that under current atomic theory this characteristic of Bose–Einstein condensate could not be explained because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean-field theories have been proposed to explain it. Most likely they formed molecules of two rubidium atoms; energy gained by this bond imparts velocity sufficient to leave the trap without being detected. 
The process of creation of molecular Bose condensate during the sweep of the magnetic field throughout the Feshbach resonance, as well as the reverse process, are described by the exactly solvable model that can explain many experimental observations. Current research Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile. The slightest interaction with the external environment can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas. Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an increase in experimental and theoretical activity. Bose–Einstein condensates composed of a wide range of isotopes have been produced; see below. Fundamental research Examples include experiments that have demonstrated interference between condensates due to wave–particle duality, the study of superfluidity and quantized vortices, the creation of bright matter wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency. Vortices in Bose–Einstein condensates are also currently the subject of analogue gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the laboratory. Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These are used to explore the transition between a superfluid and a Mott insulator. They are also useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Lieb–Liniger model (an the limit of strong interactions, the Tonks–Girardeau gas) in 1D and the Berezinskii–Kosterlitz–Thouless transition in 2D. Indeed, a deep optical lattice allows the experimentalist to freeze the motion of the particles along one or two directions, effectively eliminating one or two dimension from the system. Further, the sensitivity of the pinning transition of strongly interacting bosons confined in a shallow one-dimensional optical lattice originally observed by Haller has been explored via a tweaking of the primary optical lattice by a secondary weaker one. Thus for a resulting weak bichromatic optical lattice, it has been found that the pinning transition is robust against the introduction of the weaker secondary optical lattice. Studies of vortices in nonuniform Bose–Einstein condensates as well as excitations of these systems by the application of moving repulsive or attractive obstacles, have also been undertaken. Within this context, the conditions for order and chaos in the dynamics of a trapped Bose–Einstein condensate have been explored by the application of moving blue and red-detuned laser beams (hitting frequencies slightly above and below the resonance frequency, respectively) via the time-dependent Gross-Pitaevskii equation. Applications In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second using a superfluid. 
Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates. Another current research interest is the creation of Bose–Einstein condensates in microgravity in order to use its properties for high-precision atom interferometry. The first demonstration of a BEC in weightlessness was achieved in 2008 at a drop tower in Bremen, Germany by a consortium of researchers led by Ernst M. Rasel from Leibniz University Hannover. The same team demonstrated in 2017 the first creation of a Bose–Einstein condensate in space, and it is also the subject of two upcoming experiments on the International Space Station. Researchers in the new field of atomtronics use the properties of Bose–Einstein condensates in the emerging quantum technology of matter-wave circuits. In 1970, BECs were proposed by Emmanuel David Tannenbaum for anti-stealth technology. Isotopes Bose–Einstein condensation has mainly been observed in alkali atoms, some of which have collisional properties particularly suitable for evaporative cooling in traps, and which were the first atoms to be laser-cooled. As of 2021, using ultra-low temperatures of or below, Bose–Einstein condensates had been obtained for a multitude of isotopes with more or less ease, mainly of alkali metal, alkaline earth metal, and lanthanide atoms (, , , , , , , , , , , , , , , , , , and metastable (orthohelium)). Research was finally successful in atomic hydrogen with the aid of the newly developed method of 'evaporative cooling'. In contrast, the superfluid state of liquid helium-4 below its transition temperature differs significantly from dilute degenerate atomic gases because the interaction between the atoms is strong. Only 8% of the atoms are in the condensed fraction near absolute zero, rather than the nearly 100% of a weakly interacting BEC. The bosonic behavior of some of these alkali gases appears odd at first sight, because their nuclei have half-integer total spin. It arises from the interplay of electronic and nuclear spins: at ultra-low temperatures and corresponding excitation energies, the half-integer total spin of the electronic shell (one outer electron) and the half-integer total spin of the nucleus are coupled by a very weak hyperfine interaction. The total spin of the atom, arising from this coupling, is an integer value. Conversely, alkali isotopes which have an integer nuclear spin (such as and ) are fermions and can form degenerate Fermi gases, also called "Fermi condensates". Cooling fermions to extremely low temperatures has created degenerate gases, subject to the Pauli exclusion principle. To exhibit Bose–Einstein condensation, the fermions must "pair up" to form bosonic compound particles (e.g. molecules or Cooper pairs). The first molecular condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT. Jin quickly went on to create the first fermionic condensate, working with the same system but outside the molecular regime. Continuous Bose–Einstein condensation Limitations of evaporative cooling have restricted atomic BECs to "pulsed" operation, involving a highly inefficient duty cycle that discards more than 99% of atoms to reach BEC. 
Achieving continuous BEC has been a major open problem of experimental BEC research, driven by the same motivations as continuous optical laser development: high-flux, high-coherence matter waves produced continuously would enable new sensing applications. Continuous BEC was achieved for the first time in 2022 with . In solid state physics In 2020, researchers reported the development of superconducting BEC and that there appears to be a "smooth transition between" BEC and Bardeen–Cooper–Schrieffer regimes. Dark matter P. Sikivie and Q. Yang showed that cold dark matter axions would form a Bose–Einstein condensate by thermalisation because of gravitational self-interactions. Axions have not yet been confirmed to exist. However, the important search for them has been greatly enhanced with the completion of upgrades to the Axion Dark Matter Experiment (ADMX) at the University of Washington in early 2018. In 2014, a potential dibaryon was detected at the Jülich Research Center at about 2380 MeV. The center claimed that the measurements confirm results from 2011, via a more replicable method. The particle existed for 10⁻²³ seconds and was named d*(2380). This particle is hypothesized to consist of three up and three down quarks. It is theorized that groups of d* (d-stars) could form Bose–Einstein condensates due to prevailing low temperatures in the early universe, and that BECs made of such hexaquarks with trapped electrons could behave like dark matter. In fiction In the 2016 film Spectral, the US military battles mysterious enemy creatures fashioned out of Bose–Einstein condensates. In the 2003 novel Blind Lake, scientists observe sentient life on a planet 51 light-years away using telescopes powered by Bose–Einstein condensate-based quantum computers. The video game franchise Mass Effect has cryonic ammunition whose flavour text describes it as being filled with Bose–Einstein condensates. Upon impact, the bullets rupture and spray supercooled liquid on the enemy.
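The trapped-condensate dynamics discussed under Current research are typically studied by numerically integrating the time-dependent Gross–Pitaevskii equation. The following is a minimal split-step Fourier sketch in dimensionless harmonic-oscillator units; the trap, lattice depth, interaction strength, and initial state are illustrative assumptions and do not correspond to any particular experiment.

```python
import numpy as np

# Minimal split-step Fourier sketch of the 1D time-dependent Gross-Pitaevskii
# equation, i dpsi/dt = [-(1/2) d^2/dx^2 + V(x) + g |psi|^2] psi, written in
# dimensionless harmonic-oscillator units. All parameters are illustrative
# assumptions, not values taken from any experiment.
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

g = 1.0                                    # assumed interaction strength
V = 0.5 * x**2 + 2.0 * np.sin(4.0 * x)**2  # harmonic trap plus a shallow optical lattice

psi = np.exp(-(x - 1.0)**2 / 2).astype(complex)   # displaced Gaussian -> dipole oscillation
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 1e-3, 5000
kinetic = np.exp(-0.5j * dt * k**2)               # full kinetic step in Fourier space
for _ in range(steps):
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))   # half potential/interaction step
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))

print("centre-of-mass position after t = 5:", np.sum(x * np.abs(psi)**2) * dx)
```

Imaginary-time variants of the same scheme are commonly used to find the condensate ground state before propagating it in real time.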
Physical sciences
States of matter
null
4476
https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert%20law
Beer–Lambert law
The Beer–Bouguer–Lambert (BBL) extinction law is an empirical relationship describing the attenuation in intensity of a radiation beam passing through a macroscopically homogenous medium with which it interacts. Formally, it states that the intensity of radiation decays exponentially in the absorbance of the medium, and that said absorbance is proportional to the length of beam passing through the medium, the concentration of interacting matter along that path, and a constant representing said matter's propensity to interact. The extinction law's primary application is in chemical analysis, where it underlies the Beer–Lambert law, commonly called Beer's law. Beer's law states that a beam of visible light passing through a chemical solution of fixed geometry experiences absorption proportional to the solute concentration. Other applications appear in physical optics, where it quantifies astronomical extinction and the absorption of photons, neutrons, or rarefied gases. Forms of the BBL law date back to the mid-eighteenth century, but it only took its modern form during the early twentieth. History The first work towards the BBL law began with astronomical observations Pierre Bouguer performed in the early eighteenth century and published in 1729. Bouguer needed to compensate for the refraction of light by the earth's atmosphere, and found it necessary to measure the local height of the atmosphere. The latter, he sought to obtain through variations in the observed intensity of known stars. When calibrating this effect, Bouguer discovered that light intensity had an exponential dependence on length traveled through the atmosphere (in Bouguer's terms, a geometric progression). Bouguer's work was then popularized in Johann Heinrich Lambert's Photometria in 1760. Lambert expressed the law, which states that the loss of light intensity when it propagates in a medium is directly proportional to intensity and path length, in a mathematical form quite similar to that used in modern physics. Lambert began by assuming that the intensity of light traveling into an absorbing body would be given by the differential equation which is compatible with Bouguer's observations. The constant of proportionality was often termed the "optical density" of the body. As long as is constant along a distance , the exponential attenuation law, follows from integration. In 1852, August Beer noticed that colored solutions also appeared to exhibit a similar attenuation relation. In his analysis, Beer does not discuss Bouguer and Lambert's prior work, writing in his introduction that "Concerning the absolute magnitude of the absorption that a particular ray of light suffers during its propagation through an absorbing medium, there is no information available." Beer may have omitted reference to Bouguer's work because there is a subtle physical difference between color absorption in solutions and astronomical contexts. Solutions are homogeneous and do not scatter light at common analytical wavelengths (ultraviolet, visible, or infrared), except at entry and exit. Thus light within a solution is reasonably approximated as due to absorption alone. In Bouguer's context, atmospheric dust or other inhomogeneities could also scatter light away from the detector. Modern texts combine the two laws because scattering and absorption have the same effect. Thus a scattering coefficient and an absorption coefficient can be combined into a total extinction coefficient . 
Importantly, Beer also seems to have conceptualized his result in terms of a given thickness' opacity, writing "If is the coefficient (fraction) of diminution, then this coefficient (fraction) will have the value for double this thickness." Although this geometric progression is mathematically equivalent to the modern law, modern treatments instead emphasize the logarithm of , which clarifies that concentration and path length have equivalent effects on the absorption. An early, possibly the first, modern formulation was given by Robert Luther and Andreas Nikolopulos in 1913. Mathematical formulations There are several equivalent formulations of the BBL law, depending on the precise choice of measured quantities. All of them state that, provided that the physical state is held constant, the extinction process is linear in the intensity of radiation and the amount of radiatively-active matter, a fact sometimes called the fundamental law of extinction. Many of them then connect the quantity of radiatively-active matter to a length traveled and a concentration or number density . The latter two are related by Avogadro's number: . A collimated beam (directed radiation) with cross-sectional area will encounter particles (on average) during its travel. However, not all of these particles interact with the beam. Propensity to interact is a material-dependent property, typically summarized in absorptivity or scattering cross-section . These almost exhibit another Avogadro-type relationship: . The factor of appears because physicists tend to use natural logarithms and chemists decadic logarithms. Beam intensity can also be described in terms of multiple variables: the intensity or radiant flux . In the case of a collimated beam, these are related by , but is often used in non-collimated contexts. The ratio of intensity (or flux) in to out is sometimes summarized as a transmittance coefficient . When considering an extinction law, dimensional analysis can verify the consistency of the variables, as logarithms (being nonlinear) must always be dimensionless. Formulation The simplest formulation of Beer's law relates the optical attenuation of a physical material containing a single attenuating species of uniform concentration to the optical path length through the sample and the absorptivity of the species. This expression is: The quantities so equated are defined to be the absorbance , which depends on the logarithm base. The Napierian absorbance is then given by and satisfies If multiple species in the material interact with the radiation, then their absorbances add. Thus a slightly more general formulation is that where the sum is over all possible radiation-interacting ("translucent") species, and indexes those species. In situations where length may vary significantly, absorbance is sometimes summarized in terms of an attenuation coefficient. In atmospheric science and radiation shielding applications, the attenuation coefficient may vary significantly through an inhomogeneous material. In those situations, the most general form of the Beer–Lambert law states that the total attenuation can be obtained by integrating the attenuation coefficient over small slices of the beamline: These formulations then reduce to the simpler versions when there is only one active species and the attenuation coefficients are constant. 
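As a concrete illustration of this bookkeeping, the sketch below computes the decadic absorbance of a two-species mixture from assumed molar attenuation coefficients and concentrations, together with the corresponding transmittance and Napierian absorbance; all numerical values are invented for illustration and do not describe any real solutes.

```python
import numpy as np

# Sketch of the basic Beer-Lambert bookkeeping: decadic absorbance
# A = sum_i eps_i * c_i * l for a mixture, and transmittance T = 10**(-A).
# The molar attenuation coefficients and concentrations are made-up values.
eps = np.array([4500.0, 900.0])   # L mol^-1 cm^-1, assumed molar attenuation coefficients
c = np.array([2.0e-5, 1.5e-4])    # mol L^-1, assumed concentrations
path_length = 1.0                 # cm, standard cuvette

A = float(np.sum(eps * c) * path_length)    # absorbances of the species add
T = 10 ** (-A)                              # decadic transmittance
A_napierian = A * np.log(10)                # Napierian absorbance differs by a factor ln 10

print(f"A = {A:.3f}, T = {T:.1%}, Napierian absorbance = {A_napierian:.3f}")
```

With these invented values the absorbance comes out near 0.2, inside the 0.2–0.5 window noted later as ideal for maintaining linearity.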
Derivation There are two factors that determine the degree to which a medium containing particles will attenuate a light beam: the number of particles encountered by the light beam, and the degree to which each particle extinguishes the light. Assume that a beam of light enters a material sample. Define as an axis parallel to the direction of the beam. Divide the material sample into thin slices, perpendicular to the beam of light, with thickness sufficiently small that one particle in a slice cannot obscure another particle in the same slice when viewed along the direction. The radiant flux of the light that emerges from a slice is reduced, compared to that of the light that entered, by where is the (Napierian) attenuation coefficient, which yields the following first-order linear, ordinary differential equation: The attenuation is caused by the photons that did not make it to the other side of the slice because of scattering or absorption. The solution to this differential equation is obtained by multiplying the integrating factor throughout to obtain which simplifies due to the product rule (applied backwards) to Integrating both sides and solving for for a material of real thickness , with the incident radiant flux upon the slice and the transmitted radiant flux gives and finally Since the decadic attenuation coefficient is related to the (Napierian) attenuation coefficient by we also have To describe the attenuation coefficient in a way independent of the number densities of the attenuating species of the material sample, one introduces the attenuation cross section , which has the dimension of an area; it expresses the likelihood of interaction between the particles of the beam and the particles of the species in the material sample: One can also use the molar attenuation coefficients where is the Avogadro constant, to describe the attenuation coefficient in a way independent of the amount concentrations of the attenuating species of the material sample: Validity Under certain conditions the Beer–Lambert law fails to maintain a linear relationship between attenuation and concentration of analyte. These deviations are classified into three categories: Real—fundamental deviations due to the limitations of the law itself. Chemical—deviations observed due to specific chemical species of the sample which is being analyzed. Instrument—deviations which occur due to how the attenuation measurements are made. There are at least six conditions that need to be fulfilled in order for the Beer–Lambert law to be valid. These are: The attenuators must act independently of each other. The attenuating medium must be homogeneous in the interaction volume. The attenuating medium must not scatter the radiation—no turbidity—unless this is accounted for as in DOAS. The incident radiation must consist of parallel rays, each traversing the same length in the absorbing medium. The incident radiation should preferably be monochromatic, or have at least a width that is narrower than that of the attenuating transition. Otherwise a spectrometer is needed as the detector for the power, instead of a photodiode, which cannot discriminate between wavelengths. The incident flux must not influence the atoms or molecules; it should only act as a non-invasive probe of the species under study. In particular, this implies that the light should not cause optical saturation or optical pumping, since such effects will deplete the lower level and possibly give rise to stimulated emission. 
If any of these conditions are not fulfilled, there will be deviations from the Beer–Lambert law. The law tends to break down at very high concentrations, especially if the material is highly scattering. Absorbance within the range of 0.2 to 0.5 is ideal for maintaining linearity in the Beer–Lambert law. If the radiation is especially intense, nonlinear optical processes can also cause variances. The main reason, however, is that the concentration dependence is in general non-linear and Beer's law is valid only under certain conditions, as shown by derivation. For strong oscillators and at high concentrations the deviations are stronger. If the molecules are closer to each other, interactions can set in. These interactions can be roughly divided into physical and chemical interactions. Physical interactions do not alter the polarizability of the molecules as long as the interaction is not so strong that light and molecular quantum states intermix (strong coupling), but they cause the attenuation cross sections to be non-additive via electromagnetic coupling. Chemical interactions, in contrast, change the polarizability and thus the absorption. In solids, attenuation is usually an addition of absorption coefficient (creation of electron-hole pairs) or scattering (for example Rayleigh scattering if the scattering centers are much smaller than the incident wavelength). Also note that for some systems we can put (1 over the inelastic mean free path) in place of . Applications In plasma physics The BBL extinction law also arises as a solution to the BGK equation. Chemical analysis by spectrophotometry The Beer–Lambert law can be applied to the analysis of a mixture by spectrophotometry, without the need for extensive pre-processing of the sample. An example is the determination of bilirubin in blood plasma samples. The spectrum of pure bilirubin is known, so the molar attenuation coefficient is known. Measurements of the decadic attenuation coefficient are made at one wavelength that is nearly unique for bilirubin and at a second wavelength in order to correct for possible interferences. The amount concentration is then given by For a more complicated example, consider a mixture in solution containing two species at amount concentrations and . The decadic attenuation coefficient at any wavelength is given by Therefore, measurements at two wavelengths yield two equations in two unknowns and will suffice to determine the amount concentrations, as long as the molar attenuation coefficients of the two components are known at both wavelengths. This system of two equations can be solved using Cramer's rule. In practice it is better to use linear least squares to determine the two amount concentrations from measurements made at more than two wavelengths; a numerical sketch of this approach is given below. Mixtures containing more than two components can be analyzed in the same way, using at least as many wavelengths as there are components in the mixture. The law is used widely in infra-red spectroscopy and near-infrared spectroscopy for analysis of polymer degradation and oxidation (also in biological tissue) as well as to measure the concentration of various compounds in different food samples. The carbonyl group attenuation at about 6 micrometres can be detected quite easily, and the degree of oxidation of the polymer calculated. In-atmosphere astronomy The Bouguer–Lambert law may be applied to describe the attenuation of solar or stellar radiation as it travels through the atmosphere. In this case, there is scattering of radiation as well as absorption. 
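The multi-wavelength mixture analysis just described can be sketched as follows, using linear least squares to recover two concentrations from simulated absorbance measurements; the molar attenuation coefficients, concentrations, and path length are invented illustrative values, not data for any real system.

```python
import numpy as np

# Measured decadic absorbances at several wavelengths are a linear combination
# of the (known) molar attenuation coefficients of the two solutes, so the
# concentrations follow from linear least squares (or Cramer's rule when
# exactly two wavelengths are used). All numbers are illustrative assumptions.
E = np.array([[5200.0,  300.0],    # rows: wavelengths, columns: species 1 and 2
              [1100.0, 2500.0],    # molar attenuation coefficients, L mol^-1 cm^-1
              [ 400.0, 4100.0]])
c_true = np.array([3.0e-5, 8.0e-5])       # mol L^-1, "unknown" concentrations
path_length = 1.0                          # cm

A_measured = E @ c_true * path_length      # simulated absorbance measurements

# Recover the concentrations from the (possibly overdetermined) linear system.
c_fit, *_ = np.linalg.lstsq(E * path_length, A_measured, rcond=None)
print("recovered concentrations (mol/L):", c_fit)
```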
The optical depth for a slant path is , where refers to a vertical path, is called the relative airmass, and for a plane-parallel atmosphere it is determined as where is the zenith angle corresponding to the given path. The Bouguer-Lambert law for the atmosphere is usually written where each is the optical depth whose subscript identifies the source of the absorption or scattering it describes: refers to aerosols (that absorb and scatter); are uniformly mixed gases (mainly carbon dioxide (CO2) and molecular oxygen (O2) which only absorb); is nitrogen dioxide, mainly due to urban pollution (absorption only); are effects due to Raman scattering in the atmosphere; is water vapour absorption; is ozone (absorption only); is Rayleigh scattering from molecular oxygen () and nitrogen () (responsible for the blue color of the sky); the selection of the attenuators which have to be considered depends on the wavelength range and can include various other compounds. This can include tetraoxygen, HONO, formaldehyde, glyoxal, a series of halogen radicals and others. is the optical mass or airmass factor, a term approximately equal (for small and moderate values of ) to where is the observed object's zenith angle (the angle measured from the direction perpendicular to the Earth's surface at the observation site). This equation can be used to retrieve , the aerosol optical thickness, which is necessary for the correction of satellite images and also important in accounting for the role of aerosols in climate.
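A small numerical sketch of this plane-parallel form of the law, combining assumed component vertical optical depths into a total slant-path transmission, is given below; the individual optical depths are rough, invented magnitudes for a clear sky in the mid-visible, not measured values.

```python
import numpy as np

# Plane-parallel atmospheric form of the Bouguer-Lambert law: the transmitted
# fraction of the direct beam is exp(-m * tau_total), with relative airmass
# m = 1/cos(zenith angle) and tau_total the sum of the component vertical
# optical depths. The component values below are assumed, illustrative only.
tau = {
    "aerosol":      0.10,
    "rayleigh":     0.09,
    "ozone":        0.03,
    "water_vapour": 0.01,
}
zenith_angle_deg = 60.0

m = 1.0 / np.cos(np.radians(zenith_angle_deg))   # plane-parallel airmass factor
tau_total = sum(tau.values())
transmission = np.exp(-m * tau_total)

print(f"airmass m = {m:.2f}, total vertical optical depth = {tau_total:.2f}")
print(f"direct-beam transmission = {transmission:.1%}")
```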
Physical sciences
Optics
Physics
4485
https://en.wikipedia.org/wiki/Bakelite
Bakelite
Bakelite ( ), formally , is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. The first plastic made from synthetic components, it was developed by Leo Baekeland in Yonkers, New York, in 1907, and patented on December 7, 1909. Bakelite was one of the first plastic-like materials to be introduced into the modern world and was popular because it could be moulded and then hardened into any shape. Because of its electrical nonconductivity and heat-resistant properties, it became a great commercial success. It was used in electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms. The retro appeal of old Bakelite products has made them collectible. The creation of a synthetic plastic was revolutionary for the chemical industry, which at the time made most of its income from cloth dyes and explosives. Bakelite's commercial success inspired the industry to develop other synthetic plastics. As the world's first commercial synthetic plastic, Bakelite was named a National Historic Chemical Landmark by the American Chemical Society. History Bakelite was produced for the first time in 1872 by Adolf von Baeyer, though its use as a commercial product was not considered at the time. Leo Baekeland was already wealthy due to his invention of Velox photographic paper when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the secretion of lac insects (specifically Kerria lacca). He produced a soluble phenol-formaldehyde shellac called Novolak, but it was not a market success, even though it is still used to this day (e.g., as a photoresist). He then began experimenting on strengthening wood by impregnating it with a synthetic resin rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced a hard moldable material that he named Bakelite, after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make. He considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers, though asbestos was gradually abandoned by all manufacturers due to stricter environmental laws. Baekeland filed a substantial number of related patents. Bakelite, his "method of making insoluble products of phenol and formaldehyde", was filed on July 13, 1907, and granted on December 7, 1909. He also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia, and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909. Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. In the summer of 1909, he licensed the continental European rights to Rütger AG. The subsidiary formed at that time, Bakelite AG, was the first to produce Bakelite on an industrial scale. By 1910, Baekeland was producing enough material in the US to justify expansion. 
He formed the General Bakelite Company of Perth Amboy, New Jersey, as a U.S. company to manufacture and market his new industrial material, and made overseas connections to produce it in other countries. The Bakelite Company produced "transparent" cast resin (which did not include filler) for a small market during the 1910s and 1920s. Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders, and jewelry. However, the demand for molded plastics led the company to concentrate on molding rather than cast solid resins. The Bakelite Corporation was formed in 1922 after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J. W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses". A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications. The first issue of Plastics magazine, October 1925, featured Bakelite on its cover and included the article "Bakelite – What It Is" by Allan Brown. The range of colors that were available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms. In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company". In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale and Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was the "heart of Bakelite production in the UK" until it closed in 1987. A factory to produce phenolic resins and precursors opened in Bound Brook, New Jersey, in 1931. In 1939, the companies were acquired by Union Carbide and Carbon Corporation. In 2005, German Bakelite manufacturer Bakelite AG was acquired by Borden Chemical of Columbus, Ohio, now Hexion Inc. In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than . Once Baekeland's heat and pressure patents expired in 1927, Bakelite Corporation faced serious competition from other companies. Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles, and earrings were produced by the Catalin company, through a different process which enabled them to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. 
The creation of marbled phenolic resins may also be attributable to the Catalin company. Synthesis Making Bakelite is a multi-stage process. It begins with the heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or the base ammonia. This creates a liquid condensation product, referred to as Bakelite A, which is soluble in alcohol, acetone, or additional phenol. Heated further, the product becomes partially soluble and can still be softened by heat. Sustained heating results in an "insoluble hard gum". However, the high temperatures required to create this tend to cause violent foaming of the mixture when done at standard atmospheric pressure, which results in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, at about , Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance is extremely hard and both infusible and insoluble. Compression molding Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, under high pressure and heat in a time frame of a few minutes of curing. The result is a hard plastic material. Asbestos was gradually abandoned as filler because many countries banned the production of asbestos. Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as powder or as preformed partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply. Phenolic sheet Another market for Bakelite resin was the creation of phenolic sheet materials. A phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin. Paper, cotton fabrics, synthetic fabrics, glass fabrics, and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic. Bakelite phenolic sheet is produced in many commercial grades and with various additives to meet diverse mechanical, electrical, and thermal requirements. Some common types include: Paper reinforced NEMA XX per MIL-I-24768 PBG. Normal electrical applications, moderate mechanical strength, continuous operating temperature of . Canvas-reinforced NEMA C per MIL-I-24768 TYPE FBM NEMA CE per MIL-I-24768 TYPE FBG. Good mechanical and impact strength with a continuous operating temperature of 250 °F. Linen-reinforced NEMA L per MIL-I-24768 TYPE FBI NEMA LE per MIL-I-24768 TYPE FEI. Good mechanical and electrical strength. Recommended for intricate high-strength parts. Continuous operating temperature of 250 °F. Nylon reinforced NEMA N-1 per MIL-I-24768 TYPE NPG. Superior electrical properties under humid conditions, fungus resistant, continuous operating temperature of . Properties Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. 
Moldings are smooth, retain their shape, and are resistant to heat, scratches, and destructive solvents. It is also resistant to electricity, and prized for its low conductivity. It is not flexible. Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor. Applications and uses The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating, as well as for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action. The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company. Bakelite was soon used for non-conducting parts of telephones, radios, and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for any type of electrical components, automobile distributor caps, and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory. During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of micarta-bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane. Bakelite's availability and ease and speed of molding helped to lower the costs and increase product availability so that telephones and radios became common household consumer goods. It was also very important to the developing automobile industry. It was soon found in myriad other consumer products ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels on handguns, as furniture for submachine guns and machineguns, the classic Bakelite magazines for Kalashnikov rifles, as well as numerous knife handles and "scales" through the first half of the 20th century. Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of Vogue, was enthusiastic about Bakelite. Bakelite was also used to make presentation boxes for Breitling watches. By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age". By the 1930s, Bakelite was used for game pieces like chess pieces, poker chips, dominoes, and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars. Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune. Charles Plimpton patented BAYKO in 1933 and rushed out his first construction sets for Christmas 1934. He called the toy Bayko Light Constructional Sets, the words "Bayko Light" being a pun on the word "Bakelite". 
During World War II, Bakelite was used in a variety of wartime equipment including pilots' goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for usage for the one cent coin in the US before the Mint settled on zinc-coated steel. During World War II, Bakelite buttons were part of British uniforms. These included brown buttons for the Army and black buttons for the RAF. In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener. Bakelite was sometimes used in the pistol grip, hand guard, and buttstock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-4S. By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, printed circuit boards, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong, and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS). Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod, and tube form for industrial applications in the electronics, power generation, and aerospace industries, and under a variety of commercial brand names. Phenolic resins have been commonly used in ablative heat shields. Soviet heatshields for ICBM warheads and spacecraft reentry consisted of asbestos textolite, impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography. Collectible status Bakelite items, particularly jewelry and radios, have become popular collectibles. The term Bakelite is sometimes used in the resale market as a catch-all for various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of true Bakelite material. Due to its aesthetics, a similar material fakelite (fake bakelite) exists made from modern safer materials which do not contain asbestos. Patents The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble, and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry. Similar plastics Catalin is also a phenolic resin, similar to Bakelite, but contains different mineral fillers that allow the production of light colors. 
Condensites are similar thermoset materials having much the same properties, characteristics, and uses. Crystalate is an early plastic. Faturan is a phenolic resin, also similar to Bakelite, that turns red over time, regardless of its original color. Galalith is an early plastic derived from milk products. Micarta is an early composite insulating plate that used Bakelite as a binding agent. It was developed in 1910 by the Westinghouse Electric & Manufacturing Company, which put the new material to use for casting synthetic blades for Westinghouse electric fans. Novotext is a brand name for cotton textile-phenolic resin. G-10 or garolite is made with fiberglass and epoxy resin.
Physical sciences
Polymers
Chemistry
4487
https://en.wikipedia.org/wiki/Bean
Bean
A bean is the seed of any plant in the legume family (Fabaceae) used as a vegetable for human consumption or animal feed. The seeds are often preserved through drying, but fresh beans are also sold. Most beans are traditionally soaked and boiled, but they can be cooked in many different ways, including frying and baking, and are used in many traditional dishes throughout the world. The unripe seedpods of some varieties are also eaten whole as green beans or edamame (immature soybean), but fully ripened beans contain toxins like phytohemagglutinin and require cooking. Terminology The word 'bean', for the Old World vegetable, existed in Old English, long before the New World genus Phaseolus was known in Europe. With the Columbian exchange of domestic plants between Europe and the Americas, use of the word was extended to pod-borne seeds of Phaseolus, such as the common bean and the runner bean, and the related genus Vigna. The term has long been applied generally to seeds of similar form, such as Old World soybeans and lupins, and to the fruits or seeds of unrelated plants such as coffee beans, vanilla beans, castor beans, and cocoa beans. History Beans in an early cultivated form were grown in Thailand from the early seventh millennium BCE, predating ceramics. Beans were deposited with the dead in ancient Egypt. Not until the second millennium BCE did cultivated, large-seeded broad beans appear in the Aegean region, Iberia, and transalpine Europe. In the Iliad (8th century BCE), there is a passing mention of beans and chickpeas cast on the threshing floor. The oldest-known domesticated beans in the Americas were found in Guitarrero Cave, Peru, dated to around the second millennium BCE. Genetic analyses of the common bean Phaseolus show that it originated in Mesoamerica, and subsequently spread southward, along with maize and squash, traditional companion crops. Most of the kinds of beans commonly eaten today are part of the genus Phaseolus, which originated in the Americas. The first European to encounter them was Christopher Columbus, while exploring what may have been the Bahamas, and saw them growing in fields. Five kinds of Phaseolus beans were domesticated by pre-Columbian peoples, selecting pods that did not open and scatter their seeds when ripe: common beans (P. vulgaris) grown from Chile to the northern part of the United States; lima and sieva beans (P. lunatus); and the less widely distributed teparies (P. acutifolius), scarlet runner beans (P. coccineus), and polyanthus beans. Pre-Columbian peoples as far north as the Atlantic seaboard grew beans in the "Three Sisters" method of companion planting. The beans were interplanted with maize and squash. Beans were cultivated across Chile in Pre-Hispanic times, likely as far south as the Chiloé Archipelago. Diversity Taxonomic range Most beans are legumes, but from many different genera, native to different regions. Conservation of cultivars The biodiversity of bean cultivars is threatened by modern plant breeding, which selects a small number of the most productive varieties. Efforts are being made to conserve the germplasm of older varieties in different countries. As of 2023, the Norwegian Svalbard Global Seed Vault holds more than 40,000 accessions of Phaseolus bean species. Cultivation Agronomy Unlike the closely related pea, beans are a summer crop that needs warm temperatures to grow. Legumes are capable of nitrogen fixation and hence need less fertiliser than most plants. 
Maturity is typically 55–60 days from planting to harvest. As the pods mature, they turn yellow and dry up, and the beans inside change from green to their mature colour. Many beans are vines needing external support, such as "bean cages" or poles. Native Americans customarily grew them along with corn and squash, the tall stalks acting as support for the beans. More recently, the commercial "bush bean" which does not require support and produces all its pods simultaneously has been developed. Production The production data for legumes are published by FAO in three categories: Pulses dry: all mature and dry seeds of leguminous plants except soybeans and groundnuts. Oil crops: soybeans and groundnuts. Fresh vegetable: immature green fresh fruits of leguminous plants. The following is a summary of FAO data. The world leader in production of dry beans (Phaseolus spp), is India, followed by Myanmar (Burma) and Brazil. In Africa, the most important producer is Tanzania. Source: UN Food and Agriculture Organization (FAO) Uses Nutrition Raw green beans are 90% water, 7% carbohydrates, 2% protein, and contain negligible fat. In a reference serving, raw green beans supply 31 calories of food energy, and are a moderate source (10-19% of the Daily Value, DV) of vitamin C (15% DV) and vitamin B6 (11% DV), with no other micronutrients in significant content (table). Culinary Beans can be cooked in a wide variety of casseroles, curries, salads, soups, and stews. They can be served whole or mashed alongside meat or toast, or included in an omelette or a flatbread wrap. Other options are to include them in a bake with a cheese sauce, a Mexican-style chili con carne, or to use them as a meat substitute in a burger or in falafels. The French cassoulet is a slow-cooked stew with haricot beans, sausage, pork, mutton, and preserved goose. Soybeans can be processed into bean curd (tofu) or fermented into a cake (tempeh); these can be eaten fried or roasted like meat, or included in stir-fries, curries, and soups. Other Guar beans are used for their gum, a galactomannan polysaccharide. It is used to thicken and stabilise foods and other products. Health concerns Toxins Some kinds of raw beans contain a harmful, flavourless toxin: the lectin phytohaemagglutinin, which must be destroyed by cooking. Red kidney beans are particularly toxic, but other types also pose risks of food poisoning. Even small quantities (4 or 5 raw beans) may cause severe stomachache, vomiting, and diarrhea. This risk does not apply to canned beans because they have already been cooked. A recommended method is to boil the beans for at least ten minutes; under-cooked beans may be more toxic than raw beans. Cooking beans, without bringing them to a boil, in a slow cooker at a temperature well below boiling may not destroy toxins. A case of poisoning by butter beans used to make falafel was reported; the beans were used instead of traditional broad beans or chickpeas, soaked and ground without boiling, made into patties, and shallow fried. Bean poisoning is not well known in the medical community, and many cases may be misdiagnosed or never reported; figures appear not to be available. In the case of the UK National Poisons Information Service, available only to health professionals, the dangers of beans other than red beans were not flagged . Fermentation is used in some parts of Africa to improve the nutritional value of beans by removing toxins. 
Inexpensive fermentation improves the nutritional impact of flour from dry beans and improves digestibility, according to research co-authored by Emire Shimelis, from the Food Engineering Program at Addis Ababa University. Beans are a major source of dietary protein in Kenya, Malawi, Tanzania, Uganda and Zambia. Other hazards It is common to make beansprouts by letting some types of bean, often mung beans, germinate in moist and warm conditions; beansprouts may be used as ingredients in cooked dishes, or eaten raw or lightly cooked. There have been many outbreaks of disease from bacterial contamination, often by salmonella, listeria, and Escherichia coli, of beansprouts not thoroughly cooked, some causing significant mortality. Many types of bean like kidney bean contain significant amounts of antinutrients that inhibit some enzyme processes in the body. Phytic acid, present in beans, interferes with bone growth and interrupts vitamin D metabolism. Many beans, including broad beans, navy beans, kidney beans and soybeans, contain large sugar molecules, oligosaccharides (particularly raffinose and stachyose). A suitable oligosaccharide-cleaving enzyme is necessary to digest these. As the human digestive tract does not contain such enzymes, consumed oligosaccharides are digested by bacteria in the large intestine, producing gases such as methane, released as flatulence. In human society Beans have often been thought of as a food of the poor, as small farmers ate grains, vegetables, and got their protein from beans, while the wealthier classes were able to afford meat. European society has what Ken Albala calls "a class-based antagonism" to beans. Different cultures agree in disliking the flatulence that beans cause, and possess their own seasonings to attempt to remedy it: Mexico uses the herb epazote; India the aromatic resin asafoetida; Germany applies the herb savory; in the Middle East, cumin; and Japan the seaweed kombu. A substance for which there is evidence of effectiveness in reducing flatulence is the enzyme alpha-galactosidase; extracted from the mould fungus Aspergillus niger, it breaks down glycolipids and glycoproteins. The reputation of beans for flatulence is the theme of a children's song "Beans, Beans, the Musical Fruit". The Mexican jumping bean is a segment of a seed pod occupied by the larva of the moth Cydia saltitans, and sold as a novelty. The pods start to jump when warmed in the palm of the hand. Scientists have suggested that the random walk that results may help the larva to find shade and so to survive on hot days.
Biology and health sciences
Fabales
null
4489
https://en.wikipedia.org/wiki/Breast
Breast
The breasts are two prominences located on the upper ventral region of the torso among humans and other primates. Both sexes develop breasts from the same embryological tissues. The relative size and development of the breasts is a major secondary sex distinction between females and males. There is also considerable variation in size between individuals. Female humans are the only mammals which permanently develop breasts at puberty; all other mammals develop their mammary tissue during the latter period of pregnancy; at puberty, estrogens, in conjunction with growth hormone, cause permanent breast growth. In females, the breast serves as the mammary gland, which produces and secretes milk to feed infants. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its distinct size and globular shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation of lactation and breastfeeding. Along with their major function in providing nutrition for infants, several cultures ascribe social and sexual characteristics to female breasts, and may regard bare breasts in public as immodest or indecent. Breasts have been featured in ancient and modern sculpture, art, and photography. Breasts can represent fertility, femininity, or abundance. They can figure prominently in the perception of a woman's body and sexual attractiveness. Breasts, especially the nipples, can be an erogenous zone. Etymology and terminology The English word breast derives from the Old English word from Proto-Germanic , from the Proto-Indo-European base . The breast spelling conforms to the Scottish and North English dialectal pronunciations. The Merriam-Webster Dictionary states that "Middle English , [comes] from Old English ; akin to Old High German ..., Old Irish [belly], [and] Russian "; the first known usage of the term was before the 12th century. Breasts is often used to refer to female breasts in particular, though the stricter anatomical term refers to the same region on members of either sex. Male breasts are sometimes referred to in the singular to mean the collective upper chest area, whereas female breasts are referred to in the plural unless speaking of a specific left or right breast. A large number of colloquial terms for female breasts are used in English, ranging from fairly polite terms to vulgar or slang. Some vulgar slang expressions may be considered to be derogatory or sexist to women. Evolutionary development Humans are the only mammals whose breasts become permanently enlarged after sexual maturity (known in humans as puberty). The reason for this evolutionary change is unknown. Several hypotheses have been put forward: A link has been proposed to processes for synthesizing the endogenous steroid hormone precursor dehydroepiandrosterone which takes place in fat rich regions of the body like the buttocks and breasts. These contributed to human brain development and played a part in increasing brain size. Breast enlargement may for this purpose have occurred as early as Homo ergaster (1.7–1.4 MYA). Other breast formation hypotheses may have then taken over as principal drivers. 
It has been suggested by zoologists Avishag and Amotz Zahavi that the size of the human breasts can be explained by the handicap theory of sexual dimorphism. This would see the explanation for larger breasts as them being an honest display of the women's health and ability to grow and carry them in her life. Prospective mates can then evaluate the genes of a potential mate for their ability to sustain her health even with the additional energy demanding burden she is carrying. The zoologist Desmond Morris describes a sociobiological approach in his science book The Naked Ape. He suggests, by making comparisons with the other primates, that breasts evolved to replace swelling buttocks as a sex signal of ovulation. He notes how humans have, relatively speaking, large penises as well as large breasts. Furthermore, early humans adopted bipedalism and face-to-face coitus. He therefore suggested enlarged sexual signals helped maintain the bond between a mated male and female even though they performed different duties and therefore were separated for lengths of time. A 2001 study proposed that the rounded shape of a woman's breast evolved to prevent the sucking infant offspring from suffocating while feeding at the teat; that is, because of the human infant's small jaw, which did not project from the face to reach the nipple, they might block the nostrils against the mother's breast if it were of a flatter form (compare with the common chimpanzee). Theoretically, as the human jaw receded into the face, the woman's body compensated with round breasts. Ashley Montague (1965) proposed that breasts came about as an adaptation for infant feeding for a different reason, as early human ancestors adopted bipedalism and the loss of body hair. Human upright stance meant infants must be carried at the hip or shoulder instead of on the back as in the apes. This gives the infant less opportunity to find the nipple or the purchase to cling on to the mother's body hair. The mobility of the nipple on a large breast in most human females gives the infant more ability to find it, grasp it and feed. Other suggestions include simply that permanent breasts attracted mates, that "pendulous" breasts gave infants something to cling to, or that permanent breasts shared the function of a camel's hump, to store fat as an energy reserve. Structure In women, the breasts overlie the pectoralis major muscles and extend on average from the level of the second rib to the level of the sixth rib in the front of the rib cage; thus, the breasts cover much of the chest area and the chest walls. At the front of the chest, the breast tissue can extend from the clavicle (collarbone) to the middle of the sternum (breastbone). At the sides of the chest, the breast tissue can extend into the axilla (armpit), and can reach as far to the back as the latissimus dorsi muscle, extending from the lower back to the humerus bone (the bone of the upper arm). As a mammary gland, the breast is composed of differing layers of tissue, predominantly two types: adipose tissue; and glandular tissue, which affects the lactation functions of the breasts. The natural resonant frequency of the human breast is about 2 hertz. Morphologically, the breast is tear-shaped. The superficial tissue layer (superficial fascia) is separated from the skin by 0.5–2.5 cm of subcutaneous fat (adipose tissue). The suspensory Cooper's ligaments are fibrous-tissue prolongations that radiate from the superficial fascia to the skin envelope. 
The female adult breast contains 14–18 irregular lactiferous lobes that converge at the nipple. The 2.0–4.5 mm milk ducts are immediately surrounded with dense connective tissue that support the glands. Milk exits the breast through the nipple, which is surrounded by a pigmented area of skin called the areola. The size of the areola can vary widely among women. The areola contains modified sweat glands known as Montgomery's glands. These glands secrete oily fluid that lubricate and protect the nipple during breastfeeding. Volatile compounds in these secretions may also serve as an olfactory stimulus for the newborn's appetite. The dimensions and weight of the breast vary widely among women. A small-to-medium-sized breast weighs 500 grams (1.1 pounds) or less, and a large breast can weigh approximately 750 to 1,000 grams (1.7 to 2.2 pounds) or more. In terms of composition, the breasts are about 80 to 90% stromal tissue (fat and connective tissue), while epithelial or glandular tissue only accounts for about 10 to 20% of the volume of the breasts. The tissue composition ratios of the breast also vary among women. Some women's breasts have a higher proportion of glandular tissue than of adipose or connective tissues. The fat-to-connective-tissue ratio determines the density or firmness of the breast. During a woman's life, her breasts change size, shape, and weight due to hormonal changes during puberty, the menstrual cycle, pregnancy, breastfeeding, and menopause. Glandular structure The breast is an apocrine gland that produces the milk used to feed an infant. The nipple of the breast is surrounded by the areola (nipple-areola complex). The areola has many sebaceous glands, and the skin color varies from pink to dark brown. The basic units of the breast are the terminal duct lobular units (TDLUs), which produce the fatty breast milk. They give the breast its offspring-feeding functions as a mammary gland. They are distributed throughout the body of the breast. Approximately two-thirds of the lactiferous tissue is within 30 mm of the base of the nipple. The terminal lactiferous ducts drain the milk from TDLUs into 4–18 lactiferous ducts, which drain to the nipple. The milk-glands-to-fat ratio is 2:1 in a lactating woman, and 1:1 in a non-lactating woman. In addition to the milk glands, the breast is also composed of connective tissues (collagen, elastin), white fat, and the suspensory Cooper's ligaments. Sensation in the breast is provided by the peripheral nervous system innervation by means of the front (anterior) and side (lateral) cutaneous branches of the fourth-, fifth-, and sixth intercostal nerves. The T-4 nerve (Thoracic spinal nerve 4), which innervates the dermatomic area, supplies sensation to the nipple-areola complex. Lymphatic drainage Approximately 75% of the lymph from the breast travels to the axillary lymph nodes on the same side of the body, while 25% of the lymph travels to the parasternal nodes (beside the sternum bone). A small amount of remaining lymph travels to the other breast and to the abdominal lymph nodes. The subareolar region has a lymphatic plexus known as the "subareolar plexus of Sappey". The axillary lymph nodes include the pectoral (chest), subscapular (under the scapula), and humeral (humerus-bone area) lymph-node groups, which drain to the central axillary lymph nodes and to the apical axillary lymph nodes. 
The lymphatic drainage of the breasts is especially relevant to oncology because breast cancer commonly arises in the mammary gland, and cancer cells can metastasize (break away) from a tumor and be dispersed to other parts of the body by means of the lymphatic system. Morphology The morphologic variations in the size, shape, volume, tissue density, pectoral locale, and spacing of the breasts determine their natural shape, appearance, and position on a woman's chest. Breast size and other characteristics do not predict the fat-to-milk-gland ratio or the potential for the woman to nurse an infant. The size and the shape of the breasts are influenced by normal-life hormonal changes (thelarche, menstruation, pregnancy, menopause) and medical conditions (e.g. virginal breast hypertrophy). The shape of the breasts is naturally determined by the support of the suspensory Cooper's ligaments, the underlying muscle and bone structures of the chest, and by the skin envelope. The suspensory ligaments sustain the breast from the clavicle (collarbone) and the clavico-pectoral fascia (collarbone and chest) by traversing and encompassing the fat and milk-gland tissues. The breast is positioned, affixed to, and supported upon the chest wall, while its shape is established and maintained by the skin envelope. In most women, one breast is slightly larger than the other. More obvious and persistent asymmetry in breast size occurs in up to 25% of women. The base of each breast is attached to the chest by the deep fascia over the pectoralis major muscles. The base of the breast is semi-circular; however, the shape and position of the breast above the surface are variable. The space between the breast and the pectoralis major muscle, called the retromammary space, gives mobility to the breast. The chest (thoracic cavity) progressively slopes outwards from the thoracic inlet (atop the breastbone) and above to the lowest ribs that support the breasts. The inframammary fold (IMF), where the lower portion of the breast meets the chest, is an anatomic feature created by the adherence of the breast skin and the underlying connective tissues of the chest; the IMF is the lower-most extent of the anatomic breast. Normal breast tissue has a texture that feels nodular or granular, with considerable variation from woman to woman. Breasts have been categorized into four general morphological groups: "flat, spheric, protruded, and drooped", or "small/flat, large/inward, upward, and droopy". Support While it is a common belief that breastfeeding causes breasts to sag, researchers have found that a woman's breasts sag due to four key factors: cigarette smoking, number of pregnancies, gravity, and weight loss or gain. Women sometimes wear bras because they mistakenly believe that bras prevent breasts from sagging as they get older. Physicians, lingerie retailers, teenagers, and adult women used to believe that bras were medically required to support breasts. In a 1952 article in Parents' Magazine, Frank H. Crowell erroneously reported that it was important for teen girls to begin wearing bras early, claiming that this would prevent sagging breasts, stretched blood vessels, and poor circulation later on. This belief was based on the false idea that breasts cannot anatomically support themselves. Sports bras are sometimes worn for cardiovascular exercise; they are designed to secure the breasts closely to the body to prevent movement during high-motion activities such as running.
Studies have indicated sports bras which are overly tight may restrict respiratory function. Development The breasts are principally composed of adipose, glandular, and connective tissues. Because these tissues have hormone receptors, their sizes and volumes fluctuate according to the hormonal changes particular to thelarche (sprouting of breasts), menstruation (egg production), pregnancy (reproduction), lactation (feeding of offspring), and menopause (end of menstruation). Puberty The morphological structure of the human breast is identical in males and females until puberty. For pubescent girls in thelarche (the breast-development stage), the female sex hormones (principally estrogens) in conjunction with growth hormone promote the sprouting, growth, and development of the breasts. During this time, the mammary glands grow in size and volume and begin resting on the chest. These development stages of secondary sex characteristics (breasts, pubic hair, etc.) are illustrated in the five-stage Tanner scale. During thelarche, the developing breasts are sometimes of unequal size, and usually the left breast is slightly larger. This condition of asymmetry is transitory and statistically normal in female physical and sexual development. Medical conditions can cause overdevelopment (e.g., virginal breast hypertrophy, macromastia) or underdevelopment (e.g., tuberous breast deformity, micromastia) in girls and women. Approximately two years after the onset of puberty (a girl's first menstrual cycle), estrogen and growth hormone stimulate the development and growth of the glandular fat and suspensory tissues that compose the breast. This continues for approximately four years until the final shape of the breast (size, volume, density) is established at about the age of 21. Mammoplasia (breast enlargement) in girls begins at puberty, unlike all other primates, in which breasts enlarge only during lactation. Hormone replacement therapy Hormone replacement therapy, including gender-affirming hormone therapy, stimulates the growth of glandular and adipose tissue through estrogen supplementation. In menopausal women, HRT helps restore breast volume and skin elasticity diminished by declining estrogen levels, typically using oral or transdermal estradiol. In gender-affirming hormone therapy, breast development is induced through feminizing HRT, often combining estrogen with anti-androgens to suppress testosterone. Maximum growth is usually achieved after 2–3 years. Factors such as age, genetics, and hormone dosage influence outcomes. Changes during the menstrual cycle During the menstrual cycle, the breasts are enlarged by premenstrual water retention and temporary growth as influenced by changing hormone levels. Pregnancy and breastfeeding The breasts reach full maturity only when a woman's first pregnancy occurs. Changes to the breasts are among the first signs of pregnancy. The breasts become larger, the nipple-areola complex becomes larger and darker, the Montgomery's glands enlarge, and veins sometimes become more visible. Breast tenderness during pregnancy is common, especially during the first trimester. By mid-pregnancy, the breast is physiologically capable of lactation and some women can express colostrum, a form of breast milk. Pregnancy causes elevated levels of the hormone prolactin, which has a key role in the production of milk. However, milk production is blocked by the hormones progesterone and estrogen until after delivery, when progesterone and estrogen levels plummet. 
Menopause At menopause, breast atrophy occurs. The breasts can decrease in size when the levels of circulating estrogen decline. The adipose tissue and milk glands also begin to wither. The breasts can also become enlarged from adverse side effects of combined oral contraceptive pills. The size of the breasts can also increase and decrease in response to weight fluctuations. Physical changes to the breasts are often recorded in the stretch marks of the skin envelope; they can serve as historical indicators of the increments and the decrements of the size and volume of a woman's breasts throughout the course of her life. Breast changes during menopause are sometimes treated with hormone replacement therapy. Cancer Breast cancer is a cancer that develops from breast tissue. Signs of breast cancer may include a lump in the breast, a change in breast shape, dimpling of the skin, milk rejection, fluid coming from the nipple, a newly inverted nipple, or a red or scaly patch of skin. In those with distant spread of the disease, there may be bone pain, swollen lymph nodes, shortness of breath, or yellow skin. Risk factors for developing breast cancer include obesity, a lack of physical exercise, alcohol consumption, hormone replacement therapy during menopause, ionizing radiation, an early age at first menstruation, having children late in life (or not at all), older age, having a prior history of breast cancer, and a family history of breast cancer. About five to ten percent of cases are the result of an inherited genetic predisposition, including BRCA mutations among others. Breast cancer most commonly develops in cells from the lining of milk ducts and the lobules that supply these ducts with milk. Cancers developing from the ducts are known as ductal carcinomas, while those developing from lobules are known as lobular carcinomas. There are more than 18 other sub-types of breast cancer. Some, such as ductal carcinoma in situ, develop from pre-invasive lesions. The diagnosis of breast cancer is confirmed by taking a biopsy of the concerning tissue. Once the diagnosis is made, further tests are carried out to determine if the cancer has spread beyond the breast and which treatments are most likely to be effective. Breastfeeding The primary function of the breasts, as mammary glands, is the nourishing of an infant with breast milk. Milk is produced in milk-secreting cells in the alveoli. When the breasts are stimulated by the suckling of her baby, the mother's brain secretes oxytocin. High levels of oxytocin trigger the contraction of muscle cells surrounding the alveoli, causing milk to flow along the ducts that connect the alveoli to the nipple. Full-term newborns have an instinct and a need to suck on a nipple, and breastfed babies nurse for both nutrition and for comfort. Breast milk provides all necessary nutrients for the first six months of life, and then remains an important source of nutrition, alongside solid foods, until at least one or two years of age. Exercise Biomechanical studies have demonstrated that, depending on the activity and the size of a woman's breast, when she walks or runs braless, her breasts may move up and down by or more, and also oscillate side to side. Researchers have also found that as women's breast size increased, they took part in less physical activity, especially vigorous exercise. Few very-large-breasted women jogged, for example. To avoid exercise-related discomfort and pain, medical experts suggest women wear a well-fitted sports bra during activity. 
Clinical significance The breast is susceptible to numerous benign and malignant conditions. The most frequent benign conditions are puerperal mastitis, fibrocystic breast changes, and mastalgia. Lactation unrelated to pregnancy is known as galactorrhea. It can be caused by certain drugs (such as antipsychotic medications), extreme physical stress, or endocrine disorders. Lactation in newborns is caused by hormones from the mother that crossed into the baby's bloodstream during pregnancy. Breast cancer Breast cancer is the most common cause of cancer death among women and one of the leading causes of death among women overall. Factors that appear to be implicated in decreasing the risk of breast cancer are regular breast examinations by health care professionals, regular mammograms, self-examination of the breasts, a healthy diet, exercise to decrease excess body fat, and breastfeeding. Male breasts Both females and males develop breasts from the same embryological tissues. Anatomically, male breasts do not normally contain the lobules and acini that are present in females. In rare instances, a very few lobules may be present; this makes it possible for some men to develop lobular carcinoma of the breast. Normally, males produce lower levels of estrogens and higher levels of androgens, namely testosterone, which suppress the effects of estrogens in developing excessive breast tissue. In boys and men, abnormal breast development is manifested as gynecomastia, the consequence of a biochemical imbalance between the normal levels of estrogen and testosterone in the male body. Around 70% of boys temporarily develop breast tissue during adolescence. The condition usually resolves by itself within two years. When male lactation occurs, it is considered a symptom of a disorder of the pituitary gland. Plastic surgery Plastic surgery can be performed to augment or reduce the size of the breasts, or to reconstruct the breast in cases of deformative disease, such as breast cancer. Breast augmentation and breast lift (mastopexy) procedures are done only for cosmetic reasons, whereas breast reduction is sometimes medically indicated. In cases where a woman's breasts are severely asymmetrical, surgery can be performed to enlarge the smaller breast, reduce the size of the larger breast, or both. Breast augmentation surgery generally does not interfere with future ability to breastfeed. Breast reduction surgery more frequently leads to decreased sensation in the nipple-areola complex, and to low milk supply in women who choose to breastfeed. Implants can interfere with mammography (breast x-ray images). Society and culture General In Christian iconography, some works of art depict women with their breasts in their hands or on a platter, signifying that they died as martyrs by having their breasts severed; one example of this is Saint Agatha of Sicily. Femen is a feminist activist group which uses topless protests as part of its campaigns against sex tourism, religious institutions, sexism, and homophobia. Femen activists have been regularly detained by police in response to their protests. There is a long history of female breasts being used by comedians as comedy fodder (e.g., British comic Benny Hill's burlesque/slapstick routines). Art history In European pre-historic societies, sculptures of female figures with pronounced or highly exaggerated breasts were common.
A typical example is the so-called Venus of Willendorf, one of many Paleolithic Venus figurines with ample hips and bosom. Artifacts such as bowls, rock carvings and sacred statues with breasts have been recorded from 15,000 BC up to late antiquity all across Europe, North Africa and the Middle East. Many female deities representing love and fertility were associated with breasts and breast milk. Figures of the Phoenician goddess Astarte were represented as pillars studded with breasts. Isis, an Egyptian goddess who represented, among many other things, ideal motherhood, was often portrayed as suckling pharaohs, thereby confirming their divine status as rulers. Even certain male deities representing regeneration and fertility were occasionally depicted with breast-like appendices, such as the river god Hapy who was considered to be responsible for the annual overflowing of the Nile. Female breasts were also prominent in Minoan art in the form of the famous Snake Goddess statuettes, and a few other pieces, though most female breasts are covered. In Ancient Greece there were several cults worshipping the "Kourotrophos", the suckling mother, represented by goddesses such as Gaia, Hera and Artemis. The worship of deities symbolized by the female breast in Greece became less common during the first millennium. The popular adoration of female goddesses decreased significantly during the rise of the Greek city states, a legacy which was passed on to the later Roman Empire. During the middle of the first millennium BC, Greek culture experienced a gradual change in the perception of female breasts. Women in art were covered in clothing from the neck down, including female goddesses like Athena, the patron of Athens who represented heroic endeavor. There were exceptions: Aphrodite, the goddess of love, was more frequently portrayed fully nude, though in postures that were intended to portray shyness or modesty, a portrayal that has been compared to modern pin ups by historian Marilyn Yalom. Although nude men were depicted standing upright, most depictions of female nudity in Greek art occurred "usually with drapery near at hand and with a forward-bending, self-protecting posture". A popular legend at the time was of the Amazons, a tribe of fierce female warriors who socialized with men only for procreation and even removed one breast to become better warriors (the idea being that the right breast would interfere with the operation of a bow and arrow). The legend was a popular motif in art during Greek and Roman antiquity and served as an antithetical cautionary tale. Body image Many women regard their breasts as important to their sexual attractiveness, as a sign of femininity that is important to their sense of self. A woman with smaller breasts may regard her breasts as less attractive. Clothing Because breasts are mostly fatty tissue, their shape can—within limits—be molded by clothing, such as foundation garments. Bras are commonly worn by about 90% of Western women, and are often worn for support. The social norm in most Western cultures is to cover breasts in public, though the extent of coverage varies depending on the social context. Some religions ascribe a special status to the female breast, either in formal teachings or through symbolism. Islam forbids free women from exposing their breasts in public. Many cultures, including Western cultures in North America, associate breasts with sexuality and tend to regard bare breasts as immodest or indecent. 
In some cultures, like the Himba in northern Namibia, bare-breasted women are normal. In some African cultures, for example, the thigh is regarded as highly sexualized and never exposed in public, but breast exposure is not taboo. In a few Western countries and regions female toplessness at a beach is acceptable, although it may not be acceptable in the town center. Social attitudes and laws regarding breastfeeding in public vary widely. In many countries, breastfeeding in public is common, legally protected, and generally not regarded as an issue. However, even though the practice may be legal or socially accepted, some mothers may nevertheless be reluctant to expose a breast in public to breastfeed due to actual or potential objections by other people, negative comments, or harassment. It is estimated that around 63% of mothers across the world have publicly breast-fed. Bare-breasted women are legal and culturally acceptable at public beaches in Australia and much of Europe. Filmmaker Lina Esco made a film entitled Free the Nipple, which is about "...laws against female toplessness or restrictions on images of female, but not male, nipples", which Esco states is an example of sexism in society. Breast binding, also known as chest binding, is the flattening and hiding of breasts with constrictive materials such as cloth strips or purpose-built undergarments. Binders may also be used as alternatives to bras or for reasons of propriety. People who bind include women, trans men, non-binary people, and cisgender men with gynecomastia. Sexual characteristic In some cultures, breasts play a role in human sexual activity. Breasts and especially the nipples are among the various human erogenous zones. They are sensitive to the touch as they have many nerve endings; and it is common to press or massage them with hands or orally before or during sexual activity. During sexual arousal, breast size increases, venous patterns across the breasts become more visible, and nipples harden. Compared to other primates, human breasts are proportionately large throughout adult females' lives. Some writers have suggested that they may have evolved as a visual signal of sexual maturity and fertility. In Patterns of Sexual Behavior, a 1951 analysis of 191 traditional cultures, the researchers noted that stimulation of the female breast by a male sexual partner "seemed absent in all subhuman forms, although it is common among the members of many different human societies." Many people regard bare female breasts to be aesthetically pleasing or erotic, and they can elicit heightened sexual desires in men in many cultures. In the ancient Indian work the Kama Sutra, light scratching of the breasts with nails and biting with teeth are considered erotic. Some people show a sexual interest in female breasts distinct from that of the person, which may be regarded as a breast fetish. A number of Western fashions include clothing which accentuate the breasts, such as the use of push-up bras and decollete (plunging neckline) gowns and blouses which show cleavage. While U.S. culture prefers breasts that are youthful and upright, some cultures venerate women with drooping breasts, indicating mothering and the wisdom of experience. Research conducted at the Victoria University of Wellington showed that breasts are often the first thing men look at, and for a longer time than other body parts. 
The writers of the study had initially speculated that the reason was hormonal, with larger breasts indicating higher levels of estrogen and greater fertility, but the researchers said that "Men may be looking more often at the breasts because they are simply aesthetically pleasing, regardless of the size." Some women report achieving an orgasm from nipple stimulation, but this is rare. Research suggests that the orgasms are genital orgasms, and may also be directly linked to "the genital area of the brain". In these cases, it seems that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix. Nipple stimulation may trigger uterine contractions, which then produce a sensation in the genital area of the brain. Anthropomorphic geography There are many mountains named after the breast because they resemble it in appearance and so are objects of religious and ancestral veneration as a fertility symbol and of well-being. In Asia, there was "Breast Mountain", which had a cave where the Buddhist monk Bodhidharma (Da Mo) spent much time in meditation. Other such breast mountains are Mount Elgon on the Uganda–Kenya border; the Maiden Paps in Scotland; the ('Maiden's breast mountains') in Talim Island, Philippines; the twin hills known as the Paps of Anu ( or 'the breasts of Anu'), near Killarney in Ireland; the 2,086 m high or in the , Spain; in Thailand; in Puerto Rico; and the Breasts of Aphrodite in Mykonos, among many others. In the United States, the Teton Range is named after the French word for 'nipple'. Measurement The maturation and size of the breasts can be measured by a variety of different methods. These include Tanner staging, bra cup size, breast volume, breast–chest difference, the breast unit, breast hemicircumference, and breast circumference, among other measures.
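To make the "breast–chest difference" measure concrete, here is a minimal sketch of the arithmetic behind everyday bra sizing, in which the band size is taken from the underbust circumference and the cup letter from how far the full-bust measurement exceeds the band. The rounding rules, the letter scale, and the helper name bra_size are illustrative assumptions based on one common US sizing convention, not a standard defined in the text.

```python
# Illustrative sketch only: sizing conventions differ between brands and regions.
CUP_LETTERS = ["AA", "A", "B", "C", "D", "DD", "DDD", "G", "H"]

def bra_size(underbust_in: float, bust_in: float) -> str:
    """Estimate a bra size from underbust and full-bust measurements in inches."""
    band = round(underbust_in)
    if band % 2:                           # band sizes are conventionally even numbers
        band += 1
    diff = max(0, round(bust_in - band))   # the "breast-chest difference" in inches
    cup = CUP_LETTERS[min(diff, len(CUP_LETTERS) - 1)]
    return f"{band}{cup}"

print(bra_size(31.0, 36.5))   # -> "32D" under these assumed rules
```

In practice, fitting guides round and measure differently, so the same measurements can map to different sizes; the sketch only illustrates the difference-based principle named above.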
Biology and health sciences
Integumentary system
null
4495
https://en.wikipedia.org/wiki/British%20thermal%20unit
British thermal unit
The British thermal unit (Btu) is a measure of heat, which is a form of energy. It was originally defined as the amount of heat required to raise the temperature of one pound of water by one degree Fahrenheit. It is also part of the United States customary units. The SI unit for energy is the joule (J); one Btu equals about 1,055 J (varying within the range of 1,054–1,060 J depending on the specific definition of the Btu; see below). While units of heat are often supplanted by energy units in scientific work, they are still used in some fields. For example, in the United States the price of natural gas is quoted in dollars per the amount of natural gas that would give 1 million Btu (1 "MMBtu") of heat energy if burned. Definitions A Btu was originally defined as the amount of heat required to raise the temperature of one pound of liquid water by one degree Fahrenheit at a constant pressure of one atmosphere. There are several different definitions of the Btu that differ slightly. This reflects the fact that the temperature change of a mass of water due to the addition of a specific amount of heat (calculated in energy units, usually joules) depends slightly upon the water's initial temperature. As seen in the table below, definitions of the Btu based on different water temperatures vary by up to 0.5%. Prefixes Units of kBtu are used in building energy use tracking and heating system sizing. The Energy Use Index (EUI) represents kBtu per square foot of conditioned floor area; "k" stands for 1,000. The unit MBtu is used in the natural gas and other industries to indicate 1,000 Btu. However, there is an ambiguity in that the metric system (SI) uses the prefix "M" to indicate 'mega-', one million (1,000,000). Even so, "MMBtu" is often used to indicate one million Btu, particularly in the oil and gas industry. Energy analysts accustomed to the metric "k" ('kilo-') for 1,000 are more likely to use MBtu to represent one million, especially in documents where M represents one million in other energy or cost units, such as MW, MWh and $. The unit 'therm' is used to represent 100,000 Btu. A decatherm is 10 therms or one million Btu. The unit quad is commonly used to represent one quadrillion (10^15) Btu. Conversions One Btu is approximately: 1.054–1.060 kJ (kilojoules); 0.293 Wh (watt hours); 252–253 cal (calories); 0.252–0.253 kcal (kilocalories); 25,031 to 25,160 ft⋅pdl (foot-poundal); 778–782 ft⋅lbf (foot-pounds-force); 5.40395 (lbf/in^2)⋅ft^3. A Btu can be approximated as the heat produced by burning a single wooden kitchen match or as the amount of energy it takes to lift a one-pound weight through about 778 feet (237 m). For natural gas In natural gas pricing, the Canadian definition is that ≡ . The energy content (high or low heating value) of a volume of natural gas varies with the composition of the natural gas, which means there is no universal conversion factor for energy to volume. One cubic foot of average natural gas yields ≈ 1,030 Btu (between 1,010 Btu and 1,070 Btu, depending on quality, when burned). As a coarse approximation, 1,000 cubic feet of natural gas yields ≈ 1 million Btu ≈ 1 GJ. For natural gas price conversion, 1,000 cubic metres ≈ 36.9 million Btu and 1,000,000 Btu ≈ 27.1 cubic metres. BTU/h The SI unit of power for heating and cooling systems is the watt. Btu per hour (Btu/h) is sometimes used in North America and the United Kingdom - the latter mainly for air conditioning - though "Btu/h" is sometimes abbreviated to just "Btu". MBH—thousands of Btu per hour—is also common. 1 W is approximately 3.412 Btu/h; 1,000 Btu/h is approximately 293 W; 1 hp is approximately 2,544 Btu/h. Associated units 1 ton of cooling, a common unit in North American refrigeration and air conditioning applications, is 12,000 Btu/h.
It is the rate of heat transfer needed to freeze 1 short ton (2,000 lb) of water into ice in 24 hours. In the United States and Canada, the R-value that describes the performance of thermal insulation is typically quoted in square foot degree Fahrenheit hours per British thermal unit (ft^2⋅°F⋅h/Btu). For one square foot of insulation with an R-value of 1, one Btu per hour of heat flows across the insulator for each degree Fahrenheit of temperature difference across it. 1 therm is defined in the United States as 100,000 Btu using the 59 °F Btu definition. In the EU it was listed in 1979 with the Btu (International Table) definition and planned to be discarded as a legal unit of trade by 1994. United Kingdom regulations were amended to replace therms with joules with effect from 1 January 2000. Nevertheless, the therm was still used in natural gas pricing in the United Kingdom. 1 quad (short for quadrillion Btu) is 10^15 Btu, which is about 1 exajoule (≈1.055 EJ). Quads are used in the United States for representing the annual energy consumption of large economies: for example, the U.S. economy used 99.75 quads in 2005. One quad/year is about 33.43 gigawatts. The Btu should not be confused with the Board of Trade Unit (BTU), an obsolete UK synonym for the kilowatt hour (kW⋅h). The Btu is often used to express the conversion efficiency of heat into electrical energy in power plants. Figures are quoted in terms of the quantity of heat in Btu required to generate 1 kW⋅h of electrical energy. A typical coal-fired power plant works at about 10,500 Btu/kW⋅h, an efficiency of 32–33%. The centigrade heat unit (CHU) is the amount of heat required to raise the temperature of one pound of water by one Celsius degree. It is equal to 1.8 Btu or 1,899 joules. In 1974, this unit was "still sometimes used" in the United Kingdom as an alternative to the Btu. Another legacy unit for energy in the metric system is the calorie, which is defined as the amount of heat required to raise the temperature of one gram of water by one degree Celsius.
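As a worked illustration of the arithmetic above (Btu-joule conversion, therms, quads, and turning a power-plant heat rate into an efficiency), a minimal sketch follows. The choice of the International Table Btu value and the constant and function names are assumptions made for illustration; the exact joule value depends on which Btu definition is used.

```python
# Illustrative constants; 1 Btu is 1,054-1,060 J depending on the definition chosen.
BTU_J = 1055.056        # International Table Btu, in joules
KWH_BTU = 3412.14       # Btu in one kilowatt-hour
THERM_BTU = 100_000     # Btu in one therm
QUAD_BTU = 1e15         # Btu in one quad

def btu_to_joules(btu: float) -> float:
    return btu * BTU_J

def heat_rate_to_efficiency(btu_per_kwh: float) -> float:
    """Convert a power-plant heat rate (Btu per kWh of electricity) to thermal efficiency."""
    return KWH_BTU / btu_per_kwh

print(btu_to_joules(1))                      # ~1055 J in one Btu
print(btu_to_joules(THERM_BTU) / 1e6)        # one therm is ~105.5 MJ
print(btu_to_joules(QUAD_BTU) / 1e18)        # one quad is ~1.055 EJ
print(heat_rate_to_efficiency(10_500))       # ~0.325, i.e. the 32-33% quoted above
```

Run with other heat rates, the last line reproduces the rule of thumb that a lower Btu-per-kilowatt-hour figure means a more efficient plant.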
Physical sciences
Energy
Basics and measurement
4502
https://en.wikipedia.org/wiki/Biotechnology
Biotechnology
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services. The term biotechnology was first used by Károly Ereky in 1919 to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances. Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another and, consequently, creating new traits or modifying existing ones. Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese. The applications of biotechnology are diverse and have led to the development of products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites. Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields. Definition The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock. As per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology.
Biotechnology can also be described as laboratory research and development that uses bioinformatics for the exploration, extraction, exploitation, and production of products from living organisms and other sources of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecast, formulated, developed, manufactured, and marketed with the aims of sustainable operations (to recoup the large initial investment in R&D) and of gaining durable patent rights (exclusive rights for sales, which, especially in the pharmaceutical branch of biotechnology, first require national and international approval based on the results of animal and human trials, to prevent undetected side effects or other safety concerns in the use of the products). The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology. By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering. History Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology. These processes were also applied in the early fermentation of beer, which was practised in early Mesopotamia, Egypt, China and India and still uses the same basic biological methods. In brewing, malted grains (containing enzymes) convert the starch from grains into sugar, and specific yeasts are then added to produce beer. In this process, carbohydrates in the grains are broken down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this time period to produce leavened bread.
Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form. Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of science to change species. These accounts contributed to Darwin's theory of natural selection. For thousands of years, humans have used selective breeding to improve the production of crops and livestock used for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops. In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, using Clostridium acetobutylicum to ferment corn starch to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I. Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led to the purification of the antibiotic formed by the mold by Howard Florey, Ernst Boris Chain and Norman Heatley, giving us what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans. The field of modern biotechnology is generally thought of as having been born in 1971, when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.) The MOSFET (metal–oxide–semiconductor field-effect transistor) was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET in which the metal gate is replaced by an ion-sensitive membrane, an electrolyte solution and a reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. A factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products. Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production. Examples Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics, vegetable oil, biofuels), and environmental uses. For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons. A series of derived terms have been coined to identify several branches of biotechnology, for example: Bioinformatics (or "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization as well as analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale". Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector. Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio-oils with photosynthetic micro-algae. Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. An example of this would be Bt corn. 
Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the Green Revolution and a platform for tackling world hunger: it is focused mainly on the development of agriculture, using technologies that enable the production of plants that are more fertile and more resistant to biotic and abiotic stress, and promoting the application of environmentally friendly fertilizers and the use of biopesticides. On the other hand, some uses of green biotechnology involve microorganisms to clean and reduce waste. Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, siRNA and diagnostic tests. White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods. "Yellow biotechnology" refers to the use of biotechnology in food production (the food industry), for example in making wine (winemaking), cheese (cheesemaking), and beer (brewing) by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine, and various other approaches. Gray biotechnology is dedicated to environmental applications, focused on the maintenance of biodiversity and the removal of pollutants. Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist the extreme environmental conditions of arid regions, which is related to innovation, the creation of agricultural techniques and the management of resources. Violet biotechnology is related to law and to the ethical and philosophical issues around biotechnology. Microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity (the space bioeconomy). Dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare, which use microorganisms and toxins to cause disease and death in humans, livestock and crops. Medicine In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening). In 2021, nearly 40% of the total company value of pharmaceutical biotech companies worldwide was active in oncology, with neurology and rare diseases being the other two big applications. Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs.
Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic makeup. Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology – biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. Agriculture Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. Examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improving the nutrient profile of the crop. 
Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from . 10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially on in 29 countries such as the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain. Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato. To date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. GM crops also provide a number of ecological benefits, if not used in excess. Insect-resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. Transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in India and other countries. Industrial Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. 
It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles, and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy. Synthetic biology is considered one of the essential cornerstones of industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. Jointly, biotechnology and synthetic biology play a crucial role in generating cost-effective products with nature-friendly features by using bio-based production instead of fossil-based production. Synthetic biology can be used to engineer model microorganisms, such as Escherichia coli, with genome editing tools to enhance their ability to produce bio-based products, such as medicines and biofuels. For instance, E. coli and Saccharomyces cerevisiae in a consortium could be used as industrial microbes to produce precursors of the chemotherapeutic agent paclitaxel by applying metabolic engineering in a co-culture approach to exploit the benefits of the two microbes. Another example of synthetic biology applications in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli by CRISPR and CRISPRi systems toward the production of a chemical known as 1,4-butanediol, which is used in fiber manufacturing. To produce 1,4-butanediol, the authors altered the metabolic regulation of Escherichia coli by CRISPR to induce a point mutation in the gltA gene, knock out the sad gene, and knock in six genes (cat1, sucD, 4hbd, cat2, bld, and bdh), while the CRISPRi system was used to knock down three competing genes (gabD, ybgC, and tesB) that affect the biosynthesis pathway of 1,4-butanediol. Consequently, the yield of 1,4-butanediol significantly increased from 0.9 to 1.8 g/L. Environmental Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation. The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology. Many cities have installed CityTrees, which use biotechnology to filter pollutants from urban atmospheres.
Regulation The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ. Database for the GMOs used in the EU The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union. The information is provided in English. Learning In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges.
Technology
Food and health
null
4519
https://en.wikipedia.org/wiki/Ballpoint%20pen
Ballpoint pen
A ballpoint pen, also known as a biro (British English), ball pen (Hong Kong, Indonesian, Pakistani, Indian and Philippine English), or dot pen (Nepali English and South Asian English), is a pen that dispenses ink (usually in paste form) over a metal ball at its point, i.e., over a "ball point". The metals commonly used are steel, brass, or tungsten carbide. The design was conceived and developed as a cleaner and more reliable alternative to dip pens and fountain pens, and it is now the world's most-used writing instrument; millions are manufactured and sold daily. It has influenced art and graphic design and spawned an artwork genre. History Origins The concept of using a "ball point" within a writing instrument to apply ink to paper has existed since the late 19th century. In these inventions, the ink was placed in a thin tube whose end was blocked by a tiny ball, held so that it could not slip into the tube or fall out of the pen. The first patent for a ballpoint pen was issued on 30 October 1888 to John J. Loud, who was attempting to make a writing instrument that would be able to write "on rough surfaces—such as wood, coarse wrapping paper, and other articles" which fountain pens could not. Loud's pen had a small rotating steel ball held in place by a socket. Although it could be used to mark rough surfaces such as leather, as Loud intended, it proved too coarse for letter-writing. With no commercial viability, its potential went unexploited, and the patent eventually lapsed. The manufacture of economical, reliable ballpoint pens as they are known today arose from experimentation, modern chemistry, and the precision manufacturing capabilities of the early 20th century. Patents filed worldwide during early development are testaments to failed attempts at making the pens commercially viable and widely available. Early ballpoints did not deliver the ink evenly; overflow and clogging were among the obstacles faced by early inventors. If the ball socket were too tight or the ink too thick, it would not reach the paper. If the socket were too loose or the ink too thin, the pen would leak, or the ink would smear. Ink reservoirs pressurized by a piston, spring, capillary action, and gravity would all serve as solutions to ink-delivery and flow problems. László Bíró, a Hungarian newspaper editor (later a naturalized Argentine), frustrated by the amount of time that he wasted filling up fountain pens and cleaning up smudged pages, noticed that inks used in newspaper printing dried quickly, leaving the paper dry and smudge-free. He decided to create a pen using the same type of ink. Bíró enlisted the help of his brother György, a dentist with useful knowledge of chemistry, to develop viscous ink formulae for new ballpoint designs. Bíró's innovation successfully coupled viscous ink with a ball-and-socket mechanism that allowed controlled flow while preventing ink from drying inside the reservoir. Bíró filed for a British patent on 15 June 1938. In 1941, the Bíró brothers and a friend, Juan Jorge Meyne, fled Germany and moved to Argentina, where they formed "Bíró Pens of Argentina" and filed a new patent in 1943. Their pen was sold in Argentina as the "Birome", from the names Bíró and Meyne, which is how ballpoint pens are still known in that country. This new design was licensed by the British engineer Frederick George Miles and manufactured by his company Miles Aircraft, to be used by Royal Air Force aircrew as the "Biro". 
Ballpoint pens were found to be more versatile than fountain pens, especially in airplanes, where fountain pens were prone to leak. Bíró's patent, and other early patents on ballpoint pens, often used the term "ball-point fountain pen". Postwar proliferation Following World War II, many companies vied to commercially produce their own ballpoint pen design. In pre-war Argentina, success of the Birome ballpoint was limited, but in mid-1945, the Eversharp Co., a maker of mechanical pencils, teamed up with Eberhard Faber Co. to license the rights from Birome for sales in the United States. In 1946, a Spanish firm, Vila Sivill Hermanos, began to make a ballpoint, Regia Continua, and from 1953 to 1957 their factory also made Bic ballpoints, on contract with the French firm Société Bic. During the same period, American entrepreneur Milton Reynolds came across a Birome ballpoint pen during a business trip to Buenos Aires, Argentina. Recognizing commercial potential, he purchased several ballpoint samples, returned to the United States, and founded the Reynolds International Pen Company. Reynolds bypassed the Birome patent with sufficient design alterations to obtain an American patent, beating Eversharp and other competitors to introduce the pen to the US market. Debuting at Gimbels department store in New York City on 29 October 1945, for US$12.50 each (1945 US dollar value, about $ in dollars), the "Reynolds Rocket" became the first commercially successful ballpoint pen. Reynolds went to great lengths to market the pen, with great success; Gimbels sold many thousands of pens within one week. In Britain, the Miles-(Harry) Martin pen company was producing the first commercially successful ballpoint pens there by the end of 1945. Neither Reynolds' nor Eversharp's ballpoint lived up to consumer expectations in America. Ballpoint pen sales peaked in 1946, and consumer interest subsequently plunged due to market saturation, the pen going from luxury good to fungible consumable. By the early 1950s the ballpoint boom had subsided and Reynolds' company folded. Paper Mate, among the emerging ballpoint brands of the 1950s, bought the rights to distribute its own ballpoint pens in Canada. Facing concerns about ink reliability, Paper Mate pioneered new ink formulas and advertised them as "banker-approved". In 1954, Parker Pens released "The Jotter"—the company's first ballpoint—boasting additional features and technological advances, which also included the use of tungsten-carbide textured ball-bearings in their pens. In less than a year, Parker sold several million pens at prices between three and nine dollars. In the 1960s, the failing Eversharp Co. sold its pen division to Parker and ultimately folded. Marcel Bich also introduced a ballpoint pen to the American marketplace in the 1950s, licensed from Bíró and based on the Argentine designs. Bich shortened his name to Bic in 1953, forming the ballpoint brand Bic that is now recognized globally. Bic pens struggled until the company launched its "Writes First Time, Every Time!" advertising campaign in the 1960s. Competition during this era forced unit prices to drop considerably. Inks Ballpoint pen ink is normally a paste containing around 25 to 40 percent dye. The dyes are suspended in a mixture of solvents and fatty acids. The most common of the solvents are benzyl alcohol or phenoxyethanol, which mix with the dyes and oils to create a smooth paste that dries quickly. This type of ink is also called "oil-based ink". 
The fatty acids help to lubricate the ball tip while writing. Hybrid inks also contain added lubricants to provide a smoother writing experience. The drying time of the ink varies depending upon the viscosity of the ink and the diameter of the ball. In general, the more viscous the ink, the faster it will dry, but more writing pressure needs to be applied to dispense ink. Although they are less viscous, hybrid inks dry faster than normal ballpoint inks. Also, a larger ball dispenses more ink and thus increases drying time. The dyes used in blue and black ballpoint pens are basic dyes based on triarylmethane and acid dyes derived from diazo compounds or phthalocyanine. Common dyes in blue (and black) ink are Prussian blue, Victoria blue, methyl violet, crystal violet, and phthalocyanine blue. The dye eosin is commonly used for red ink. The inks are resistant to water after drying but can be defaced by certain solvents, including acetone and various alcohols. Types of ballpoint pens Ballpoint pens are produced in both disposable and refillable models. Refills allow for the entire internal ink reservoir, including a ballpoint and socket, to be replaced. Such characteristics are usually associated with designer-type pens or those constructed of finer materials. The simplest types of ballpoint pens are disposable and have a cap to cover the tip when the pen is not in use, or a mechanism for retracting the tip, which varies between manufacturers but is usually a spring- or screw-mechanism. Rollerball pens employ the same ballpoint mechanics, but with the use of water-based inks instead of oil-based inks. Compared to oil-based ballpoints, rollerball pens are said to provide more fluid ink-flow, but the water-based inks will blot if held stationary against the writing surface. Water-based inks also remain wet longer when freshly applied and are thus prone to "smearing"—posing problems to left-handed people (or right-handed people writing right-to-left script)—and "running", should the writing surface become wet. Some ballpoint pens use a hybrid ink formulation whose viscosity is lower than that of standard ballpoint ink, but greater than rollerball ink. The ink dries faster than gel pen ink, preventing smearing when writing. These pens are better suited for left-handed persons. Examples are the Zebra Surari, Uni-ball Jetstream and Pilot Acroball ranges. These pens are also labelled "extra smooth", as they offer a smoother writing experience compared to normal ballpoint pens. Ballpoint pens with erasable ink were pioneered by the Paper Mate pen company. The ink formulas of erasable ballpoints have properties similar to rubber cement, allowing the ink to be literally rubbed clean from the writing surface before drying and eventually becoming permanent. Erasable ink is much thicker than standard ballpoint inks, requiring pressurized cartridges to facilitate ink flow—meaning they can also write upside-down. Though these pens are equipped with erasers, any eraser will suffice. Ballpoint tips are fitted with balls whose diameter can vary from 0.28 mm to 1.6 mm. The ball diameter does not correspond to the width of the line produced by the pen. The line width depends on various factors, such as the type of ink and the pressure applied. Some standard ball diameters are: 0.3 mm, 0.38 mm, 0.4 mm, 0.5 mm, 0.7 mm (fine), 0.8 mm, 1.0 mm (medium), 1.2 mm and 1.4 mm (broad). Pens with ball diameters as small as 0.18 mm have been made by Japanese companies, but are extremely rare. 
The inexpensive, disposable Bic Cristal (also simply "Bic pen" or "Biro") is reportedly the most widely sold pen in the world. It was the Bic company's first product and is still synonymous with the company name. The Bic Cristal is part of the permanent collection at the Museum of Modern Art in New York City, acknowledged for its industrial design. Its hexagonal barrel mimics that of a wooden pencil and is transparent, showing the ink level in the reservoir. Originally sealed and streamlined, the modern pen cap has a small hole at the top to meet safety standards, helping to prevent suffocation if a child inhales the cap. Multi-pens are pens that feature multiple refills in varying colors. Sometimes ballpoint refills are combined with a non-ballpoint component, usually a mechanical pencil. Sometimes ballpoint pens combine a ballpoint tip on one end and a touchscreen stylus on the other. Ballpoint pens are sometimes provided free by businesses, such as hotels and banks, printed with a company's name and logo. Ballpoints have also been produced to commemorate events, such as a pen commemorating the 1963 assassination of President John F. Kennedy. These pens, known as "advertising pens," are the same as standard ballpoint pen models, but have become valued among collectors. Sometimes ballpoint pens are also produced as design objects. With cases made of metal or wood, they become individually styled utility objects. Use of ballpoint pens in space It is generally believed that gravity is needed to coat the ball with ink. In fact, most ballpoint pens on Earth do not work when writing upside-down because gravity pulls the ink inside the pen away from the tip. However, in the microgravity environment of space, a regular ballpoint pen can still work, pointed in any direction, because the capillary forces in the ink are stronger than gravitational forces. The functionality of a regular ballpoint pen in space was confirmed by ESA astronaut Pedro Duque in 2003. Technology developed by Fisher pens in the United States resulted in the production of what came to be known as the "Fisher Space Pen". Space Pens combine a more viscous ink with a pressurized ink reservoir that forces the ink toward the point. Unlike a standard ballpoint's ink container, the rear end of a Space Pen's pressurized reservoir is sealed, eliminating evaporation and leakage, thus allowing the pen to write upside-down, in zero-gravity environments, and allegedly underwater. Astronauts have made use of these pens in outer space. As an art medium The ballpoint pen has proven to be a versatile art medium for both professional artists and amateur doodlers. Low cost, availability, and portability are cited by practitioners as qualities which make this common writing tool a convenient art supply. Some artists use them within mixed-media works, while others use them solely as their medium of choice. Effects not generally associated with ballpoint pens can be achieved. Traditional pen-and-ink techniques such as stippling and cross-hatching can be used to create half-tones or the illusion of form and volume. For artists whose interests necessitate precision line-work, ballpoints are an obvious attraction; ballpoint pens allow for sharp lines not as effectively executed using a brush. Finely applied, the resulting imagery has been mistaken for airbrushed artwork and photography, causing reactions of disbelief which ballpoint artist Lennie Mace refers to as the "Wow Factor". 
Famous 20th-century artists, including Andy Warhol, have used the ballpoint pen during their careers. Ballpoint pen artwork continues to attract interest in the 21st century, with many contemporary artists gaining recognition for their specific use of ballpoint pens as a medium. Korean-American artist Il Lee has been creating large-scale, abstract artwork since the late 1970s solely with ballpoint pens. Since the 1980s, Lennie Mace has created imaginative, ballpoint-only artwork of varying content and complexity, applied to unconventional surfaces including wood and denim. The artist coined terms such as "PENtings" and "Media Graffiti" to describe his varied output. British artist James Mylne has been creating photo-realistic artwork using mostly black ballpoints, sometimes with minimal mixed-media color. The ballpoint pen has several limitations as an art medium. Color availability and sensitivity of ink to light are among the concerns of ballpoint pen artists. Because ballpoint pens use ink, marks made with them generally cannot be erased. Additionally, "blobbing" ink on the drawing surface and "skipping" ink-flow require consideration when drawing with a ballpoint pen. Although the mechanics of ballpoint pens remain relatively unchanged, ink composition has evolved to solve certain problems over the years, resulting in unpredictable sensitivity to light and some degree of fading. Manufacturing The common ballpoint pen is a product of mass production, with components produced separately on assembly lines. Basic steps in the manufacturing process include the production of ink formulas, molding of metal and plastic components, and assembly. Marcel Bich (whose company became Société Bic) was involved in developing the production of inexpensive ballpoint pens. Although designs and construction vary between brands, basic components of all ballpoint pens are universal. Standard components of a ballpoint tip include the freely rotating "ball" itself (distributing the ink on the writing surface), a "socket" holding the ball in place, small "ink channels" that provide ink to the ball through the socket, and a self-contained "ink reservoir" supplying ink to the ball. In modern disposable pens, narrow plastic tubes contain the ink, which is compelled downward to the ball by gravity. Brass, steel, or tungsten carbide are used to manufacture the ball bearing-like points, which are then housed in a brass socket. The function of these components can be observed at a larger scale in the ball-applicator of roll-on antiperspirant. The ballpoint tip delivers the ink to the writing surface while acting as a "buffer" between the ink in the reservoir and the air outside, preventing the quick-drying ink from drying inside the reservoir. Modern ballpoints are said to have a two-year shelf life, on average. A ballpoint tip that can write comfortably for a long period of time is not easy to produce, as it requires high-precision machinery and thin high-grade steel alloy plates. China, which produces about 80 percent of the world's ballpoint pens, relied on imported ballpoint tips and metal alloys before 2017. 
Standards The International Organization for Standardization has published standards for ball point and roller ball pens: ISO 12756:1998: Drawing and writing instruments – Ball point pens – Vocabulary ISO 12757-1:1998: Ball point pens and refills – Part 1: General use ISO 12757-2:1998: Ball point pens and refills – Part 2: Documentary use (DOC) ISO 14145-1:1998: Roller ball pens and refills – Part 1: General use ISO 14145-2:1998: Roller ball pens and refills – Part 2: Documentary use (DOC) Guinness World Records The world's largest functioning ballpoint pen was made by Acharya Makunuri Srinivasa in India. The pen measures long and weighs . The world's most popular pen is the Bic Cristal, with the 100 billionth model sold in September 2006. The Bic Cristal was launched in December 1950 and roughly 57 are sold per second.
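The per-second figure can be read as the long-run average implied by the two dates above. A minimal back-of-the-envelope sketch of that arithmetic follows; the exact days within December 1950 and September 2006 are assumptions, since only the months are given.

```python
# Illustrative arithmetic only: averaging 100 billion Bic Cristal pens over the
# period from the December 1950 launch to the September 2006 milestone gives a
# rate close to the "roughly 57 per second" figure quoted above.
from datetime import date

PENS_SOLD = 100_000_000_000                 # 100 billionth pen milestone
launch = date(1950, 12, 1)                  # assumed day within the known month
milestone = date(2006, 9, 1)                # assumed day within the known month

elapsed_seconds = (milestone - launch).days * 86_400
print(round(PENS_SOLD / elapsed_seconds))   # prints 57
```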
Technology
Writing tools
null
4526
https://en.wikipedia.org/wiki/Brick
Brick
A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but it is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes, which vary with region, and are produced in bulk quantities. Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate. Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been used since . Air-dried bricks, also known as mudbricks, have a history older than fired bricks, and have an additional ingredient of a mechanical binder such as straw. Bricks are laid in courses and numerous patterns known as bonds, collectively known as brickwork, and may be laid in various kinds of mortar to hold the bricks together to make a durable structure. History Middle East and South Asia The earliest bricks were dried mudbricks, meaning that they were formed from clay-bearing earth or mud and dried (usually in the sun) until they were strong enough for use. The oldest discovered bricks, originally made from shaped mud and dating before 7500 BC, were found at Tell Aswad, in the upper Tigris region and in southeast Anatolia close to Diyarbakir. Mudbrick construction was used at Çatalhöyük, from c. 7,400 BC. Mudbrick structures, dating to c. 7,200 BC, have been located in Jericho, in the Jordan Valley. These structures were made up of some of the first bricks, with dimensions of 400 × 150 × 100 mm. Between 5000 and 4500 BC, fired brick was discovered in Mesopotamia. The standard brick sizes in Mesopotamia followed a general rule: the width of the dried or burned brick would be twice its thickness, and its length would be double its width. The South Asian inhabitants of Mehrgarh also constructed air-dried mudbrick structures between 7000 and 3300 BC, as did, later, the ancient Indus Valley cities of Mohenjo-daro, Harappa, and Mehrgarh. Ceramic, or fired, brick was used as early as 3000 BC in early Indus Valley cities like Kalibangan. In the middle of the third millennium BC, there was a rise in monumental baked brick architecture in Indus cities. Examples included the Great Bath at Mohenjo-daro, the fire altars of Kalibangan, and the granary of Harappa. There was a uniformity to the brick sizes throughout the Indus Valley region, conforming to a 1:2:4 ratio of thickness, width, and length. As the Indus civilization began its decline at the start of the second millennium BC, Harappans migrated east, spreading their knowledge of brickmaking technology. This led to the rise of cities like Pataliputra, Kausambi, and Ujjain, where there was an enormous demand for kiln-made bricks. By 604 BC, bricks were the construction material for architectural wonders such as the Hanging Gardens of Babylon, where glazed fired bricks were put to use. China The earliest fired bricks appeared in Neolithic China around 4400 BC at Chengtoushan, a walled settlement of the Daxi culture. These bricks were made of red clay, fired on all sides to above 600 °C, and used as flooring for houses. 
By the Qujialing period (3300 BC), fired bricks were being used to pave roads and as building foundations at Chengtoushan. According to Lukas Nickel, the use of ceramic pieces for protecting and decorating floors and walls dates back to 3000–2000 BC and perhaps even earlier at various cultural sites, but these elements should rather be regarded as tiles. For a long time, builders relied on wood, mud and rammed earth, while fired brick and mudbrick played no structural role in architecture. Proper brick construction, for erecting walls and vaults, finally emerged in the third century BC, when baked bricks of regular shape began to be employed for vaulting underground tombs. Hollow brick tomb chambers rose in popularity as builders were forced to adapt due to a lack of readily available wood or stone. The oldest extant brick building above ground is possibly Songyue Pagoda, dated to 523 AD. By the end of the third century BC in China, both hollow and small bricks were available for use in building walls and ceilings. Fired bricks were first mass-produced during the construction of the tomb of China's first Emperor, Qin Shi Huangdi. The floors of the three pits of the terracotta army were paved with an estimated 230,000 bricks, the majority measuring 28 × 14 × 7 cm, following a 4:2:1 ratio. The use of fired bricks in Chinese city walls first appeared in the Eastern Han dynasty (25–220 AD). Up until the Middle Ages, buildings in Central Asia were typically built with unbaked bricks. Only from the ninth century CE were buildings constructed entirely of fired bricks. The carpenter's manual Yingzao Fashi, published in 1103 during the Song dynasty, described the brick-making process and glazing techniques then in use. Using the 17th-century encyclopaedic text Tiangong Kaiwu, historian Timothy Brook outlined the brick production process of Ming dynasty China. Europe Early civilisations around the Mediterranean, including the Ancient Greeks and Romans, adopted the use of fired bricks. By the early first century CE, standardised fired bricks were being heavily produced in Rome. The Roman legions operated mobile kilns, and built large brick structures throughout the Roman Empire, stamping the bricks with the seal of the legion. The Romans used brick for walls, arches, forts, aqueducts, etc. Notable mentions of Roman brick structures are the Herculaneum gate of Pompeii and the baths of Caracalla. During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from Northwestern Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture), flourished in places that lacked indigenous sources of rocks. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Kaliningrad (former East Prussia). This style evolved into the Brick Renaissance as the stylistic changes associated with the Italian Renaissance spread to northern Europe, leading to the adoption of Renaissance elements into brick building. Identifiable attributes included a low-pitched hipped or flat roof, symmetrical facade, round arch entrances and windows, columns and pilasters, and more. A clear distinction between the two styles only developed at the transition to Baroque architecture. 
In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof). Long-distance bulk transport of bricks and other construction equipment remained prohibitively expensive until the development of modern transportation infrastructure, with the construction of canals, roads, and railways. Industrial era Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as a building material to stone, even in areas where the stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents. The transition from the traditional method of production known as hand-moulding to a mechanised form of mass-production slowly took place during the first half of the nineteenth century. The first brick-making machine was patented by Richard A. Ver Valen of Haverstraw, New York, in 1852. The Bradley & Craven Ltd 'Stiff-Plastic Brickmaking Machine' was patented in 1853. Bradley & Craven went on to be a dominant manufacturer of brickmaking machinery. Henry Clayton, employed at the Atlas Works in Middlesex, England, patented a brick-making machine in 1855 that was capable of producing up to 25,000 bricks daily with minimal supervision. His mechanical apparatus soon achieved widespread attention after it was adopted for use by the South Eastern Railway Company for brick-making at their factory near Folkestone. At the end of the 19th century, the Hudson River region of New York State would become the world's largest brick manufacturing region, with 130 brickyards lining the shores of the Hudson River from Mechanicsville to Haverstraw and employing 8,000 people. At its peak, about 1 billion bricks were produced a year, with many being sent to New York City for use in its construction industry. The demand for high office building construction at the turn of the 20th century led to a much greater use of cast and wrought iron, and later, steel and concrete. The use of brick for skyscraper construction severely limited the size of the building – the Monadnock Building, built in 1896 in Chicago, required exceptionally thick walls to maintain the structural integrity of its 17 storeys. Following pioneering work in the 1950s at the Swiss Federal Institute of Technology and the Building Research Establishment in Watford, UK, the use of improved masonry for the construction of tall structures up to 18 storeys high was made viable. However, the use of brick has largely remained restricted to small to medium-sized buildings, as steel and concrete remain superior materials for high-rise construction. Methods of manufacture Four basic types of brick are un-fired, fired, and chemically set bricks, and compressed earth blocks. Each type is manufactured differently for various purposes. Mudbrick Unfired bricks, also known as mudbrick, are made from a mixture of silt, clay, sand and other earth materials like gravel and stone, combined with tempers and binding agents such as chopped straw, grasses, tree bark, or dung. Since these bricks are made up of natural materials and only require heat from the Sun to bake, mudbricks have a relatively low embodied energy and carbon footprint. 
The ingredients are first harvested and added together, with clay content ranging from 30% to 70%. The mixture is broken up with hoes or adzes, and stirred with water to form a homogeneous blend. Next, the tempers and binding agents are added in a ratio of roughly one part straw to five parts earth, to reduce weight and reinforce the brick by helping limit shrinkage. However, additional clay can be added to reduce the need for straw, lowering the likelihood of insects degrading the organic material of the bricks and subsequently weakening the structure. These ingredients are thoroughly mixed together by hand or by treading and are then left to ferment for about a day. The mix is then kneaded with water and molded into rectangular prisms of a desired size. Bricks are lined up and left to dry in the sun for three days on each side. After these six days, the bricks continue drying until required for use. Typically, longer drying times are preferred, but the average is eight to nine days from the initial stages to use in structures. Unfired bricks can be made in the spring months and left to dry over the summer for use in the autumn. Mudbricks are commonly employed in arid environments to allow for adequate air drying. Fired brick Fired bricks are baked in a kiln, which makes them durable. Modern fired clay bricks are formed in one of three processes – soft mud, dry press, or extruded. Depending on the country, either the extruded or soft mud method is the most common, since they are the most economical. Clay and shale are the raw ingredients in the recipe for a fired brick. They are the product of thousands of years of decomposition and erosion of rocks, such as pegmatite and granite, leading to a material that is highly chemically stable and inert. Within the clays and shales are the materials of aluminosilicate (pure clay), free silica (quartz), and decomposed rock. One proposed optimal mix is: Silica (sand) – 50% to 60% by weight Alumina (clay) – 20% to 30% by weight Lime – 2 to 5% by weight Iron oxide – ≤ 7% by weight Magnesia – less than 1% by weight Shaping methods Three main methods are used for shaping the raw materials into bricks to be fired: Moulded bricks – These bricks start with raw clay, preferably in a mix with 25–30% sand to reduce shrinkage. The clay is first ground and mixed with water to the desired consistency. The clay is then pressed into steel moulds with a hydraulic press. The shaped clay is then fired at to achieve strength. Dry-pressed bricks – The dry-press method is similar to the soft-mud moulded method, but starts with a much thicker clay mix, so it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer firing time make this method more expensive. Extruded bricks – For extruded bricks the clay is mixed with 10–15% water (stiff extrusion) or 20–25% water (soft extrusion) in a pugmill. This mixture is forced through a die to create a long cable of material of the desired width and depth. This mass is then cut into bricks of the desired length by a wall of wires. Most structural bricks are made by this method as it produces hard, dense bricks, and suitable dies can produce perforations as well. The introduction of such holes reduces the volume of clay needed, and hence the cost. Hollow bricks are lighter and easier to handle, and have different thermal properties from solid bricks. The cut bricks are hardened by drying for 20 to 40 hours at before being fired. 
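Returning to the proposed optimal mix quoted earlier in this passage, a minimal sketch of how such ranges might be used as a quick recipe check is shown below. The ranges come from the text; the candidate percentages and function name are hypothetical, and the remainder of a real mix would be other decomposed-rock minerals.

```python
# Illustrative sketch: validate a candidate fired-brick recipe against the
# proposed optimal ranges quoted above (all figures are percent by weight).
# The candidate mix below is hypothetical.
OPTIMAL_RANGES = {
    "silica (sand)":  (50.0, 60.0),
    "alumina (clay)": (20.0, 30.0),
    "lime":           (2.0, 5.0),
    "iron oxide":     (0.0, 7.0),   # "<= 7% by weight"
    "magnesia":       (0.0, 1.0),   # "less than 1% by weight"
}

def out_of_range(mix):
    """Return the components of a mix that fall outside the proposed ranges."""
    return {name: value
            for name, value in mix.items()
            if not (OPTIMAL_RANGES[name][0] <= value <= OPTIMAL_RANGES[name][1])}

candidate = {"silica (sand)": 55.0, "alumina (clay)": 28.0, "lime": 4.0,
             "iron oxide": 6.0, "magnesia": 0.5}
print(out_of_range(candidate) or "candidate mix is within the proposed ranges")
```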
The heat for drying is often waste heat from the kiln. Kilns In many modern brickworks, bricks are usually fired in a continuously fired tunnel kiln, in which the bricks are fired as they move slowly through the kiln on conveyors, rails, or kiln cars, which achieves a more consistent brick product. The bricks often have lime, ash, and organic matter added, which accelerates the burning process. The other major kiln type is the Bull's Trench Kiln (BTK), based on a design developed by British engineer W. Bull in the late 19th century. An oval or circular trench is dug, wide, deep, and in circumference. A tall exhaust chimney is constructed in the centre. Half or more of the trench is filled with "green" (unfired) bricks which are stacked in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of finished brick. In operation, new green bricks, along with roofing bricks, are stacked at one end of the brick pile. Historically, a stack of unfired bricks covered for protection from the weather was called a "hack". Cooled finished bricks are removed from the other end for transport to their destinations. In the middle, the brick workers create a firing zone by dropping fuel (coal, wood, oil, debris, etc.) through access holes in the roof above the trench. The constant source of fuel may be grown on woodlots. The advantage of the BTK design is a much greater energy efficiency compared with clamp or scove kilns. Sheet metal or boards are used to route the airflow through the brick lattice so that fresh air flows first through the recently burned bricks, heating the air, then through the active burning zone. The air continues through the green brick zone (pre-heating and drying the bricks), and finally out the chimney, where the rising gases create suction that pulls air through the system. The reuse of heated air yields savings in fuel cost. As with the rail process, the BTK process is continuous. A half-dozen labourers working around the clock can fire approximately 15,000–25,000 bricks a day. Unlike the rail process, in the BTK process the bricks do not move. Instead, the locations at which the bricks are loaded, fired, and unloaded gradually rotate through the trench. Influences on colour The colour of fired clay bricks is influenced by the chemical and mineral content of the raw materials, the firing temperature, and the atmosphere in the kiln. For example, pink bricks are the result of a high iron content, while white or yellow bricks have a higher lime content. Most bricks burn to various red hues; as the temperature is increased the colour moves through dark red, purple, and then to brown or grey at around . The names of bricks may reflect their origin and colour, such as London stock brick and Cambridgeshire White. Brick tinting may be performed to change the colour of bricks to blend in areas of brickwork with the surrounding masonry. An impervious and ornamental surface may be laid on brick either by salt glazing, in which salt is added during the burning process, or by the use of a slip, which is a glaze material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip into a glazed surface integral with the brick base. Chemically set bricks Chemically set bricks are not fired but may have the curing process accelerated by the application of heat and pressure in an autoclave. Calcium-silicate bricks Calcium-silicate bricks are also called sandlime or flintlime bricks, depending on their ingredients. 
Rather than being made with clay, they are made with lime binding the silicate material. The raw materials for calcium-silicate bricks include lime mixed in a proportion of about 1 to 10 with sand, quartz, crushed flint, or crushed siliceous rock, together with mineral colourants. The materials are mixed and left until the lime is completely hydrated; the mixture is then pressed into moulds and cured in an autoclave for three to fourteen hours to speed the chemical hardening. The finished bricks are very accurate and uniform, although the sharp arrises need careful handling to avoid damage to brick and bricklayer. The bricks can be made in a variety of colours; white, black, buff, and grey-blues are common, and pastel shades can be achieved. This type of brick is common in Sweden as well as Russia and other post-Soviet countries, especially in houses built or renovated in the 1970s. A version known as fly ash bricks, manufactured using fly ash, lime, and gypsum (the FaL-G process), is common in South Asia. Calcium-silicate bricks are also manufactured in Canada and the United States, and meet the criteria set forth in ASTM C73 – 10 Standard Specification for Calcium Silicate Brick (Sand-Lime Brick). Concrete bricks Bricks formed from concrete are usually termed blocks or concrete masonry units, and are typically pale grey. They are made from a dry, small aggregate concrete which is formed in steel moulds by vibration and compaction in either an "egglayer" or static machine. The finished blocks are cured, rather than fired, using low-pressure steam. Concrete bricks and blocks are manufactured in a wide range of shapes, sizes and face treatments – a number of which simulate the appearance of clay bricks. Concrete bricks are available in many colours and as an engineering brick made with sulfate-resisting Portland cement or equivalent. When made with an adequate amount of cement, they are suitable for harsh environments such as wet conditions and retaining walls. They are made to standards BS 6073, EN 771-3 or ASTM C55. Concrete bricks contract or shrink, so they need movement joints every 5 to 6 metres, but are similar to other bricks of similar density in thermal and sound resistance and fire resistance. Compressed earth blocks Compressed earth blocks are made mostly from slightly moistened local soils compressed with a mechanical hydraulic press or manual lever press. A small amount of a cement binder may be added, resulting in a stabilised compressed earth block. Types There are thousands of types of bricks that are named for their use, size, forming method, origin, quality, texture, and/or materials. Categorized by manufacture method: Extruded – made by being forced through an opening in a steel die, with a very consistent size and shape. Wire-cut – cut to size after extrusion with a tensioned wire which may leave drag marks Moulded – shaped in moulds rather than being extruded Machine-moulded – clay is forced into moulds using pressure Handmade – clay is forced into moulds by a person Dry-pressed – similar to the soft-mud method, but starts with a much thicker clay mix and is compressed with great force. 
Categorized by use: Common or building – A brick not intended to be visible, used for internal structure Face – A brick used on exterior surfaces to present a clean appearance Hollow – not solid, the holes are less than 25% of the brick volume Perforated – holes greater than 25% of the brick volume Keyed – indentations in at least one face and end to be used with rendering and plastering Paving – brick intended to be in ground contact as a walkway or roadway Thin – brick with normal height and length but thin width to be used as a veneer Specialized use bricks: Chemically resistant – bricks made with resistance to chemical reactions Acid brick – acid resistant bricks Engineering – a type of hard, dense, brick used where strength, low water porosity or acid (flue gas) resistance are needed. Further classified as type A and type B based on their compressive strength Accrington – a type of engineering brick from England Fire or refractory – highly heat-resistant bricks Clinker – a vitrified brick Ceramic glazed – fire bricks with a decorative glazing Bricks named for place of origin: Chicago common brick - a soft brick made near Chicago, Illinois with a range of colors, like buff yellow, salmon pink, or deep red Cream City brick – a light yellow brick made in Milwaukee, Wisconsin Dutch brick – a hard light coloured brick originally from the Netherlands Fareham red brick – a type of construction brick London stock brick – type of handmade brick which was used for the majority of building work in London and South East England until the growth in the use of machine-made bricks Nanak Shahi bricks – a type of decorative brick in India Roman brick – a long, flat brick typically used by the Romans Staffordshire blue brick – a type of construction brick from England Optimal dimensions, characteristics, and strength For efficient handling and laying, bricks must be small enough and light enough to be picked up by the bricklayer using one hand (leaving the other hand free for the trowel). Bricks are usually laid flat, and as a result, the effective limit on the width of a brick is set by the distance which can conveniently be spanned between the thumb and fingers of one hand, normally about . In most cases, the length of a brick is twice its width plus the width of a mortar joint, about or slightly more. This allows bricks to be laid bonded in a structure which increases stability and strength (for an example, see the illustration of bricks laid in English bond, at the head of this article). The wall is built using alternating courses of stretchers, bricks laid longways, and headers, bricks laid crossways. The headers tie the wall together over its width. In fact, this wall is built in a variation of English bond called English cross bond where the successive layers of stretchers are displaced horizontally from each other by half a brick length. In true English bond, the perpendicular lines of the stretcher courses are in line with each other. A bigger brick makes for a thicker (and thus more insulating) wall. Historically, this meant that bigger bricks were necessary in colder climates (see for instance the slightly larger size of the Russian brick in table below), while a smaller brick was adequate, and more economical, in warmer regions. A notable illustration of this correlation is the Green Gate in Gdansk; built in 1571 of imported Dutch brick, too small for the colder climate of Gdansk, it was notorious for being a chilly and drafty residence. 
Nowadays this is no longer an issue, as modern walls typically incorporate specialised insulation materials. The correct brick for a job can be selected from a choice of colour, surface texture, density, weight, absorption, and pore structure, thermal characteristics, thermal and moisture movement, and fire resistance. In England, the length and width of the common brick remained fairly constant from 1625, when the size was regulated by statute at 9 x x 3 inches (but see brick tax), but the depth has varied from about or smaller in earlier times to about more recently. In the United Kingdom, the usual size of a modern brick (from 1965) is , which, with a nominal mortar joint, forms a unit size of , for a ratio of 6:3:2. In the United States, modern standard bricks are specified for various uses; the most commonly used is the modular brick, which has actual dimensions of  ×  ×  inches (194 × 92 × 57 mm). With the standard inch mortar joint, this gives nominal dimensions of 8 × 4 × inches, which eases the calculation of the number of bricks in a given wall. The 2:1 ratio of modular bricks means that when they turn corners, a 1/2 running bond is formed without needing to cut the brick down or fill the gap with a cut brick; and the height of modular bricks means that a soldier course matches the height of three modular running courses, or one standard CMU course. Some brickmakers create innovative sizes and shapes for bricks used for plastering (and therefore not visible on the inside of the building) where their inherent mechanical properties are more important than their visual ones. These bricks are usually slightly larger, but not as large as blocks and offer the following advantages: A slightly larger brick requires less mortar and handling (fewer bricks), which reduces cost Their ribbed exterior aids plastering More complex interior cavities allow improved insulation, while maintaining strength. Blocks have a much greater range of sizes. Standard co-ordinating sizes in length and height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150, 600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150, 190, 200, 225, and 250. They are usable across this range as they are lighter than clay bricks. The density of solid clay bricks is around 2000 kg/m3: this is reduced by frogging, hollow bricks, and so on, but aerated autoclaved concrete, even as a solid brick, can have densities in the range of 450–850 kg/m3. Bricks may also be classified as solid (less than 25% perforations by volume, although the brick may be "frogged," having indentations on one of the longer faces), perforated (containing a pattern of small holes through the brick, removing no more than 25% of the volume), cellular (containing a pattern of holes removing more than 20% of the volume, but closed on one face), or hollow (containing a pattern of large holes removing more than 25% of the brick's volume). Blocks may be solid, cellular or hollow. The term "frog" can refer to the indentation or the implement used to make it. Modern brickmakers usually use plastic frogs but in the past they were made of wood. The compressive strength of bricks produced in the United States ranges from about , varying according to the use to which the bricks are to be put. In England, clay bricks can have strengths of up to 100 MPa, although a common house brick is likely to show a range of 20–40 MPa. 
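To make the brick-counting arithmetic above concrete, the sketch below estimates how many modular bricks a square metre of single-leaf wall face needs, using the actual face dimensions quoted in the text (194 mm long, 57 mm high). The 10 mm mortar joint is an assumed value for illustration only, and a real estimate would add an allowance for waste and cut bricks.

```python
# Illustrative estimate only: bricks per square metre of wall face, using the
# actual modular brick face (194 mm x 57 mm) quoted above and an assumed 10 mm
# mortar joint. The joint size is an assumption, not taken from the text.
BRICK_LENGTH_MM = 194
BRICK_HEIGHT_MM = 57
JOINT_MM = 10

def bricks_per_square_metre(length_mm=BRICK_LENGTH_MM,
                            height_mm=BRICK_HEIGHT_MM,
                            joint_mm=JOINT_MM):
    """One brick plus its share of mortar occupies (length + joint) x (height + joint)."""
    face_area_m2 = ((length_mm + joint_mm) / 1000) * ((height_mm + joint_mm) / 1000)
    return 1 / face_area_m2

print(round(bricks_per_square_metre(), 1))   # about 73 bricks per square metre
```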
Uses Bricks are a versatile building material, suitable for a wide variety of applications, including: Structural walls, exterior and interior walls Bearing and non-bearing soundproof partitions The fireproofing of structural-steel members in the form of firewalls, party walls, enclosures and fire towers Foundations for stucco Chimneys and fireplaces Porches and terraces Outdoor steps, brick walks and paved floors Swimming pools In the United States, bricks have been used for both buildings and pavement. Examples of brick use in buildings can be seen in colonial era buildings and other notable structures around the country. Bricks have been used in paving roads and sidewalks, especially during the late 19th century and early 20th century. The introduction of asphalt and concrete reduced the use of brick for paving, but they are still sometimes installed as a method of traffic calming or as a decorative surface in pedestrian precincts. For example, in the early 1900s, most of the streets in the city of Grand Rapids, Michigan, were paved with bricks. Today, there are only about 20 blocks of brick-paved streets remaining (totalling less than 0.5 percent of all the streets in the city limits). Much like in Grand Rapids, municipalities across the United States began replacing brick streets with inexpensive asphalt concrete by the mid-20th century. In Northwest Europe, bricks have been used in construction for centuries. Until recently, nearly all houses were built almost entirely from bricks. Although many houses are now built using a mixture of concrete blocks and other materials, many houses are skinned with a layer of bricks on the outside for aesthetic appeal. Bricks in the metallurgy and glass industries are often used for lining furnaces, in particular refractory bricks such as silica, magnesia, chamotte and neutral (chromomagnesite) refractory bricks. This type of brick must have good thermal shock resistance, refractoriness under load, high melting point, and satisfactory porosity. There is a large refractory brick industry, especially in the United Kingdom, Japan, the United States, Belgium and the Netherlands. Engineering bricks are used where strength, low water porosity or acid (flue gas) resistance are needed. In the UK, a red brick university is one founded in the late 19th or early 20th century. The term is used to refer to such institutions collectively to distinguish them from the older Oxbridge institutions, and refers to the use of bricks, as opposed to stone, in their buildings. Colombian architect Rogelio Salmona was noted for his extensive use of red bricks in his buildings and for using natural shapes like spirals, radial geometry and curves in his designs. Limitations Starting in the 20th century, the use of brickwork declined in some areas due to concerns about earthquakes. Earthquakes such as the San Francisco earthquake of 1906 and the 1933 Long Beach earthquake revealed the weaknesses of unreinforced brick masonry in earthquake-prone areas. During seismic events, the mortar cracks and crumbles, so that the bricks are no longer held together. Brick masonry with steel reinforcement, which helps hold the masonry together during earthquakes, has been used to replace unreinforced bricks in many buildings. Retrofitting older unreinforced masonry structures has been mandated in many jurisdictions. 
However, similar to steel corrosion in reinforced concrete, rebar rusting will compromise the structural integrity of reinforced brick and ultimately limit the expected lifetime, so to a certain extent there is a trade-off between earthquake safety and longevity. Accessibility The United States Access Board does not specify which materials a sidewalk must be made of in order to be ADA compliant, but states that sidewalks must not have surface variances of greater than one inch. Due to the accessibility challenges of bricks, the Federal Highway Administration recommends against the use of bricks as well as cobblestones in its accessibility guide for sidewalks and crosswalks. The Brick Industry Association maintains standards for making brick more accessible for disabled people, with proper and regular maintenance being necessary to keep brick accessible. Some US jurisdictions, such as San Francisco, have taken steps to remove brick sidewalks from certain areas such as Market Street in order to improve accessibility. 
Technology
Building materials
null
4531
https://en.wikipedia.org/wiki/Bipolar%20disorder
Bipolar disorder
Bipolar disorder, previously known as manic depression, is a mental disorder characterized by periods of depression and periods of abnormally elevated mood that each last from days to weeks. If the elevated mood is severe or associated with psychosis, it is called mania; if it is less severe and does not significantly affect functioning, it is called hypomania. During mania, an individual behaves or feels abnormally energetic, happy, or irritable, and they often make impulsive decisions with little regard for the consequences. There is usually, but not always, a reduced need for sleep during manic phases. During periods of depression, the individual may experience crying, have a negative outlook on life, and demonstrate poor eye contact with others. The risk of suicide is high. Over a period of 20 years, 6% of those with bipolar disorder died by suicide, while 40–50% of those with the disorder overall, and 78% of adolescents with the condition, engaged in self-harm. Other mental health issues, such as anxiety disorders and substance use disorders, are commonly associated with bipolar disorder. The global prevalence of bipolar disorder is estimated at 1–5% of the world's population. While the causes of this mood disorder are not clearly understood, both genetic and environmental factors are thought to play a role. Genetic factors may account for up to 70–90% of the risk of developing bipolar disorder. Many genes, each with small effects, may contribute to the development of the disorder. Environmental risk factors include a history of childhood abuse and long-term stress. The condition is classified as bipolar I disorder if there has been at least one manic episode, with or without depressive episodes, and as bipolar II disorder if there has been at least one hypomanic episode (but no full manic episodes) and one major depressive episode. It is classified as cyclothymia if there are hypomanic episodes with periods of depression that do not meet the criteria for major depressive episodes. If these symptoms are due to drugs or medical problems, they are not diagnosed as bipolar disorder. Other conditions that have overlapping symptoms with bipolar disorder include attention deficit hyperactivity disorder, personality disorders, schizophrenia, and substance use disorder, as well as many other medical conditions. Medical testing is not required for a diagnosis, though blood tests or medical imaging can rule out other problems. Mood stabilizers, particularly lithium, and certain anticonvulsants, such as lamotrigine and valproate, as well as atypical antipsychotics, including quetiapine, olanzapine, and aripiprazole, are the mainstay of long-term pharmacologic relapse prevention. Antipsychotics are additionally given during acute manic episodes as well as in cases where mood stabilizers are poorly tolerated or ineffective. In patients for whom compliance is a concern, long-acting injectable formulations are available. There is some evidence that psychotherapy improves the course of this disorder. The use of antidepressants in depressive episodes is controversial: they can be effective but certain classes of antidepressants increase the risk of mania. The treatment of depressive episodes, therefore, is often difficult. Electroconvulsive therapy (ECT) is effective in acute manic and depressive episodes, especially with psychosis or catatonia. Admission to a psychiatric hospital may be required if a person is a risk to themselves or others; involuntary treatment is sometimes necessary if the affected person refuses treatment. 
Bipolar disorder occurs in approximately 2% of the global population. In the United States, about 3% are estimated to be affected at some point in their life; rates appear to be similar in females and males. Symptoms most commonly begin between the ages of 20 and 25 years old; an earlier onset in life is associated with a worse prognosis. Interest in functioning in the assessment of patients with bipolar disorder is growing, with an emphasis on specific domains such as work, education, social life, family, and cognition. Around one-quarter to one-third of people with bipolar disorder have financial, social or work-related problems due to the illness. Bipolar disorder is among the top 20 causes of disability worldwide and leads to substantial costs for society. Due to lifestyle choices and the side effects of medications, the risk of death from natural causes such as coronary heart disease in people with bipolar disorder is twice that of the general population. Signs and symptoms Late adolescence and early adulthood are peak years for the onset of bipolar disorder. The condition is characterized by intermittent episodes of mania, commonly (but not in every patient) alternating with bouts of depression, with an absence of symptoms in between. During these episodes, people with bipolar disorder exhibit disruptions in normal mood, psychomotor activity (the level of physical activity that is influenced by mood)—e.g. constant fidgeting during mania or slowed movements during depression—circadian rhythm and cognition. Mania can present with varying levels of mood disturbance, ranging from euphoria, which is associated with "classic mania", to dysphoria and irritability. Psychotic symptoms such as delusions or hallucinations may occur in both manic and depressive episodes; their content and nature are consistent with the person's prevailing mood. In some people with bipolar disorder, depressive symptoms predominate, and the episodes of mania are always the more subdued hypomania type. According to the DSM-5 criteria, mania is distinguished from hypomania by the duration: hypomania is present if elevated mood symptoms persist for at least four consecutive days, while mania is present if such symptoms persist for more than a week. Unlike mania, hypomania is not always associated with impaired functioning. The biological mechanisms responsible for switching from a manic or hypomanic episode to a depressive episode, or vice versa, remain poorly understood. Manic episodes Also known as a manic episode, mania is a distinct period of at least one week of elevated or irritable mood, which can range from euphoria to delirium. The core symptom of mania involves an increase in energy of psychomotor activity. Mania can also present with increased self-esteem or grandiosity, racing thoughts, pressured speech that is difficult to interrupt, decreased need for sleep, disinhibited social behavior, increased goal-oriented activities and impaired judgement, which can lead to exhibition of behaviors characterized as impulsive or high-risk, such as hypersexuality or excessive spending. To fit the definition of a manic episode, these behaviors must impair the individual's ability to socialize or work. If untreated, a manic episode usually lasts three to six months. In severe manic episodes, a person can experience psychotic symptoms, where thought content is affected along with mood. 
They may feel unstoppable, persecuted, or as if they have a special relationship with God, a great mission to accomplish, or other grandiose or delusional ideas. This may lead to violent behavior and, sometimes, hospitalization in an inpatient psychiatric hospital. The severity of manic symptoms can be measured by rating scales such as the Young Mania Rating Scale, though questions remain about the reliability of these scales. The onset of a manic or depressive episode is often foreshadowed by sleep disturbance. Manic individuals often have a history of substance use disorder developed over years as a form of "self-medication". Hypomanic episodes Hypomania is the milder form of mania, defined as at least four days of the same criteria as mania, but which does not cause a significant decrease in the individual's ability to socialize or work, lacks psychotic features such as delusions or hallucinations, and does not require psychiatric hospitalization. Overall functioning may actually increase during episodes of hypomania and is thought to serve as a defense mechanism against depression by some. Hypomanic episodes rarely progress to full-blown manic episodes. Some people who experience hypomania show increased creativity, while others are irritable or demonstrate poor judgment. Hypomania may feel good to some individuals who experience it, though most people who experience hypomania state that the stress of the experience is very painful. People with bipolar disorder who experience hypomania tend to forget the effects of their actions on those around them. Even when family and friends recognize mood swings, the individual will often deny that anything is wrong. If not accompanied by depressive episodes, hypomanic episodes are often not deemed problematic unless the mood changes are uncontrollable or volatile. Most commonly, symptoms continue for time periods from a few weeks to a few months. Depressive episodes Symptoms of the depressive phase of bipolar disorder include persistent feelings of sadness, irritability or anger, loss of interest in previously enjoyed activities, excessive or inappropriate guilt, hopelessness, sleeping too much or not enough, changes in appetite or weight, fatigue, problems concentrating, self-loathing or feelings of worthlessness, and thoughts of death or suicide. Although the DSM-5 criteria for diagnosing unipolar and bipolar episodes are the same, some clinical features are more common in the latter, including increased sleep, sudden onset and resolution of symptoms, significant weight gain or loss, and severe episodes after childbirth. The earlier the age of onset, the more likely the first few episodes are to be depressive. For most people with bipolar types 1 and 2, the depressive episodes are much longer than the manic or hypomanic episodes. Since a diagnosis of bipolar disorder requires a manic or hypomanic episode, many affected individuals are initially misdiagnosed as having major depression and treated with prescribed antidepressants. Mixed affective episodes In bipolar disorder, a mixed state is an episode during which symptoms of both mania and depression occur simultaneously. Individuals experiencing a mixed state may have manic symptoms such as grandiose thoughts while simultaneously experiencing depressive symptoms such as excessive guilt or feeling suicidal. They are considered to have a higher risk for suicidal behavior as depressive emotions such as hopelessness are often paired with mood swings or difficulties with impulse control. 
Anxiety disorders occur more frequently as a comorbidity in mixed bipolar episodes than in non-mixed bipolar depression or mania. Substance (including alcohol) use also follows this trend, which can make bipolar symptoms appear to be no more than a consequence of substance use.
Comorbid conditions
People with bipolar disorder often have other co-existing psychiatric conditions such as anxiety (present in about 71% of people with bipolar disorder), substance abuse (56%), personality disorders (36%), and attention deficit hyperactivity disorder (10–20%), which can add to the burden of illness and worsen the prognosis. Certain medical conditions are also more common in people with bipolar disorder as compared to the general population. These include increased rates of metabolic syndrome (present in 37% of people with bipolar disorder), migraine headaches (35%), obesity (21%), and type 2 diabetes (14%). This contributes to a risk of death that is two times higher in those with bipolar disorder as compared to the general population. Hypothyroidism is also common regardless of drug choice. Substance use disorder is a common comorbidity in bipolar disorder; the subject has been widely reviewed.
Causes
The causes of bipolar disorder likely vary between individuals, and the exact mechanism underlying the disorder remains unclear. Genetic influences are believed to account for 73–93% of the risk of developing the disorder, indicating a strong hereditary component. The overall heritability of the bipolar spectrum has been estimated at 0.71. Twin studies have been limited by relatively small sample sizes but have indicated a substantial genetic contribution, as well as environmental influence. For bipolar I disorder, the rate at which identical twins (same genes) will both have bipolar I disorder (concordance) is around 40%, compared to about 5% in fraternal twins. A combination of bipolar I, II, and cyclothymia similarly produced rates of 42% and 11% (identical and fraternal twins, respectively). The rates for bipolar II combinations without bipolar I are lower (bipolar II at 23% and 17%, and bipolar II combined with cyclothymia at 33% and 14%), which may reflect relatively higher genetic heterogeneity. The causes of bipolar disorder overlap with those of major depressive disorder. When concordance is defined as the co-twins having either bipolar disorder or major depression, the concordance rate rises to 67% in identical twins and 19% in fraternal twins. The relatively low concordance between fraternal twins brought up together suggests that shared family environmental effects are limited, although the ability to detect them has been limited by small sample sizes.
Genetic
Behavioral genetic studies have suggested that many chromosomal regions and candidate genes are related to bipolar disorder susceptibility, with each gene exerting a mild to moderate effect. The risk of bipolar disorder is nearly ten-fold higher in first-degree relatives of those with bipolar disorder than in the general population; similarly, the risk of major depressive disorder is three times higher in relatives of those with bipolar disorder than in the general population. Although the first genetic linkage finding for mania was in 1969, linkage studies have been inconsistent. Findings point strongly to heterogeneity, with different genes implicated in different families. 
Robust and replicable genome-wide significant associations showed several common single-nucleotide polymorphisms (SNPs) are associated with bipolar disorder, including variants within the genes CACNA1C, ODZ4, and NCAN. The largest and most recent genome-wide association study failed to find any locus that exerts a large effect, reinforcing the idea that no single gene is responsible for bipolar disorder in most cases. Polymorphisms in BDNF, DRD4, DAO, and TPH1 have been frequently associated with bipolar disorder and were initially associated in a meta-analysis, but this association disappeared after correction for multiple testing. On the other hand, two polymorphisms in TPH2 were identified as being associated with bipolar disorder. Due to the inconsistent findings in a genome-wide association study, multiple studies have undertaken the approach of analyzing SNPs in biological pathways. Signaling pathways traditionally associated with bipolar disorder that have been supported by these studies include corticotropin-releasing hormone signaling, cardiac β-adrenergic signaling, phospholipase C signaling, glutamate receptor signaling, cardiac hypertrophy signaling, Wnt signaling, Notch signaling, and endothelin 1 signaling. Of the 16 genes identified in these pathways, three were found to be dysregulated in the dorsolateral prefrontal cortex portion of the brain in post-mortem studies: CACNA1C, GNG2, and ITPR2. Bipolar disorder is associated with reduced expression of specific DNA repair enzymes and increased levels of oxidative DNA damages. Environmental Psychosocial factors play a significant role in the development and course of bipolar disorder, and individual psychosocial variables may interact with genetic dispositions. Recent life events and interpersonal relationships likely contribute to the onset and recurrence of bipolar mood episodes, just as they do for unipolar depression. In surveys, 30–50% of adults diagnosed with bipolar disorder report traumatic/abusive experiences in childhood, which is associated with earlier onset, a higher rate of suicide attempts, and more co-occurring disorders such as post-traumatic stress disorder. Subtypes of abuse, such as sexual and emotional abuse, also contribute to violent behaviors seen in patients with bipolar disorder. The number of reported stressful events in childhood is higher in those with an adult diagnosis of bipolar spectrum disorder than in those without, particularly events stemming from a harsh environment rather than from the child's own behavior. Acutely, mania can be induced by sleep deprivation in around 30% of people with bipolar disorder. Neurological Less commonly, bipolar disorder or a bipolar-like disorder may occur as a result of or in association with a neurological condition or injury including stroke, traumatic brain injury, HIV infection, multiple sclerosis, porphyria, and rarely temporal lobe epilepsy. Proposed mechanisms The precise mechanisms that cause bipolar disorder are not well understood. Bipolar disorder is thought to be associated with abnormalities in the structure and function of certain brain areas responsible for cognitive tasks and the processing of emotions. A neurologic model for bipolar disorder proposes that the emotional circuitry of the brain can be divided into two main parts. The ventral system (regulates emotional perception) includes brain structures such as the amygdala, insula, ventral striatum, ventral anterior cingulate cortex, and the prefrontal cortex. 
The dorsal system (responsible for emotional regulation) includes the hippocampus, dorsal anterior cingulate cortex, and other parts of the prefrontal cortex. The model hypothesizes that bipolar disorder may occur when the ventral system is overactivated and the dorsal system is underactivated. Other models suggest the ability to regulate emotions is disrupted in people with bipolar disorder and that dysfunction of the ventral prefrontal cortex is crucial to this disruption. Meta-analyses of structural MRI studies have shown that certain brain regions (e.g., the left rostral anterior cingulate cortex, fronto-insular cortex, ventral prefrontal cortex, and claustrum) are smaller in people with bipolar disorder, whereas other regions are larger (lateral ventricles, globus pallidus, subgenual anterior cingulate, and the amygdala). Additionally, these meta-analyses found that people with bipolar disorder have higher rates of deep white matter hyperintensities. Functional MRI findings suggest that the ventral prefrontal cortex regulates the limbic system, especially the amygdala. In people with bipolar disorder, decreased ventral prefrontal cortex activity allows for the dysregulated activity of the amygdala, which likely contributes to labile mood and poor emotional regulation. Consistent with this, pharmacological treatment of mania returns ventral prefrontal cortex activity to the levels seen in non-manic people, suggesting that ventral prefrontal cortex activity is an indicator of mood state. However, while pharmacological treatment of mania reduces amygdala hyperactivity, the amygdala remains more active than in those without bipolar disorder, suggesting amygdala activity may be a marker of the disorder rather than of the current mood state. Manic and depressive episodes tend to be characterized by dysfunction in different regions of the ventral prefrontal cortex. Manic episodes appear to be associated with decreased activation of the right ventral prefrontal cortex, whereas depressive episodes are associated with decreased activation of the left ventral prefrontal cortex. These disruptions often occur during development and are linked with dysfunction in synaptic pruning. People with bipolar disorder who are in a euthymic mood state show decreased activity in the lingual gyrus compared to people without bipolar disorder. In contrast, they demonstrate decreased activity in the inferior frontal cortex during manic episodes compared to people without the disorder. Similar studies examining the differences in brain activity between people with bipolar disorder and those without did not find a consistent area in the brain that was more or less active when comparing these two groups. People with bipolar disorder have increased activation of left hemisphere ventral limbic areas (which mediate emotional experiences and the generation of emotional responses) and decreased activation of right hemisphere cortical structures related to cognition (structures associated with the regulation of emotions). However, further research is needed to consolidate neuroimaging findings, which are often heterogeneous and not consistently reported according to a common standard. Neuroscientists have proposed additional models to try to explain the cause of bipolar disorder. One proposed model for bipolar disorder suggests that hypersensitivity of reward circuits consisting of frontostriatal circuits causes mania, and that decreased sensitivity of these circuits causes depression. 
According to the "kindling" hypothesis, when people who are genetically predisposed toward bipolar disorder experience stressful events, the stress threshold at which mood changes occur becomes progressively lower, until the episodes eventually start (and recur) spontaneously. There is evidence supporting an association between early-life stress and dysfunction of the hypothalamic-pituitary-adrenal axis leading to its overactivation, which may play a role in the pathogenesis of bipolar disorder. Other brain components that have been proposed to play a role in bipolar disorder are the mitochondria and a sodium ATPase pump. Circadian rhythms and regulation of the hormone melatonin also seem to be altered. Dopamine, a neurotransmitter responsible for mood cycling, has increased transmission during the manic phase. The dopamine hypothesis states that the increase in dopamine results in secondary homeostatic downregulation of key system elements and receptors such as lower sensitivity of dopaminergic receptors. This results in decreased dopamine transmission characteristic of the depressive phase. The depressive phase ends with homeostatic upregulation potentially restarting the cycle over again. Glutamate is significantly increased within the left dorsolateral prefrontal cortex during the manic phase of bipolar disorder, and returns to normal levels once the phase is over. Medications used to treat bipolar may exert their effect by modulating intracellular signaling, such as through depleting myo-inositol levels, inhibition of cAMP signaling, and through altering subunits of the dopamine-associated G-protein. Consistent with this, elevated levels of Gαi, Gαs, and Gαq/11 have been reported in brain and blood samples, along with increased protein kinase A (PKA) expression and sensitivity; typically, PKA activates as part of the intracellular signalling cascade downstream from the detachment of Gαs subunit from the G protein complex. Decreased levels of 5-hydroxyindoleacetic acid, a byproduct of serotonin, are present in the cerebrospinal fluid of persons with bipolar disorder during both the depressed and manic phases. Increased dopaminergic activity has been hypothesized in manic states due to the ability of dopamine agonists to stimulate mania in people with bipolar disorder. Decreased sensitivity of regulatory α2 adrenergic receptors as well as increased cell counts in the locus coeruleus indicated increased noradrenergic activity in manic people. Low plasma GABA levels on both sides of the mood spectrum have been found. One review found no difference in monoamine levels, but found abnormal norepinephrine turnover in people with bipolar disorder. Tyrosine depletion was found to reduce the effects of methamphetamine in people with bipolar disorder as well as symptoms of mania, implicating dopamine in mania. VMAT2 binding was found to be increased in one study of people with bipolar mania. Diagnosis Bipolar disorder is commonly diagnosed during adolescence or early adulthood, but onset can occur throughout life. Its diagnosis is based on the self-reported experiences of the individual, abnormal behavior reported by family members, friends or co-workers, observable signs of illness as assessed by a clinician, and ideally a medical work-up to rule out other causes. Caregiver-scored rating scales, specifically from the mother, have shown to be more accurate than teacher and youth-scored reports in identifying youths with bipolar disorder. 
Assessment is usually done on an outpatient basis; admission to an inpatient facility is considered if there is a risk to oneself or others. The most widely used criteria for diagnosing bipolar disorder are from the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the World Health Organization's (WHO) International Statistical Classification of Diseases and Related Health Problems, 10th Edition (ICD-10). The ICD-10 criteria are used more often in clinical settings outside of the U.S. while the DSM criteria are used within the U.S. and are the prevailing criteria used internationally in research studies. The DSM-5, published in 2013, includes further and more accurate specifiers compared to its predecessor, the DSM-IV-TR. This work has influenced the eleventh revision of the ICD, which includes the various diagnoses within the bipolar spectrum of the DSM-V. Several rating scales for the screening and evaluation of bipolar disorder exist, including the Bipolar spectrum diagnostic scale, Mood Disorder Questionnaire, the General Behavior Inventory and the Hypomania Checklist. The use of evaluation scales cannot substitute a full clinical interview but they serve to systematize the recollection of symptoms. On the other hand, instruments for screening bipolar disorder tend to have lower sensitivity. Differential diagnosis Bipolar disorder is classified by the International Classification of Diseases as a mental and behavioural disorder. Mental disorders that can have symptoms similar to those seen in bipolar disorder include schizophrenia, major depressive disorder, attention deficit hyperactivity disorder (ADHD), and certain personality disorders, such as borderline personality disorder. A key difference between bipolar disorder and borderline personality disorder is the nature of the mood swings; in contrast to the sustained changes to mood over days to weeks or longer, those of the latter condition (more accurately called emotional dysregulation) are sudden and often short-lived, and secondary to social stressors. Although there are no biological tests that are diagnostic of bipolar disorder, blood tests and/or imaging are carried out to investigate whether medical illnesses with clinical presentations similar to that of bipolar disorder are present before making a definitive diagnosis. Neurologic diseases such as multiple sclerosis, complex partial seizures, strokes, brain tumors, Wilson's disease, traumatic brain injury, Huntington's disease, and complex migraines can mimic features of bipolar disorder. An EEG may be used to exclude neurological disorders such as epilepsy, and a CT scan or MRI of the head may be used to exclude brain lesions. Additionally, disorders of the endocrine system such as hypothyroidism, hyperthyroidism, and Cushing's disease are in the differential as is the connective tissue disease systemic lupus erythematosus. Infectious causes of mania that may appear similar to bipolar mania include herpes encephalitis, HIV, influenza, or neurosyphilis. Certain vitamin deficiencies such as pellagra (niacin deficiency), vitamin B12 deficiency, folate deficiency, and Wernicke–Korsakoff syndrome (thiamine deficiency) can also lead to mania. Common medications that can cause manic symptoms include antidepressants, prednisone, Parkinson's disease medications, thyroid hormone, stimulants (including cocaine and methamphetamine), and certain antibiotics. 
Bipolar spectrum Bipolar spectrum disorders include: bipolar I disorder, bipolar II disorder, cyclothymic disorder and cases where subthreshold symptoms are found to cause clinically significant impairment or distress. These disorders involve major depressive episodes that alternate with manic or hypomanic episodes, or with mixed episodes that feature symptoms of both mood states. The concept of the bipolar spectrum is similar to that of Emil Kraepelin's original concept of manic depressive illness. Bipolar II disorder was established as a diagnosis in 1994 within DSM IV; though debate continues over whether it is a distinct entity, part of a spectrum, or exists at all. Criteria and subtypes The DSM and the ICD characterize bipolar disorder as a spectrum of disorders occurring on a continuum. The DSM-5 and ICD-11 lists three specific subtypes: Bipolar I disorder: At least one manic episode is necessary to make the diagnosis; depressive episodes are common in the vast majority of cases with bipolar disorder I, but are unnecessary for the diagnosis. Specifiers such as "mild, moderate, moderate-severe, severe" and "with psychotic features" should be added as applicable to indicate the presentation and course of the disorder. Bipolar II disorder: No manic episodes and one or more hypomanic episodes and one or more major depressive episodes. Hypomanic episodes do not go to the full extremes of mania (i.e., do not usually cause severe social or occupational impairment, and are without psychosis), and this can make bipolar II more difficult to diagnose, since the hypomanic episodes may simply appear as periods of successful high productivity and are reported less frequently than a distressing, crippling depression. Cyclothymia: A history of hypomanic episodes with periods of depression that do not meet criteria for major depressive episodes. When relevant, specifiers for peripartum onset and with rapid cycling should be used with any subtype. Individuals who have subthreshold symptoms that cause clinically significant distress or impairment, but do not meet full criteria for one of the three subtypes may be diagnosed with other specified or unspecified bipolar disorder. Other specified bipolar disorder is used when a clinician chooses to explain why the full criteria were not met (e.g., hypomania without a prior major depressive episode). If the condition is thought to have a non-psychiatric medical cause, the diagnosis of bipolar and related disorder due to another medical condition is made, while substance/medication-induced bipolar and related disorder is used if a medication is thought to have triggered the condition. Rapid cycling Most people who meet criteria for bipolar disorder experience a number of episodes, on average 0.4 to 0.7 per year, lasting three to six months. Rapid cycling, however, is a course specifier that may be applied to any bipolar subtype. It is defined as having four or more mood disturbance episodes within a one-year span. Rapid cycling is usually temporary but is common amongst people with bipolar disorder and affects 25.8–45.3% of them at some point in their life. These episodes are separated from each other by a remission (partial or full) for at least two months or a switch in mood polarity (i.e., from a depressive episode to a manic episode or vice versa). 
The definition of rapid cycling most frequently cited in the literature (including the DSM-5 and ICD-11) is that of Dunner and Fieve: at least four major depressive, manic, hypomanic, or mixed episodes during a 12-month period. The literature examining the pharmacological treatment of rapid cycling is sparse, and there is no clear consensus with respect to its optimal pharmacological management. "Ultra-rapid" and "ultradian" have been applied to faster-cycling types of bipolar disorder. People with the rapid cycling or faster-cycling subtypes of bipolar disorder tend to be more difficult to treat and less responsive to medications than other people with bipolar disorder.
Coexisting psychiatric conditions
The diagnosis of bipolar disorder can be complicated by coexisting (comorbid) psychiatric conditions including obsessive–compulsive disorder, substance-use disorder, eating disorders, attention deficit hyperactivity disorder, social phobia, premenstrual syndrome (including premenstrual dysphoric disorder), or panic disorder. A thorough longitudinal analysis of symptoms and episodes, assisted if possible by discussions with friends and family members, is crucial to establishing a treatment plan where these comorbidities exist. Children of parents with bipolar disorder more frequently have other mental health problems.
Children
In the 1920s, Kraepelin noted that manic episodes are rare before puberty. In general, bipolar disorder in children was not recognized in the first half of the twentieth century. This changed with an increased following of the DSM criteria in the last part of the twentieth century. The diagnosis of childhood bipolar disorder, while formerly controversial, has gained greater acceptance among childhood and adolescent psychiatrists. Around the beginning of the 21st century, the number of American children and adolescents diagnosed with bipolar disorder in community hospitals increased 4-fold over 10 years, reaching rates of up to 40%, while in outpatient clinics the rate doubled, reaching 6%. Studies using DSM criteria show that up to 1% of youth may have bipolar disorder. The DSM-5 has established a diagnosis—disruptive mood dysregulation disorder—that covers children with long-term, persistent irritability that had at times been misdiagnosed as bipolar disorder, distinct from irritability in bipolar disorder that is restricted to discrete mood episodes.
Adults
Bipolar disorder, on average, begins in adulthood: bipolar I starts at around age 18 and bipolar II at around age 22, on average. However, most people delay seeking treatment for an average of 8 years after symptoms start, and bipolar disorder is often misdiagnosed as another psychiatric disorder. There is no definitive association with race, ethnicity, or socioeconomic status. Adults with bipolar disorder report a lower quality of life, even outside of manic or depressive episodes. The disorder can put strain on marriage and other relationships, on holding a job, and on everyday functioning, and it is associated with higher rates of unemployment. Most have trouble keeping a job, which limits access to healthcare and can lead to further decline in mental health when treatment such as medication and therapy is not received.
Elderly
Bipolar disorder is uncommon in older patients, with a measured lifetime prevalence of 1% in those over 60 and a 12-month prevalence of 0.1–0.5% in people over 65. 
Despite this, it is overrepresented in psychiatric admissions, making up 4–8% of inpatient admissions to aged care psychiatry units, and the incidence of mood disorders is increasing overall with the aging population. Depressive episodes more commonly present with sleep disturbance, fatigue, hopelessness about the future, slowed thinking, and poor concentration and memory; the last three symptoms are seen in what is known as pseudodementia. Clinical features also differ between those with late-onset bipolar disorder and those who developed it early in life; the former group present with milder manic episodes and more prominent cognitive changes, and have a background of worse psychosocial functioning, while the latter present more commonly with mixed affective episodes and have a stronger family history of illness. Older people with bipolar disorder experience cognitive changes, particularly in executive functions such as abstract thinking and switching cognitive sets, as well as concentrating for long periods and decision-making.
Prevention
Attempts at prevention of bipolar disorder have focused on stress (such as childhood adversity or highly conflictual families) which, although not a diagnostically specific causal agent for bipolar disorder, does place genetically and biologically vulnerable individuals at risk for a more severe course of illness. Longitudinal studies have indicated that full-blown manic stages are often preceded by a variety of prodromal clinical features, providing support for the occurrence of an at-risk state of the disorder when an early intervention might prevent its further development and/or improve its outcome.
Management
The aim of management is to treat acute episodes safely with medication and to work with the patient in long-term maintenance to prevent further episodes and optimize function, using a combination of pharmacological and psychotherapeutic techniques. Hospitalization may be required, especially with the manic episodes present in bipolar I. This can be voluntary or (local legislation permitting) involuntary. Long-term inpatient stays are now less common due to deinstitutionalization, although these can still occur. Following (or in lieu of) a hospital admission, support services available can include drop-in centers, visits from members of a community mental health team or an Assertive Community Treatment team, supported employment, patient-led support groups, and intensive outpatient programs. These are sometimes referred to as partial-inpatient programs. Compared to the general population, people with bipolar disorder are less likely to frequently engage in physical exercise. Exercise may have physical and mental benefits for people with bipolar disorder, but there is a lack of research.
Psychosocial
Psychotherapy aims to assist a person with bipolar disorder in accepting and understanding their diagnosis, coping with various types of stress, improving their interpersonal relationships, and recognizing prodromal symptoms before full-blown recurrence. Cognitive behavioral therapy (CBT), family-focused therapy, and psychoeducation have the most evidence for efficacy in regard to relapse prevention, while interpersonal and social rhythm therapy and cognitive behavioral therapy appear the most effective in regard to residual depressive symptoms. Most studies have been based only on bipolar I, however, and treatment during the acute phase can be a particular challenge. 
Some clinicians emphasize the need to talk with individuals experiencing mania, to develop a therapeutic alliance in support of recovery.
Medication
Medications are often prescribed to help improve symptoms of bipolar disorder. Medications approved for treating bipolar disorder include mood stabilizers, antipsychotics, and certain antidepressants. Sometimes a combination of medications may also be suggested. The choice of medications may differ depending on the bipolar disorder episode type and on whether the person is experiencing unipolar or bipolar depression. Other factors to consider when deciding on an appropriate treatment approach include whether the person has any comorbidities, their response to previous therapies, adverse effects, and their desire to be treated.
Mood stabilizers
Lithium and the anticonvulsants carbamazepine, lamotrigine, and valproic acid are classed as mood stabilizers due to their effect on the mood states in bipolar disorder. Lithium has the best overall evidence and is considered an effective treatment for acute manic episodes, prevention of relapse, and bipolar depression. Lithium reduces the risk of suicide, self-harm, and death in people with bipolar disorder and is preferred for long-term mood stabilization. Lithium treatment is also associated with adverse effects, and it has been shown to erode kidney and thyroid function over extended periods. Valproate has become a commonly prescribed treatment and effectively treats manic episodes. Carbamazepine is less effective in preventing relapse than lithium or valproate. Lamotrigine has some efficacy in treating depression, and this benefit is greatest in more severe depression. Lamotrigine may have an effectiveness similar to that of lithium for treating bipolar disorder; however, there is evidence to suggest that lamotrigine is less effective at preventing recurrent manic episodes. Lamotrigine treatment has been shown to be safer than lithium treatment, with fewer adverse effects. Valproate and carbamazepine are teratogenic and should be avoided as a treatment in women of childbearing age, but discontinuation of these medications during pregnancy is associated with a high risk of relapse. The effectiveness of topiramate is unknown. Carbamazepine effectively treats manic episodes, with some evidence that it has greater benefit in rapid-cycling bipolar disorder or in those with more psychotic symptoms or more symptoms similar to those of schizoaffective disorder. Mood stabilizers are used for long-term maintenance but have not demonstrated the ability to quickly treat acute bipolar depression.
Antipsychotics
Antipsychotic medications are effective for short-term treatment of bipolar manic episodes and appear to be superior to lithium and anticonvulsants for this purpose. Atypical antipsychotics are also indicated for bipolar depression refractory to treatment with mood stabilizers. Olanzapine is effective in preventing relapses, although the supporting evidence is weaker than the evidence for lithium. A 2006 review found that haloperidol was an effective treatment for acute mania, that limited data supported no difference in overall efficacy between haloperidol, olanzapine, or risperidone, and that it could be less effective than aripiprazole.
Antidepressants
Antidepressant monotherapy is not recommended in the treatment of bipolar disorder and does not provide any benefit over mood stabilizers. 
Atypical antipsychotic medications (e.g., aripiprazole) are preferred over antidepressants to augment the effects of mood stabilizers due to the lack of efficacy of antidepressants in bipolar disorder. Treatment of bipolar disorder using antidepressants may carry a risk of affective switches, in which a person switches from depression to manic or hypomanic phases or to mixed states. There may also be a risk of accelerating cycling between phases when antidepressants are used in bipolar disorder. The risk of affective switches is higher in bipolar I depression; antidepressants are generally avoided in bipolar I disorder or used only with mood stabilizers when they are deemed necessary. Whether modern antidepressants cause mania or cycle acceleration in bipolar disorder is highly controversial, as is whether antidepressants provide any benefit over mood stabilizers alone.
Combined treatment approaches
Antipsychotics and mood stabilizers used together are quicker and more effective at treating mania than either class of drug used alone. Some analyses indicate that antipsychotics alone are also more effective at treating acute mania than mood stabilizers alone. A first-line treatment for depression in bipolar disorder is a combination of olanzapine and fluoxetine.
Other drugs
Short courses of benzodiazepines are used in addition to other medications for a calming effect until mood stabilizers become effective. Electroconvulsive therapy (ECT) is an effective form of treatment for acute mood disturbances in those with bipolar disorder, especially when psychotic or catatonic features are displayed. ECT is also recommended for use in pregnant women with bipolar disorder. It is unclear if ketamine (a common general dissociative anesthetic used in surgery) is useful in bipolar disorder. Gabapentin and pregabalin are not proven to be effective for treating bipolar disorder.
Children
Treating bipolar disorder in children involves medication and psychotherapy. The literature and research on the effects of psychosocial therapy on bipolar spectrum disorders are scarce, making it difficult to determine the efficacy of various therapies. Mood stabilizers and atypical antipsychotics are commonly prescribed. Among the former, lithium is the only compound approved by the FDA for children. Psychological treatment normally combines education about the disease, group therapy, and cognitive behavioral therapy. Long-term medication is often needed.
Resistance to treatment
The poor response of some patients with bipolar disorder to treatment has given rise to the concept of treatment-resistant bipolar disorder. Guidelines for the definition of treatment-resistant bipolar disorder and evidence-based options for its management were reviewed in 2020.
Management of obesity
A large proportion (approximately 68%) of people who seek treatment for bipolar disorder are obese or overweight, and managing obesity is important for reducing the risk of other health conditions that are associated with obesity. Management approaches include non-pharmacological, pharmacological, and surgical. Examples of non-pharmacological approaches include dietary interventions, exercise, behavioral therapies, or combined approaches. Pharmacological approaches include weight-loss medications or changing medications already being prescribed. Some people with bipolar disorder who have obesity may also be eligible for bariatric surgery. The effectiveness of these various approaches to improving or managing obesity in people with bipolar disorder is not clear. 
Prognosis A lifelong condition with periods of partial or full recovery in between recurrent episodes of relapse, bipolar disorder is considered to be a major health problem worldwide because of the increased rates of disability and premature mortality. It is also associated with co-occurring psychiatric and medical problems, higher rates of death from natural causes (e.g., cardiovascular disease), and high rates of initial under- or misdiagnosis, causing a delay in appropriate treatment and contributing to poorer prognoses. When compared to the general population, people with bipolar disorder also have higher rates of other serious medical comorbidities including diabetes mellitus, respiratory diseases, HIV, and hepatitis C virus infection. After a diagnosis is made, it remains difficult to achieve complete remission of all symptoms with the currently available psychiatric medications and symptoms often become progressively more severe over time. Compliance with medications is one of the most significant factors that can decrease the rate and severity of relapse and have a positive impact on overall prognosis. However, the types of medications used in treating BD commonly cause side effects and more than 75% of individuals with BD inconsistently take their medications for various reasons. Of the various types of the disorder, rapid cycling (four or more episodes in one year) is associated with the worst prognosis due to higher rates of self-harm and suicide. Individuals diagnosed with bipolar who have a family history of bipolar disorder are at a greater risk for more frequent manic/hypomanic episodes. Early onset and psychotic features are also associated with worse outcomes, as well as subtypes that are nonresponsive to lithium. Early recognition and intervention also improve prognosis as the symptoms in earlier stages are less severe and more responsive to treatment. Onset after adolescence is connected to better prognoses for both genders, and being male is a protective factor against higher levels of depression. For women, better social functioning before developing bipolar disorder and being a parent are protective towards suicide attempts. Functioning Changes in cognitive processes and abilities are seen in mood disorders, with those of bipolar disorder being greater than those in major depressive disorder. These include reduced attentional and executive capabilities and impaired memory. People with bipolar disorder often experience a decline in cognitive functioning during (or possibly before) their first episode, after which a certain degree of cognitive dysfunction typically becomes permanent, with more severe impairment during acute phases and moderate impairment during periods of remission. As a result, two-thirds of people with BD continue to experience impaired psychosocial functioning in between episodes even when their mood symptoms are in full remission. A similar pattern is seen in both BD-I and BD-II, but people with BD-II experience a lesser degree of impairment. When bipolar disorder occurs in children, it severely and adversely affects their psychosocial development. Children and adolescents with bipolar disorder have higher rates of significant difficulties with substance use disorders, psychosis, academic difficulties, behavioral problems, social difficulties, and legal problems. Cognitive deficits typically increase over the course of the illness. 
Higher degrees of impairment correlate with the number of previous manic episodes and hospitalizations, and with the presence of psychotic symptoms. Early intervention can slow the progression of cognitive impairment, while treatment at later stages can help reduce distress and negative consequences related to cognitive dysfunction. Despite the overly ambitious goals that are frequently part of manic episodes, symptoms of mania undermine the ability to achieve these goals and often interfere with an individual's social and occupational functioning. One-third of people with BD remain unemployed for one year following a hospitalization for mania. Depressive symptoms during and between episodes, which occur much more frequently for most people than hypomanic or manic symptoms over the course of illness, are associated with lower functional recovery in between episodes, including unemployment or underemployment for both BD-I and BD-II. However, the course of illness (duration, age of onset, number of hospitalizations, and the presence or absence of rapid cycling) and cognitive performance are the best predictors of employment outcomes in individuals with bipolar disorder, followed by symptoms of depression and years of education.
Recovery and recurrence
A naturalistic study in 2003 by Tohen and coworkers of patients followed from their first admission for mania or a mixed episode (representing the hospitalized and therefore most severe cases) found that 50% achieved syndromal recovery (no longer meeting criteria for the diagnosis) within six weeks and 98% within two years. Within two years, 72% achieved symptomatic recovery (no symptoms at all) and 43% achieved functional recovery (regaining of prior occupational and residential status). However, 40% went on to experience a new episode of mania or depression within 2 years of syndromal recovery, and 19% switched phases without recovery. Symptoms preceding a relapse (prodromal), especially those related to mania, can be reliably identified by people with bipolar disorder. There have been attempts to teach patients coping strategies for when they notice such symptoms, with encouraging results.
Suicide
Bipolar disorder can cause suicidal ideation that leads to suicide attempts. Individuals whose bipolar disorder begins with a depressive or mixed affective episode seem to have a poorer prognosis and an increased risk of suicide. One out of two people with bipolar disorder attempts suicide at least once during their lifetime, and many attempts end in death. The annual average suicide rate is 0.4–1.4%, which is 30 to 60 times greater than that of the general population. The number of deaths from suicide in bipolar disorder is between 18 and 25 times higher than would be expected in similarly aged people without bipolar disorder. The lifetime risk of suicide is much higher in those with bipolar disorder, with an estimated 34% of people attempting suicide and 15–20% dying by suicide. Risk factors for suicide attempts and death from suicide in people with bipolar disorder include older age, prior suicide attempts, a depressive or mixed index episode (first episode), a manic index episode with psychotic symptoms, hopelessness or psychomotor agitation present during the episodes, a co-existing anxiety disorder, a first-degree relative with a mood disorder or suicide, interpersonal conflicts, occupational problems, bereavement, or social isolation. 
Epidemiology
Bipolar disorder is the sixth leading cause of disability worldwide and has a lifetime prevalence of about 1 to 3% in the general population. However, a reanalysis of data from the National Epidemiological Catchment Area survey in the United States suggested that 0.8% of the population experience a manic episode at least once (the diagnostic threshold for bipolar I) and a further 0.5% have a hypomanic episode (the diagnostic threshold for bipolar II or cyclothymia). Including sub-threshold diagnostic criteria, such as one or two symptoms over a short time period, an additional 5.1% of the population, adding up to a total of 6.4%, were classified as having a bipolar spectrum disorder. A more recent analysis of data from a second US National Comorbidity Survey found that 1% met lifetime prevalence criteria for bipolar I, 1.1% for bipolar II, and 2.4% for subthreshold symptoms. Estimates of how many children and young adults have bipolar disorder vary from 0.6 to 15% depending on differing settings, methods, and referral practices, raising suspicions of overdiagnosis. One meta-analysis of bipolar disorder in young people worldwide estimated that about 1.8% of people between the ages of seven and 21 have bipolar disorder. As in adults, bipolar disorder in children and adolescents is thought to occur at a similar frequency in boys and girls. There are conceptual and methodological limitations and variations in the findings. Prevalence studies of bipolar disorder are typically carried out by lay interviewers who follow fully structured/fixed interview schemes; responses to single items from such interviews may have limited validity. In addition, diagnoses (and therefore estimates of prevalence) vary depending on whether a categorical or spectrum approach is used. This consideration has led to concerns about the potential for both underdiagnosis and overdiagnosis. The incidence of bipolar disorder is similar in men and women as well as across different cultures and ethnic groups. A 2000 study by the World Health Organization found that prevalence and incidence of bipolar disorder are very similar across the world. Age-standardized prevalence per 100,000 ranged from 421.0 in South Asia to 481.7 in Africa and Europe for men, and from 450.3 in Africa and Europe to 491.6 in Oceania for women. However, severity may differ widely across the globe. Disability-adjusted life year rates, for example, appear to be higher in developing countries, where medical coverage may be poorer and medication less available. Within the United States, Asian Americans have significantly lower rates than their African American and European American counterparts. In 2017, the Global Burden of Disease Study estimated there were 4.5 million new cases and a total of 45.5 million cases globally.
History
In the early 1800s, French psychiatrist Jean-Étienne Dominique Esquirol's lypemania, one of his affective monomanias, was the first elaboration on what was to become modern depression. The basis of the current conceptualization of bipolar illness can be traced back to the 1850s. In 1850, Jean-Pierre Falret described "circular insanity" (folie circulaire); the lecture was summarized in 1851 in the Gazette des hôpitaux ("Hospital Gazette"). Three years later, in 1854, Jules-Gabriel-François Baillarger (1809–1890) described to the French Imperial Académie Nationale de Médecine a biphasic mental illness causing recurrent oscillations between mania and melancholia, which he termed folie à double forme ("madness in double form"). 
Baillarger's original paper, "De la folie à double forme", appeared in the medical journal Annales médico-psychologiques (Medico-psychological annals) in 1854. These concepts were developed by the German psychiatrist Emil Kraepelin (1856–1926), who, using Kahlbaum's concept of cyclothymia, categorized and studied the natural course of untreated bipolar patients. He coined the term manic depressive psychosis after noting that periods of acute illness, manic or depressive, were generally punctuated by relatively symptom-free intervals in which the patient was able to function normally. The term "manic–depressive reaction" appeared in the first version of the DSM in 1952, influenced by the legacy of Adolf Meyer. Subtyping into "unipolar" depressive disorders and bipolar disorders has its origin in Karl Kleist's concept – since 1911 – of unipolar and bipolar affective disorders, which was used by Karl Leonhard in 1957 to differentiate between unipolar and bipolar disorder in depression. These subtypes have been regarded as separate conditions since publication of the DSM-III. The subtypes bipolar II and rapid cycling have been included since the DSM-IV, based on work from the 1970s by David Dunner, Elliot Gershon, Frederick Goodwin, Ronald Fieve, and Joseph Fleiss.
Society and culture
Cost
The United States spent approximately $202.1 billion on people diagnosed with bipolar I disorder (excluding other subtypes of bipolar disorder and undiagnosed people) in 2015. One analysis estimated that the United Kingdom spent approximately £5.2 billion on the disorder in 2007. In addition to the economic costs, bipolar disorder is a leading cause of disability and lost productivity worldwide. People with bipolar disorder are generally more disabled, have a lower level of functioning, a longer duration of illness, and increased rates of work absenteeism and decreased productivity when compared to people experiencing other mental health disorders. The decrease in productivity seen in those who care for people with bipolar disorder also significantly contributes to these costs.
Advocacy
There are widespread issues with social stigma, stereotypes, and prejudice against individuals with a diagnosis of bipolar disorder. In 2000, actress Carrie Fisher went public with her bipolar disorder diagnosis. She became one of the most well-recognized advocates for people with bipolar disorder in the public eye and fiercely advocated to eliminate the stigma surrounding mental illnesses, including bipolar disorder. Stephen Fried, who has written extensively on the topic, noted that Fisher helped to draw attention to the disorder's chronic, relapsing nature and to the fact that relapses do not indicate a lack of discipline or moral shortcomings. Since being diagnosed at age 37, actor Stephen Fry has pushed to raise awareness of the condition, including with his 2006 documentary Stephen Fry: The Secret Life of the Manic Depressive. In an effort to ease the social stigma associated with bipolar disorder, the orchestra conductor Ronald Braunstein cofounded the ME/2 Orchestra with his wife Caroline Whiddon in 2011. Braunstein was diagnosed with bipolar disorder in 1985, and his concerts with the ME/2 Orchestra were conceived in order to create a welcoming performance environment for his musical colleagues, while also raising public awareness about mental illness.
Notable cases
Numerous authors have written about bipolar disorder and many successful people have openly discussed their experience with it. 
Kay Redfield Jamison, a clinical psychologist and professor of psychiatry at the Johns Hopkins University School of Medicine, profiled her own bipolar disorder in her memoir An Unquiet Mind (1995). It is likely that Grigory Potemkin, Russian statesman and alleged husband of Catherine the Great, suffered from some kind of bipolar disorder. Several celebrities have also publicly shared that they have bipolar disorder; in addition to Carrie Fisher and Stephen Fry these include Catherine Zeta-Jones, Mariah Carey, Kanye West, Jane Pauley, Demi Lovato, Selena Gomez, and Russell Brand.
Media portrayals
Several dramatic works have portrayed characters with traits suggestive of the diagnosis, which have been the subject of discussion by psychiatrists and film experts alike. In Mr. Jones (1993), the title character (Richard Gere) swings from a manic episode into a depressive phase and back again, spending time in a psychiatric hospital and displaying many of the features of the syndrome. In The Mosquito Coast (1986), Allie Fox (Harrison Ford) displays some features including recklessness, grandiosity, increased goal-directed activity and mood lability, as well as some paranoia. Psychiatrists have suggested that Willy Loman, the main character in Arthur Miller's classic play Death of a Salesman, has bipolar disorder. The 2009 drama 90210 featured a character, Silver, who was diagnosed with bipolar disorder. Stacey Slater, a character from the BBC soap EastEnders, has been diagnosed with the disorder. The storyline was developed as part of the BBC's Headroom campaign. The Channel 4 soap Brookside had earlier featured a story about bipolar disorder when the character Jimmy Corkhill was diagnosed with the condition. In Showtime's political thriller drama Homeland (2011), protagonist Carrie Mathison has bipolar disorder, which she has kept secret since her school days. The 2014 ABC medical drama Black Box featured a world-renowned neuroscientist with bipolar disorder. In the TV series Dave, the eponymous main character, played by Lil Dicky as a fictionalized version of himself, is an aspiring rapper. Lil Dicky's real-life hype man GaTa also plays himself. In one episode, after being off his medication and having an episode, GaTa tearfully confesses to having bipolar disorder. GaTa has bipolar disorder in real life but, like his character in the show, he is able to manage it with medication.
Creativity
A link between mental illness and professional success or creativity has been suggested, including in accounts by Socrates, Seneca the Younger, and Cesare Lombroso. Despite prominence in popular culture, the link between creativity and bipolar disorder has not been rigorously studied. This area of study also is likely affected by confirmation bias. Some evidence suggests that some heritable component of bipolar disorder overlaps with heritable components of creativity. Probands of people with bipolar disorder are more likely to be professionally successful, as well as to demonstrate temperamental traits similar to bipolar disorder. Furthermore, while studies of the frequency of bipolar disorder in creative population samples have been conflicting, full-blown bipolar disorder in creative samples is rare.
Research
Research directions for bipolar disorder in children include optimizing treatments, increasing the knowledge of the genetic and neurobiological basis of the pediatric disorder, and improving diagnostic criteria. 
Some treatment research suggests that psychosocial interventions that involve the family, psychoeducation, and skills building (through therapies such as CBT, DBT, and IPSRT) can provide benefit in addition to pharmacotherapy.
Biology and health sciences
Mental disorder
4542
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
Bra–ket notation
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread. Bra–ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "bracket". Quantum mechanics In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, and , and a vertical bar , to construct "bras" and "kets". A ket is of the form . Mathematically it denotes a vector, , in an abstract (complex) vector space , and physically it represents a state of some quantum system. A bra is of the form . Mathematically it denotes a linear form , i.e. a linear map that maps each vector in to a number in the complex plane . Letting the linear functional act on a vector is written as . Assume that on there exists an inner product with antilinear first argument, which makes an inner product space. Then with this inner product each vector can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: . The correspondence between these notations is then . The linear form is a covector to , and the set of all covectors forms a subspace of the dual vector space , to the initial vector space . The purpose of this linear form can now be understood in terms of making projections onto the state to find how linearly dependent two states are, etc. For the vector space , kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplication. If has the standard Hermitian inner product , under this identification, the identification of kets and bras and vice versa provided by the inner product is taking the Hermitian conjugate (denoted ). It is common to suppress the vector or linear form from the bra–ket notation and only use a label inside the typography for the bra or ket. For example, the spin operator on a two-dimensional space of spinors has eigenvalues with eigenspinors . In bra–ket notation, this is typically denoted as , and . As above, kets and bras with the same label are interpreted as kets and bras corresponding to each other using the inner product. In particular, when also identified with row and column vectors, kets and bras with the same label are identified with Hermitian conjugate column and row vectors. Bra–ket notation was effectively established in 1939 by Paul Dirac; it is thus also known as Dirac notation, despite the notation having a precursor in Hermann Grassmann's use of for inner products nearly 100 years earlier. Vector spaces Vectors vs kets In mathematics, the term "vector" is used for an element of any vector space. In physics, however, the term "vector" tends to refer almost exclusively to quantities like displacement or velocity, which have components that relate directly to the three dimensions of space, or relativistically, to the four of spacetime. Such vectors are typically denoted with over arrows (), boldface () or indices (). 
In quantum mechanics, a quantum state is typically represented as an element of a complex Hilbert space, for example, the infinite-dimensional vector space of all possible wavefunctions (square integrable functions mapping each point of 3D space to a complex number) or some more abstract Hilbert space constructed more algebraically. To distinguish this type of vector from those described above, it is common and useful in physics to denote an element of an abstract complex vector space as a ket , to refer to it as a "ket" rather than as a vector, and to pronounce it "ket-" or "ket-A" for . Symbols, letters, numbers, or even words—whatever serves as a convenient label—can be used as the label inside a ket, with the making clear that the label indicates a vector in vector space. In other words, the symbol "" has a recognizable mathematical meaning as to the kind of variable being represented, while just the "" by itself does not. For example, is not necessarily equal to . Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics through a listing of their quantum numbers. At its simplest, the label inside the ket is the eigenvalue of a physical operator, such as , , , etc. Notation Since kets are just vectors in a Hermitian vector space, they can be manipulated using the usual rules of linear algebra. For example: Note how the last line above involves infinitely many different kets, one for each real number . Since the ket is an element of a vector space, a bra is an element of its dual space, i.e. a bra is a linear functional which is a linear map from the vector space to the complex numbers. Thus, it is useful to think of kets and bras as being elements of different vector spaces (see below however) with both being different useful concepts. A bra and a ket (i.e. a functional and a vector), can be combined to an operator of rank one with outer product Inner product and bra–ket identification on Hilbert space The bra–ket notation is particularly useful in Hilbert spaces which have an inner product that allows Hermitian conjugation and identifying a vector with a continuous linear functional, i.e. a ket with a bra, and vice versa (see Riesz representation theorem). The inner product on Hilbert space (with the first argument anti linear as preferred by physicists) is fully equivalent to an (anti-linear) identification between the space of kets and that of bras in the bra ket notation: for a vector ket define a functional (i.e. bra) by Bras and kets as row and column vectors In the simple case where we consider the vector space , a ket can be identified with a column vector, and a bra as a row vector. If, moreover, we use the standard Hermitian inner product on , the bra corresponding to a ket, in particular a bra and a ket with the same label are conjugate transpose. Moreover, conventions are set up in such a way that writing bras, kets, and linear operators next to each other simply imply matrix multiplication. In particular the outer product of a column and a row vector ket and bra can be identified with matrix multiplication (column vector times row vector equals matrix). For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector: Based on this, the bras and kets can be defined as: and then it is understood that a bra next to a ket implies matrix multiplication. 
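The matrix-multiplication convention just described can likewise be illustrated with small, arbitrarily chosen example vectors (a sketch, not values from the article): a bra next to a ket yields a 1x1 result, i.e. a number, while a ket next to a bra yields a square matrix, i.e. a rank-one operator.

import numpy as np

ket_v = np.array([[1.0], [2.0]], dtype=complex)   # |v> as a 2x1 column vector
ket_w = np.array([[0.0], [1.0 + 1.0j]])           # |w> as a 2x1 column vector
bra_w = ket_w.conj().T                            # <w| as a 1x2 row vector (conjugate transpose)

# <w|v>: row vector times column vector gives a 1x1 array holding the number 2 - 2i.
print(bra_w @ ket_v)

# |v><w|: column vector times row vector gives a 2x2 matrix, the outer product (a rank-one operator).
print(ket_v @ bra_w)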
The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice versa: because if one starts with the bra then performs a complex conjugation, and then a matrix transpose, one ends up with the ket Writing elements of a finite dimensional (or mutatis mutandis, countably infinite) vector space as a column vector of numbers requires picking a basis. Picking a basis is not always helpful because quantum mechanics calculations involve frequently switching between different bases (e.g. position basis, momentum basis, energy eigenbasis), and one can write something like "" without committing to any particular basis. In situations involving two different important basis vectors, the basis vectors can be taken in the notation explicitly and here will be referred simply as "" and "". Non-normalizable states and non-Hilbert spaces Bra–ket notation can be used even if the vector space is not a Hilbert space. In quantum mechanics, it is common practice to write down kets which have infinite norm, i.e. non-normalizable wavefunctions. Examples include states whose wavefunctions are Dirac delta functions or infinite plane waves. These do not, technically, belong to the Hilbert space itself. However, the definition of "Hilbert space" can be broadened to accommodate these states (see the Gelfand–Naimark–Segal construction or rigged Hilbert spaces). The bra–ket notation continues to work in an analogous way in this broader context. Banach spaces are a different generalization of Hilbert spaces. In a Banach space , the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply. Usage in quantum mechanics The mathematical structure of quantum mechanics is based in large part on linear algebra: Wave functions and other quantum states can be represented as vectors in a complex Hilbert space. (The exact structure of this Hilbert space depends on the situation.) In bra–ket notation, for example, an electron might be in the "state" . (Technically, the quantum states are rays of vectors in the Hilbert space, as corresponds to the same state for any nonzero complex number .) Quantum superpositions can be described as vector sums of the constituent states. For example, an electron in the state is in a quantum superposition of the states and . Measurements are associated with linear operators (called observables) on the Hilbert space of quantum states. Dynamics are also described by linear operators on the Hilbert space. For example, in the Schrödinger picture, there is a linear time evolution operator with the property that if an electron is in state right now, at a later time it will be in the state , the same for every possible . Wave function normalization is scaling a wave function so that its norm is 1. Since virtually every calculation in quantum mechanics involves vectors and linear operators, it can involve, and often does involve, bra–ket notation. A few examples follow: Spinless position–space wave function The Hilbert space of a spin-0 point particle is spanned by a "position basis" , where the label extends over the set of all points in position space. This label is the eigenvalue of the position operator acting on such a basis state, . 
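In numerical work this continuous position basis is usually replaced by a finite grid of sample points. The sketch below is only an illustration under that assumption (the grid, units and wave packet are invented for the example): a state is stored as its wavefunction values on the grid, and the momentum operator acts as a finite-difference derivative.

import numpy as np

# Discretized stand-in for the position basis: N sample points on an interval.
N = 1000
x, dx = np.linspace(-10.0, 10.0, N, retstep=True)
hbar = 1.0                                     # work in units where hbar = 1

# A ket |psi> represented by its wavefunction psi(x) = <x|psi>, here a Gaussian wave packet.
psi = np.exp(-x**2 / 2) * np.exp(1j * 1.5 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize so that <psi|psi> = 1

# Momentum operator in the position representation, p = -i*hbar*d/dx, applied numerically.
p_psi = -1j * hbar * np.gradient(psi, dx)

# Expectation value <psi|p|psi>, approximating the integral by a Riemann sum.
p_expect = np.sum(np.conj(psi) * p_psi) * dx
print(p_expect.real)    # close to 1.5, the wave number built into the packet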
Since there are an uncountably infinite number of vector components in the basis, this is an uncountably infinite-dimensional Hilbert space. The dimensions of the Hilbert space (usually infinite) and position space (usually 1, 2 or 3) are not to be conflated. Starting from any ket in this Hilbert space, one may define a complex scalar function of , known as a wavefunction, On the left-hand side, is a function mapping any point in space to a complex number; on the right-hand side, is a ket consisting of a superposition of kets with relative coefficients specified by that function. It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by For instance, the momentum operator has the following coordinate representation, One occasionally even encounters an expression such as , though this is something of an abuse of notation. The differential operator must be understood to be an abstract operator, acting on kets, that has the effect of differentiating wavefunctions once the expression is projected onto the position basis, even though, in the momentum basis, this operator amounts to a mere multiplication operator (by ). That is, to say, or Overlap of states In quantum mechanics the expression is typically interpreted as the probability amplitude for the state to collapse into the state . Mathematically, this means the coefficient for the projection of onto . It is also described as the projection of state onto state . Changing basis for a spin-1/2 particle A stationary spin- particle has a two-dimensional Hilbert space. One orthonormal basis is: where is the state with a definite value of the spin operator equal to + and is the state with a definite value of the spin operator equal to −. Since these are a basis, any quantum state of the particle can be expressed as a linear combination (i.e., quantum superposition) of these two states: where and are complex numbers. A different basis for the same Hilbert space is: defined in terms of rather than . Again, any state of the particle can be expressed as a linear combination of these two: In vector form, you might write depending on which basis you are using. In other words, the "coordinates" of a vector depend on the basis used. There is a mathematical relationship between , , and ; see change of basis. Pitfalls and ambiguous uses There are some conventions and uses of notation that may be confusing or ambiguous for the non-initiated or early student. Separation of inner product and vectors A cause for confusion is that the notation does not separate the inner-product operation from the notation for a (bra) vector. If a (dual space) bra-vector is constructed as a linear combination of other bra-vectors (for instance when expressing it in some basis) the notation creates some ambiguity and hides mathematical details. We can compare bra–ket notation to using bold for vectors, such as , and for the inner product. Consider the following dual space bra-vector in the basis : It has to be determined by convention if the complex numbers are inside or outside of the inner product, and each convention gives different results. Reuse of symbols It is common to use the same symbol for labels and constants. For example, , where the symbol is used simultaneously as the name of the operator , its eigenvector and the associated eigenvalue . Sometimes the hat is also dropped for operators, and one can see notation such as . 
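The overlap amplitudes and the change of basis just described can be made concrete for a spin-1/2 system with two-component vectors. The sketch below uses the conventional z-basis and x-basis states of elementary quantum mechanics; the particular coefficients a and b are arbitrary choices for the example.

import numpy as np

# z-basis kets |up_z> and |down_z> as column vectors.
up_z   = np.array([[1.0], [0.0]], dtype=complex)
down_z = np.array([[0.0], [1.0]], dtype=complex)

# x-basis kets written as linear combinations of the z-basis kets.
up_x   = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# An arbitrary normalized state a|up_z> + b|down_z>.
a, b = 0.6, 0.8j
psi = a * up_z + b * down_z

# Overlap <up_x|psi>: the probability amplitude for finding the state "spin up along x".
amp = (up_x.conj().T @ psi).item()
print(amp, abs(amp)**2)    # amplitude and probability; the x-basis probabilities sum to 1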
Hermitian conjugate of kets It is common to see the usage , where the dagger () corresponds to the Hermitian conjugate. This is however not correct in a technical sense, since the ket, , represents a vector in a complex Hilbert-space , and the bra, , is a linear functional on vectors in . In other words, is just a vector, while is the combination of a vector and an inner product. Operations inside bras and kets This is done for a fast notation of scaling vectors. For instance, if the vector is scaled by , it may be denoted . This can be ambiguous since is simply a label for a state, and not a mathematical object on which operations can be performed. This usage is more common when denoting vectors as tensor products, where part of the labels are moved outside the designed slot, e.g. . Linear operators Linear operators acting on kets A linear operator is a map that inputs a ket and outputs a ket. (In order to be called "linear", it is required to have certain properties.) In other words, if is a linear operator and is a ket-vector, then is another ket-vector. In an -dimensional Hilbert space, we can impose a basis on the space and represent in terms of its coordinates as a column vector. Using the same basis for , it is represented by an complex matrix. The ket-vector can now be computed by matrix multiplication. Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time. Linear operators acting on bras Operators can also be viewed as acting on bras from the right hand side. Specifically, if is a linear operator and is a bra, then is another bra defined by the rule (in other words, a function composition). This expression is commonly written as (cf. energy inner product) In an -dimensional Hilbert space, can be written as a row vector, and (as in the previous section) is an matrix. Then the bra can be computed by normal matrix multiplication. If the same state vector appears on both bra and ket side, then this expression gives the expectation value, or mean or average value, of the observable represented by operator for the physical system in the state . Outer products A convenient way to define linear operators on a Hilbert space is given by the outer product: if is a bra and is a ket, the outer product denotes the rank-one operator with the rule For a finite-dimensional vector space, the outer product can be understood as simple matrix multiplication: The outer product is an matrix, as expected for a linear operator. One of the uses of the outer product is to construct projection operators. Given a ket of norm 1, the orthogonal projection onto the subspace spanned by is This is an idempotent in the algebra of observables that acts on the Hilbert space. Hermitian conjugate operator Just as kets and bras can be transformed into each other (making into ), the element from the dual space corresponding to is , where denotes the Hermitian conjugate (or adjoint) of the operator . In other words, If is expressed as an matrix, then is its conjugate transpose. Properties Bra–ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. 
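Before moving on to the formal properties listed next, the constructions just described (outer products, projection operators, expectation values and the Hermitian conjugate of an operator) can be tied together in a short numerical sketch; the state and the observable below are arbitrary choices made for illustration.

import numpy as np

# A normalized ket |psi> in C^2 and its bra.
psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)
bra_psi = psi.conj().T

# Outer product |psi><psi|: the orthogonal projector onto the span of |psi>.
P = psi @ bra_psi
print(np.allclose(P @ P, P))          # True: the projector is idempotent

# A Hermitian operator (observable); the Pauli-Y matrix is used here purely as an example.
A = np.array([[0.0, -1.0j],
              [1.0j,  0.0]])

# Expectation value <psi|A|psi>: bras, operators and kets compose by matrix multiplication.
print((bra_psi @ A @ psi).item())     # (1+0j) for this particular state

# The Hermitian conjugate of the operator is the conjugate transpose of its matrix.
print(np.allclose(A.conj().T, A))     # True: this observable is self-adjoint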
In what follows, and denote arbitrary complex numbers, denotes the complex conjugate of , and denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets. Linearity Since bras are linear functionals, By the definition of addition and scalar multiplication of linear functionals in the dual space, Associativity Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra–ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example: and so forth. The expressions on the right (with no parentheses whatsoever) are allowed to be written unambiguously because of the equalities on the left. Note that the associative property does not hold for expressions that include nonlinear operators, such as the antilinear time reversal operator in physics. Hermitian conjugation Bra–ket notation makes it particularly easy to compute the Hermitian conjugate (also called dagger, and denoted ) of expressions. The formal rules are: The Hermitian conjugate of a bra is the corresponding ket, and vice versa. The Hermitian conjugate of a complex number is its complex conjugate. The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself—i.e., Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra–ket notation, its Hermitian conjugate can be computed by reversing the order of the components, and taking the Hermitian conjugate of each. These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows: Kets: Inner products: Note that is a scalar, so the Hermitian conjugate is just the complex conjugate, i.e., Matrix elements: Outer products: Composite bras and kets Two Hilbert spaces and may form a third space by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in and respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.) If is a ket in and is a ket in , the tensor product of the two kets is a ket in . This is written in various notations: See quantum entanglement and the EPR paradox for applications of this product. The unit operator Consider a complete orthonormal system (basis), for a Hilbert space , with respect to the norm from an inner product . From basic functional analysis, it is known that any ket can also be written as with the inner product on the Hilbert space. From the commutativity of kets with (complex) scalars, it follows that must be the identity operator, which sends each vector to itself. This, then, can be inserted in any expression without affecting its value; for example where, in the last line, the Einstein summation convention has been used to avoid clutter. In quantum mechanics, it often occurs that little or no information about the inner product of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients and of those vectors with respect to a specific (orthonormalized) basis. In this case, it is particularly useful to insert the unit operator into the bracket one time or more. 
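The rules above (reversing order under the dagger, forming composite kets as tensor products, and inserting the unit operator as a sum of |i><i| over an orthonormal basis) are easy to verify numerically. The sketch below uses randomly generated matrices and vectors purely as stand-ins; nothing in it comes from the article itself.

import numpy as np

rng = np.random.default_rng(0)

# A random operator A and random kets |v>, |w> in C^3.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
v = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))
w = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))

# Hermitian conjugation reverses order: (A|v>)^dagger equals <v|A^dagger.
print(np.allclose((A @ v).conj().T, v.conj().T @ A.conj().T))   # True

# Composite systems: the tensor product of |v> and |w> is the Kronecker product of the column vectors.
vw = np.kron(v, w)                     # a 9x1 column vector in the product space
print(vw.shape)

# Resolution of the identity: summing |i><i| over an orthonormal basis gives the unit operator,
# which can therefore be inserted into any expression without changing its value.
basis = np.eye(3, dtype=complex)
identity = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(3))
print(np.allclose(identity, np.eye(3)))   # True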
For more information, see Resolution of the identity, where Since , plane waves follow, In his book (1958), Ch. III.20, Dirac defines the standard ket which, up to a normalization, is the translationally invariant momentum eigenstate in the momentum representation, i.e., . Consequently, the corresponding wavefunction is a constant, , and as well as Typically, when all matrix elements of an operator such as are available, this resolution serves to reconstitute the full operator, Notation used by mathematicians The object physicists are considering when using bra–ket notation is a Hilbert space (a complete inner product space). Let be a Hilbert space and a vector in . What physicists would denote by is the vector itself. That is, Let be the dual space of . This is the space of linear functionals on . The embedding is defined by , where for every the linear functional satisfies for every the functional equation . Notational confusion arises when identifying and with and respectively. This is because of literal symbolic substitutions. Let and let . This gives One ignores the parentheses and removes the double bars. Moreover, mathematicians usually write the dual entity not at the first place, as the physicists do, but at the second one, and they usually use not an asterisk but an overline (which the physicists reserve for averages and the Dirac spinor adjoint) to denote complex conjugate numbers; i.e., for scalar products mathematicians usually write whereas physicists would write for the same quantity
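The difference in conventions described here also shows up in numerical libraries. NumPy's vdot, for example, conjugates its first argument, which matches the physicists' convention of an inner product that is antilinear in the first (bra) slot; the vectors below are arbitrary examples.

import numpy as np

v = np.array([1 + 1j, 2.0])
w = np.array([0.5j, 1 - 1j])

# Physicists' convention: <v|w> is antilinear (conjugate-linear) in the first argument.
phys = np.vdot(v, w)                # sum of conj(v) * w

# Many mathematicians instead take the inner product linear in the first argument.
math_conv = np.sum(v * np.conj(w))

# For the same pair of vectors the two conventions give complex-conjugate results.
print(phys, math_conv, np.isclose(phys, np.conj(math_conv)))   # ... True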
Physical sciences
Quantum mechanics
Physics
4543
https://en.wikipedia.org/wiki/Blue
Blue
Blue is one of the three primary colours in the RYB colour model (traditional colour theory), as well as in the RGB (additive) colour model. It lies between violet and cyan on the spectrum of visible light. The term blue generally describes colours perceived by humans observing light with a dominant wavelength that's between approximately 450 and 495 nanometres. Most blues contain a slight mixture of other colours; azure contains some green, while ultramarine contains some violet. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering. An optical effect called the Tyndall effect explains blue eyes. Distant objects appear more blue because of another optical effect called aerial perspective. Blue has been an important colour in art and decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and later, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America. In the 19th century, synthetic blue dyes and pigments gradually replaced organic dyes and mineral pigments. Dark blue became a common colour for military uniforms and later, in the late 20th century, for business suits. Because blue has commonly been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union. In the United States and Europe, blue is the colour that both men and women are most likely to choose as their favourite, with at least one recent survey showing the same across several other countries, including China, Malaysia, and Indonesia. Past surveys in the US and Europe have found that blue is the colour most commonly associated with harmony, confidence, masculinity, knowledge, intelligence, calmness, distance, infinity, the imagination, cold, and sadness. Etymology and linguistics The modern English word blue comes from Middle English or , from the Old French , a word of Germanic origin, related to the Old High German word (meaning 'shimmering, lustrous'). In heraldry, the word azure is used for blue. In Russian, Spanish, Mongolian, Irish, and some other languages, there is no single word for blue, but rather different words for light blue (, ; ) and dark blue (, ; ) (see Colour term). Several languages, including Japanese and Lakota Sioux, use the same word to describe blue and green. For example, in Vietnamese, the colour of both tree leaves and the sky is . In Japanese, the word for blue (, ) is often used for colours that English speakers would refer to as green, such as the colour of a traffic signal meaning "go". In Lakota, the word is used for both blue and green, the two colours not being distinguished in older Lakota (for more on this subject, see Blue–green distinction in language). Linguistic research indicates that languages do not begin by having a word for the colour blue. Colour names often developed individually in natural languages, typically beginning with black and white (or dark and light), and then adding red, and only much later – usually as the last main category of colour accepted in a language – adding the colour blue, probably when blue pigments could be manufactured reliably in the culture using that language. 
Optics and colour theory The term blue generally describes colours perceived by humans observing light with a dominant wavelength between approximately 450 and 495 nanometres. Blues with a higher frequency and thus a shorter wavelength gradually look more violet, while those with a lower frequency and a longer wavelength gradually appear more green. Purer blues are in the middle of this range, e.g., around 470 nanometres. Isaac Newton included blue as one of the seven colours in his first description of the visible spectrum. He chose seven colours because that was the number of notes in the musical scale, which he believed was related to the optical spectrum. He included indigo, the hue between blue and violet, as one of the separate colours, though today it is usually considered a hue of blue. In painting and traditional colour theory, blue is one of the three primary colours of pigments (red, yellow, blue), which can be mixed to form a wide gamut of colours. Red and blue mixed together form violet, blue and yellow together form green. Mixing all three primary colours together produces a dark brown. From the Renaissance onward, painters used this system to create their colours (see RYB colour model). The RYB model was used for colour printing by Jacob Christoph Le Blon as early as 1725. Later, printers discovered that more accurate colours could be created by using combinations of cyan, magenta, yellow, and black ink, put onto separate inked plates and then overlaid one at a time onto paper. This method could produce almost all the colours in the spectrum with reasonable accuracy. On the HSV colour wheel, the complement of blue is yellow; that is, a colour corresponding to an equal mixture of red and green light. On a colour wheel based on traditional colour theory (RYB) where blue was considered a primary colour, its complementary colour is considered to be orange (based on the Munsell colour wheel). LED In 1993, high-brightness blue LEDs were demonstrated by Shuji Nakamura of Nichia Corporation. In parallel, Isamu Akasaki and Hiroshi Amano of Nagoya University were working on a new development which revolutionized LED lighting. Nakamura was awarded the 2006 Millennium Technology Prize for his invention. Nakamura, Hiroshi Amano and Isamu Akasaki were awarded the Nobel Prize in Physics in 2014 for the invention of an efficient blue LED. Lasers Lasers emitting in the blue region of the spectrum became widely available to the public in 2010 with the release of inexpensive high-powered 445–447 nm laser diode technology. Previously the blue wavelengths were accessible only through DPSS which are comparatively expensive and inefficient, but still widely used by scientists for applications including optogenetics, Raman spectroscopy, and particle image velocimetry, due to their superior beam quality. Blue gas lasers are also still commonly used for holography, DNA sequencing, optical pumping, among other scientific and medical applications. Shades and variations Blue is the colour of light between violet and cyan on the visible spectrum. Hues of blue include indigo and ultramarine, closer to violet; pure blue, without any mixture of other colours; Azure, which is a lighter shade of blue, similar to the colour of the sky; Cyan, which is midway in the spectrum between blue and green, and the other blue-greens such as turquoise, teal, and aquamarine. Blue also varies in shade or tint; darker shades of blue contain black or grey, while lighter tints contain white. 
Darker shades of blue include ultramarine, cobalt blue, navy blue, and Prussian blue; while lighter tints include sky blue, azure, and Egyptian blue (for a more complete list see the List of colours). As a structural colour In nature, many blue phenomena arise from structural colouration, the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and exits such films. The geometry then determines that at certain angles, the light reflected from both surfaces interferes constructively, while at other angles, the light interferes destructively. Diverse colours therefore appear despite the absence of colourants. Colourants Artificial blues Egyptian blue, the first artificial pigment, was produced in the third millennium BC in Ancient Egypt. It is produced by heating pulverized sand, copper, and natron. It was used in tomb paintings and funereal objects to protect the dead in their afterlife. Prior to the 1700s, blue colourants for artwork were mainly based on lapis lazuli and the related mineral ultramarine. A breakthrough occurred in 1709 when German druggist and pigment maker Johann Jacob Diesbach discovered Prussian blue. The new blue arose from experiments involving heating dried blood with iron sulphides and was initially called Berliner Blau. By 1710 it was being used by the French painter Antoine Watteau, and later his successor Nicolas Lancret. It became immensely popular for the manufacture of wallpaper, and in the 19th century was widely used by French impressionist painters. Beginning in the 1820s, Prussian blue was imported into Japan through the port of Nagasaki. It was called bero-ai, or Berlin blue, and it became popular because it did not fade like traditional Japanese blue pigment, ai-gami, made from the dayflower. Prussian blue was used by both Hokusai, in his wave paintings, and Hiroshige. In 1799 a French chemist, Louis Jacques Thénard, made a synthetic cobalt blue pigment which became immensely popular with painters. In 1824 the Societé pour l'Encouragement d'Industrie in France offered a prize for the invention of an artificial ultramarine which could rival the natural colour made from lapis lazuli. The prize was won in 1826 by a chemist named Jean Baptiste Guimet, but he refused to reveal the formula of his colour. In 1828, another scientist, Christian Gmelin then a professor of chemistry in Tübingen, found the process and published his formula. This was the beginning of new industry to manufacture artificial ultramarine, which eventually almost completely replaced the natural product. In 1878 German chemists synthesized indigo. This product rapidly replaced natural indigo, wiping out vast farms growing indigo. It is now the blue of blue jeans. As the pace of organic chemistry accelerated, a succession of synthetic blue dyes were discovered including Indanthrone blue, which had even greater resistance to fading during washing or in the sun, and copper phthalocyanine. Dyes for textiles and food Woad and true indigo were once used but since the early 1900s, all indigo is synthetic. Produced on an industrial scale, indigo is the blue of blue jeans. Blue dyes are organic compounds, both synthetic and natural. For food, the triarylmethane dye Brilliant blue FCF is used for candies. The search continues for stable, natural blue dyes suitable for the food industry. Various raspberry-flavoured foods are dyed blue. This was done to distinguish strawberry, watermelon and raspberry-flavoured foods. 
The company ICEE used Blue No. 1 for their blue raspberry ICEEs. Pigments for painting and glass Blue pigments were once produced from minerals, especially lapis lazuli and its close relative ultramarine. These minerals were crushed, ground into powder, and then mixed with a quick-drying binding agent, such as egg yolk (tempera painting); or with a slow-drying oil, such as linseed oil, for oil painting. Two inorganic but synthetic blue pigments are cerulean blue (primarily cobalt(II) stanate: ) and Prussian blue (milori blue: primarily ). The chromophore in blue glass and glazes is cobalt(II). Diverse cobalt(II) salts such as cobalt carbonate or cobalt(II) aluminate are mixed with the silica prior to firing. The cobalt occupies sites otherwise filled with silicon. Inks Methyl blue is the dominant blue pigment in inks used in pens. Blueprinting involves the production of Prussian blue in situ. Inorganic compounds Certain metal ions characteristically form blue solutions or blue salts. Of some practical importance, cobalt is used to make the deep blue glazes and glasses. It substitutes for silicon or aluminum ions in these materials. Cobalt is the blue chromophore in stained glass windows, such as those in Gothic cathedrals and in Chinese porcelain beginning in the Tang dynasty. Copper(II) (Cu2+) also produces many blue compounds, including the commercial algicide copper(II) sulfate (CuSO4.5H2O). Similarly, vanadyl salts and solutions are often blue, e.g. vanadyl sulfate. In nature Sky and sea When sunlight passes through the atmosphere, the blue wavelengths are scattered more widely by the oxygen and nitrogen molecules, and more blue comes to our eyes. This effect is called Rayleigh scattering, after Lord Rayleigh and confirmed by Albert Einstein in 1911. The sea is seen as blue for largely the same reason: the water absorbs the longer wavelengths of red and reflects and scatters the blue, which comes to the eye of the viewer. The deeper the observer goes, the darker the blue becomes. In the open sea, only about 1% of light penetrates to a depth of 200 metres (see underwater and euphotic depth). The colour of the sea is also affected by the colour of the sky, reflected by particles in the water; and by algae and plant life in the water, which can make it look green; or by sediment, which can make it look brown. The farther away an object is, the more blue it often appears to the eye. For example, mountains in the distance often appear blue. This is the effect of atmospheric perspective; the farther an object is away from the viewer, the less contrast there is between the object and its background colour, which is usually blue. In a painting where different parts of the composition are blue, green and red, the blue will appear to be more distant, and the red closer to the viewer. The cooler a colour is, the more distant it seems. Blue light is scattered more than other wavelengths by the gases in the atmosphere, hence our "blue planet". Minerals Some of the most desirable gems are blue, including sapphire and tanzanite. Compounds of copper(II) are characteristically blue and so are many copper-containing minerals. Azurite (, with a deep blue colour, was once employed in medieval years, but it is unstable pigment, losing its colour especially under dry conditions. Lapis lazuli, mined in Afghanistan for more than three thousand years, was used for jewelry and ornaments, and later was crushed and powdered and used as a pigment. The more it was ground, the lighter the blue colour became. 
Natural ultramarine, made by grinding lapis lazuli into a fine powder, was the finest available blue pigment in the Middle Ages and the Renaissance. It was extremely expensive, and in Italian Renaissance art, it was often reserved for the robes of the Virgin Mary. Plants and fungi Intense efforts have focused on blue flowers and the possibility that natural blue colourants could be used as food dyes. Commonly, blue colours in plants are anthocyanins: "the largest group of water-soluble pigments found widespread in the plant kingdom". In the few plants that exploit structural colouration, brilliant colours are produced by structures within cells. The most brilliant blue colouration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils scatters blue light. The fruit of quandong (Santalum acuminatum) can appear blue owing to the same effect. Animals Blue-pigmented animals are relatively rare. Examples include butterflies of the genus Nessaea, where blue is created by pterobilin. Other blue pigments of animal origin include phorcabilin, used by other butterflies in Graphium and Papilio (specifically P. phorcas and P. weiskei), and sarpedobilin, which is used by Graphium sarpedon. Blue-pigmented organelles, known as "cyanosomes", exist in the chromatophores of at least two fish species, the mandarin fish and the picturesque dragonet. More commonly, blueness in animals is a structural colouration, an optical interference effect induced by organized nanometre-sized scales or fibres. Examples include the plumage of several birds like the blue jay and indigo bunting, the scales of butterflies like the morpho butterfly, collagen fibres in the skin of some species of monkey and opossum, and the iridophore cells in some fish and frogs. Eyes Blue eyes do not actually contain any blue pigment. Eye colour is determined by two factors: the pigmentation of the eye's iris and the scattering of light by the turbid medium in the stroma of the iris. In humans, the pigmentation of the iris varies from light brown to black. The appearance of blue, green, and hazel eyes results from the Tyndall scattering of light in the stroma, an optical effect similar to what accounts for the blueness of the sky. The irises of the eyes of people with blue eyes contain less dark melanin than those of people with brown eyes, which means that they absorb less short-wavelength blue light, which is instead reflected out to the viewer. Eye colour also varies depending on the lighting conditions, especially for lighter-coloured eyes. Blue eyes are most common in Ireland, the Baltic Sea area and Northern Europe, and are also found in Eastern, Central, and Southern Europe. Blue eyes are also found in parts of Western Asia, most notably in Afghanistan, Syria, Iraq, and Iran. In Estonia, 99% of people have blue eyes. In Denmark in 1978, only 8% of the population had brown eyes, though through immigration, today that number is about 11%. In Germany, about 75% have blue eyes. In the United States, as of 2006, 1 out of every 6 people, or 16.6% of the total population, and 22.3% of the white population, have blue eyes, compared with about half of Americans born in 1900, and a third of Americans born in 1950. Blue eyes are becoming less common among American children. In the US, males are 3–5% more likely to have blue eyes than females. 
History In the ancient world As early as the 7th millennium BC, lapis lazuli was mined in the Sar-i Sang mines, in Shortugai, and in other mines in Badakhshan province in northeast Afghanistan. Lapis lazuli artifacts, dated to 7570 BC, have been found at Bhirrana, which is the oldest site of Indus Valley civilisation. Lapis was highly valued by the Indus Valley Civilisation (7570–1900 BC). Lapis beads have been found at Neolithic burials in Mehrgarh, the Caucasus, and as far away as Mauritania. It was used in the funeral mask of Tutankhamun (1341–1323 BC). A term for Blue was relatively rare in many forms of ancient art and decoration, and even in ancient literature. The Ancient Greek poets described the sea as green, brown or "the colour of wine". The colour is mentioned several times in the Hebrew Bible as 'tekhelet'. Reds, blacks, browns, and ochres are found in cave paintings from the Upper Paleolithic period, but not blue. Blue was also not used for dyeing fabric until long after red, ochre, pink, and purple. This is probably due to the perennial difficulty of making blue dyes and pigments. On the other hand, the rarity of blue pigment made it even more valuable. The earliest known blue dyes were made from plants – woad in Europe, indigo in Asia and Africa, while blue pigments were made from minerals, usually either lapis lazuli or azurite, and required more. Blue glazes posed still another challenge since the early blue dyes and pigments were not thermally robust. In , the blue glaze Egyptian blue was introduced for ceramics, as well as many other objects. The Greeks imported indigo dye from India, calling it indikon, and they painted with Egyptian blue. Blue was not one of the four primary colours for Greek painting described by Pliny the Elder (red, yellow, black, and white). For the Romans, blue was the colour of mourning, as well as the colour of barbarians. The Celts and Germans reportedly dyed their faces blue to frighten their enemies, and tinted their hair blue when they grew old. The Romans made extensive use of indigo and Egyptian blue pigment, as evidenced, in part, by frescos in Pompeii. The Romans had many words for varieties of blue, including , , , , , , , and , but two words, both of foreign origin, became the most enduring; , from the Germanic word blau, which eventually became bleu or blue; and , from the Arabic word , which became azure. Blue was widely used in the decoration of churches in the Byzantine Empire. By contrast, in the Islamic world, blue was of secondary to green, believed to be the favourite colour of the Prophet Mohammed. At certain times in Moorish Spain and other parts of the Islamic world, blue was the colour worn by Christians and Jews, because only Muslims were allowed to wear white and green. In the Middle Ages In the art and life of Europe during the early Middle Ages, blue played a minor role. This changed dramatically between 1130 and 1140 in Paris, when the Abbe Suger rebuilt the Saint Denis Basilica. Suger considered that light was the visible manifestation of the Holy Spirit. He installed stained glass windows coloured with cobalt, which, combined with the light from the red glass, filled the church with a bluish violet light. The church became the marvel of the Christian world, and the colour became known as the . In the years that followed even more elegant blue stained glass windows were installed in other churches, including at Chartres Cathedral and Sainte-Chapelle in Paris. 
In the 12th century the Roman Catholic Church dictated that painters in Italy (and, consequently, the rest of Europe) paint the Virgin Mary with blue, which became associated with holiness, humility and virtue. In medieval paintings, blue was used to attract the attention of the viewer to the Virgin Mary. Paintings of the mythical King Arthur began to show him dressed in blue. The coat of arms of the kings of France became an azure or light blue shield, sprinkled with golden fleur-de-lis or lilies. Blue had come from obscurity to become the royal colour. Renaissance through 18th century Blue came into wider use beginning in the Renaissance, when artists began to paint the world with perspective, depth, shadows, and light from a single source. In Renaissance paintings, artists tried to create harmonies between blue and red, lightening the blue with lead white paint and adding shadows and highlights. Raphael was a master of this technique, carefully balancing the reds and the blues so no one colour dominated the picture. Ultramarine was the most prestigious blue of the Renaissance, being more expensive than gold. Wealthy art patrons commissioned works with the most expensive blues possible. In 1616 Richard Sackville commissioned a portrait of himself by Isaac Oliver with three different blues, including ultramarine pigment for his stockings. An industry for the manufacture of fine blue and white pottery began in the 14th century in Jingdezhen, China, using white Chinese porcelain decorated with patterns of cobalt blue, imported from Persia. It was first made for the family of the Emperor of China, then was exported around the world, with designs for export adapted to European subjects and tastes. The Chinese blue style was also adapted by Dutch craftsmen in Delft and English craftsmen in Staffordshire in the 17th-18th centuries. In the 18th century, blue and white porcelains were produced by Josiah Wedgwood and other British craftsmen. 19th-20th century The early 19th century saw the ancestor of the modern blue business suit, created by Beau Brummell (1776–1840), who set fashion at the London Court. It also saw the invention of blue jeans, a highly popular form of workers' costume, invented in 1853 by Jacob W. Davis, who used metal rivets to strengthen blue denim work clothing in the California gold fields. The invention was funded by San Francisco entrepreneur Levi Strauss, and spread around the world. Recognizing the emotional power of blue, many artists made it the central element of paintings in the 19th and 20th centuries. They included Pablo Picasso, Pavel Kuznetsov and the Blue Rose art group, and Kandinsky and Der Blaue Reiter (The Blue Rider) school. Henri Matisse expressed deep emotions with blue, saying, "A certain blue penetrates your soul." In the second half of the 20th century, painters of the abstract expressionist movement used blues to inspire ideas and emotions. The painter Mark Rothko observed that colour was "only an instrument"; his interest was "in expressing human emotions: tragedy, ecstasy, doom, and so on". In society and culture Uniforms In the 17th century, the Prince-Elector of Brandenburg, Frederick William I of Prussia, chose Prussian blue as the new colour of Prussian military uniforms, because it was made with woad, a local crop, rather than indigo, which was produced by the colonies of Brandenburg's rival, England. It was worn by the German army until World War I, with the exception of the soldiers of Bavaria, who wore sky-blue. 
In 1748, the Royal Navy adopted a dark shade of blue for the uniform of officers. It was first known as marine blue, now known as navy blue. The militia organized by George Washington selected blue and buff, the colours of the British Whig Party. Blue continued to be the colour of the field uniform of the US Army until 1902, and is still the colour of the dress uniform. In the 19th century, police in the United Kingdom, including the Metropolitan Police and the City of London Police, also adopted a navy blue uniform. Similar traditions were embraced in France and Austria. It was also adopted at about the same time for the uniforms of the officers of the New York City Police Department. Gender Blue is used to represent males. Beginning as a trend in the mid-19th century and applying primarily to clothing, gendered associations with blue became more widespread from the 1950s. The colour became associated with males after the Second World War. Religion Blue in Judaism: In the Torah, the Israelites were commanded to put fringes, tzitzit, on the corners of their garments, and to weave within these fringes a "twisted thread of blue (tekhelet)". In ancient days, this blue thread was made from a dye extracted from a Mediterranean snail called the hilazon. Maimonides claimed that this blue was the colour of "the clear noonday sky"; Rashi, the colour of the evening sky. According to several rabbinic sages, blue is the colour of God's Glory. Staring at this colour aids in meditation, bringing us a glimpse of the "pavement of sapphire, like the very sky for purity", which is a likeness of the Throne of God. Many items in the Mishkan, the portable sanctuary in the wilderness, such as the menorah, many of the vessels, and the Ark of the Covenant, were covered with blue cloth when transported from place to place. Blue in Christianity: Blue is particularly associated with the Virgin Mary. This was the result of a decree of Pope Gregory I (540–601), who ordered that all religious paintings should tell a story which was clearly comprehensible to all viewers, and that figures should be easily recognizable, especially that of the figure of Mary. If she was alone in the image, her costume was usually painted with the finest blue, ultramarine. If she was with Christ, her costume was usually painted with a less expensive pigment, to avoid outshining him. Blue in Hinduism: Many of the gods are depicted as having blue-coloured skin, particularly those associated with Vishnu, who is said to be the preserver of the world, and thus intimately connected to water. Krishna and Rama, Vishnu's avatars, are usually depicted with blue skin. Shiva, the destroyer deity, is also depicted in a light-blue hue, and is called Nīlakaṇṭha, or blue-throated, for having swallowed poison to save the universe during the Samudra Manthana, the churning of the ocean of milk. Blue is used to symbolically represent the fifth chakra, the throat chakra (Vishuddha). Blue in Sikhism: The Akali Nihang warriors wear all-blue attire. Guru Gobind Singh also had a blue roan horse. The Sikh Rehat Maryada states that the Nishan Sahib hoisted outside every Gurudwara should be of xanthic (Basanti in Punjabi) or greyish blue (modern-day navy blue; Surmaaee in Punjabi) colour. Blue in Paganism: Blue is associated with peace, truth, wisdom, protection, and patience. It helps with healing, psychic ability, harmony, and understanding. 
Sports In sports, blue is widely represented in uniforms in part because the majority of national teams wear the colours of their national flag. For example, the national men's football team of France are known as Les Bleus (the Blues). Similarly, Argentina, Italy, and Uruguay wear blue shirts. The Asian Football Confederation and the Oceania Football Confederation use blue text on their logos. Blue is well represented in baseball (Blue Jays), basketball, American football, and ice hockey. The Indian national cricket team wears a blue uniform during One Day International matches; as such, the team is also referred to as the "Men in Blue". Politics Unlike red or green, blue was not strongly associated with any particular country, religion or political movement. As the colour of harmony, it was chosen as the colour for the flags of the United Nations, the European Union, and NATO. In politics, blue is often used as the colour of conservative parties, contrasting with the red associated with left-wing parties. Some conservative parties that use the colour blue include the Conservative Party (UK), the Conservative Party of Canada, the Liberal Party of Australia, the Liberal Party of Brazil, and Likud of Israel. However, in some countries, blue is not associated with the main conservative party. In the United States, the liberal Democratic Party is associated with blue, while the conservative Republican Party is associated with red. US states which have been won by the Democratic Party in four consecutive presidential elections are termed "blue states", while those that have been won by the Republican Party are termed "red states". South Korea also uses this colour model, with the Democratic Party on the left using blue and the People Power Party on the right using red.
Physical sciences
Color terms
null
4548
https://en.wikipedia.org/wiki/Blizzard
Blizzard
A blizzard is a severe snowstorm characterized by strong sustained winds and low visibility, lasting for a prolonged period of time—typically at least three or four hours. A ground blizzard is a weather condition where snow that has already fallen is being blown by wind. Blizzards can have an immense size and usually stretch to hundreds or thousands of kilometres. Definition and etymology In the United States, the National Weather Service defines a blizzard as a severe snow storm characterized by strong winds causing blowing snow that results in low visibilities. The difference between a blizzard and a snowstorm is the strength of the wind, not the amount of snow. To be a blizzard, a snow storm must have sustained winds or frequent gusts that are greater than or equal to with blowing or drifting snow which reduces visibility to or less and must last for a prolonged period of time—typically three hours or more. Environment Canada defines a blizzard as a storm with wind speeds exceeding accompanied by visibility of or less, resulting from snowfall, blowing snow, or a combination of the two. These conditions must persist for a period of at least four hours for the storm to be classified as a blizzard, except north of the arctic tree line, where that threshold is raised to six hours. The Australia Bureau of Meteorology describes a blizzard as, "Violent and very cold wind which is laden with snow, some part, at least, of which has been raised from snow covered ground." While severe cold and large amounts of drifting snow may accompany blizzards, they are not required. Blizzards can bring whiteout conditions, and can paralyze regions for days at a time, particularly where snowfall is unusual or rare. A severe blizzard has winds over , near zero visibility, and temperatures of or lower. In Antarctica, blizzards are associated with winds spilling over the edge of the ice plateau at an average velocity of . Ground blizzard refers to a weather condition where loose snow or ice on the ground is lifted and blown by strong winds. The primary difference between a ground blizzard as opposed to a regular blizzard is that in a ground blizzard no precipitation is produced at the time, but rather all the precipitation is already present in the form of snow or ice at the surface. The Oxford English Dictionary concludes the term blizzard is likely onomatopoeic, derived from the same sense as blow, blast, blister, and bluster; the first recorded use of it for weather dates to 1829, when it was defined as a "violent blow". It achieved its modern definition by 1859, when it was in use in the western United States. The term became common in the press during the harsh winter of 1880–81. United States storm systems In the United States, storm systems powerful enough to cause blizzards usually form when the jet stream dips far to the south, allowing cold, dry polar air from the north to clash with warm, humid air moving up from the south. When cold, moist air from the Pacific Ocean moves eastward to the Rocky Mountains and the Great Plains, and warmer, moist air moves north from the Gulf of Mexico, all that is needed is a movement of cold polar air moving south to form potential blizzard conditions that may extend from the Texas Panhandle to the Great Lakes and Midwest. A blizzard also may be formed when a cold front and warm front mix together and a blizzard forms at the border line. 
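As a rough illustration of how wind, visibility and duration criteria of this kind combine into a single classification, the sketch below encodes commonly cited United States thresholds (roughly 56 km/h winds, visibility of 400 m or less, and a duration of at least three hours). These specific figures are assumptions made for the example, not values quoted from the article.

# A minimal sketch of a blizzard check; the threshold values are assumed for illustration.
def is_blizzard(wind_kmh: float, visibility_km: float, duration_hours: float) -> bool:
    """Return True if the reported conditions meet the assumed blizzard criteria."""
    return wind_kmh >= 56 and visibility_km <= 0.4 and duration_hours >= 3

# A storm with 70 km/h winds, 200 m visibility in blowing snow, lasting 5 hours:
print(is_blizzard(70, 0.2, 5))    # True

# Heavy snow with light wind is a snowstorm rather than a blizzard:
print(is_blizzard(30, 0.2, 5))    # False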
Another storm system occurs when a cold core low over the Hudson Bay area in Canada is displaced southward over southeastern Canada, the Great Lakes, and New England. When the rapidly moving cold front collides with warmer air coming north from the Gulf of Mexico, strong surface winds, significant cold air advection, and extensive wintry precipitation occur. Low pressure systems moving out of the Rocky Mountains onto the Great Plains, a broad expanse of flat land, much of it covered in prairie, steppe and grassland, can cause thunderstorms and rain to the south and heavy snows and strong winds to the north. With few trees or other obstructions to reduce wind and blowing, this part of the country is particularly vulnerable to blizzards with very low temperatures and whiteout conditions. In a true whiteout, there is no visible horizon. People can become lost in their own front yards, when the door is only away, and they would have to feel their way back. Motorists have to stop their cars where they are, as the road is impossible to see. Nor'easter blizzards A nor'easter is a macro-scale storm that occurs off the New England and Atlantic Canada coastlines. It gets its name from the direction the wind is coming from. The usage of the term in North America comes from the wind associated with many different types of storms, some of which can form in the North Atlantic Ocean and some of which form as far south as the Gulf of Mexico. The term is most often used in the coastal areas of New England and Atlantic Canada. This type of storm has characteristics similar to a hurricane. More specifically, it describes a low-pressure area whose center of rotation is just off the coast and whose leading winds in the left-forward quadrant rotate onto land from the northeast. High storm waves may sink ships at sea and cause coastal flooding and beach erosion. Notable nor'easters include The Great Blizzard of 1888, one of the worst blizzards in U.S. history. It dropped of snow and had sustained winds of more than that produced snowdrifts in excess of . Railroads were shut down and people were confined to their houses for up to a week. It killed 400 people, mostly in New York. Historic events 1972 Iran blizzard The 1972 Iran blizzard, which caused 4,000 reported deaths, was the deadliest blizzard in recorded history. Dropping as much as of snow, it completely covered 200 villages. After a snowfall lasting nearly a week, an area the size of Wisconsin was entirely buried in snow. 2008 Afghanistan blizzard The 2008 Afghanistan blizzard, was a fierce blizzard that struck Afghanistan on 10 January 2008. Temperatures fell to a low of , with up to of snow in the more mountainous regions, killing at least 926 people. It was the third deadliest blizzard in history. The weather also claimed more than 100,000 sheep and goats, and nearly 315,000 cattle died. The Snow Winter of 1880–1881 The winter of 1880–1881 is widely considered the most severe winter ever known in many parts of the United States. The initial blizzard in October 1880 brought snowfalls so deep that two-story homes experienced accumulations, as opposed to drifts, up to their second-floor windows. No one was prepared for deep snow so early in the winter. Farmers from North Dakota to Virginia were caught flat with fields unharvested, what grain that had been harvested unmilled, and their suddenly all-important winter stocks of wood fuel only partially collected. By January train service was almost entirely suspended from the region. 
Railroads hired scores of men to dig out the tracks but as soon as they had finished shoveling a stretch of line a new storm arrived, burying it again. There were no winter thaws and on February 2, 1881, a second massive blizzard struck that lasted for nine days. In towns the streets were filled with solid drifts to the tops of the buildings and tunneling was necessary to move about. Homes and barns were completely covered, compelling farmers to construct fragile tunnels in order to feed their stock. When the snow finally melted in late spring of 1881, huge sections of the plains experienced flooding. Massive ice jams clogged the Missouri River, and when they broke the downstream areas were inundated. Most of the town of Yankton, in what is now South Dakota, was washed away when the river overflowed its banks after the thaw. Novelization Many children—and their parents—learned of "The Snow Winter" through the children's book The Long Winter by Laura Ingalls Wilder, in which the author tells of her family's efforts to survive. The snow arrived in October 1880 and blizzard followed blizzard throughout the winter and into March 1881, leaving many areas snowbound throughout the winter. Accurate details in Wilder's novel include the blizzards' frequency and the deep cold, the Chicago and North Western Railway stopping trains until the spring thaw because the snow made the tracks impassable, the near-starvation of the townspeople, and the courage of her future husband Almanzo and another man, Cap Garland, who ventured out on the open prairie in search of a cache of wheat that no one was even sure existed. The Storm of the Century The Storm of the Century, also known as the Great Blizzard of 1993, was a large cyclonic storm that formed over the Gulf of Mexico on March 12, 1993, and dissipated in the North Atlantic Ocean on March 15. It is unique for its intensity, massive size and wide-reaching effect. At its height, the storm stretched from Canada towards Central America, but its main impact was on the United States and Cuba. The cyclone moved through the Gulf of Mexico, and then through the Eastern United States before moving into Canada. Areas as far south as northern Alabama and Georgia received a dusting of snow and areas such as Birmingham, Alabama, received up to with hurricane-force wind gusts and record low barometric pressures. Between Louisiana and Cuba, hurricane-force winds produced high storm surges across northwestern Florida, which along with scattered tornadoes killed dozens of people. In the United States, the storm was responsible for the loss of electric power to over 10 million customers. It is purported to have been directly experienced by nearly 40 percent of the country's population at that time. A total of 310 people, including 10 from Cuba, perished during this storm. The storm cost $6 to $10 billion in damages. List of blizzards North America 1700 to 1799 The Great Snow 1717 series of four snowstorms between February 27 and March 7, 1717. There were reports of about five feet of snow already on the ground when the first of the storms hit. By the end, there were about ten feet of snow and some drifts reaching , burying houses entirely. In the colonial era, this storm made travel impossible until the snow simply melted. Blizzard of 1765. March 24, 1765. Affected area from Philadelphia to Massachusetts. High winds and over of snowfall recorded in some areas. Blizzard of 1772. "The Washington and Jefferson Snowstorm of 1772". January 26–29, 1772. One of largest D.C. 
and Virginia area snowstorms ever recorded. Snow accumulations of recorded. The "Hessian Storm of 1778". December 26, 1778. Severe blizzard with high winds, heavy snows and bitter cold extending from Pennsylvania to New England. Snow drifts reported to be high in Rhode Island. Storm named for stranded Hessian troops in deep snows stationed in Rhode Island during the Revolutionary War. The Great Snow of 1786. December 4–10, 1786. Blizzard conditions and a succession of three harsh snowstorms produced snow depths of to from Pennsylvania to New England. Reportedly of similar magnitude of 1717 snowstorms. The Long Storm of 1798. November 19–21, 1798. Heavy snowstorm produced snow from Maryland to Maine. 1800 to 1850 Blizzard of 1805. January 26–28, 1805. Cyclone brought heavy snowstorm to New York City and New England. Snow fell continuously for two days where over of snow accumulated. New York City Blizzard of 1811. December 23–24, 1811. Severe blizzard conditions reported on Long Island, in New York City, and southern New England. Strong winds and tides caused damage to shipping in harbor. Luminous Blizzard of 1817. January 17, 1817. In Massachusetts and Vermont, a severe snowstorm was accompanied by frequent lightning and heavy thunder. St. Elmo's fire reportedly lit up trees, fence posts, house roofs, and even people. John Farrar professor at Harvard, recorded the event in his memoir in 1821. Great Snowstorm of 1821. January 5–7, 1821. Extensive snowstorm and blizzard spread from Virginia to New England. Winter of Deep Snow in 1830. December 29, 1830. Blizzard storm dumped in Kansas City and in Illinois. Areas experienced repeated storms thru mid-February 1831. "The Great Snowstorm of 1831" January 14–16, 1831. Produced snowfall over widest geographic area that was only rivaled, or exceeded by, the 1993 Blizzard. Blizzard raged from Georgia, to Ohio Valley, all the way to Maine. "The Big Snow of 1836" January 8–10, 1836. Produced to of snowfall in interior New York, northern Pennsylvania, and western New England. Philadelphia got a reported and New York City of snow. 1851 to 1900 Plains Blizzard of 1856. December 3–5, 1856. Severe blizzard-like storm raged for three days in Kansas and Iowa. Early pioneers suffered. "The Cold Storm of 1857" January 18–19, 1857. Produced severe blizzard conditions from North Carolina to Maine. Heavy snowfalls reported in east coast cities. Midwest Blizzard of 1864. January 1, 1864. Gale-force winds, driving snow, and low temperatures all struck simultaneously around Chicago, Wisconsin and Minnesota. Plains Blizzard of 1873. January 7, 1873. Severe blizzard struck the Great Plains. Many pioneers from the east were unprepared for the storm and perished in Minnesota and Iowa. Great Plains Easter Blizzard of 1873. April 13, 1873 Seattle Blizzard of 1880. January 6, 1880. Seattle area's greatest snowstorm to date. An estimated fell around the town. Many barns collapsed and all transportation halted. The Hard Winter of 1880-81. October 15, 1880. A blizzard in eastern South Dakota marked the beginning of this historically difficult season. Laura Ingalls Wilder's book The Long Winter details the effects of this season on early settlers. In the three year winter period from December 1885 to March 1888, the Great Plains and Eastern United States suffered a series of the worst blizzards in this nation's history ending with the Schoolhouse Blizzard and the Great Blizzard of 1888. 
The massive explosion of the volcano Krakatoa in the South Pacific late in August 1883 is a suspected cause of these huge blizzards during these several years. The clouds of ash it emitted continued to circulate around the world for many years. Weather patterns continued to be chaotic for years, and temperatures did not return to normal until 1888. Record rainfall was experienced in Southern California during July 1883 to June 1884. The Krakatoa eruption injected an unusually large amount of sulfur dioxide (SO2) gas high into the stratosphere which reflects sunlight and helped cool the planet over the next few years until the suspended atmospheric sulfur fell to ground. Plains Blizzard of late 1885. In Kansas, heavy snows of late 1885 had piled drifts high. Kansas Blizzard of 1886. First week of January 1886. Reported that 80 percent of the cattle were frozen to death in that state alone from the cold and snow. January 1886 Blizzard. January 9, 1886. Same system as Kansas 1886 Blizzard that traveled eastward. Great Plains Blizzards of late 1886. On November 13, 1886, it reportedly began to snow and did not stop for a month in the Great Plains region. Great Plains Blizzard of 1887. January 9–11, 1887. Reported 72-hour blizzard that covered parts of the Great Plains in more than of snow. Winds whipped and temperatures dropped to around . So many cows that were not killed by the cold soon died from starvation. When spring arrived, millions of the animals were dead, with around 90 percent of the open range's cattle rotting where they fell. Those present reported carcasses as far as the eye could see. Dead cattle clogged up rivers and spoiled drinking water. Many ranchers went bankrupt and others simply called it quits and moved back east. The "Great Die-Up" from the blizzard effectively concluded the romantic period of the great Plains cattle drives. Schoolhouse Blizzard of 1888 North American Great Plains. January 12–13, 1888. What made the storm so deadly was the timing (during work and school hours), the suddenness, and the brief spell of warmer weather that preceded it. In addition, the very strong wind fields behind the cold front and the powdery nature of the snow reduced visibilities on the open plains to zero. People ventured from the safety of their homes to do chores, go to town, attend school, or simply enjoy the relative warmth of the day. As a result, thousands of people—including many schoolchildren—got caught in the blizzard. Great Blizzard of March 1888 March 11–14, 1888. One of the most severe recorded blizzards in the history of the United States. On March 12, an unexpected northeaster hit New England and the mid-Atlantic, dropping up to of snow in the space of three days. New York City experienced its heaviest snowfall recorded to date at that time, all street railcars were stranded, and the storm led to the creation of the NYC subway system. Snowdrifts reached up to the second story of some buildings. Some 400 people died from this blizzard, including many sailors aboard vessels that were beset by gale-force winds and turbulent seas. Great Blizzard of 1899 February 11–14, 1899. An extremely unusual blizzard in that it reached into the far southern states of the US. It hit in February, and the area around Washington, D.C., experienced 51 hours straight of snowfall. The port of New Orleans was totally iced over; revelers participating in the New Orleans Mardi Gras had to wait for the parade routes to be shoveled free of snow. 
Concurrent with this blizzard was the extremely cold arctic air. Many city and state record low temperatures date back to this event, including all-time records for locations in the Midwest and South. State record lows: Nebraska reached , Ohio experienced , Louisiana bottomed out at , and Florida dipped below zero to . 1901 to 1939 Great Lakes Storm of 1913 November 7–10, 1913. "The White Hurricane" of 1913 was the deadliest and most destructive natural disaster ever to hit the Great Lakes Basin in the Midwestern United States and the Canadian province of Ontario. It produced wind gusts, waves over high, and whiteout snowsqualls. It killed more than 250 people, destroyed 19 ships, and stranded 19 others. Blizzard of 1918. January 11, 1918. Vast blizzard-like storm moved through Great Lakes and Ohio Valley. 1920 North Dakota blizzard March 15–18, 1920 Knickerbocker Storm January 27–28, 1922 1940 to 1949 Armistice Day Blizzard of 1940 November 10–12, 1940. Took place in the Midwest region of the United States on Armistice Day. This "Panhandle hook" winter storm cut a through the middle of the country from Kansas to Michigan. The morning of the storm was unseasonably warm but by mid afternoon conditions quickly deteriorated into a raging blizzard that would last into the next day. A total of 145 deaths were blamed on the storm, almost a third of them duck hunters who had taken time off to take advantage of the ideal hunting conditions. Weather forecasters had not predicted the severity of the oncoming storm, and as a result the hunters were not dressed for cold weather. When the storm began many hunters took shelter on small islands in the Mississippi River, and the winds and waves overcame their encampments. Some became stranded on the islands and then froze to death in the single-digit temperatures that moved in over night. Others tried to make it to shore and drowned. North American blizzard of 1947 December 25–26, 1947. Was a record-breaking snowfall that began on Christmas Day and brought the Northeast United States to a standstill. Central Park in New York City got of snowfall in 24 hours with deeper snows in suburbs. It was not accompanied by high winds, but the snow fell steadily with drifts reaching . Seventy-seven deaths were attributed to the blizzard. The Blizzard of 1949 - The first blizzard started on Sunday, January 2, 1949; it lasted for three days. It was followed by two more months of blizzard after blizzard with high winds and bitter cold. Deep drifts isolated southeast Wyoming, northern Colorado, western South Dakota and western Nebraska, for weeks. Railroad tracks and roads were all drifted in with drifts of and more. Hundreds of people that had been traveling on trains were stranded. Motorists that had set out on January 2 found their way to private farm homes in rural areas and hotels and other buildings in towns; some dwellings were so crowded that there wasn't enough room for all to sleep at once. It would be weeks before they were plowed out. The Federal government quickly responded with aid, airlifting food and hay for livestock. The total rescue effort involved numerous volunteers and local agencies plus at least ten major state and federal agencies from the U.S. Army to the National Park Service. Private businesses, including railroad and oil companies, also lent manpower and heavy equipment to the work of plowing out. The official death toll was 76 people and one million livestock. 
1950 to 1959 Great Appalachian Storm of November 1950 November 24–30, 1950 March 1958 Nor'easter blizzard March 18–21, 1958. The Mount Shasta California Snowstorm of 1959 – The storm dumped of snow on Mount Shasta. The bulk of the snow fell on unpopulated mountainous areas, barely disrupting the residents of the Mount Shasta area. The amount of snow recorded is the largest snowfall from a single storm in North America. 1960 to 1969 March 1960 Nor'easter blizzard March 2–5, 1960 December 1960 Nor'easter blizzard December 12–14, 1960. Wind gusts up to . March 1962 Nor'easter Great March Storm of 1962 – Ash Wednesday. North Carolina and Virginia blizzards. It struck during the spring high-tide season and remained mostly stationary for almost 5 days, causing significant damage along the eastern coast; Assateague Island was under water, and it dumped of snow in Virginia. North American blizzard of 1966 January 27–31, 1966 Chicago Blizzard of 1967 January 26–27, 1967 February 1969 nor'easter February 8–10, 1969 March 1969 Nor'easter blizzard March 9, 1969 December 1969 Nor'easter blizzard December 25–28, 1969. 1970 to 1979 The Great Storm of 1975 known as the "Super Bowl Blizzard" or "Minnesota's Storm of the Century". January 9–12, 1975. Wind chills of to recorded, deep snowfalls. Groundhog Day gale of 1976 February 2, 1976 Buffalo Blizzard of 1977 January 28 – February 1, 1977. There were several feet of packed snow already on the ground, and the blizzard brought with it enough snow to reach Buffalo's record for the most snow in one season – . Great Blizzard of 1978 also called the "Cleveland Superbomb". January 25–27, 1978. It was one of the worst snowstorms the Midwest has ever seen. Wind gusts approached , causing snowdrifts to reach heights of in some areas, making roadways impassable. The storm reached maximum intensity over southern Ontario, Canada. Northeastern United States Blizzard of 1978 – February 6–7, 1978. Just one week following the Cleveland Superbomb blizzard, New England was hit with its most severe blizzard in the 90 years since 1888. Chicago Blizzard of 1979 January 13–14, 1979 1980 to 1989 February 1987 Nor'easter blizzard February 22–24, 1987 1990 to 1999 1991 Halloween blizzard Upper Mid-West US, October 31 – November 3, 1991 December 1992 Nor'easter blizzard December 10–12, 1992 1993 Storm of the Century March 12–15, 1993. While the southern and eastern U.S. and Cuba received the brunt of this massive blizzard, the Storm of the Century impacted a wider area than any in recorded history. February 1995 Nor'easter blizzard February 3–6, 1995 Blizzard of 1996 January 6–10, 1996 April Fool's Day Blizzard March 31 – April 1, 1997. US East Coast 1997 Western Plains winter storms October 24–26, 1997 Mid West Blizzard of 1999 January 2–4, 1999 2000 to 2009 January 25, 2000 Southeastern United States winter storm January 25, 2000. 
North Carolina and Virginia December 2000 Nor'easter blizzard December 27–31, 2000 North American blizzard of 2003 February 14–19, 2003 (Presidents' Day Storm II) December 2003 Nor'easter blizzard December 6–7, 2003 North American blizzard of 2005 January 20–23, 2005 North American blizzard of 2006 February 11–13, 2006 Early winter 2006 North American storm complex Late November 2006 Colorado Holiday Blizzards (2006–07) December 20–29, 2006 Colorado February 2007 North America blizzard February 12–20, 2007 January 2008 North American storm complex January, 2008 West Coast US North American blizzard of 2008 March 6–10, 2008 2009 Midwest Blizzard 6–8 December 2009, a bomb cyclogenesis event that also affected parts of Canada North American blizzard of 2009 December 16–20, 2009 2009 North American Christmas blizzard December 22–28, 2009 2010 to 2019 February 5–6, 2010 North American blizzard February 5–6, 2010 Referred to at the time as Snowmageddon, it was a Category 3 ("major") nor'easter and severe weather event. February 9–10, 2010 North American blizzard February 9–10, 2010 February 25–27, 2010 North American blizzard February 25–27, 2010 October 2010 North American storm complex October 23–28, 2010 December 2010 North American blizzard December 26–29, 2010 January 31 – February 2, 2011 North American blizzard January 31 – February 2, 2011. Groundhog Day Blizzard of 2011 2011 Halloween nor'easter October 28 – Nov 1, 2011 Hurricane Sandy October 29–31, 2012. West Virginia, western North Carolina, and southwest Pennsylvania received heavy snowfall and blizzard conditions from this hurricane. November 2012 nor'easter November 7–10, 2012 December 17–22, 2012 North American blizzard December 17–22, 2012 Late December 2012 North American storm complex December 25–28, 2012 February 2013 nor'easter February 7–20, 2013 February 2013 Great Plains blizzard February 19 – March 6, 2013 March 2013 nor'easter March 6, 2013 October 2013 North American storm complex October 3–5, 2013 Buffalo, NY blizzard of 2014. Buffalo got over of snow during November 18–20, 2014. January 2015 North American blizzard January 26–27, 2015 Late December 2015 North American storm complex December 26–27, 2015 It was one of the most notorious blizzards ever reported in New Mexico and West Texas. It had sustained winds of over and continuous snow precipitation that lasted over 30 hours. Dozens of vehicles were stranded on small county roads in the areas of Hobbs, Roswell, and Carlsbad, New Mexico. Strong sustained winds destroyed various mobile homes. 
January 2016 United States blizzard January 20–23, 2016 February 2016 North American storm complex February 1–8, 2016 February 2017 North American blizzard February 6–11, 2017 March 2017 North American blizzard March 9–16, 2017 Early January 2018 nor’easter January 3–6, 2018 March 2019 North American blizzard March 8–16, 2019 April 2019 North American blizzard April 10–14, 2019 2020 to present December 5–6, 2020 nor'easter December 5–6, 2020 January 31 – February 3, 2021 nor'easter January 31 – February 3, 2021 February 13–17, 2021 North American winter storm February 13–17, 2021 March 2021 North American blizzard March 11–14, 2021 January 2022 North American blizzard January 27–30, 2022 December 2022 North American winter storm December 21–26, 2022 March 2023 North American winter storm March 12–15, 2023 January 8–10, 2024 North American storm complex January 8–10, 2024 January 10–13, 2024 North American storm complex January 10–13, 2024 January 5–6, 2025 United States blizzard January 5–6, 2025 January 20–22, 2025 Gulf Coast blizzard January 20–22, 2025 Canada The Eastern Canadian Blizzard of 1971 – Dumped a foot and a half (45.7 cm) of snow on Montreal and more than elsewhere in the region. The blizzard caused the cancellation of a Montreal Canadiens hockey game for the first time since 1918. Saskatchewan blizzard of 2007 – January 10, 2007, Canada United Kingdom Great Frost of 1709 Blizzard of January 1881 Winter of 1894–95 in the United Kingdom Winter of 1946–1947 in the United Kingdom Winter of 1962–1963 in the United Kingdom January 1987 Southeast England snowfall Winter of 1990–91 in Western Europe February 2009 Great Britain and Ireland snowfall Winter of 2009–10 in Great Britain and Ireland Winter of 2010–11 in Great Britain and Ireland Early 2012 European cold wave Other locations 1954 Romanian blizzard 1972 Iran blizzard Winter of 1990–1991 in Western Europe July 2007 Argentine winter storm 2008 Afghanistan blizzard 2008 Chinese winter storms Winter storms of 2009–2010 in East Asia
Physical sciences
Storms
null
4583
https://en.wikipedia.org/wiki/Bison
Bison
A bison (plural: bison) is a large bovine in the genus Bison (Greek: "wild ox" (bison)) within the tribe Bovini. Two extant and numerous extinct species are recognised. Of the two surviving species, the American bison, B. bison, found only in North America, is the more numerous. Although colloquially referred to as a buffalo in the United States and Canada, it is only distantly related to the true buffalo. The North American species is composed of two subspecies, the Plains bison, B. b. bison, and the wood bison, B. b. athabascae, which is the namesake of Wood Buffalo National Park in Canada. A third subspecies, the eastern bison (B. b. pennsylvanicus), is no longer considered a valid taxon, being a junior synonym of B. b. bison.
Biology and health sciences
Bovidae
Animals
4584
https://en.wikipedia.org/wiki/Baryon
Baryon
In particle physics, a baryon is a type of composite subatomic particle that contains an odd number of valence quarks, conventionally three. Protons and neutrons are examples of baryons; because baryons are composed of quarks, they belong to the hadron family of particles. Baryons are also classified as fermions because they have half-integer spin. The name "baryon", introduced by Abraham Pais, comes from the Greek word for "heavy" (βαρύς, barýs), because, at the time of their naming, most known elementary particles had lower masses than the baryons. Each baryon has a corresponding antiparticle (antibaryon) where their corresponding antiquarks replace quarks. For example, a proton is made of two up quarks and one down quark; and its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark. Baryons participate in the residual strong force, which is mediated by particles known as mesons. The most familiar baryons are protons and neutrons, both of which contain three quarks, and for this reason they are sometimes called triquarks. These particles make up most of the mass of the visible matter in the universe and compose the nucleus of every atom (electrons, the other major component of the atom, are members of a different family of particles called leptons; leptons do not interact via the strong force). Exotic baryons containing five quarks, called pentaquarks, have also been discovered and studied. A census of the Universe's baryons indicates that 10% of them could be found inside galaxies, 50 to 60% in the circumgalactic medium, and the remaining 30 to 40% could be located in the warm–hot intergalactic medium (WHIM). Background Baryons are strongly interacting fermions; that is, they are acted on by the strong nuclear force and are described by Fermi–Dirac statistics, which apply to all particles obeying the Pauli exclusion principle. This is in contrast to the bosons, which do not obey the exclusion principle. Baryons, alongside mesons, are hadrons, composite particles composed of quarks. Quarks have baryon numbers of B = 1/3 and antiquarks have baryon numbers of B = −1/3. The term "baryon" usually refers to triquarks—baryons made of three quarks (B = 1/3 + 1/3 + 1/3 = 1). Other exotic baryons have been proposed, such as pentaquarks—baryons made of four quarks and one antiquark (B = 1/3 + 1/3 + 1/3 + 1/3 − 1/3 = 1), but their existence is not generally accepted. The particle physics community as a whole did not view their existence as likely in 2006, and in 2008, considered evidence to be overwhelmingly against the existence of the reported pentaquarks. However, in July 2015, the LHCb experiment observed two resonances consistent with pentaquark states in the Λb → J/ψK−p decay, with a combined statistical significance of 15σ. In theory, heptaquarks (5 quarks, 2 antiquarks), nonaquarks (6 quarks, 3 antiquarks), etc. could also exist. Baryonic matter Nearly all matter that may be encountered or experienced in everyday life is baryonic matter, which includes atoms of any sort, and provides them with the property of mass. Non-baryonic matter, as implied by the name, is any sort of matter that is not composed primarily of baryons. This might include neutrinos and free electrons, dark matter, supersymmetric particles, axions, and black holes. The very existence of baryons is also a significant issue in cosmology because it is assumed that the Big Bang produced a state with equal amounts of baryons and antibaryons. 
The process by which baryons came to outnumber their antiparticles is called baryogenesis. Baryogenesis Experiments are consistent with the number of quarks in the universe being conserved alongside the total baryon number, with antibaryons being counted as negative quantities. Within the prevailing Standard Model of particle physics, the number of baryons may change in multiples of three due to the action of sphalerons, although this is rare and has not been observed under experiment. Some grand unified theories of particle physics also predict that a single proton can decay, changing the baryon number by one; however, this has not yet been observed under experiment. The excess of baryons over antibaryons in the present universe is thought to be due to non-conservation of baryon number in the very early universe, though this is not well understood. Properties Isospin and charge The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction. Although they had different electric charges, their masses were so similar that physicists believed they were the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin. This unknown excitation was later dubbed isospin by Eugene Wigner in 1937. This belief lasted until Murray Gell-Mann proposed the quark model in 1964 (containing originally only the u, d, and s quarks). The success of the isospin model is now understood to be the result of the similar masses of u and d quarks. Since u and d quarks have similar masses, particles made of the same number then also have similar masses. The exact specific u and d quark composition determines the charge, as u quarks carry charge + while d quarks carry charge −. For example, the four Deltas all have different charges ( (uuu), (uud), (udd), (ddd)), but have similar masses (~1,232 MeV/c2) as they are each made of a combination of three u or d quarks. Under the isospin model, they were considered to be a single particle in different charged states. The mathematics of isospin was modeled after that of spin. Isospin projections varied in increments of 1 just like those of spin, and to each projection was associated a "charged state". Since the "Delta particle" had four "charged states", it was said to be of isospin I = . Its "charged states" , , , and , corresponded to the isospin projections I3 = +, I3 = +, I3 = −, and I3 = −, respectively. Another example is the "nucleon particle". As there were two nucleon "charged states", it was said to be of isospin . The positive nucleon (proton) was identified with I3 = + and the neutral nucleon (neutron) with I3 = −. It was later noted that the isospin projections were related to the up and down quark content of particles by the relation: where the n'''s are the number of up and down quarks and antiquarks. In the "isospin picture", the four Deltas and the two nucleons were thought to be the different states of two particles. However, in the quark model, Deltas are different states of nucleons (the N++ or N− are forbidden by Pauli's exclusion principle). Isospin, although conveying an inaccurate picture of things, is still used to classify baryons, leading to unnatural and often confusing nomenclature. Flavour quantum numbers The strangeness flavour quantum number S (not to be confused with spin) was noticed to go up and down along with particle mass. 
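The isospin-projection relation referred to just above is not rendered in the extracted text. Its commonly quoted form, reconstructed here with worked checks for the nucleons and the Δ++ (the n's count the up and down quarks and antiquarks in the particle), is:

```latex
% Isospin projection from up/down quark content (standard form):
I_3 = \tfrac{1}{2}\left[(n_u - n_{\bar u}) - (n_d - n_{\bar d})\right]
% Worked checks:
%   proton (uud):   I_3 = 1/2[(2-0)-(1-0)] = +1/2
%   neutron (udd):  I_3 = 1/2[(1-0)-(2-0)] = -1/2
%   Delta++ (uuu):  I_3 = 1/2[(3-0)-(0-0)] = +3/2
```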
The higher the mass, the lower the strangeness (the more s quarks). Particles could be described with isospin projections (related to charge) and strangeness (mass) (see the uds octet and decuplet figures on the right). As other quarks were discovered, new quantum numbers were made to have similar description of udc and udb octets and decuplets. Since only the u and d mass are similar, this description of particle mass and charge in terms of isospin and flavour quantum numbers works well only for octet and decuplet made of one u, one d, and one other quark, and breaks down for the other octets and decuplets (for example, ucb octet and decuplet). If the quarks all had the same mass, their behaviour would be called symmetric, as they would all behave in the same way to the strong interaction. Since quarks do not have the same mass, they do not interact in the same way (exactly like an electron placed in an electric field will accelerate more than a proton placed in the same field because of its lighter mass), and the symmetry is said to be broken. It was noted that charge (Q) was related to the isospin projection (I3), the baryon number (B) and flavour quantum numbers (S, C, B′, T) by the Gell-Mann–Nishijima formula: where S, C, B′, and T represent the strangeness, charm, bottomness and topness flavour quantum numbers, respectively. They are related to the number of strange, charm, bottom, and top quarks and antiquark according to the relations: meaning that the Gell-Mann–Nishijima formula is equivalent to the expression of charge in terms of quark content: Spin, orbital angular momentum, and total angular momentum Spin (quantum number S) is a vector quantity that represents the "intrinsic" angular momentum of a particle. It comes in increments of  ħ (pronounced "h-bar"). The ħ is often dropped because it is the "fundamental" unit of spin, and it is implied that "spin 1" means "spin 1 ħ". In some systems of natural units, ħ is chosen to be 1, and therefore does not appear anywhere. Quarks are fermionic particles of spin (S = ). Because spin projections vary in increments of 1 (that is 1 ħ), a single quark has a spin vector of length , and has two spin projections (Sz = + and Sz = −). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length S = 1 and three spin projections (Sz = +1, Sz = 0, and Sz = −1). If two quarks have unaligned spins, the spin vectors add up to make a vector of length S = 0 and has only one spin projection (Sz = 0), etc. Since baryons are made of three quarks, their spin vectors can add to make a vector of length S = , which has four spin projections (Sz = +, Sz = +, Sz = −, and Sz = −), or a vector of length S =  with two spin projections (Sz = +, and Sz = −). There is another quantity of angular momentum, called the orbital angular momentum (azimuthal quantum number L), that comes in increments of 1 ħ, which represent the angular moment due to quarks orbiting around each other. The total angular momentum (total angular momentum quantum number J) of a particle is therefore the combination of intrinsic angular momentum (spin) and orbital angular momentum. It can take any value from to , in increments of 1. Particle physicists are most interested in baryons with no orbital angular momentum (L = 0), as they correspond to ground states—states of minimal energy. 
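The Gell-Mann–Nishijima formula cited above is likewise not displayed in the text. A commonly quoted form, with two worked checks under the standard quantum-number assignments, is:

```latex
% Gell-Mann-Nishijima relation (standard form):
Q = I_3 + \frac{B + S + C + B' + T}{2}
% Worked checks:
%   proton (uud):  I_3 = +1/2, B = 1, S = C = B' = T = 0  =>  Q = 1/2 + 1/2 = +1
%   Omega- (sss):  I_3 = 0,    B = 1, S = -3              =>  Q = 0 + (1 - 3)/2 = -1
```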
Therefore, the two groups of baryons most studied are the S = ; L = 0 and S = ; L = 0, which corresponds to J = + and J = +, respectively, although they are not the only ones. It is also possible to obtain J = + particles from S =  and L = 2, as well as S =  and L = 2. This phenomenon of having multiple particles in the same total angular momentum configuration is called degeneracy. How to distinguish between these degenerate baryons is an active area of research in baryon spectroscopy.D.M. Manley (2005) Parity If the universe were reflected in a mirror, most of the laws of physics would be identical—things would behave the same way regardless of what we call "left" and what we call "right". This concept of mirror reflection is called "intrinsic parity" or simply "parity" (P). Gravity, the electromagnetic force, and the strong interaction all behave in the same way regardless of whether or not the universe is reflected in a mirror, and thus are said to conserve parity (P-symmetry). However, the weak interaction does distinguish "left" from "right", a phenomenon called parity violation (P-violation). Based on this, if the wavefunction for each particle (in more precise terms, the quantum field for each particle type) were simultaneously mirror-reversed, then the new set of wavefunctions would perfectly satisfy the laws of physics (apart from the weak interaction). It turns out that this is not quite true: for the equations to be satisfied, the wavefunctions of certain types of particles have to be multiplied by −1, in addition to being mirror-reversed. Such particle types are said to have negative or odd parity (P = −1, or alternatively P = –), while the other particles are said to have positive or even parity (P = +1, or alternatively P = +). For baryons, the parity is related to the orbital angular momentum by the relation: As a consequence, baryons with no orbital angular momentum (L = 0) all have even parity (P = +). Nomenclature Baryons are classified into groups according to their isospin (I) values and quark (q) content. There are six groups of baryons: nucleon (), Delta (), Lambda (), Sigma (), Xi (), and Omega (). The rules for classification are defined by the Particle Data Group. These rules consider the up (), down () and strange () quarks to be light and the charm (), bottom (), and top () quarks to be heavy. The rules cover all the particles that can be made from three of each of the six quarks, even though baryons made of top quarks are not expected to exist because of the top quark's short lifetime. The rules do not cover pentaquarks. Baryons with (any combination of) three and/or quarks are s (I = ) or baryons (I = ). Baryons containing two and/or quarks are baryons (I = 0) or baryons (I = 1). If the third quark is heavy, its identity is given by a subscript. Baryons containing one or quark are baryons (I = ). One or two subscripts are used if one or both of the remaining quarks are heavy. Baryons containing no or quarks are baryons (I = 0), and subscripts indicate any heavy quark content. Baryons that decay strongly have their masses as part of their names. For example, Σ0 does not decay strongly, but Δ++(1232) does. It is also a widespread (but not universal) practice to follow some additional rules when distinguishing between some states that would otherwise have the same symbol. Baryons in total angular momentum J =  configuration that have the same symbols as their J =  counterparts are denoted by an asterisk ( * ). 
Two baryons can be made of three different quarks in J = 1/2 configuration. In this case, a prime ( ′ ) is used to distinguish between them. Exception: When two of the three quarks are one up and one down quark, one baryon is dubbed Λ while the other is dubbed Σ. Quarks carry a charge, so knowing the charge of a particle indirectly gives the quark content. For example, the rules above say that a Λc+ contains a c quark and some combination of two u and/or d quarks. The c quark has a charge of (Q = +2/3), therefore the other two must be a u quark (Q = +2/3) and a d quark (Q = −1/3) to have the correct total charge (Q = +1).
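The parity relation mentioned in the parity section above is also missing from the extracted text. Under the usual convention that quarks carry positive intrinsic parity, the relation for a baryon with orbital angular momentum L reads:

```latex
% Baryon parity in terms of orbital angular momentum (usual convention):
P = (-1)^{L}
% Hence L = 0 baryons have P = +1, i.e. even parity, as stated in the text.
```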
Physical sciences
Fermions
null
4594
https://en.wikipedia.org/wiki/Block%20cipher
Block cipher
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators. Definition A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm is defined to be the inverse function of encryption, i.e., D = E−1. More formally, a block cipher is specified by an encryption function E which takes as input a key K of bit length k (called the key size) and a bit string P of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function EK(P) is required to be an invertible mapping on {0,1}n. The inverse for E is defined as a function DK(C) taking a key K and a ciphertext C to return a plaintext value P, such that DK(EK(P)) = P. For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plain text. For each key K, EK is a permutation (a bijective mapping) over the set of input blocks. Each key selects one permutation from the set of possible permutations. History The modern design of block ciphers is based on the concept of an iterated product cipher. In his seminal 1949 publication, Communication Theory of Secrecy Systems, Claude Shannon analyzed product ciphers and suggested them as a means of effectively improving security by combining simple operations such as substitutions and permutations. Iterated product ciphers carry out encryption in multiple rounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers, named a Feistel network after Horst Feistel, is notably implemented in the DES cipher. Many other realizations of block ciphers, such as the AES, are classified as substitution–permutation networks. The root of all cryptographic block formats used within the Payment Card Industry Data Security Standard (PCI DSS) and American National Standards Institute (ANSI) standards lies with the Atalla Key Block (AKB), which was a key innovation of the Atalla Box, the first hardware security module (HSM). It was developed in 1972 by Mohamed M. Atalla, founder of Atalla Corporation (now Utimaco Atalla), and released in 1973. The AKB was a key block, which is required to securely interchange symmetric keys or PINs with other actors in the banking industry. This secure interchange is performed using the AKB format. 
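To make the definition above concrete, the following minimal Python sketch implements a toy 64-bit block cipher and checks that decryption inverts encryption for a given key. The rotate-and-XOR rounds are purely illustrative and deliberately simple; this is not a secure or published design.

```python
# Toy 64-bit block cipher: E and D are paired, and D_K(E_K(P)) == P.
# Illustrative only; NOT a secure design.

MASK = (1 << 64) - 1

def _rotl(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def _rotr(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK

def encrypt(block, key, rounds=8):
    for i in range(rounds):
        block = _rotl(block ^ ((key + i) & MASK), 13)   # each step is invertible
    return block

def decrypt(block, key, rounds=8):
    for i in reversed(range(rounds)):
        block = _rotr(block, 13) ^ ((key + i) & MASK)   # undo each round in reverse
    return block

if __name__ == "__main__":
    P, K = 0x0123456789ABCDEF, 0x0F1E2D3C4B5A6978
    C = encrypt(P, K)
    assert decrypt(C, K) == P   # E_K is a permutation; D_K is its inverse
```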
The Atalla Box protected over 90% of all ATM networks in operation as of 1998, and Atalla products still secure the majority of the world's ATM transactions as of 2014. The publication of the DES cipher by the United States National Bureau of Standards (subsequently the U.S. National Institute of Standards and Technology, NIST) in 1977 was fundamental in the public understanding of modern block cipher design. It also influenced the academic development of cryptanalytic attacks. Both differential and linear cryptanalysis arose out of studies on DES design. , there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust against brute-force attacks. Design Iterated block ciphers Most block cipher algorithms are classified as iterated block ciphers which means that they transform fixed-size blocks of plaintext into identically sized blocks of ciphertext, via the repeated application of an invertible transformation known as the round function, with each iteration referred to as a round. Usually, the round function R takes different round keys Ki as a second input, which is derived from the original key: where is the plaintext and the ciphertext, with r being the number of rounds. Frequently, key whitening is used in addition to this. At the beginning and the end, the data is modified with key material (often with XOR): Given one of the standard iterated block cipher design schemes, it is fairly easy to construct a block cipher that is cryptographically secure, simply by using a large number of rounds. However, this will make the cipher inefficient. Thus, efficiency is the most important additional design criterion for professional ciphers. Further, a good block cipher is designed to avoid side-channel attacks, such as branch prediction and input-dependent memory accesses that might leak secret data via the cache state or the execution time. In addition, the cipher should be concise, for small hardware and software implementations. Substitution–permutation networks One important type of iterated block cipher known as a substitution–permutation network (SPN) takes a block of the plaintext and the key as inputs and applies several alternating rounds consisting of a substitution stage followed by a permutation stage—to produce each block of ciphertext output. The non-linear substitution stage mixes the key bits with those of the plaintext, creating Shannon's confusion. The linear permutation stage then dissipates redundancies, creating diffusion. A substitution box (S-box) substitutes a small block of input bits with another block of output bits. This substitution must be one-to-one, to ensure invertibility (hence decryption). A secure S-box will have the property that changing one input bit will change about half of the output bits on average, exhibiting what is known as the avalanche effect—i.e. it has the property that each output bit will depend on every input bit. A permutation box (P-box) is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible. At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR. 
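A minimal Python sketch of one substitution–permutation round as described above: an S-box applied to each 4-bit nibble, a fixed bit permutation, and an XOR with the round key (real designs differ in the order and number of these steps). The 16-bit block size, the S-box, and the permutation are arbitrary illustrative choices, not those of any standard cipher.

```python
# One round of a toy 16-bit substitution-permutation network (SPN).
# S-box, bit permutation, and round key are illustrative placeholders.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]                 # 4-bit S-box
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]   # bit positions

def substitute(block):
    # Apply the S-box to each of the four 4-bit nibbles (Shannon's confusion).
    out = 0
    for i in range(4):
        nibble = (block >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def permute(block):
    # Move bit i of the input to position PERM[i] of the output (diffusion).
    out = 0
    for i in range(16):
        if (block >> i) & 1:
            out |= 1 << PERM[i]
    return out

def spn_round(block, round_key):
    return permute(substitute(block)) ^ round_key

if __name__ == "__main__":
    print(hex(spn_round(0x1234, 0xBEEF)))
```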
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order). Feistel ciphers In a Feistel cipher, the block of plain text to be encrypted is split into two equal-sized halves. The round function is applied to one half, using a subkey, and then the output is XORed with the other half. The two halves are then swapped. Let F be the round function and let K0, K1, ..., Kn be the sub-keys for the rounds 0, 1, ..., n respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces, (L0, R0). For each round i = 0, 1, ..., n, compute Li+1 = Ri and Ri+1 = Li ⊕ F(Ri, Ki). Then the ciphertext is (Rn+1, Ln+1). The decryption of a ciphertext (Rn+1, Ln+1) is accomplished by computing Ri = Li+1 and Li = Ri+1 ⊕ F(Li+1, Ki) for i = n, n−1, ..., 0. Then (L0, R0) is the plaintext again. One advantage of the Feistel model compared to a substitution–permutation network is that the round function does not have to be invertible. Lai–Massey ciphers The Lai–Massey scheme offers security properties similar to those of the Feistel structure. It also shares the advantage that the round function does not have to be invertible. Another similarity is that it also splits the input block into two equal pieces. However, the round function is applied to the difference between the two, and the result is then added to both half blocks. Let F be the round function and H a half-round function, and let K0, K1, ..., Kn be the sub-keys for the rounds 0, 1, ..., n respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces, (L0, R0). For each round i = 0, 1, ..., n, compute Ti = F(Li − Ri, Ki), Li+1 = H(Li + Ti) and Ri+1 = Ri + Ti. Then the ciphertext is (Ln+1, Rn+1). The decryption of a ciphertext (Ln+1, Rn+1) is accomplished by computing Ti = F(H−1(Li+1) − Ri+1, Ki), Li = H−1(Li+1) − Ti and Ri = Ri+1 − Ti for i = n, n−1, ..., 0. Then (L0, R0) is the plaintext again. Operations ARX (add–rotate–XOR) Many modern block ciphers and hashes are ARX algorithms—their round function involves only three operations: (A) modular addition, (R) rotation with fixed rotation amounts, and (X) XOR. Examples include ChaCha20, Speck, XXTEA, and BLAKE. Many authors draw an ARX network, a kind of data flow diagram, to illustrate such a round function. These ARX operations are popular because they are relatively fast and cheap in hardware and software, their implementation can be made extremely simple, and also because they run in constant time, and therefore are immune to timing attacks. The rotational cryptanalysis technique attempts to attack such round functions. Other operations Other operations often used in block ciphers include data-dependent rotations as in RC5 and RC6, a substitution box implemented as a lookup table as in Data Encryption Standard and Advanced Encryption Standard, a permutation box, and multiplication as in IDEA. Modes of operation A block cipher by itself allows encryption only of a single data block of the cipher's block length. For a variable-length message, the data must first be partitioned into separate cipher blocks. In the simplest case, known as electronic codebook (ECB) mode, a message is first split into separate blocks of the cipher's block size (possibly extending the last block with padding bits), and then each block is encrypted and decrypted independently. However, such a naive method is generally insecure because equal plaintext blocks will always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output. To overcome this limitation, several so-called block cipher modes of operation have been designed and specified in national recommendations such as NIST 800-38A and BSI TR-02102 and international standards such as ISO/IEC 10116. 
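Returning to the Feistel construction described earlier, the sketch below shows the structure in Python. The round function F is deliberately non-invertible (it discards information), yet decryption still recovers the plaintext because F is only re-computed, never inverted, which is the property noted in the text. F, the block split, and the subkeys are illustrative placeholders.

```python
# Toy Feistel network on a 32-bit block (two 16-bit halves).
# F need not be invertible: decryption only re-computes F, never inverts it.

def F(half, subkey):
    # Arbitrary, non-invertible mixing function (placeholder).
    return ((half * 0x9E37 + subkey) ^ (half >> 3)) & 0xFFFF

def feistel_encrypt(left, right, subkeys):
    for k in subkeys:
        left, right = right, left ^ F(right, k)
    return right, left            # final swap, as in the usual description

def feistel_decrypt(cipher_pair, subkeys):
    right, left = cipher_pair      # ciphertext is (R_n, L_n); undo the swap
    for k in reversed(subkeys):
        left, right = right ^ F(left, k), left
    return left, right

if __name__ == "__main__":
    keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
    ct = feistel_encrypt(0x1234, 0xABCD, keys)
    assert feistel_decrypt(ct, keys) == (0x1234, 0xABCD)
```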
The general concept is to use randomization of the plaintext data based on an additional input value, frequently called an initialization vector, to create what is termed probabilistic encryption. In the popular cipher block chaining (CBC) mode, for encryption to be secure the initialization vector passed along with the plaintext message must be a random or pseudo-random value, which is added in an exclusive-or manner to the first plaintext block before it is encrypted. The resultant ciphertext block is then used as the new initialization vector for the next plaintext block. In the cipher feedback (CFB) mode, which emulates a self-synchronizing stream cipher, the initialization vector is first encrypted and then added to the plaintext block. The output feedback (OFB) mode repeatedly encrypts the initialization vector to create a key stream for the emulation of a synchronous stream cipher. The newer counter (CTR) mode similarly creates a key stream, but has the advantage of only needing unique and not (pseudo-)random values as initialization vectors; the needed randomness is derived internally by using the initialization vector as a block counter and encrypting this counter for each block. From a security-theoretic point of view, modes of operation must provide what is known as semantic security. Informally, it means that given some ciphertext under an unknown key one cannot practically derive any information from the ciphertext (other than the length of the message) over what one would have known without seeing the ciphertext. It has been shown that all of the modes discussed above, with the exception of the ECB mode, provide this property under so-called chosen plaintext attacks. Padding Some modes such as the CBC mode only operate on complete plaintext blocks. Simply extending the last block of a message with zero bits is insufficient since it does not allow a receiver to easily distinguish messages that differ only in the number of padding bits. More importantly, such a simple solution gives rise to very efficient padding oracle attacks. A suitable padding scheme is therefore needed to extend the last plaintext block to the cipher's block size. While many popular schemes described in standards and in the literature have been shown to be vulnerable to padding oracle attacks, a solution that adds a one-bit and then extends the last block with zero-bits, standardized as "padding method 2" in ISO/IEC 9797-1, has been proven secure against these attacks. Cryptanalysis Brute-force attacks This property results in the cipher's security degrading quadratically, and needs to be taken into account when selecting a block size. There is a trade-off though as large block sizes can result in the algorithm becoming inefficient to operate. Earlier block ciphers such as the DES have typically selected a 64-bit block size, while newer designs such as the AES support block sizes of 128 bits or more, with some ciphers supporting a range of different block sizes. Differential cryptanalysis Linear cryptanalysis A linear cryptanalysis is a form of cryptanalysis based on finding affine approximations to the action of a cipher. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other being differential cryptanalysis. The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992). 
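As an illustration of the cipher block chaining rule described above, the following Python sketch chains a placeholder single-block cipher with an initialization vector. The toy encrypt_block function (a plain XOR) and the 8-byte block size are assumptions made only to keep the sketch self-contained; a real system would use a vetted block cipher through a library.

```python
import os

BLOCK = 8  # bytes, toy block size

def encrypt_block(block, key):
    # Placeholder single-block "cipher" (XOR with the key); NOT secure.
    # Used only so the CBC chaining below is self-contained.
    return bytes(b ^ k for b, k in zip(block, key))

decrypt_block = encrypt_block  # XOR is its own inverse

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, key, iv):
    assert len(plaintext) % BLOCK == 0, "pad the message first"
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        block = xor(plaintext[i:i + BLOCK], prev)   # chain with previous ciphertext
        prev = encrypt_block(block, key)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ciphertext, key, iv):
    out, prev = [], iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(xor(decrypt_block(block, key), prev))
        prev = block
    return b"".join(out)

if __name__ == "__main__":
    key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
    msg = b"sixteen byte msg"   # already a multiple of BLOCK
    assert cbc_decrypt(cbc_encrypt(msg, key, iv), key, iv) == msg
```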
Integral cryptanalysis Integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences between pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus. Other techniques In addition to linear and differential cryptanalysis, there is a growing catalog of attacks: truncated differential cryptanalysis, partial differential cryptanalysis, integral cryptanalysis, which encompasses square and integral attacks, slide attacks, boomerang attacks, the XSL attack, impossible differential cryptanalysis, and algebraic attacks. For a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks. Provable security When a block cipher is used in a given mode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ECB (discussed above) emphatically lacks this property: regardless of how secure the underlying block cipher is, ECB mode can easily be attacked. On the other hand, CBC mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. Note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to "be secure". This section describes two common notions for what properties a block cipher should have. Each corresponds to a mathematical model that can be used to prove properties of higher-level algorithms, such as CBC. This general approach to cryptography – proving higher-level algorithms (such as CBC) are secure under explicitly stated assumptions regarding their components (such as a block cipher) – is known as provable security. Standard model Informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher (equipped with a random key) and a random permutation. To be a bit more precise, let E be an n-bit block cipher. We imagine the following game: The person running the game flips a coin. If the coin lands on heads, he chooses a random key K and defines the function f = EK. If the coin lands on tails, he chooses a random permutation on the set of n-bit strings and defines the function f = . The attacker chooses an n-bit string X, and the person running the game tells him the value of f(X). Step 2 is repeated a total of q times. (Each of these q interactions is a query.) The attacker guesses how the coin landed. He wins if his guess is correct. The attacker, which we can model as an algorithm, is called an adversary. The function f (which the adversary was able to query) is called an oracle. Note that an adversary can trivially ensure a 50% chance of winning simply by guessing at random (or even by, for example, always guessing "heads"). 
Therefore, let PE(A) denote the probability that adversary A wins this game against E, and define the advantage of A as 2(PE(A) − 1/2). It follows that if A guesses randomly, its advantage will be 0; on the other hand, if A always wins, then its advantage is 1. The block cipher E is a pseudo-random permutation (PRP) if no adversary has an advantage significantly greater than 0, given specified restrictions on q and the adversary's running time. If in Step 2 above adversaries have the option of learning f−1(X) instead of f(X) (but still have only small advantages) then E is a strong PRP (SPRP). An adversary is non-adaptive if it chooses all q values for X before the game begins (that is, it does not use any information gleaned from previous queries to choose each X as it goes). These definitions have proven useful for analyzing various modes of operation. For example, one can define a similar game for measuring the security of a block cipher-based encryption algorithm, and then try to show (through a reduction argument) that the probability of an adversary winning this new game is not much more than PE(A) for some A. (The reduction typically provides limits on q and the running time of A.) Equivalently, if PE(A) is small for all relevant A, then no attacker has a significant probability of winning the new game. This formalizes the idea that the higher-level algorithm inherits the block cipher's security. Ideal cipher model Practical evaluation Block ciphers may be evaluated according to multiple criteria in practice. Common factors include: Key parameters, such as its key size and block size, both of which provide an upper bound on the security of the cipher. The estimated security level, which is based on the confidence gained in the block cipher design after it has largely withstood major efforts in cryptanalysis over time, the design's mathematical soundness, and the existence of practical or certificational attacks. The cipher's complexity and its suitability for implementation in hardware or software. Hardware implementations may measure the complexity in terms of gate count or energy consumption, which are important parameters for resource-constrained devices. The cipher's performance in terms of processing throughput on various platforms, including its memory requirements. The cost of the cipher refers to licensing requirements that may apply due to intellectual property rights. The flexibility of the cipher includes its ability to support multiple key sizes and block lengths. Notable block ciphers Lucifer / DES Lucifer is generally considered to be the first civilian block cipher, developed at IBM in the 1970s based on work done by Horst Feistel. A revised version of the algorithm was adopted as a U.S. government Federal Information Processing Standard: FIPS PUB 46 Data Encryption Standard (DES). It was chosen by the U.S. National Bureau of Standards (NBS) after a public invitation for submissions and some internal changes by NBS (and, potentially, the NSA). DES was publicly released in 1976 and has been widely used. DES was designed to, among other things, resist a certain cryptanalytic attack known to the NSA and rediscovered by IBM, though unknown publicly until rediscovered again and published by Eli Biham and Adi Shamir in the late 1980s. The technique is called differential cryptanalysis and remains one of the few general attacks against block ciphers; linear cryptanalysis is another but may have been unknown even to the NSA, prior to its publication by Mitsuru Matsui. 
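The distinguishing game and advantage defined above can be simulated directly. The Python sketch below plays the game against a deliberately weak toy cipher (XOR with the key); an adversary that makes two queries and checks whether the answers XOR to the same value as the queries wins almost every time, so its estimated advantage comes out close to 1. All names and parameters are illustrative.

```python
import random

N_BITS = 8
SIZE = 1 << N_BITS

def weak_cipher(key):
    # E_K(x) = x XOR K: a valid permutation, but a terrible PRP.
    return lambda x: x ^ key

def play_game(adversary, trials=1000):
    wins = 0
    for _ in range(trials):
        coin = random.randrange(2)
        if coin == 0:                      # real cipher with a random key
            oracle = weak_cipher(random.randrange(SIZE))
        else:                              # truly random permutation
            perm = list(range(SIZE))
            random.shuffle(perm)
            oracle = lambda x, p=perm: p[x]
        wins += (adversary(oracle) == coin)
    p_win = wins / trials
    return 2 * (p_win - 0.5)               # advantage, as defined in the text

def xor_adversary(oracle):
    # Two queries: for E_K(x) = x XOR K, the answers XOR to the queries' XOR.
    a, b = 0x12, 0xAB
    return 0 if (oracle(a) ^ oracle(b)) == (a ^ b) else 1

if __name__ == "__main__":
    print("estimated advantage:", play_game(xor_adversary))
```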
DES prompted a large amount of other work and publications in cryptography and cryptanalysis in the open community and it inspired many new cipher designs. DES has a block size of 64 bits and a key size of 56 bits. 64-bit blocks became common in block cipher designs after DES. Key length depended on several factors, including government regulation. Many observers in the 1970s commented that the 56-bit key length used for DES was too short. As time went on, its inadequacy became apparent, especially after a special-purpose machine designed to break DES was demonstrated in 1998 by the Electronic Frontier Foundation. An extension to DES, Triple DES, triple-encrypts each block with either two independent keys (112-bit key and 80-bit security) or three independent keys (168-bit key and 112-bit security). It was widely adopted as a replacement. As of 2011, the three-key version is still considered secure, though the National Institute of Standards and Technology (NIST) standards no longer permit the use of the two-key version in new applications, due to its 80-bit security level. IDEA The International Data Encryption Algorithm (IDEA) is a block cipher designed by James Massey of ETH Zurich and Xuejia Lai; it was first described in 1991, as an intended replacement for DES. IDEA operates on 64-bit blocks using a 128-bit key and consists of a series of eight identical transformations (a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups – modular addition and multiplication, and bitwise exclusive or (XOR) – which are algebraically "incompatible" in some sense. The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic weaknesses have been reported. The best attack which applies to all keys can break a full 8.5-round IDEA using a narrow-bicliques attack about four times faster than brute force. RC5 RC5 is a block cipher designed by Ronald Rivest in 1994 which, unlike many other ciphers, has a variable block size (32, 64, or 128 bits), key size (0 to 2040 bits), and a number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key, and 12 rounds. A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number of modular additions and XORs. The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentially one-way function with the binary expansions of both e and the golden ratio as sources of "nothing up my sleeve numbers". The tantalizing simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts. 12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts. 18–20 rounds are suggested as sufficient protection. Rijndael / AES The Rijndael cipher, developed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, was one of the competing designs to replace DES. It won the 5-year public competition to become the AES (Advanced Encryption Standard). 
Adopted by NIST in 2001, AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical maximum. AES operates on a 4×4 column-major order matrix of bytes, termed the state (versions of Rijndael with a larger block size have additional columns in the state). Blowfish Blowfish is a block cipher, designed in 1993 by Bruce Schneier and included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variable key length from 1 bit up to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. Notable features of the design include the key-dependent S-boxes and a highly complex key schedule. It was designed as a general-purpose algorithm, intended as an alternative to the aging DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered by patents, or were commercial/government secrets. Schneier has stated that "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in the public domain, and can be freely used by anyone." The same applies to Twofish, a successor algorithm from Schneier. Generalizations Tweakable block ciphers M. Liskov, R. Rivest, and D. Wagner have described a generalized version of block ciphers called "tweakable" block ciphers. A tweakable block cipher accepts a second input called the tweak along with its usual plaintext or ciphertext input. The tweak, along with the key, selects the permutation computed by the cipher. If changing tweaks is sufficiently lightweight (compared with a usually fairly expensive key setup operation), then some interesting new operation modes become possible. The disk encryption theory article describes some of these modes. Format-preserving encryption Block ciphers traditionally work over a binary alphabet. That is, both the input and the output are binary strings, consisting of n zeroes and ones. In some situations, however, one may wish to have a block cipher that works over some other alphabet; for example, encrypting 16-digit credit card numbers in such a way that the ciphertext is also a 16-digit number might facilitate adding an encryption layer to legacy software. This is an example of format-preserving encryption. More generally, format-preserving encryption requires a keyed permutation on some finite language. This makes format-preserving encryption schemes a natural generalization of (tweakable) block ciphers. In contrast, traditional encryption schemes, such as CBC, are not permutations because the same plaintext can encrypt multiple different ciphertexts, even when using a fixed key. Relation to other cryptographic primitives Block ciphers can be used to build other cryptographic primitives, such as those below. For these other primitives to be cryptographically secure, care has to be taken to build them the right way. Stream ciphers can be built using block ciphers. OFB mode and CTR mode are block modes that turn a block cipher into a stream cipher. Cryptographic hash functions can be built using block ciphers. See the one-way compression function for descriptions of several such methods. The methods resemble the block cipher modes of operation usually used for encryption. 
Cryptographically secure pseudorandom number generators (CSPRNGs) can be built using block ciphers. Secure pseudorandom permutations of arbitrarily sized finite sets can be constructed with block ciphers; see Format-Preserving Encryption. A publicly known unpredictable permutation combined with key whitening is enough to construct a block cipher, as in the single-key Even–Mansour cipher, perhaps the simplest possible provably secure block cipher (a short sketch is given below). Message authentication codes (MACs) are often built from block ciphers; CBC-MAC, OMAC, and PMAC are examples. Authenticated encryption modes, such as CCM, EAX, GCM, and OCB, are also built from block ciphers; they encrypt and MAC at the same time, providing both confidentiality and authentication. Just as block ciphers can be used to build hash functions (SHA-1 and SHA-2, for instance, are based on block ciphers that are also used independently as SHACAL), hash functions can be used to build block ciphers; examples of such block ciphers are BEAR and LION.
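As a concrete illustration of the key-whitening idea behind Even–Mansour, the following Python sketch encrypts one 64-bit block as E_K(m) = P(m ⊕ K) ⊕ K. The fixed public permutation used here is a SplitMix64-style mixing function chosen only so the example is self-contained and invertible; it is an assumption of this sketch, not part of the Even–Mansour construction itself, and the fragment is illustrative rather than a secure cipher.

```python
MASK64 = (1 << 64) - 1

def public_permutation(x):
    # Fixed, unkeyed bijection on 64-bit words (SplitMix64-style mixing).
    # A stand-in only: it mixes well statistically but is not a cryptographically
    # strong permutation, so this toy cipher must not be used for real data.
    x = (x + 0x9E3779B97F4A7C15) & MASK64
    x ^= x >> 30
    x = (x * 0xBF58476D1CE4E5B9) & MASK64
    x ^= x >> 27
    x = (x * 0x94D049BB133111EB) & MASK64
    return x ^ (x >> 31)

def even_mansour_encrypt(key, block):
    # Single-key Even–Mansour: whiten, apply the public permutation, whiten again.
    return public_permutation(block ^ key) ^ key

print(hex(even_mansour_encrypt(0x0123456789ABCDEF, 0x0000000000000000)))
```

Decryption applies the inverse of the public permutation between the same two whitening steps: m = P⁻¹(c ⊕ K) ⊕ K.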
Technology
Computer security
null
4614
https://en.wikipedia.org/wiki/Boeing%20747
Boeing 747
The Boeing 747 is a long-range wide-body airliner designed and manufactured by Boeing Commercial Airplanes in the United States between 1968 and 2023. After the introduction of the 707 in October 1958, Pan Am wanted a jet 2½ times its size, to reduce its seat cost by 30%. In 1965, Joe Sutter left the 737 development program to design the 747. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft, and in late 1966, Pratt & Whitney agreed to develop the JT9D engine, a high-bypass turbofan. On September 30, 1968, the first 747 was rolled out of the custom-built Everett Plant, the world's largest building by volume. The 747's first flight took place on February 9, 1969, and the 747 was certified in December of that year. It entered service with Pan Am on January 22, 1970. As the first wide-body airliner, the 747 was also the first airplane to be called a "Jumbo Jet". The 747 is a four-engined jet aircraft, initially powered by Pratt & Whitney JT9D turbofan engines, then General Electric CF6 and Rolls-Royce RB211 engines for the original variants. With ten-abreast economy seating, it typically accommodates 366 passengers in three travel classes. It has a pronounced 37.5° wing sweep, allowing a cruise speed of Mach 0.84 to 0.88, and its heavy weight is supported by four main landing gear legs, each with a four-wheel bogie. The partial double-deck aircraft was designed with a raised cockpit so it could be converted to a freighter airplane by installing a front cargo door, as it was initially thought that it would eventually be superseded by supersonic transports. Boeing introduced the -200 in 1971, with uprated engines for a heavier maximum takeoff weight (MTOW), increasing the maximum range. It was shortened for the longer-range 747SP in 1976, and the 747-300 followed in 1983 with a stretched upper deck for up to 400 seats in three classes. The heavier 747-400 with improved RB211 and CF6 engines or the new PW4000 engine (the JT9D successor), and a two-crew glass cockpit, was introduced in 1989 and is the most common variant. After several studies, the stretched 747-8 was launched on November 14, 2005, using the General Electric GEnx engine first developed for the Boeing 787 Dreamliner (the inspiration for the -8 in the name), and was first delivered in October 2011. The 747 is the basis for several government and military variants, such as the VC-25 (Air Force One), E-4 Emergency Airborne Command Post, Shuttle Carrier Aircraft, and some experimental test aircraft such as the YAL-1 and SOFIA airborne observatory. Initial competition came from the smaller trijet widebodies: the Lockheed L-1011 (introduced in 1972), McDonnell Douglas DC-10 (1971) and later MD-11 (1990). Airbus competed with later variants with the heaviest versions of the A340 until surpassing the 747 in size with the A380, delivered between 2007 and 2021. Freighter variants of the 747 remain popular with cargo airlines. The final 747 was delivered to Atlas Air in January 2023 after a 54-year production run, with 1,574 aircraft built. In total, 64 Boeing 747s have been lost in accidents and incidents, in which 3,746 people have died. Development Background In 1963, the United States Air Force started a series of study projects on a very large strategic transport aircraft. Although the C-141 Starlifter was being introduced, officials believed that a much larger and more capable aircraft was needed, especially to carry cargo that would not fit in any existing aircraft.
These studies led to initial requirements for the CX-Heavy Logistics System (CX-HLS) in March 1964 for an aircraft with a load capacity of and a speed of Mach 0.75 (), and an unrefueled range of with a payload of . The payload bay had to be wide by high and long with access through doors at the front and rear. The desire to keep the number of engines to four required new engine designs with greatly increased power and better fuel economy. In May 1964, airframe proposals arrived from Boeing, Douglas, General Dynamics, Lockheed, and Martin Marietta; engine proposals were submitted by General Electric, Curtiss-Wright, and Pratt & Whitney. Boeing, Douglas, and Lockheed were given additional study contracts for the airframe, along with General Electric and Pratt & Whitney for the engines. The airframe proposals shared several features. As the CX-HLS needed to be able to be loaded from the front, a door had to be included where the cockpit usually was. All of the companies solved this problem by moving the cockpit above the cargo area; Douglas had a small "pod" just forward and above the wing, Lockheed used a long "spine" running the length of the aircraft with the wing spar passing through it, while Boeing blended the two, with a longer pod that ran from just behind the nose to just behind the wing. In 1965, Lockheed's aircraft design and General Electric's engine design were selected for the new C-5 Galaxy transport, which was the largest military aircraft in the world at the time. Boeing carried the nose door and raised cockpit concepts over to the design of the 747. Airliner proposal The 747 was conceived while air travel was increasing in the 1960s. The era of commercial jet transportation, led by the enormous popularity of the Boeing 707 and Douglas DC-8, had revolutionized long-distance travel. In this growing jet age, Juan Trippe, president of Pan American Airways (Pan Am), one of Boeing's most important airline customers, asked for a new jet airliner times size of the 707, with a 30% lower cost per unit of passenger-distance and the capability to offer mass air travel on international routes. Trippe also thought that airport congestion could be addressed by a larger new aircraft. In 1965, Joe Sutter was transferred from Boeing's 737 development team to manage the design studies for the new airliner, already assigned the model number 747. Sutter began a design study with Pan Am and other airlines to better understand their requirements. At the time, many thought that long-range subsonic airliners would eventually be superseded by supersonic transport aircraft. Boeing responded by designing the 747 so it could be adapted easily to carry freight and remain in production even if sales of the passenger version declined. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft for US$525 million (equivalent to $ billion in dollars). During the ceremonial 747 contract-signing banquet in Seattle on Boeing's 50th Anniversary, Juan Trippe predicted that the 747 would be "…a great weapon for peace, competing with intercontinental missiles for mankind's destiny". As launch customer, and because of its early involvement before placing a formal order, Pan Am was able to influence the design and development of the 747 to an extent unmatched by a single airline before or since. Design effort Ultimately, the high-winged CX-HLS Boeing design was not used for the 747, although technologies developed for their bid had an influence. 
The original design included a full-length double-deck fuselage with eight-across seating and two aisles on the lower deck and seven-across seating and two aisles on the upper deck. However, concern over evacuation routes and limited cargo-carrying capability caused this idea to be scrapped in early 1966 in favor of a wider single deck design. The cockpit was therefore placed on a shortened upper deck so that a freight-loading door could be included in the nose cone; this design feature produced the 747's distinctive "hump". In early models, what to do with the small space in the pod behind the cockpit was not clear, and this was initially specified as a "lounge" area with no permanent seating. (A different configuration that had been considered to keep the flight deck out of the way for freight loading had the pilots below the passengers, and was dubbed the "anteater".) One of the principal technologies that enabled an aircraft as large as the 747 to be drawn up was the high-bypass turbofan engine. This engine technology was thought to be capable of delivering double the power of the earlier turbojets while consuming one-third less fuel. General Electric had pioneered the concept but was committed to developing the engine for the C-5 Galaxy and did not enter the commercial market until later. Pratt & Whitney was also working on the same principle and, by late 1966, Boeing, Pan Am and Pratt & Whitney agreed to develop a new engine, designated the JT9D to power the 747. The project was designed with a new methodology called fault tree analysis, which allowed the effects of a failure of a single part to be studied to determine its impact on other systems. To address concerns about safety and flyability, the 747's design included structural redundancy, redundant hydraulic systems, quadruple main landing gear and dual control surfaces. Additionally, some of the most advanced high-lift devices used in the industry were included in the new design, to allow it to operate from existing airports. These included Krueger flaps running almost the entire length of the wing's leading edge, as well as complex three-part slotted flaps along the trailing edge of the wing. The wing's complex three-part flaps increase wing area by 21% and lift by 90% when fully deployed compared to their non-deployed configuration. Boeing agreed to deliver the first 747 to Pan Am by the end of 1969. The delivery date left 28 months to design the aircraft, which was two-thirds of the normal time. The schedule was so fast-paced that the people who worked on it were given the nickname "The Incredibles". Developing the aircraft was such a technical and financial challenge that management was said to have "bet the company" when it started the project. Due to its massive size, Boeing subcontracted the assembly of subcomponents to other manufacturers, most notably Northrop and Grumman (later merged into Northrop Grumman in 1994) for fuselage parts and trailing edge flaps respectively, Fairchild for tailplane ailerons, and Ling-Temco-Vought (LTV) for the empennage. Production plant As Boeing did not have a plant large enough to assemble the giant airliner, they chose to build a new plant. The company considered locations in about 50 cities, and eventually decided to build the new plant some north of Seattle on a site adjoining a military base at Paine Field near Everett, Washington. It bought the site in June 1966. Developing the 747 had been a major challenge, and building its assembly plant was also a huge undertaking. 
Boeing president William M. Allen asked Malcolm T. Stamper, then head of the company's turbine division, to oversee construction of the Everett factory and to start production of the 747. To level the site, more than of earth had to be moved. Time was so short that the 747's full-scale mock-up was built before the factory roof above it was finished. The plant is the largest building by volume ever built, and has been substantially expanded several times to permit construction of other models of Boeing wide-body commercial jets. Flight testing Before the first 747 was fully assembled, testing began on many components and systems. One important test involved the evacuation of 560 volunteers from a cabin mock-up via the aircraft's emergency chutes. The first full-scale evacuation took two and a half minutes instead of the maximum of 90 seconds mandated by the Federal Aviation Administration (FAA), and several volunteers were injured. Subsequent test evacuations achieved the 90-second goal but caused more injuries. Most problematic was evacuation from the aircraft's upper deck; instead of using a conventional slide, volunteer passengers escaped by using a harness attached to a reel. Tests also involved taxiing such a large aircraft. Boeing built an unusual training device known as "Waddell's Wagon" (named for a 747 test pilot, Jack Waddell) that consisted of a mock-up cockpit mounted on the roof of a truck. While the first 747s were still being built, the device allowed pilots to practice taxi maneuvers from a high upper-deck position. In 1968, the program cost was US$1 billion (equivalent to $ billion in dollars). On September 30, 1968, the first 747 was rolled out of the Everett assembly building before the world's press and representatives of the 26 airlines that had ordered the airliner. Over the following months, preparations were made for the first flight, which took place on February 9, 1969, with test pilots Jack Waddell and Brien Wygle at the controls and Jess Wallick at the flight engineer's station. Despite a minor problem with one of the flaps, the flight confirmed that the 747 handled extremely well. The 747 was found to be largely immune to "Dutch roll", a phenomenon that had been a major hazard to the early swept-wing jets. Issues, delays and certification During later stages of the flight test program, flutter testing showed that the wings suffered oscillation under certain conditions. This difficulty was partly solved by reducing the stiffness of some wing components. However, a particularly severe high-speed flutter problem was solved only by inserting depleted uranium counterweights as ballast in the outboard engine nacelles of the early 747s. This measure caused some concern when these aircraft crashed, for example El Al Flight 1862 at Amsterdam in 1992 with of uranium in the tailplane (horizontal stabilizer); detailed investigations showed, however, that the best estimate of the exposure to depleted uranium was ".. several orders of magnitude less than the workers' limit for chronic exposure." The flight test program was hampered by problems with the 747's JT9D engines. Difficulties included engine stalls caused by rapid throttle movements and distortion of the turbine casings after a short period of service. The problems delayed 747 deliveries for several months; up to 20 aircraft at the Everett plant were stranded while awaiting engine installation. 
The program was further delayed when one of the five test aircraft suffered serious damage during a landing attempt at Renton Municipal Airport, the site of Boeing's Renton factory. The incident happened on December 13, 1969, when a test aircraft was flown to Renton to have test equipment removed and a cabin installed. Pilot Ralph C. Cokely undershot the airport's short runway and the 747's right, outer landing gear was torn off and two engine nacelles were damaged. However, these difficulties did not prevent Boeing from taking a test aircraft to the 28th Paris Air Show in mid-1969, where it was displayed to the public for the first time. Finally, in December 1969, the 747 received its FAA airworthiness certificate, clearing it for introduction into service. The huge cost of developing the 747 and building the Everett factory meant that Boeing had to borrow heavily from a banking syndicate. During the final months before delivery of the first aircraft, the company had to repeatedly request additional funding to complete the project. Had this been refused, Boeing's survival would have been threatened. The firm's debt exceeded $2 billion, with the $1.2 billion owed to the banks setting a record for all companies. Allen later said, "It was really too large a project for us." Ultimately, the gamble succeeded, and Boeing held a monopoly in very large passenger aircraft production for many years. Entry into service On January 15, 1970, First Lady Pat Nixon christened Pan Am's first 747 at Dulles International Airport in the presence of Pan Am chairman Najeeb Halaby. Instead of champagne, red, white, and blue water was sprayed on the aircraft. The 747 entered service on January 22, 1970, on Pan Am's New York–London route; the flight had been planned for the evening of January 21, but engine overheating made the original aircraft (Clipper Young America, registration N735PA) unusable. Finding a substitute delayed the flight by more than six hours to the following day when Clipper Victor (registration N736PA) was used. The 747 enjoyed a fairly smooth introduction into service, overcoming concerns that some airports would not be able to accommodate an aircraft that large. Although technical problems occurred, they were relatively minor and quickly solved. Improved 747 versions After the initial , Boeing developed the , a higher maximum takeoff weight (MTOW) variant, and the (Short Range), with higher passenger capacity. Increased maximum takeoff weight allows aircraft to carry more fuel and have longer range. The model followed in 1971, featuring more powerful engines and a higher MTOW. Passenger, freighter and combination passenger-freighter versions of the were produced. The shortened 747SP (special performance) with a longer range was also developed, and entered service in 1976. The 747 line was further developed with the launch of the on June 11, 1980, followed by interest from Swissair a month later and the go-ahead for the project. The 300 series resulted from Boeing studies to increase the seating capacity of the 747, during which modifications such as fuselage plugs and extending the upper deck over the entire length of the fuselage were rejected. The first , completed in 1983, included a stretched upper deck, increased cruise speed, and increased seating capacity. The -300 variant was previously designated 747SUD for stretched upper deck, then 747-200 SUD, followed by 747EUD, before the 747-300 designation was used. 
Passenger, short range and combination freighter-passenger versions of the 300 series were produced. In 1985, development of the longer range 747-400 began. The variant had a new glass cockpit, which allowed for a cockpit crew of two instead of three, new engines, lighter construction materials, and a redesigned interior. Development costs soared, and production delays occurred as new technologies were incorporated at the request of airlines. Insufficient workforce experience and reliance on overtime contributed to early production problems on the 747-400. The -400 entered service in 1989. In 1991, a record-breaking 1,087 passengers were flown in a 747 during a covert operation to airlift Ethiopian Jews to Israel. Generally, the 747-400 held between 416 and 524 passengers. The 747 remained the heaviest commercial aircraft in regular service until the debut of the Antonov An-124 Ruslan in 1982; variants of the 747-400 surpassed the An-124's weight in 2000. The Antonov An-225 Mriya cargo transport, which debuted in 1988, remains the world's largest aircraft by several measures (including the most accepted measures of maximum takeoff weight and length); one aircraft has been completed and was in service until 2022. The Scaled Composites Stratolaunch is currently the largest aircraft by wingspan. Further developments After the arrival of the 747-400, several stretching schemes for the 747 were proposed. Boeing announced the larger 747-500X and -600X preliminary designs in 1996. The new variants would have cost more than US$5 billion to develop, and interest was not sufficient to launch the program. In 2000, Boeing offered the more modest 747X and 747X stretch derivatives as alternatives to the Airbus A3XX. However, the 747X family was unable to attract enough interest to enter production. A year later, Boeing switched from the 747X studies to pursue the Sonic Cruiser, and after the Sonic Cruiser program was put on hold, the 787 Dreamliner. Some of the ideas developed for the 747X were used on the 747-400ER, a longer range variant of the 747-400. After several variants were proposed but later abandoned, some industry observers became skeptical of new aircraft proposals from Boeing. However, in early 2004, Boeing announced tentative plans for the 747 Advanced that were eventually adopted. Similar in nature to the 747-X, the stretched 747 Advanced used technology from the 787 to modernize the design and its systems. The 747 remained the largest passenger airliner in service until the Airbus A380 began airline service in 2007. On November 14, 2005, Boeing announced it was launching the 747 Advanced as the Boeing 747-8. The last 747-400s were completed in 2009. Most orders of the 747-8 were for the freighter variant. On February 8, 2010, the 747-8 Freighter made its maiden flight. The first delivery of the 747-8 went to Cargolux in 2011. The first 747-8 Intercontinental passenger variant was delivered to Lufthansa on May 5, 2012. The 1,500th Boeing 747 was delivered in June 2014 to Lufthansa. In January 2016, Boeing stated it was reducing 747-8 production to six per year beginning in September 2016, incurring a $569 million post-tax charge against its fourth-quarter 2015 profits. At the end of 2015, the company had 20 orders outstanding. On January 29, 2016, Boeing announced that it had begun the preliminary work on the modifications to a commercial 747-8 for the next Air Force One presidential aircraft, then expected to be operational by 2020.
On July 12, 2016, Boeing announced that it had finalized an order from Volga-Dnepr Group for 20 747-8 freighters, valued at $7.58 billion (~$ in ) at list prices. Four aircraft were delivered beginning in 2012. Volga-Dnepr Group is the parent of three major Russian air-freight carriers – Volga-Dnepr Airlines, AirBridgeCargo Airlines and Atran Airlines. The new 747-8 freighters would replace AirBridgeCargo's current 747-400 aircraft and expand the airline's fleet and will be acquired through a mix of direct purchases and leasing over the next six years, Boeing said. End of production On July 27, 2016, in its quarterly report to the Securities and Exchange Commission, Boeing discussed the potential termination of 747 production due to insufficient demand and market for the aircraft. With a firm order backlog of 21 aircraft and a production rate of six per year, program accounting had been reduced to 1,555 aircraft. In October 2016, UPS Airlines ordered 14 -8Fs to add capacity, along with 14 options, which it took in February 2018 to increase the total to 28 -8Fs on order. The backlog then stood at 25 aircraft, though several of these were orders from airlines that no longer intended to take delivery. On July 2, 2020, it was reported that Boeing planned to end 747 production in 2022 upon delivery of the remaining jets on order to UPS and the Volga-Dnepr Group due to low demand. On July 29, 2020, Boeing confirmed that the final 747 would be delivered in 2022 as a result of "current market dynamics and outlook" stemming from the COVID-19 pandemic, according to CEO David Calhoun. The last aircraft, a 747-8F for Atlas Air registered N863GT, rolled off the production line on December 6, 2022, and was delivered on January 31, 2023. Boeing hosted an event at the Everett factory for thousands of workers as well as industry executives to commemorate the delivery. Design The Boeing 747 is a large, wide-body (two-aisle) airliner with four wing-mounted engines. Its wings have a high sweep angle of 37.5° for a fast, efficient cruise speed of Mach 0.84 to 0.88, depending on the variant. The sweep also reduces the wingspan, allowing the 747 to use existing hangars. Its seating capacity is over 366 with a 3–4–3 seat arrangement (a cross section of three seats, an aisle, four seats, another aisle, and three seats) in economy class and a 2–3–2 layout in first class on the main deck. The upper deck has a 3–3 seat arrangement in economy class and a 2–2 layout in first class. Raised above the main deck, the cockpit creates a hump. This raised cockpit allows front loading of cargo on freight variants. The upper deck behind the cockpit provides space for a lounge and/or extra seating. The "stretched upper deck" became available as an alternative on the variant and later as standard beginning on the 747-300. The upper deck was stretched more on the 747-8. The 747 cockpit roof section also has an escape hatch from which crew can exit during the events of an emergency if they cannot do so through the cabin. The 747's maximum takeoff weight ranges from for the -100 to for the -8. Its range has increased from on the -100 to on the -8I. The 747 has redundant structures along with four redundant hydraulic systems and four main landing gears each with four wheels; these provide a good spread of support on the ground and safety in case of tire blow-outs. The main gear are redundant so that landing can be performed on two opposing landing gears if the others are not functioning properly. 
The 747 also has split control surfaces and was designed with sophisticated triple-slotted flaps that minimize landing speeds and allow the 747 to use standard-length runways. For transportation of spare engines, the 747 can accommodate a non-functioning fifth-pod engine under the aircraft's port wing between the inner functioning engine and the fuselage. The fifth engine mount point was also used by Virgin Orbit's LauncherOne program to carry an orbital-class rocket to cruise altitude where it was deployed. Operational history After the aircraft's introduction with Pan Am in 1970, other airlines that had bought the 747 to stay competitive began to put their own 747s into service. Boeing estimated that half of the early 747 sales were to airlines desiring the aircraft's long range rather than its payload capacity. While the 747 had the lowest potential operating cost per seat, this could only be achieved when the aircraft was fully loaded; costs per seat increased rapidly as occupancy declined. A moderately loaded 747, one with only 70 percent of its seats occupied, used more than 95 percent of the fuel needed by a fully occupied 747. Nonetheless, many flag-carriers purchased the 747 due to its prestige "even if it made no sense economically" to operate. During the 1970s and 1980s, over 30 regularly scheduled 747s could often be seen at John F. Kennedy International Airport. The recession of 1969–1970, despite having been characterized as relatively mild, greatly affected Boeing. For the year and a half after September 1970, it only sold two 747s in the world, both to Irish flag carrier Aer Lingus. No 747s were sold to any American carrier for almost three years. When economic problems in the US and other countries after the 1973 oil crisis led to reduced passenger traffic, several airlines found they did not have enough passengers to fly the 747 economically, and they replaced them with the smaller and recently introduced McDonnell Douglas DC-10 and Lockheed L-1011 TriStar trijet wide bodies (and later the 767 and Airbus A300/A310 twinjets). Having tried replacing coach seats on its 747s with piano bars in an attempt to attract more customers, American Airlines eventually relegated its 747s to cargo service and in 1983 exchanged them with Pan Am for smaller aircraft; Delta Air Lines also removed its 747s from service after several years. Later, Delta acquired 747s again in 2008 as part of its merger with Northwest Airlines, although it retired the Boeing 747-400 fleet in December 2017. International flights bypassing traditional hub airports and landing at smaller cities became more common throughout the 1980s, thus eroding the 747's original market. Many international carriers continued to use the 747 on Pacific routes. In Japan, 747s on domestic routes were configured to carry nearly the maximum passenger capacity. Variants The 747-100, with a range of 4,620 nautical miles (8,556 km), was the original variant launched in 1966. The 747-200 soon followed, with its launch in 1968. The 747-300 was launched in 1980 and was followed by the 747-400 in 1985. Ultimately, the 747-8 was announced in 2005. Several versions of each variant have been produced, and many of the early variants were in production simultaneously. The International Civil Aviation Organization (ICAO) classifies variants using a shortened code formed by combining the model number and the variant designator (e.g. "B741" for all -100 models).
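For readers who want to script against these designators, the rule just described can be captured in a few lines. The mapping below is a sketch covering only the main production variants; special models such as the 747SP, 747SR and the Dreamlifter carry their own ICAO designators that do not follow this simple pattern.

```python
# Illustrative only: ICAO type designators for the main 747 production variants,
# formed as "B74" plus the variant digit, per the rule described above.
ICAO_DESIGNATORS = {
    "747-100": "B741",
    "747-200": "B742",
    "747-300": "B743",
    "747-400": "B744",
    "747-8": "B748",
}

def icao_code(variant: str) -> str:
    # Raises KeyError for special models (SP, SR, LCF) not covered by the rule.
    return ICAO_DESIGNATORS[variant]

print(icao_code("747-100"))  # -> B741
```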
747-100 The first 747-100s were built with six upper deck windows (three per side) to accommodate upstairs lounge areas. Later, as airlines began to use the upper deck for premium passenger seating instead of lounge space, Boeing offered an upper deck with ten windows on either side as an option. Some early -100s were retrofitted with the new configuration. The -100 was equipped with Pratt & Whitney JT9D-3A engines. No freighter version of this model was developed, but many 747-100s were converted into freighters as 747-100(SF). The first 747-100(SF) was delivered to Flying Tiger Line in 1974. A total of 168 747-100s were built; 167 were delivered to customers, while Boeing kept the prototype, City of Everett. In 1972, its unit cost was US$24M (M today). 747SR Responding to requests from Japanese airlines for a high-capacity aircraft to serve domestic routes between major cities, Boeing developed the 747SR as a short-range version of the with lower fuel capacity and greater payload capability. With increased economy class seating, up to 498 passengers could be carried in early versions and up to 550 in later models. The 747SR had an economic design life objective of 52,000 flights during 20 years of operation, compared to 24,600 flights in 20 years for the standard 747. The initial 747SR model, the -100SR, had a strengthened body structure and landing gear to accommodate the added stress accumulated from a greater number of takeoffs and landings. Extra structural support was built into the wings, fuselage, and the landing gear along with a 20% reduction in fuel capacity. The initial order for the -100SR – four aircraft for Japan Air Lines (JAL, later Japan Airlines) – was announced on October 30, 1972; rollout occurred on August 3, 1973, and the first flight took place on August 31, 1973. The type was certified by the FAA on September 26, 1973, with the first delivery on the same day. The -100SR entered service with JAL, the type's sole customer, on October 7, 1973, and typically operated flights within Japan. Seven -100SRs were built between 1973 and 1975, each with a MTOW and Pratt & Whitney JT9D-7A engines derated to of thrust. Following the -100SR, Boeing produced the -100BSR, a 747SR variant with increased takeoff weight capability. Debuting in 1978, the -100BSR also incorporated structural modifications for a high cycle-to-flying hour ratio; a related standard -100B model debuted in 1979. The -100BSR first flew on November 3, 1978, with first delivery to All Nippon Airways (ANA) on December 21, 1978. A total of 20 -100BSRs were produced for ANA and JAL. The -100BSR had a MTOW and was powered by the same JT9D-7A or General Electric CF6-45 engines used on the -100SR. ANA operated this variant on domestic Japanese routes with 455 or 456 seats until retiring its last aircraft in March 2006. In 1986, two -100BSR SUD models, featuring the stretched upper deck (SUD) of the -300, were produced for JAL. The type's maiden flight occurred on February 26, 1986, with FAA certification and first delivery on March 24, 1986. JAL operated the -100BSR SUD with 563 seats on domestic routes until their retirement in the third quarter of 2006. While only two -100BSR SUDs were produced, in theory, standard -100Bs can be modified to the SUD certification. Overall, 29 Boeing 747SRs were built. 747-100B The 747-100B model was developed from the -100SR, using its stronger airframe and landing gear design. 
The type had an increased fuel capacity of , allowing for a range with a typical 452-passenger payload, and an increased MTOW of was offered. The first -100B order, one aircraft for Iran Air, was announced on June 1, 1978. This version first flew on June 20, 1979, received FAA certification on August 1, 1979, and was delivered the next day. Nine -100Bs were built, one for Iran Air and eight for Saudi Arabian Airlines. Unlike the original -100, the -100B was offered with Pratt & Whitney JT9D-7A, CF6-50, or Rolls-Royce RB211-524 engines. However, only RB211-524 (Saudia) and JT9D-7A (Iran Air) engines were ordered. The last 747-100B, EP-IAM was retired by Iran Air in 2014, the last commercial operator of the 747-100 and -100B. 747SP The development of the 747SP stemmed from a joint request between Pan American World Airways and Iran Air, who were looking for a high-capacity airliner with enough range to cover Pan Am's New York–Middle Eastern routes and Iran Air's planned Tehran–New York route. The Tehran–New York route, when launched, was the longest non-stop commercial flight in the world. The 747SP is shorter than the . Fuselage sections were eliminated fore and aft of the wing, and the center section of the fuselage was redesigned to fit mating fuselage sections. The SP's flaps used a simplified single-slotted configuration. The 747SP, compared to earlier variants, had a tapering of the aft upper fuselage into the empennage, a double-hinged rudder, and longer vertical and horizontal stabilizers. Power was provided by Pratt & Whitney JT9D-7(A/F/J/FW) or Rolls-Royce RB211-524 engines. The 747SP was granted a type certificate on February 4, 1976, and entered service with launch customers Pan Am and Iran Air that same year. The aircraft was chosen by airlines wishing to serve major airports with short runways. A total of 45 747SPs were built, with the 44th 747SP delivered on August 30, 1982. In 1987, Boeing re-opened the 747SP production line after five years to build one last 747SP for an order by the United Arab Emirates government. In addition to airline use, one 747SP was modified for the NASA/German Aerospace Center SOFIA experiment. Iran Air is the last civil operator of the type; its final 747-SP (EP-IAC) was retired in June 2016. 747-200 While the 747-100 powered by Pratt & Whitney JT9D-3A engines offered enough payload and range for medium-haul operations, it was marginal for long-haul route sectors. The demand for longer range aircraft with increased payload quickly led to the improved -200, which featured more powerful engines, increased MTOW, and greater range than the -100. A few early -200s retained the three-window configuration of the -100 on the upper deck, but most were built with a ten-window configuration on each side. The 747-200 was produced in passenger (-200B), freighter (-200F), convertible (-200C), and combi (-200M) versions. The 747-200B was the basic passenger version, with increased fuel capacity and more powerful engines; it entered service in February 1971. In its first three years of production, the -200 was equipped with Pratt & Whitney JT9D-7 engines (initially the only engine available). Range with a full passenger load started at over and increased to with later engines. Most -200Bs had an internally stretched upper deck, allowing for up to 16 passenger seats. The freighter model, the 747-200F, had a hinged nose cargo door and could be fitted with an optional side cargo door, and had a capacity of 105 tons (95.3 tonnes) and an MTOW of up to . 
It entered service in 1972 with Lufthansa. The convertible version, the 747-200C, could be converted between a passenger and a freighter or used in mixed configurations, and featured removable seats and a nose cargo door. The -200C could also be outfitted with an optional side cargo door on the main deck. The combi aircraft model, the 747-200M (originally designated 747-200BC), could carry freight in the rear section of the main deck via a side cargo door. A removable partition on the main deck separated the cargo area at the rear from the passengers at the front. The -200M could carry up to 238 passengers in a three-class configuration with cargo carried on the main deck. The model was also known as the 747-200 Combi. As on the -100, a stretched upper deck (SUD) modification was later offered. A total of 10 747-200s operated by KLM were converted. Union de Transports Aériens (UTA) also had two aircraft converted. After launching the -200 with Pratt & Whitney JT9D-7 engines, on August 1, 1972, Boeing announced that it had reached an agreement with General Electric to certify the 747 with CF6-50 series engines to increase the aircraft's market potential. Rolls-Royce followed 747 engine production with a launch order from British Airways for four aircraft. The option of RB211-524B engines was announced on June 17, 1975. The -200 was the first 747 to provide a choice of powerplant from the three major engine manufacturers. In 1976, its unit cost was US$39M (M today). A total of 393 of the 747-200 versions had been built when production ended in 1991. Of these, 225 were -200B, 73 were -200F, 13 were -200C, 78 were -200M, and 4 were military. Iran Air retired the last passenger in May 2016, 36 years after it was delivered. , five 747-200s remain in service as freighters. 747-300 The 747-300 features a longer upper deck than the -200. The stretched upper deck (SUD) has two emergency exit doors and is the most visible difference between the -300 and previous models. After being made standard on the 747-300, the SUD was offered as a retrofit, and as an option to earlier variants still in production. An example of the retrofit were two UTA -200 Combis converted in 1986, and an example of the option were two brand-new JAL -100 aircraft (designated -100BSR SUD), the first of which was delivered on March 24, 1986. The 747-300 introduced a new straight stairway to the upper deck, instead of a spiral staircase on earlier variants, which creates room above and below for more seats. Minor aerodynamic changes allowed the -300's cruise speed to reach Mach 0.85 compared with Mach 0.84 on the -200 and -100 models, while retaining the same takeoff weight. The -300 could be equipped with the same Pratt & Whitney and Rolls-Royce powerplants as on the -200, as well as updated General Electric CF6-80C2B1 engines. Swissair placed the first order for the 747-300 on June 11, 1980. The variant revived the 747-300 designation, which had been previously used on a design study that did not reach production. The 747-300 first flew on October 5, 1982, and the type's first delivery went to Swissair on March 23, 1983. In 1982, its unit cost was US$83M (M today). Besides the passenger model, two other versions (-300M, -300SR) were produced. The 747-300M features cargo capacity on the rear portion of the main deck, similar to the -200M, but with the stretched upper deck it can carry more passengers. The 747-300SR, a short range, high-capacity domestic model, was produced for Japanese markets with a maximum seating for 584.
No production freighter version of the 747-300 was built, but Boeing began modifications of used passenger -300 models into freighters in 2000. A total of 81 series aircraft were delivered, 56 for passenger use, 21 -300M and 4 -300SR versions. In 1985, just two years after the -300 entered service, the type was superseded by the announcement of the more advanced 747-400. The last 747-300 was delivered in September 1990 to Sabena. While some -300 customers continued operating the type, several large carriers replaced their 747-300s with 747-400s. Air France, Air India, Japan Airlines, Pakistan International Airlines, and Qantas were some of the last major carriers to operate the . On December 29, 2008, Qantas flew its last scheduled 747-300 service, operating from Melbourne to Los Angeles via Auckland. In July 2015, Pakistan International Airlines retired their final 747-300 after 30 years of service. Mahan Air was the last passenger operator of the Boeing 747-300. In 2022, their last 747-300M was leased by Emtrasur Cargo. The 747-300M was later seized by the US Department of Justice and scrapped in 2024. As of 2024, TransAVIAExport, a Belarusian cargo airline operates one Boeing 747-300F. As of 2024, a former Saudia 747-300 is used for VVIP transport, operated by the Saudi Arabian Government. 747-400 The 747-400 is an improved model with increased range. It has wingtip extensions of and winglets of , which improve the type's fuel efficiency by four percent compared to previous 747 versions. The 747-400 introduced a new glass cockpit designed for a flight crew of two instead of three, with a reduction in the number of dials, gauges and knobs from 971 to 365 through the use of electronics. The type also features tail fuel tanks, revised engines, and a new interior. The longer range has been used by some airlines to bypass traditional fuel stops, such as Anchorage. A 747-400 loaded with of fuel flying consumes an average of . Powerplants include the Pratt & Whitney PW4062, General Electric CF6-80C2, and Rolls-Royce RB211-524. As a result of the Boeing 767 development overlapping with the 747-400's development, both aircraft can use the same three powerplants and are even interchangeable between the two aircraft models. The was offered in passenger (-400), freighter (-400F), combi (-400M), domestic (-400D), extended range passenger (-400ER), and extended range freighter (-400ERF) versions. Passenger versions retain the same upper deck as the , while the freighter version does not have an extended upper deck. The 747-400D was designed for short-range operations with maximum seating for 624. So winglets were not included though they can be retrofitted. Cruising speed is up to Mach 0.855 on different versions of the . The passenger version first entered service in February 1989 with launch customer Northwest Airlines on the Minneapolis to Phoenix route. The combi version entered service in September 1989 with KLM, while the freighter version entered service in November 1993 with Cargolux. The 747-400ERF entered service with Air France in October 2002, while the 747-400ER entered service with Qantas, its sole customer, in November 2002. In January 2004, Boeing and Cathay Pacific launched the Boeing 747-400 Special Freighter program, later referred to as the Boeing Converted Freighter (BCF), to modify passenger 747-400s for cargo use. The first 747-400BCF was redelivered in December 2005. In March 2007, Boeing announced that it had no plans to produce further passenger versions of the -400. 
However, orders for 36 -400F and -400ERF freighters were already in place at the time of the announcement. The last passenger version of the 747-400 was delivered in April 2005 to China Airlines. Some of the last built 747-400s were delivered with Dreamliner livery along with the modern Signature interior from the Boeing 777. A total of 694 of the series aircraft were delivered. At various times, the largest 747-400 operator has included Singapore Airlines, Japan Airlines, and British Airways. , 331 Boeing 747-400s were in service; there were only 10 Boeing 747-400s in passenger service as of September 2021. 747 LCF Dreamlifter The 747-400 Dreamlifter (originally called the 747 Large Cargo Freighter or LCF) is a Boeing-designed modification of existing 747-400s into a larger outsize cargo freighter configuration to ferry 787 Dreamliner sub-assemblies. Evergreen Aviation Technologies Corporation of Taiwan was contracted to complete modifications of 747-400s into Dreamlifters in Taoyuan. The aircraft flew for the first time on September 9, 2006, in a test flight. Modification of four aircraft was completed by February 2010. The Dreamlifters have been placed into service transporting sub-assemblies for the 787 program to the Boeing plant in Everett, Washington, for final assembly. The aircraft is certified to carry only essential crew with no passengers. 747-8 Boeing announced a new 747 variant, the , on November 14, 2005. Referred to as the 747 Advanced prior to its launch, Boeing selected the designation 747-8 to show the connection with the Boeing 787 Dreamliner, as the aircraft would use technology and the General Electric GEnx engines from the 787 to modernize the design and its systems. The variant is designed to be quieter, more economical, and more environmentally friendly. The 747-8's fuselage is lengthened from to , marking the first stretch variant of the aircraft. The 747-8 Freighter, or 747-8F, has 16% more payload capacity than its predecessor, allowing it to carry seven more standard air cargo containers, with a maximum payload capacity of of cargo. As on previous 747 freighters, the 747-8F features a flip up nose-door, a side-door on the main deck, and a side-door on the lower deck ("belly") to aid loading and unloading. The 747-8F made its maiden flight on February 8, 2010. The variant received its amended type certificate jointly from the FAA and the European Aviation Safety Agency (EASA) on August 19, 2011. The -8F was first delivered to Cargolux on October 12, 2011. The passenger version, named 747-8 Intercontinental or 747-8I, is designed to carry up to 467 passengers in a 3-class configuration and fly more than at Mach 0.855. As a derivative of the already common , the 747-8I has the economic benefit of similar training and interchangeable parts. The type's first test flight occurred on March 20, 2011. The 747-8 has surpassed the Airbus A340-600 as the world's longest airliner, a record it would hold until the 777X, which first flew in 2020. The first -8I was delivered in May 2012 to Lufthansa. The 747-8 has received 155 total orders, including 106 for the -8F and 47 for the -8I . The final 747-8F was delivered to Atlas Air on January 31, 2023, marking the end of the production of the Boeing 747 series. Government, military, and other variants VC-25 – This aircraft is the U.S. Air Force very important person (VIP) version of the 747-200B. The U.S. Air Force operates two of them in VIP configuration as the VC-25A. 
Tail numbers 28000 and 29000 are popularly known as Air Force One, which is technically the air-traffic call sign for any United States Air Force aircraft carrying the U.S. president. Partially completed aircraft from Everett, Washington, were flown to Wichita, Kansas, for final outfitting by Boeing Military Airplane Company. Two new aircraft, based around the 747-8, are being procured and will be designated as VC-25B. E-4B – This is an airborne command post designed for use in nuclear war. Three E-4As, based on the 747-200B, were built; a fourth aircraft, with more powerful engines and upgraded systems, was delivered in 1979 as an E-4B, and the three E-4As were later upgraded to this standard. Formerly known as the National Emergency Airborne Command Post (referred to colloquially as "Kneecap"), this type is now referred to as the National Airborne Operations Center (NAOC). Survivable Airborne Operations Center – In April 2024, Sierra Nevada Corporation was awarded a contract to develop and build the Survivable Airborne Operations Center aircraft to replace the Boeing E-4 NAOC. Five 747-8Is were purchased from Korean Air for conversion, with the contract calling for nine in total. YAL-1 – This was the experimental Airborne Laser, a planned component of the U.S. National Missile Defense. Shuttle Carrier Aircraft (SCA) – Two 747s were modified to carry the Space Shuttle orbiter. The first was a 747-100 (N905NA), and the other was a 747-100SR (N911NA). The first SCA carried the prototype Enterprise during the Approach and Landing Tests in the late 1970s. The two SCA later carried all five operational Space Shuttle orbiters. C-33 – This aircraft was a proposed U.S. military version of the 747-400F intended to augment the C-17 fleet. The plan was canceled in favor of additional C-17s. KC-25/33 – A proposed 747-200F was also adapted as an aerial refueling tanker and was bid against the DC-10-30 during the 1970s Advanced Cargo Transport Aircraft (ACTA) program that produced the KC-10 Extender. Before the 1979 Iranian Revolution, Iran bought four 747-100 aircraft with air-refueling boom conversions to support its fleet of F-4 Phantoms. There is a report of the Iranians using a 747 tanker in the H-3 airstrike during the Iran–Iraq War. It is unknown whether these aircraft remain usable as tankers. Since then there have been proposals to use a 747-400 for that role. 747F Airlifter – Proposed US military transport version of the 747-200F intended as an alternative to further purchases of the C-5 Galaxy. This 747 would have had a special nose jack to lower the sill height for the nose door. The system was tested in 1980 on a Flying Tiger Line 747-200F. 747 CMCA – This "Cruise Missile Carrier Aircraft" variant was considered by the U.S. Air Force during the development of the B-1 Lancer strategic bomber. It would have been equipped with 50 to 100 AGM-86 ALCM cruise missiles on rotary launchers. This plan was abandoned in favor of more conventional strategic bombers. MC-747 – Two separate studies, the first by Boeing in the 1970s and the second by ATK and BAE Systems in 2005, examined carrying missiles aboard a 747: the first would have horizontally stored up to four Peacekeeper ICBMs or seven Minutemen above bomb bay-like doors, and the second would have vertically stored twelve Minutemen or 32 JDAM-equipped conventional missiles for launch from in situ tubes. 747 AAC – A Boeing study under contract from the USAF for an "airborne aircraft carrier" for up to 10 Boeing Model 985-121 "microfighters" with the ability to launch, retrieve, re-arm, and refuel.
Boeing believed that the scheme would be able to deliver a flexible and fast carrier platform with global reach, particularly where other bases were not available. Modified versions of the 747-200 and Lockheed C-5A were considered as the base aircraft. The concept, which included a complementary 747 AWACS version with two reconnaissance "microfighters", was considered technically feasible in 1973. Evergreen 747 Supertanker – A Boeing 747-200 modified as an aerial application platform for fire fighting using of firefighting chemicals. Stratospheric Observatory for Infrared Astronomy (SOFIA) – A former Pan Am Boeing 747SP modified to carry a large infrared-sensitive telescope, in a joint venture of NASA and DLR. High altitudes are needed for infrared astronomy, to rise above infrared-absorbing water vapor in the atmosphere. A number of other governments also use the 747 as a VIP transport, including Bahrain, Brunei, India, Iran, Japan, Kuwait, Oman, Pakistan, Qatar, Saudi Arabia and United Arab Emirates. Several Boeing 747-8s have been ordered by Boeing Business Jet for conversion to VIP transports for several unidentified customers. Proposed variants Boeing has studied a number of 747 variants that have not gone beyond the concept stage. 747 trijet During the late 1960s and early 1970s, Boeing studied the development of a shorter 747 with three engines, to compete with the smaller Lockheed L-1011 TriStar and McDonnell Douglas DC-10. The center engine would have been fitted in the tail with an S-duct intake similar to the L-1011's. Overall, the 747 trijet would have had more payload, range, and passenger capacity than either of the two other aircraft. However, engineering studies showed that a major redesign of the 747 wing would be necessary. Maintaining the same 747 handling characteristics would be important to minimize pilot retraining. Boeing decided instead to pursue a shortened four-engine 747, resulting in the 747SP. 747-500 In January 1986, Boeing outlined preliminary studies to build a larger, ultra-long haul version named the , which would enter service in the mid- to late-1990s. The aircraft derivative would use engines evolved from unducted fan (UDF) (propfan) technology by General Electric, but the engines would have shrouds, sport a bypass ratio of 15–20, and have a propfan diameter of . The aircraft would be stretched (including the upper deck section) to a capacity of 500 seats, have a new wing to reduce drag, cruise at a faster speed to reduce flight times, and have a range of at least , which would allow airlines to fly nonstop between London, England and Sydney, Australia. 747 ASB Boeing announced the 747 ASB (Advanced Short Body) in 1986 as a response to the Airbus A340 and the McDonnell Douglas MD-11. This aircraft design would have combined the advanced technology used on the 747-400 with the foreshortened 747SP fuselage. The aircraft was to carry 295 passengers over a range of . However, airlines were not interested in the project and it was canceled later that year. 747-500X, -600X, and -700X Boeing announced the 747-500X and -600X at the 1996 Farnborough Airshow. The proposed models would have combined the 747's fuselage with a new wing spanning derived from the 777. Other changes included adding more powerful engines and increasing the number of tires from two to four on the nose landing gear and from 16 to 20 on the main landing gear. 
The 747-500X concept featured a fuselage length increased by to , and the aircraft was to carry 462 passengers over a range up to , with a gross weight of over 1.0 Mlb (450 tonnes). The 747-600X concept featured a greater stretch to with seating for 548 passengers, a range of up to , and a gross weight of 1.2 Mlb (540 tonnes). A third study concept, the 747-700X, would have combined the wing of the 747-600X with a widened fuselage, allowing it to carry 650 passengers over the same range as a . The cost of the changes from previous 747 models, in particular the new wing for the 747-500X and -600X, was estimated to be more than US$5 billion. Boeing was not able to attract enough interest to launch the aircraft. 747X and 747X Stretch As Airbus progressed with its A3XX study, Boeing offered a 747 derivative as an alternative in 2000; a more modest proposal than the previous -500X and -600X with the 747's overall wing design and a new segment at the root, increasing the span to . Power would have been supplied by either the Engine Alliance GP7172 or the Rolls-Royce Trent 600, which were also proposed for the 767-400ERX. A new flight deck based on the 777's would be used. The 747X aircraft was to carry 430 passengers over ranges of up to . The 747X Stretch would be extended to long, allowing it to carry 500 passengers over ranges of up to . Both would feature an interior based on the 777. Freighter versions of the 747X and 747X Stretch were also studied. Like its predecessor, the 747X family was unable to garner enough interest to justify production, and it was shelved along with the 767-400ERX in March 2001, when Boeing announced the Sonic Cruiser concept. Though the 747X design was less costly than the 747-500X and -600X, it was criticized for not offering a sufficient advance from the existing . The 747X did not make it beyond the drawing board, but the 747-400X being developed concurrently moved into production to become the 747-400ER. 747-400XQLR After the end of the 747X program, Boeing continued to study improvements that could be made to the 747. The 747-400XQLR (Quiet Long Range) was meant to have an increased range of , with improvements to boost efficiency and reduce noise. Improvements studied included raked wingtips similar to those used on the 767-400ER and a sawtooth engine nacelle for noise reduction. Although the 747-400XQLR did not move to production, many of its features were used for the 747 Advanced, which was launched as the 747-8 in 2005. Operators In 1979, Qantas became the first airline in the world to operate an all-Boeing 747 fleet, with seventeen aircraft. , there were 462 Boeing 747s in airline service, with Atlas Air and British Airways being the largest operators with 33 747-400s each. The last US passenger Boeing 747 was retired from Delta Air Lines in December 2017; the type had flown for almost every major American carrier since its 1970 introduction. Delta flew three of its last four aircraft on a farewell tour, from Seattle to Atlanta on December 19, then to Los Angeles and Minneapolis/St Paul on December 20. With IATA forecasting air freight growth of 4% to 5% in 2018, fueled by booming trade in time-sensitive goods from smartphones to fresh flowers, demand for freighters remained strong even as passenger 747s were phased out. Of the 1,544 produced, 890 are retired; , a small subset of those intended to be parted out instead received $3 million D-checks and returned to service. 
Young -400s were sold for 320 million yuan ($50 million) and Boeing stopped converting freighters, which used to cost nearly $30 million. This comeback helped the airframer's financing arm, Boeing Capital, shrink its exposure to the 747-8 from $1.07 billion in 2017 to $481 million in 2018. In July 2020, British Airways announced that it was retiring its 747 fleet. The final British Airways 747 flights departed London Heathrow on October 8, 2020. Orders and deliveries Boeing 747 orders and deliveries (cumulative, by year): Orders and deliveries through to the end of February 2023. Model summary Accidents and incidents , the 747 has been involved in 173 aviation accidents and incidents, including 64 hull losses (52 in-flight accidents), causing fatalities. There have been several hijackings of Boeing 747s, such as Pan Am Flight 73, a 747-100 hijacked by four terrorists, causing 20 deaths. The 747 also fell victim to three mid-air bombings, two of which resulted in fatalities and hull losses: Air India Flight 182 in 1985 and Pan Am Flight 103 in 1988. Few crashes have been attributed to 747 design flaws. The Tenerife airport disaster resulted from pilot error and communications failure, while the Japan Air Lines Flight 123 and China Airlines Flight 611 crashes stemmed from improper aircraft repair following a tailstrike. United Airlines Flight 811, which suffered an explosive decompression mid-flight on February 24, 1989, led the National Transportation Safety Board (NTSB) to issue a recommendation that the Boeing 747-100 and 747-200 cargo doors similar to those on the Flight 811 aircraft be modified to those featured on the Boeing . Korean Air Lines Flight 007 was shot down by a Soviet fighter aircraft in 1983 after it had strayed into Soviet territory, causing US President Ronald Reagan to authorize the then-strictly-military global positioning system (GPS) for civilian use. South African Airways Flight 295, a 747-200M Combi, which crashed on 28 November 1987 due to an inflight fire, led to a mandate to add fire-suppression systems on board Combi variants. The lack of adequate warning systems combined with flight crew error led to the preventable crash of Lufthansa Flight 540 in November 1974, which was the first fatal crash of a 747, while an instrument malfunction leading to disorientation of the crew led to the crash of Air India Flight 855 on New Year's Day in 1978. TWA Flight 800, a 747-100 that exploded in mid-air on July 17, 1996, was probably destroyed by ignition of the fuel vapors inside the fuel tank, most likely caused by sparking from old and cracked electrical wires where voltage levels exceeded the maximum limit. This finding led the FAA to adopt a rule in July 2008 requiring installation of an inerting system in the center fuel tank of most large aircraft, after years of research into solutions. At the time, the new safety system was expected to cost US$100,000 to $450,000 per aircraft and weigh approximately . Two 747-200F freighters – China Airlines Flight 358 in December 1991 and El Al Flight 1862 in October 1992 – crashed after the fuse pins for an engine (no. 3) broke off shortly after take-off due to metal fatigue; instead of simply dropping away from the wing, the engine knocked off the adjacent engine and damaged the wing. Following these crashes, Boeing issued a directive to examine and replace all fuse pins found to be cracked. 
Other incidents did not result in hull losses, but the aircraft involved were damaged, repaired, and put back into service. On July 30, 1971, Pan Am Flight 845 struck approach lighting system structures while taking off from San Francisco for Tokyo, Japan; the plane dumped fuel and returned for landing. The cause was pilot error involving improper calculations, and the plane was repaired and returned to service. On June 24, 1982, British Airways Flight 9, a Boeing 747-200, registration G-BDXH, flew through a cloud of volcanic ash and dust from the eruption of Mount Galunggung, suffering a flameout of all four engines; the crew restarted the engines and successfully landed at Jakarta. The volcanic ash sandblasted the windscreens, damaged the engines, and stripped off paint; the plane's engines were replaced, and it was repaired and returned to service. On December 11, 1994, on board Philippine Airlines Flight 434 from Manila to Tokyo via Cebu, a bomb exploded under a seat, killing one passenger; the plane landed safely at Okinawa despite damage to its controls. The bomber, Ramzi Yousef, was caught on 7 February 1995 in Islamabad, Pakistan, and the plane was repaired but later converted for cargo use. Preserved aircraft Aircraft on display As increasing numbers of "classic" 747-100 and series aircraft have been retired, some have found other uses, such as museum displays. Some older 747-300s and 747-400s were later added to museum collections. 20235/001 – 747-121 registration N7470 City of Everett, the first 747 and prototype, is at the Museum of Flight, Seattle, Washington. 19651/025 – 747-121 registration N747GE at the Pima Air & Space Museum, Tucson, Arizona, US. 19778/027 – 747-151 registration N601US nose at the National Air and Space Museum, Washington, D.C. 19661/070 – 747-121(SF) registration N681UP preserved at a plaza on Jungong Road, Shanghai, China. 19896/072 – 747-132(SF) registration N481EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US. 20107/086 – 747-123 registration N905NA, a NASA Shuttle Carrier Aircraft, at the Johnson Space Center, Houston, Texas, US. 20269/150 – 747-136 registration G-AWNG nose at Hiller Aviation Museum, San Carlos, California. 20239/160 – 747-244B registration ZS-SAN nicknamed Lebombo, at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa. 20541/200 – 747-128 registration F-BPVJ at Musée de l'Air et de l'Espace, Paris, France. 20770/213 – 747-2B5B registration HL7463 at Jeongseok Aviation Center, Jeju, South Korea. 20713/219 – 747-212B(SF) registration N482EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US. 20825/223 – 747-200 registration SX-OAB at the site of Ellinikon International Airport, Athens, Greece. After over 20 years sitting at the closed airport, it was moved to a permanent location within the boundaries of the airport and put on display as part of the ongoing regeneration work. 21134/288 – 747SP-44 registration ZS-SPC at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa. 21549/336 – 747-206B registration PH-BUK at the Aviodrome, Lelystad, Netherlands. 21588/342 – 747-230B(M) registration D-ABYM preserved at Technik Museum Speyer, Germany. 21650/354 – 747-2R7F/SCD registration G-MKGA preserved at Cotswold Airport, UK as an event space. 22145/410 – 747-238B registration VH-EBQ at the Qantas Founders Outback Museum, Longreach, Queensland, Australia. 
21942/471 – 747-212B registration N642NW nose at the Museum of Aeronautical Science in Narita, Japan, near Narita International Airport. 22455/515 – 747-256BM registration EC-DLD Lope de Vega nose at the National Museum of Science and Technology, A Coruña, Spain. 23223/606 – 747-338 registration VH-EBU at Melbourne Avalon Airport, Avalon, Victoria, Australia. VH-EBU is an ex-Qantas airframe formerly decorated in the Nalanji Dreaming livery, currently in use as a training aircraft and film set. 23719/696 – 747-451 registration N661US at the Delta Flight Museum, Atlanta, Georgia, US. This particular plane was the first in service, as well as the prototype. 24354/731 – 747-438 registration VH-OJA at Shellharbour Airport, Albion Park Rail, New South Wales, Australia. 21441/306 - SOFIA - 747SP-21 registration N747NA at Pima Air and Space Museum in Tucson, Arizona, US. Former Pan Am and United Airlines 747SP bought by NASA and converted into a flying telescope, for astronomy purposes. Named Clipper Lindbergh. Other uses Upon its retirement from service, the 747 which was number two in the production line was dismantled and shipped to Hopyeong, Namyangju, Gyeonggi-do, South Korea where it was re-assembled, repainted in a livery similar to that of Air Force One and converted into a restaurant. Originally flown commercially by Pan Am as N747PA, Clipper Juan T. Trippe, and repaired for service following a tailstrike, it stayed with the airline until its bankruptcy. The restaurant closed by 2009, and the aircraft was scrapped in 2010. A former British Airways 747-200B, G-BDXJ, is parked at the Dunsfold Aerodrome in Surrey, England and has been used as a movie set for productions such as the 2006 James Bond film, Casino Royale. The airplane also appears frequently in the television series Top Gear, which is filmed at Dunsfold. The Jumbo Stay hostel, using a converted 747-200 formerly operated by Singapore Airlines and registered as 9V-SQE, opened at Arlanda Airport, Stockholm in January 2009. A former Pakistan International Airlines 747-300 was converted into a restaurant by Pakistan's Airports Security Force in 2017. It is located at Jinnah International Airport, Karachi. The wings of a 747 have been repurposed as roofs of a house in Malibu, California. In 2023, a Boeing 747-412, retired from Lion Air, was turned into a steak restaurant in Bekasi, Indonesia. The aircraft had been sitting since 2018 but the construction of the restaurant was delayed due to the COVID-19 pandemic. Specifications Cultural impact Following its debut, the 747 rapidly achieved iconic status. The aircraft entered the cultural lexicon as the original Jumbo Jet, a term coined by the aviation media to describe its size, and was also nicknamed Queen of the Skies. Test pilot David P. Davies described it as "a most impressive aeroplane with a number of exceptionally fine qualities", and praised its flight control system as "truly outstanding" because of its redundancy. Appearing in over 300 film productions, the 747 is one of the most widely depicted civilian aircraft and is considered by many as one of the most iconic in film history. It has appeared in film productions such as the disaster films Airport 1975 and Airport '77, as well as Air Force One, Die Hard 2, and Executive Decision.
Technology
Specific aircraft_2
null
4650
https://en.wikipedia.org/wiki/Black%20hole
Black hole
A black hole is a region of spacetime wherein gravity is so strong that no matter or electromagnetic energy (e.g. light) can escape it. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. A black hole has a great effect on the fate and circumstances of an object crossing it, but it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Any matter that falls toward a black hole can form an external accretion disk heated by friction, forming quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being "swallowed." If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars rather than the modern model of stars with extraordinary density. Michell's idea appeared in a letter published in November 1784. 
Michell's simplistic calculations assumed such a body might have the same density as the Sun, and concluded that one would form when a star's diameter exceeds the Sun's by a factor of 500, and its surface escape velocity exceeds the usual speed of light. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century; if light were a wave rather than a particle, it was unclear what influence, if any, gravity would have on escaping light waves. General relativity In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates. In 1933, Georges Lemaître realised that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington commented on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because "a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere)." In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at ) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable. 
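Michell's estimate quoted above can be checked with a few lines of arithmetic. The sketch below is illustrative only and not drawn from this article; it simply uses present-day values of G, c, and the Sun's mean density and radius to find the radius at which a body of solar density would have a Newtonian escape velocity equal to the speed of light.

```python
import math

# Physical constants and solar values (SI units; modern figures, used for illustration)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
rho_sun = 1408.0       # mean density of the Sun, kg/m^3
R_sun = 6.957e8        # solar radius, m

# For a uniform sphere, escape velocity v = sqrt(2GM/R) with M = (4/3)*pi*rho*R^3,
# so v = R * sqrt(8*pi*G*rho/3). Setting v = c gives the critical radius.
R_dark = c / math.sqrt(8 * math.pi * G * rho_sun / 3)

print(f"critical radius: {R_dark:.3e} m")
print(f"in solar radii:  {R_dark / R_sun:.0f}")   # roughly 500, as Michell estimated
```

At solar density the critical body comes out at roughly five hundred times the Sun's radius, which is the factor Michell arrived at.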
In 1939, Robert Oppenheimer and others predicted that neutron stars above another limit, the Tolman–Oppenheimer–Volkoff limit, would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as . Subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately to . Observations of the neutron star merger GW170817, which is thought to have generated a black hole shortly afterward, have refined the TOV limit estimate to ~. Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. The hypothetical collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius. Also in 1939, Einstein attempted to prove that black holes were impossible in his publication "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses", using his theory of general relativity to defend his argument. Months later, Oppenheimer and his student Hartland Snyder provided the Oppenheimer–Snyder model in their paper "On Continued Gravitational Contraction", which predicted the existence of black holes. In the paper, which made no reference to Einstein's recent publication, Oppenheimer and Snyder used Einstein's own theory of general relativity to show the conditions on how a black hole could develop, for the first time in contemporary physics. Golden age In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it. These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. 
At first, it was suspected that the strange features of the black hole solutions were pathological artefacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. Observation On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. , the nearest known body thought to be a black hole, Gaia BH1, is around away. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing. Etymology Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term "black hole" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase. Properties and structure The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem. 
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analogue of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense–Thirring effect. When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behaviour of the horizon in this situation is a dissipative system that is closely analogous to that of a conductive stretchy membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behaviour is so puzzling that it has been called the black hole information loss paradox. Physical properties The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. 
This is supported by numerical simulations. Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is allowing definition of a dimensionless spin parameter such that Black holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through where r is the Schwarzschild radius and is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to Event horizon The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred. As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole. To a distant observer, clocks near a black hole would appear to tick more slowly than those farther away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite amount of time to reach it. At the same time, all processes on this object slow down, from the viewpoint of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second. On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon. According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle. The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. Singularity At the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. 
In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect". In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of travelling to another universe is, however, only theoretical since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes. The appearance of singularities in general relativity is commonly perceived as signalling the breakdown of the theory. This breakdown, however, is expected; it occurs in a situation where quantum effects should describe these actions, due to the extremely high density and therefore particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities. Photon sphere The photon sphere is a spherical boundary where photons that move on tangents to that sphere would be trapped in a non-stable but circular orbit around the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense of the black hole spin) or retrograde. Ergosphere Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. 
This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator. Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, thereby slowing its rotation. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. Innermost stable circular orbit (ISCO) In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbations to a circular orbit will lead to spiraling into the black hole, and any outward perturbations will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is , and it decreases with increasing black hole spin for particles orbiting in the same direction as the spin. Plunging region The final observable region of spacetime around a black hole is called the plunging region. In this area it is no longer possible for matter to follow circular orbits or to stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light. Formation and evolution Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilise their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon. Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. 
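Several of the characteristic scales introduced in the preceding sections, the Schwarzschild radius, the photon sphere, the ISCO, and the maximal spin of an uncharged black hole, follow from short closed-form expressions. The sketch below is illustrative only and not taken from this article; it uses the standard textbook relations r_s = 2GM/c², r_photon = 1.5 r_s, r_ISCO = 3 r_s (zero spin), and the spin bound a* = cJ/(GM²) ≤ 1, and evaluates them for a one-solar-mass hole and for a Sagittarius A*-sized mass.

```python
# Characteristic scales of a non-rotating (Schwarzschild) black hole,
# plus the dimensionless spin parameter for the uncharged (Kerr) case.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole, r_s = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

def spin_parameter(mass_kg, angular_momentum):
    """Dimensionless spin a* = cJ/(GM^2); values above 1 would imply a naked singularity."""
    return c * angular_momentum / (G * mass_kg**2)

for name, mass in [("1 M_sun black hole", M_sun),
                   ("Sagittarius A* (approx. 4.3e6 M_sun)", 4.3e6 * M_sun)]:
    r_s = schwarzschild_radius(mass)
    print(f"{name}:")
    print(f"  event horizon r_s         = {r_s:.3e} m")
    print(f"  photon sphere (1.5 * r_s) = {1.5 * r_s:.3e} m")
    print(f"  ISCO, zero spin (3 * r_s) = {3 * r_s:.3e} m")

# Maximal angular momentum for an uncharged hole: J_max = G*M^2/c, i.e. a* = 1
J_max = G * M_sun**2 / c
print(f"a* at the Kerr limit: {spin_parameter(M_sun, J_max):.2f}")
```

For one solar mass the horizon radius is roughly 3 km, and all three radii simply scale linearly with mass.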
Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes. Gravitational collapse Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight. The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left if the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding are produced by stars that were over before the collapse. If the mass of the remnant exceeds about (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole. The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to . These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of ~ could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars observed already at redshift . Some candidates for such objects have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Primordial black holes and the Big Bang Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. 
High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass ( ≈ ≈ ) to hundreds of thousands of solar masses. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. According to inflation theory, there was a net repulsive gravitation in the beginning until the end of inflation. Since then, the Hubble flow has been slowed by the energy density of the universe. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. High-energy collisions Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios for example put the boundary as low as . This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10 seconds, posing no threat to the Earth. Growth Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Evaporation In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ħc³/(8πGMk); this effect has become known as Hawking radiation. 
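Hawking's temperature formula quoted above can be evaluated directly. The snippet below is a rough sketch, not taken from the article: it plugs standard constants into T = ħc³/(8πGMk) and into the commonly used leading-order lifetime estimate t ≈ 5120πG²M³/(ħc⁴) for a one-solar-mass black hole; the temperature comes out near the 62 nanokelvin figure cited in the next paragraph.

```python
import math

# Constants (SI)
G     = 6.674e-11    # gravitational constant
c     = 2.998e8      # speed of light
hbar  = 1.055e-34    # reduced Planck constant
k_B   = 1.381e-23    # Boltzmann constant
M_sun = 1.989e30     # solar mass, kg
year  = 3.156e7      # seconds per year

M = M_sun

# Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
T = hbar * c**3 / (8 * math.pi * G * M * k_B)

# Leading-order evaporation time (photon emission only): t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
t = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"Hawking temperature of a 1 M_sun black hole: {T:.2e} K")    # a few tens of nanokelvins
print(f"Evaporation time (ignoring accretion and CMB): {t / year:.1e} years")
```

Because the temperature scales as 1/M and the lifetime as M³, small black holes are hot and short-lived while astrophysical ones are colder than the cosmic microwave background.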
By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c would take less than 10 seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope launched in 2008 will continue the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10 years. A supermassive black hole with a mass of will evaporate in around 2×10 years. Some monster black holes in the universe are predicted to continue to grow up to perhaps during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10 years. Observational evidence By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings. Direct interferometry The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. 
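To give a sense of the angular scales involved in such direct imaging, the sketch below estimates the apparent diameter of a black hole "shadow". It is illustrative only: the mass (about 6.5 billion solar masses) and distance (about 16.8 Mpc) assumed for M87* are round published values, not figures taken from this article, and the shadow diameter is approximated by the standard non-rotating capture diameter 2√27 GM/c².

```python
import math

G, c  = 6.674e-11, 2.998e8
M_sun = 1.989e30
pc    = 3.086e16                               # metres per parsec
microarcsec = math.radians(1 / 3600) / 1e6     # one microarcsecond in radians

# Assumed (illustrative) parameters for M87*
M = 6.5e9 * M_sun                  # mass
D = 16.8e6 * pc                    # distance

# Shadow diameter for a non-rotating hole: 2 * sqrt(27) * G * M / c^2
shadow_diameter = 2 * math.sqrt(27) * G * M / c**2
angular_size = shadow_diameter / D

print(f"shadow diameter: {shadow_diameter:.2e} m")
print(f"angular size:    {angular_size / microarcsec:.0f} microarcseconds")  # roughly 40 uas
```

An apparent size of a few tens of microarcseconds is why an Earth-sized interferometer operating at millimetre wavelengths is needed to resolve the shadow.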
In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019. After two years of data processing, EHT released the first direct image of a black hole. Specifically, the supermassive black hole that lies in the centre of the aforementioned galaxy. What is visible is not the black hole—which shows as black because of the loss of all light within this dark region. Instead, it is the gases at the edge of the event horizon, displayed as orange or red, that define the black hole. On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and circular shadow as seen in the M87* black hole, and the image was created using the same techniques as for the M87 black hole. The imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths. The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds (>), the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise. The extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow". In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields. In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented. Detection of gravitational waves from merging black holes On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. 
For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km, or roughly four times the Schwarzschild radius corresponding to the inferred masses. The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation. More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere. The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more. Since then, many more gravitational wave events have been observed. Stars orbiting Sagittarius A* The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that a object must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. Accretion of matter Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions such as the accompanying representation of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk. Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas. 
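Returning to the orbit-fitting argument for Sagittarius A* described above: at its crudest it reduces to Kepler's third law. The example below is a simplification, not taken from this article; it assumes round orbital elements for the star S2 (a period of about 16 years and a semi-major axis of about 1,000 AU, both assumed values) and infers the enclosed mass, which comes out at a few million solar masses, consistent with the figures quoted earlier.

```python
import math

G     = 6.674e-11
M_sun = 1.989e30
AU    = 1.496e11       # metres
year  = 3.156e7        # seconds

# Approximate (assumed) orbital elements for the star S2
a = 1000 * AU          # semi-major axis
P = 16 * year          # orbital period

# Kepler's third law: M = 4 * pi^2 * a^3 / (G * P^2)  (the star's own mass is neglected)
M = 4 * math.pi**2 * a**3 / (G * P**2)

print(f"enclosed mass: {M:.2e} kg  (~{M / M_sun:.1e} solar masses)")  # a few million M_sun
```

The full analyses fit many stars and relativistic corrections simultaneously, but the order of magnitude already follows from this one-line estimate.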
When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known. Up to 40% of the rest mass of the accreted material can be emitted as radiation. In nuclear fusion only about 0.7% of the rest mass will be emitted as energy. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported. X-ray binaries X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and to determine if it might be a black hole. If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does, however, not exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt remained, due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. 
During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni. Quasi-periodic oscillations The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes. Galactic nuclei Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy. It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. Microlensing Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in their vicinity. One such effect is gravitational lensing: The deformation of spacetime around a massive object causes light rays to be deflected, such as light passing through an optic lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. The turn of the millennium saw the first 3 candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*. Alternatives The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. 
A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass. Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes. The average density of a black hole is comparable to that of water. Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates. The evidence for the existence of stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons and thus black holes would not be real artefacts. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semiclassical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, related nestar and the dark-energy star. Open questions Entropy and thermodynamics In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area. The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. 
The radiation also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy. One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume. Although general relativity can be used to perform a semiclassical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities, such as mass, charge, pressure, etc. Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity. Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. 
The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate. In science fiction Christopher Nolan's 2014 science fiction epic Interstellar features a black hole known as Gargantua, which is the central object of a planetary system in a distant galaxy. Humanity accessed this system via a wormhole in the outer solar system, near Saturn.
https://en.wikipedia.org/wiki/Beta%20decay
Beta decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (a fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in what is called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively long decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by means of a virtual W boson, leading to the creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission. Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (a positron, or antielectron) must be accompanied by an electron neutrino, while β− decay (an electron) must be accompanied by an electron antineutrino. An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14 with a half-life of about 5,730 years: 14C → 14N + e− + ν̄e. In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A, but an atomic number that is increased by one. As in all nuclear decays, the decaying element (in this case 14C) is known as the parent nuclide while the resulting element (in this case 14N) is known as the daughter nuclide. Another example is the decay of hydrogen-3 (tritium) into helium-3 with a half-life of about 12.3 years: 3H → 3He + e− + ν̄e. An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23 with a half-life of about 11.3 s: 23Mg → 23Na + e+ + νe. β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
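As a minimal illustration of the bookkeeping these examples rely on, the sketch below checks that charge, mass number, and lepton number balance in the carbon-14 decay written above. It is a toy script written for this article; the particle tuples are hand-entered assumptions, not values drawn from any nuclear data library.

```python
# Toy bookkeeping check for beta-minus decay: 14C -> 14N + e- + anti-nu_e.
# Each particle is (mass number A, charge Z in units of e, lepton number L).
# Illustrative only; the tuples below are entered by hand.

C14      = (14,  6,  0)
N14      = (14,  7,  0)
ELECTRON = (0,  -1, +1)
ANTI_NU  = (0,   0, -1)

def totals(particles):
    """Sum mass number, charge and lepton number over a list of particles."""
    return tuple(sum(p[i] for p in particles) for i in range(3))

before = totals([C14])
after  = totals([N14, ELECTRON, ANTI_NU])
print("before (A, Z, L):", before)   # (14, 6, 0)
print("after  (A, Z, L):", after)    # (14, 6, 0)
assert before == after, "conservation laws violated"
```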
The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. In the figure to the right, an example of an electron with 0.40 MeV energy from the beta decay of 210Bi is shown. In this example, the total decay energy is 1.16 MeV, so the antineutrino has the remaining energy: . An electron at the far right of the curve would have the maximum possible kinetic energy, leaving the energy of the neutrino to be only its small rest mass. History Discovery and initial characterization Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet. In 1900, Becquerel measured the mass-to-charge ratio () for beta particles by the method of J.J. Thomson used to study cathode rays and identify the electron. He found that for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron. In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., ) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left. Neutrinos The study of beta decay provided the first physical evidence for the existence of the neutrino. In both alpha and gamma decay, the resulting alpha or gamma particle has a narrow energy distribution, since the particle carries the energy from the difference between the initial and final nuclear states. However, the kinetic energy distribution, or spectrum, of beta particles measured by Lise Meitner and Otto Hahn in 1911 and by Jean Danysz in 1913 showed multiple lines on a diffuse background. These measurements offered the first hint that beta particles have a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make more accurate measurements which showed that the spectrum was continuous. The results, which appeared to be in contradiction to the law of conservation of energy, were validated by means of calorimetric measurements in 1929 by Lise Meitner and Wilhelm Orthmann. If beta decay were simply electron emission as assumed at the time, then the energy of the emitted electron should have a particular, well-defined value. For beta decay, however, the observed broad distribution of energies suggested that energy is lost in the beta decay process. This spectrum was puzzling for many years. A second problem is related to the conservation of angular momentum. 
Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., equal to the reduced Planck constant) and more generally that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer. However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission. From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy was true only in a statistical sense, thus this principle might be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now, the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute. In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" ('little neutral one' in Italian). In 1933, Fermi published his landmark theory for beta decay, where he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated, just as the light quanta in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process, rather than contained in the nucleus; the same happens to electrons. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by the American physicists Clyde Cowan and Frederick Reines in the Cowan–Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi. decay and electron capture In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction  +  →  + , and observed that the product isotope emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of  decay (positron emission), which they termed artificial radioactivity since is a short-lived nuclide which does not exist in nature. In recognition of their discovery, the couple were awarded the Nobel Prize in Chemistry in 1935. The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides. 
Non-conservation of parity In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry may not be preserved by the weak force. They sketched the design for an experiment for testing conservation of parity in the laboratory. Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment showing an asymmetrical beta decay of cobalt-60 at cold temperatures that proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957. However, Wu, who was female, was not awarded the Nobel prize. β− decay In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e−) and an electron antineutrino (ν̄e). β− decay generally occurs in neutron-rich nuclei. The generic equation is: ^A_Z X → ^A_(Z+1) X′ + e− + ν̄e, where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final elements, respectively. Another example is when the free neutron (n) decays by β− decay into a proton (p): n → p + e− + ν̄e. At the fundamental level (as depicted in the Feynman diagram on the right), this is caused by the conversion of the negatively charged (−1/3 e) down quark to the positively charged (+2/3 e) up quark by the emission of a virtual W− boson; the W− boson subsequently decays into an electron and an electron antineutrino: d → u + e− + ν̄e. β+ decay In β+ decay, or positron emission, the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (νe). β+ decay generally occurs in proton-rich nuclei. The generic equation is: ^A_Z X → ^A_(Z−1) X′ + e+ + νe. This may be considered as the decay of a proton inside the nucleus to a neutron: p → n + e+ + νe. However, β+ decay cannot occur in an isolated proton because it requires energy, due to the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino and into the kinetic energy of these particles. This process is opposite to negative beta decay, in that the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W+ or the absorption of a W−. When a W+ boson is emitted, it decays into a positron and an electron neutrino: W+ → e+ + νe. Electron capture (K-capture/L-capture) In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, so too is electron capture allowed. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino: ^A_Z X + e− → ^A_(Z−1) X′ + νe. An example of electron capture is one of the decay modes of krypton-81 into bromine-81: 81Kr + e− → 81Br + νe. All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than 2m_e c², β+ decay is not energetically possible, and electron capture is the sole decay mode. If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture.
If it comes from the L-shell, the process is called L-capture, etc. Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino. Nuclear transmutation If the proton and neutron are part of an atomic nucleus, the above described decay processes transmute one chemical element into another. For example: ^A_Z X → ^A_(Z+1) X′ + e− + ν̄e (beta minus decay); ^A_Z X → ^A_(Z−1) X′ + e+ + νe (beta plus decay); ^A_Z X + e− → ^A_(Z−1) X′ + νe (electron capture). Beta decay does not change the number A of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. For a given A there is one that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has numbers (A, Z), the neighbour nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known. There are about 350 known beta-decay stable nuclides. Competition of beta decay types Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is the single isotope copper-64 (29 protons, 35 neutrons), which illustrates three types of beta decay in competition. Copper-64 has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay by positron emission (β+) or electron capture (EC) to 64Ni, as it is through neutron decay by electron emission (β−) to 64Zn. Stability of naturally occurring nuclides Most naturally occurring nuclides on earth are beta stable. Nuclides that are not beta stable have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton, odd-neutron nuclide potassium-40, which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of about 1.25 billion years. Conservation rules for beta decay Baryon number is conserved: B = (n_q − n_q̄)/3, where n_q is the number of constituent quarks and n_q̄ is the number of constituent antiquarks. Beta decay just changes a neutron to a proton or, in the case of positive beta decay (or electron capture), a proton to a neutron, so the number of individual quarks doesn't change. It is only the baryon flavour that changes, here labelled as the isospin. Up and down quarks have total isospin I = 1/2, with isospin projections I_z = +1/2 for the up quark and I_z = −1/2 for the down quark. All other quarks have I = 0. In general, I_z = (n_u − n_d)/2, where n_u and n_d are the numbers of up and down quarks, respectively. Lepton number is conserved: all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0.
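The statement above that the beta-stable isobar sits at a local minimum of the mass excess can be illustrated with a short sketch. The numbers below are invented purely for illustration (they are not measured mass excesses), and the helper function is hypothetical, not part of any standard package.

```python
# Illustrative sketch: among isobars with the same A, the beta-stable nuclide
# sits at a local minimum of the mass excess. The values below are made up
# solely to show the idea; they are NOT measured mass excesses.

mass_excess_keV = {        # hypothetical isobar chain at fixed A, keyed by Z
    26: -55_000,
    27: -59_000,
    28: -61_500,           # local minimum -> beta stable in this toy chain
    29: -60_200,
    30: -58_000,
}

def beta_stable(chain):
    """Return every Z whose mass excess is lower than both neighbours'."""
    zs = sorted(chain)
    return [z for z in zs[1:-1]
            if chain[z] < chain[z - 1] and chain[z] < chain[z + 1]]

print(beta_stable(mass_excess_keV))   # [28] in this toy example
```

With real mass-excess data, chains with even A can show more than one such local minimum, which is why up to three beta-stable isobars are observed for even A.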
Angular momentum For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered. The electron and antineutrino are fermions, spin-1/2 objects, therefore they may couple to total S = 1 (parallel) or S = 0 (anti-parallel). For forbidden decays, orbital angular momentum must also be taken into consideration. Energy release The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Since the rest mass of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light. In the case of 187Re, the maximum speed of the beta particle is only 9.8% of the speed of light. β− decay Consider the generic equation for beta decay ^A_Z X → ^A_(Z+1) X′ + e− + ν̄e. The Q value for this decay is Q = [m_N(^A_Z X) − m_N(^A_(Z+1) X′) − m_e − m_ν̄] c², where m_N(^A_Z X) is the mass of the nucleus of the ^A_Z X atom, m_e is the mass of the electron, and m_ν̄ is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus m_N is related to the standard atomic mass m by m(^A_Z X) c² = m_N(^A_Z X) c² + Z m_e c² − B(Z). That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum B(Z) of all electron binding energies for the atom. This equation is rearranged to find m_N(^A_Z X), and m_N(^A_(Z+1) X′) is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have Q = [m(^A_Z X) − m(^A_(Z+1) X′)] c². This energy is carried away as kinetic energy by the electron and antineutrino. Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom ^A_Z X is greater than the mass of atom ^A_(Z+1) X′. β+ decay The equations for β+ decay are similar, with the generic equation ^A_Z X → ^A_(Z−1) X′ + e+ + νe giving Q = [m_N(^A_Z X) − m_N(^A_(Z−1) X′) − m_e − m_ν] c². However, in this equation, the electron masses do not cancel, and we are left with Q = [m(^A_Z X) − m(^A_(Z−1) X′) − 2m_e] c². Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom ^A_Z X exceeds that of ^A_(Z−1) X′ by at least twice the mass of the electron. Electron capture The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture ^A_Z X + e− → ^A_(Z−1) X′ + νe, we have Q = [m_N(^A_Z X) + m_e − m_N(^A_(Z−1) X′) − m_ν] c², which simplifies to Q = [m(^A_Z X) − m(^A_(Z−1) X′)] c² − B_n, where B_n is the binding energy of the captured electron. Because the binding energy of the electron is much less than the mass of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true. Beta emission spectrum Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied.
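A quick numerical check of the β− relation above can be made with approximate atomic masses for carbon-14 and nitrogen-14; the script is a sketch of ours rather than part of the article's sources, and the mass values are rounded.

```python
# Numerical check of Q(beta-minus) = [m(14C) - m(14N)] * c^2
# using atomic masses in unified mass units; a rough sketch, not a data library.

U_TO_MEV = 931.494           # energy equivalent of 1 u, in MeV
M_C14 = 14.003242            # atomic mass of carbon-14 (u), approximate
M_N14 = 14.003074            # atomic mass of nitrogen-14 (u), approximate

q_beta_minus = (M_C14 - M_N14) * U_TO_MEV
print(f"Q(14C -> 14N) ~ {q_beta_minus * 1000:.0f} keV")   # roughly 156 keV
```

The positive result of roughly 156 keV, matching the known endpoint energy of carbon-14, confirms the condition stated above: β− decay proceeds because the 14C atom is slightly heavier than the 14N atom.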
Fermi's Golden Rule leads to an expression for the kinetic energy spectrum of emitted betas as follows: N(T) = C_L(T) F(Z, T) p E (Q − T)², where T is the kinetic energy, C_L is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi function (see below) with Z the charge of the final-state nucleus, E is the total energy, p is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta. As an example, the beta decay spectrum of 210Bi (originally called RaE) is shown to the right. Fermi function The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction / repulsion between the emitted beta and the final state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be: F(Z, T) = (2(1 + S) / Γ(1 + 2S)²) (2pρ)^(2S−2) e^(πη) |Γ(S + iη)|², where p is the final momentum, Γ the Gamma function, and (if α is the fine-structure constant and r_N the radius of the final-state nucleus) S = √(1 − α²Z²), η = ±Zα E/(pc) (+ for electrons, − for positrons), and ρ = r_N/ħ. For non-relativistic betas (Q ≪ m_e c²), this expression can be approximated by F(Z, T) ≈ 2πη / (1 − e^(−2πη)). Other approximations can be found in the literature. Kurie plot A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q value). With a Kurie plot one can find the limit on the effective mass of a neutrino. Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay After the discovery of parity non-conservation (see History), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity. The higher the energy of the particles, the higher their polarization. Types of beta decay transitions Beta decays can be classified according to the angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow–Teller" transitions. When beta decay particles carry no angular momentum (L = 0), the decay is referred to as "allowed", otherwise it is "forbidden". Other decay modes, which are rare, are known as bound state decay and double beta decay. Fermi transitions A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 0, leading to an angular momentum change ΔJ = 0 between the initial and final states of the nucleus (assuming an allowed transition).
In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by O_F = G_V Σ_a τ_a±, with G_V the weak vector coupling constant, τ_a± the isospin raising and lowering operators, and a running over all protons and neutrons in the nucleus. Gamow–Teller transitions A Gamow–Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 1, leading to an angular momentum change ΔJ = 0, ±1 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by O_GT = G_A Σ_a σ_a τ_a±, with G_A the weak axial-vector coupling constant, and σ_a the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon. Forbidden transitions When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are ΔJ = L − 1, L, L + 1 and Δπ = (−1)^L, where Δπ = +1 or −1 corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. For the first few values of L, these rules give: allowed (L = 0), ΔJ = 0, 1 with no parity change; first forbidden (L = 1), ΔJ = 0, 1, 2 with parity change; second forbidden (L = 2), ΔJ = 1, 2, 3 with no parity change; and third forbidden (L = 3), ΔJ = 2, 3, 4 with parity change. Rare decay modes Bound-state β decay A very small minority of free neutron decays (about four per million) are "two-body decays": the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino. For fully ionized atoms (bare nuclei), it is likewise possible for electrons to fail to escape the atom, and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This cannot occur for neutral atoms with low-lying bound states which are already filled by electrons. Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for Dy in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research Center. Though neutral Dy is stable, fully ionized Dy undergoes β decay into the K and L shells with a half-life of 47 days. The resulting nucleus – Ho – is stable only in this almost fully ionized state and will decay via electron capture into Dy in the neutral state. Likewise, while being stable in the neutral state, the fully ionized Tl undergoes bound-state β decay to Pb with a half-life of days. The half-lives of neutral Ho and Pb are respectively 4570 years and years. In addition, it is estimated that β decay is energetically impossible for natural atoms but theoretically possible when fully ionized also for 193Ir, 194Au, 202Tl, 215At, 243Am, and 246Bk. Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for Re by Bosch et al., also at Darmstadt. Neutral Re does undergo β decay, with half-life years, but for fully ionized Re this is shortened to only 32.9 years. This is because Re is energetically allowed to undergo β decay to the first-excited state in Os, a process energetically disallowed for natural Re. Similarly, neutral Pu undergoes β decay with a half-life of 14.3 years, but in its fully ionized state the beta-decay half-life of Pu decreases to 4.2 days.
For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%. Moreover, current mass determinations cannot decisively determine whether Rn is energetically possible to undergo β decay (the decay energy given in AME2020 is (−6 ± 8) keV), but in either case it is predicted that β will be greatly accelerated for fully ionized Rn. Double beta decay Some nuclei can undergo double beta decay (2β) where the charge of the nucleus changes by two units. Double beta decay is difficult to study, as it has an extremely long half-life. In nuclei for which both β decay and 2β are possible, the rarer 2β process is effectively impossible to observe. However, in nuclei where β decay is forbidden but 2β is allowed, the process can be seen and a half-life measured. Thus, 2β is usually studied only for beta stable nuclei. Like single beta decay, double beta decay does not change ; thus, at least one of the nuclides with some given has to be stable with regard to both single and double beta decay. "Ordinary" 2β results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless 2β has never been observed.
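Returning to the allowed-spectrum formula given in the beta emission spectrum section above, the sketch below evaluates the shape N(T) proportional to F(Z, T) p E (Q − T)², using the non-relativistic Fermi-function approximation. It is a minimal illustration under those stated approximations, not a substitute for the full relativistic treatment; the Q value and daughter charge are taken from the 210Bi example mentioned earlier (its β− daughter is 210Po, Z = 84), and the function names are ours.

```python
# Minimal sketch of an allowed beta-minus spectrum shape,
# N(T) proportional to F(Z, T) * p * E * (Q - T)^2, with the non-relativistic
# Fermi-function approximation F ~ 2*pi*eta / (1 - exp(-2*pi*eta)).
# Illustrative implementation only; not from any nuclear data package.

import math

ME = 0.511            # electron rest energy, MeV
ALPHA = 1.0 / 137.0   # fine-structure constant

def fermi_function(z_daughter: int, T: float) -> float:
    """Non-relativistic Coulomb correction for an emitted electron."""
    E = T + ME                         # total energy, MeV
    p = math.sqrt(E * E - ME * ME)     # momentum times c, MeV
    eta = z_daughter * ALPHA * E / p   # Sommerfeld parameter (+ for electrons)
    return 2 * math.pi * eta / (1 - math.exp(-2 * math.pi * eta))

def spectrum(z_daughter: int, q: float, T: float) -> float:
    """Un-normalised allowed spectrum N(T) for 0 < T < Q (energies in MeV)."""
    E = T + ME
    p = math.sqrt(E * E - ME * ME)
    return fermi_function(z_daughter, T) * p * E * (q - T) ** 2

# Example: a few points of the 210Bi spectrum (Q ~ 1.16 MeV, daughter Z = 84).
Q = 1.16
for t in (0.1, 0.4, 0.8, 1.1):
    print(f"T = {t:.2f} MeV, N(T) ~ {spectrum(84, Q, t):.3f} (arbitrary units)")
```

The shape rises from zero, peaks at intermediate energies, and falls back to zero at the endpoint T = Q, which is the continuous spectrum whose explanation historically required the neutrino.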
https://en.wikipedia.org/wiki/Bee
Bee
Bees are winged insects closely related to wasps and ants, known for their roles in pollination and, in the case of the best-known bee species, the western honey bee, for producing honey. Bees are a monophyletic lineage within the superfamily Apoidea. They are currently considered a clade, called Anthophila. There are over 20,000 known species of bees in seven recognized biological families. Some speciesincluding honey bees, bumblebees, and stingless beeslive socially in colonies while most species (>90%)including mason bees, carpenter bees, leafcutter bees, and sweat beesare solitary. Bees are found on every continent except Antarctica, in every habitat on the planet that contains insect-pollinated flowering plants. The most common bees in the Northern Hemisphere are the Halictidae, or sweat bees, but they are small and often mistaken for wasps or flies. Bees range in size from tiny stingless bee species, whose workers are less than long, to the leafcutter bee Megachile pluto, the largest species of bee, whose females can attain a length of . Bees feed on nectar and pollen, the former primarily as an energy source and the latter primarily for protein and other nutrients. Most pollen is used as food for their larvae. Vertebrate predators of bees include primates and birds such as bee-eaters; insect predators include beewolves and dragonflies. Bee pollination is important both ecologically and commercially, and the decline in wild bees has increased the value of pollination by commercially managed hives of honey bees. The analysis of 353 wild bee and hoverfly species across Britain from 1980 to 2013 found the insects have been lost from a quarter of the places they inhabited in 1980. Human beekeeping or apiculture (meliponiculture for stingless bees) has been practiced for millennia, since at least the times of Ancient Egypt and Ancient Greece. Bees have appeared in mythology and folklore, through all phases of art and literature from ancient times to the present day, although primarily focused in the Northern Hemisphere where beekeeping is far more common. In Mesoamerica, the Mayans have practiced large-scale intensive meliponiculture since pre-Columbian times. Evolution The immediate ancestors of bees were stinging wasps in the family Crabronidae, which were predators of other insects. The switch from insect prey to pollen may have resulted from the consumption of prey insects which were flower visitors and were partially covered with pollen when they were fed to the wasp larvae. This same evolutionary scenario may have occurred within the vespoid wasps, where the pollen wasps evolved from predatory ancestors. Based on phylogenetic analysis, bees are thought to have originated during the Early Cretaceous (about 124 million years ago) on the supercontinent of West Gondwana, just prior to its breakup into South America and Africa. The supercontinent is thought to have been a largely xeric environment at this time; modern bee diversity hotspots are also in xeric and seasonal temperate environments, suggesting strong niche conservatism among bees ever since their origins. Genomic analysis indicates that despite only appearing much later in the fossil record, all modern bee families had already diverged from one another by the end of the Cretaceous. The Melittidae, Apidae, and Megachilidae had already evolved on the supercontinent prior to its fragmentation. 
Further divergences were facilitated by West Gondwana's breakup around 100 million years ago, leading to a deep Africa-South America split within both the Apidae and Megachilidae, the isolation of the Melittidae in Africa, and the origins of the Colletidae, Andrenidae and Halictidae in South America. The rapid radiation of the South American bee families is thought to have followed the concurrent radiation of flowering plants in the same region. Later in the Cretaceous (80 million years ago), colletid bees colonized Australia from South America (with an offshoot lineage evolving into the Stenotritidae), and by the end of the Cretaceous, South American bees had also colonized North America. The North American fossil taxon Cretotrigona belongs to a group that is no longer found in North America, suggesting that many bee lineages went extinct during the Cretaceous-Paleogene extinction event. Following the K-Pg extinction, surviving bee lineages continued to spread into the Northern Hemisphere, colonizing Europe from Africa by the Paleocene, and then spreading east to Asia. This was facilitated by the warming climate around the same time, allowing bees to move to higher latitudes following the spread of tropical and subtropical habitats. By the Eocene (~45 mya) there was already considerable diversity among eusocial bee lineages. A second extinction event among bees is thought to have occurred due to rapid climatic cooling around the Eocene-Oligocene boundary, leading to the extinction of some bee lineages such as the tribe Melikertini. Over the Paleogene and Neogene, different bee lineages continued to spread all over the world, and the shifting habitats and connectedness of continents led to the isolation and evolution of many new bee tribes. Fossils The oldest non-compression bee fossil is Cretotrigona prisca, a corbiculate bee of Late Cretaceous age (~70 mya) found in New Jersey amber. A fossil from the early Cretaceous (~100 mya), Melittosphex burmensis, was initially considered "an extinct lineage of pollen-collecting Apoidea sister to the modern bees", but subsequent research has rejected the claim that Melittosphex is a bee, or even a member of the superfamily Apoidea to which bees belong, instead treating the lineage as incertae sedis within the Aculeata. The Allodapini (within the Apidae) appeared around 53 Mya. The Colletidae appear as fossils only from the late Oligocene (~25 Mya) to early Miocene. The Melittidae are known from Palaeomacropis eocenicus in the Early Eocene. The Megachilidae are known from trace fossils (characteristic leaf cuttings) from the Middle Eocene. The Andrenidae are known from the Eocene-Oligocene boundary, around 34 Mya, of the Florissant shale. The Halictidae first appear in the Early Eocene with species found in amber. The Stenotritidae are known from fossil brood cells of Pleistocene age. Coevolution The earliest animal-pollinated flowers were shallow, cup-shaped blooms pollinated by insects such as beetles, so the syndrome of insect pollination was well established before the first appearance of bees. The novelty is that bees are specialized as pollination agents, with behavioral and physical modifications that specifically enhance pollination, and are the most efficient pollinating insects. In a process of coevolution, flowers developed floral rewards such as nectar and longer tubes, and bees developed longer tongues to extract the nectar. Bees also developed structures known as scopal hairs and pollen baskets to collect and carry pollen. 
The location and type differ among and between groups of bees. Most species have scopal hairs on their hind legs or on the underside of their abdomens. Some species in the family Apidae have pollen baskets on their hind legs, while very few lack these and instead collect pollen in their crops. The appearance of these structures drove the adaptive radiation of the angiosperms, and, in turn, bees themselves. Bees coevolved not only with flowers but it is believed that some species coevolved with mites. Some provide tufts of hairs called acarinaria that appear to provide lodgings for mites; in return, it is believed that mites eat fungi that attack pollen, so the relationship in this case may be mutualistic. Phylogeny External Molecular phylogeny was used by Debevic et al, 2012, to demonstrate that the bees (Anthophila) arose from deep within the Crabronidae sensu lato, which was thus rendered paraphyletic. In their study, the placement of the monogeneric Heterogynaidae was uncertain. The small family Mellinidae was not included in this analysis. Further studies by Sann et al., 2018, elevated the subfamilies (plus one tribe and one subtribe) of Crabronidae sensu lato to family status. They also recovered the placement of Heterogyna within Nyssonini and sunk Heterogynaidae. The newly erected family, Ammoplanidae, formerly a subtribe of Pemphredoninae, was recovered as the most sister family to bees. Internal This cladogram of the bee families is based on Hedtke et al., 2013, which places the former families Dasypodaidae and Meganomiidae as subfamilies inside the Melittidae. English names, where available, are given in parentheses. Characteristics Bees differ from closely related groups such as wasps by having branched or plume-like setae (hairs), combs on the forelimbs for cleaning their antennae, small anatomical differences in limb structure, and the venation of the hind wings; and in females, by having the seventh dorsal abdominal plate divided into two half-plates. Bees have the following characteristics: A pair of large compound eyes which cover much of the surface of the head. Between and above these are three small simple eyes (ocelli) which provide information on light intensity. The antennae usually have 13 segments in males and 12 in females, and are geniculate, having an elbow joint part way along. They house large numbers of sense organs that can detect touch (mechanoreceptors), smell and taste; and small, hairlike mechanoreceptors that can detect air movement so as to "hear" sounds. The mouthparts are adapted for both chewing and sucking by having both a pair of mandibles and a long proboscis for sucking up nectar. The thorax has three segments, each with a pair of robust legs, and a pair of membranous wings on the hind two segments. The front legs of corbiculate bees bear combs for cleaning the antennae, and in many species the hind legs bear pollen baskets, flattened sections with incurving hairs to secure the collected pollen. The wings are synchronized in flight, and the somewhat smaller hind wings connect to the forewings by a row of hooks along their margin which connect to a groove in the forewing. The abdomen has nine segments, the hindermost three being modified into the sting. The largest species of bee is thought to be Wallace's giant bee Megachile pluto, whose females can attain a length of . The smallest species may be dwarf stingless bees in the tribe Meliponini whose workers are less than in length. 
Sociality Haplodiploid breeding system According to inclusive fitness theory, organisms can gain fitness not just through increasing their own reproductive output, but also that of close relatives. In evolutionary terms, individuals should help relatives when Cost < Relatedness * Benefit. The requirements for eusociality are more easily fulfilled by haplodiploid species such as bees because of their unusual relatedness structure. In haplodiploid species, females develop from fertilized eggs and males from unfertilized eggs. Because a male is haploid (has only one copy of each gene), his daughters (which are diploid, with two copies of each gene) share 100% of his genes and 50% of their mother's. Therefore, they share 75% of their genes with each other. This mechanism of sex determination gives rise to what W. D. Hamilton termed "supersisters", more closely related to their sisters than they would be to their own offspring. Workers often do not reproduce, but they can pass on more of their genes by helping to raise their sisters (as queens) than they would by having their own offspring (each of which would only have 50% of their genes), assuming they would produce similar numbers. This unusual situation has been proposed as an explanation of the multiple (at least nine) evolutions of eusociality within Hymenoptera. Haplodiploidy is neither necessary nor sufficient for eusociality. Some eusocial species such as termites are not haplodiploid. Conversely, all bees are haplodiploid but not all are eusocial, and among eusocial species many queens mate with multiple males, creating half-sisters that share only 25% of each other's genes. But, monogamy (queens mating singly) is the ancestral state for all eusocial species so far investigated, so it is likely that haplodiploidy contributed to the evolution of eusociality in bees. Eusociality Bees may be solitary or may live in various types of communities. Eusociality appears to have originated from at least three independent origins in halictid bees. The most advanced of these are species with eusocial colonies; these are characterized by cooperative brood care and a division of labour into reproductive and non-reproductive adults, plus overlapping generations. This division of labour creates specialized groups within eusocial societies which are called castes. In some species, groups of cohabiting females may be sisters, and if there is a division of labour within the group, they are considered semisocial. The group is called eusocial if, in addition, the group consists of a mother (the queen) and her daughters (workers). When the castes are purely behavioural alternatives, with no morphological differentiation other than size, the system is considered primitively eusocial, as in many paper wasps; when the castes are morphologically discrete, the system is considered highly eusocial. True honey bees (genus Apis, of which eight species are currently recognized) are highly eusocial, and are among the best known insects. Their colonies are established by swarms, consisting of a queen and several thousand workers. There are 29 subspecies of one of these species, Apis mellifera, native to Europe, the Middle East, and Africa. Africanized bees are a hybrid strain of A. mellifera that escaped from experiments involving crossing European and African subspecies; they are extremely defensive. Stingless bees are also highly eusocial. They practice mass provisioning, with complex nest architecture and perennial colonies also established via swarming. 
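The relatedness arithmetic behind the "supersister" argument earlier in this section can be spelled out in a few lines. The coefficients follow from the haplodiploid inheritance pattern described above; the example costs and benefits are invented for illustration only, and the function names are ours.

```python
# Relatedness under haplodiploidy, and Hamilton's rule (help when C < r * B).
# Coefficients follow from the inheritance pattern described above;
# the worked numbers at the end are purely illustrative.

def relatedness_full_sisters_haplodiploid() -> float:
    # A sister shares, on average, all of the father's genome (he is haploid)
    # and half of the mother's contribution: 0.5 * 1.0 + 0.5 * 0.5 = 0.75.
    return 0.5 * 1.0 + 0.5 * 0.5

def relatedness_to_own_offspring() -> float:
    # A diploid female passes on half of her genes to each offspring.
    return 0.5

r_sister = relatedness_full_sisters_haplodiploid()   # 0.75
r_child = relatedness_to_own_offspring()              # 0.50

# Toy comparison: forgo 2 of her own offspring to help raise 3 extra sisters.
helping_payoff = r_sister * 3   # genetic payoff from the extra sisters
solo_payoff = r_child * 2       # payoff from the forgone offspring
print(helping_payoff > solo_payoff)   # True: helping is favoured in this toy case
```

Because workers are more closely related to their sisters (0.75) than to their own potential offspring (0.5), inclusive fitness can favour raising sisters over reproducing directly, which is the argument summarised above.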
Many bumblebees are eusocial, similar to the eusocial Vespidae such as hornets in that the queen initiates a nest on her own rather than by swarming. Bumblebee colonies typically have from 50 to 200 bees at peak population, which occurs in mid to late summer. Nest architecture is simple, limited by the size of the pre-existing nest cavity, and colonies rarely last more than a year. In 2011, the International Union for Conservation of Nature set up the Bumblebee Specialist Group to review the threat status of all bumblebee species worldwide using the IUCN Red List criteria. There are many more species of primitively eusocial than highly eusocial bees, but they have been studied less often. Most are in the family Halictidae, or "sweat bees". Colonies are typically small, with a dozen or fewer workers, on average. Queens and workers differ only in size, if at all. Most species have a single season colony cycle, even in the tropics, and only mated females hibernate. A few species have long active seasons and attain colony sizes in the hundreds, such as Halictus hesperus. Some species are eusocial in parts of their range and solitary in others, or have a mix of eusocial and solitary nests in the same population. The orchid bees (Apidae) include some primitively eusocial species with similar biology. Some allodapine bees (Apidae) form primitively eusocial colonies, with progressive provisioning: a larva's food is supplied gradually as it develops, as is the case in honey bees and some bumblebees. Solitary and communal bees Most other bees, including familiar insects such as carpenter bees, leafcutter bees and mason bees are solitary in the sense that every female is fertile, and typically inhabits a nest she constructs herself. There is no division of labor so these nests lack queens and worker bees for these species. Solitary bees typically produce neither honey nor beeswax. Bees collect pollen to feed their young, and have the necessary adaptations to do this. However, certain wasp species such as pollen wasps have similar behaviours, and a few species of bee scavenge from carcases to feed their offspring. Solitary bees are important pollinators; they gather pollen to provision their nests with food for their brood. Often it is mixed with nectar to form a paste-like consistency. Some solitary bees have advanced types of pollen-carrying structures on their bodies. Very few species of solitary bee are being cultured for commercial pollination. Most of these species belong to a distinct set of genera which are commonly known by their nesting behavior or preferences, namely: carpenter bees, sweat bees, mason bees, plasterer bees, squash bees, dwarf carpenter bees, leafcutter bees, alkali bees and digger bees. Most solitary bees are fossorial, digging nests in the ground in a variety of soil textures and conditions, while others create nests in hollow reeds or twigs, or holes in wood. The female typically creates a compartment (a "cell") with an egg and some provisions for the resulting larva, then seals it off. A nest may consist of numerous cells. When the nest is in wood, usually the last (those closer to the entrance) contain eggs that will become males. The adult does not provide care for the brood once the egg is laid, and usually dies after making one or more nests. The males typically emerge first and are ready for mating when the females emerge. Solitary bees are very unlikely to sting (only in self-defense, if ever), and some (esp. in the family Andrenidae) are stingless. 
While solitary, females each make individual nests. Some species, such as the European mason bee Hoplitis anthocopoides, and Dawson's burrowing bee, Amegilla dawsoni, are gregarious, preferring to make nests near others of the same species, and giving the appearance of being social. Large groups of solitary bee nests are called aggregations, to distinguish them from colonies. In some species, multiple females share a common nest, but each makes and provisions her own cells independently. This type of group is called "communal" and is not uncommon. The primary advantage appears to be that a nest entrance is easier to defend from predators and parasites when multiple females use that same entrance regularly. Biology Life cycle The life cycle of a bee, be it a solitary or social species, involves the laying of an egg, the development through several moults of a legless larva, a pupation stage during which the insect undergoes complete metamorphosis, followed by the emergence of a winged adult. The number of eggs laid by a female during her lifetime can vary from eight or fewer in some solitary bees, to more than a million in highly social species. Most solitary bees and bumble bees in temperate climates overwinter as adults or pupae and emerge in spring when increasing numbers of flowering plants come into bloom. The males usually emerge first and search for females with which to mate. Like the other members of Hymenoptera, bees are haplodiploid; the sex of a bee is determined by whether or not the egg is fertilized. After mating, a female stores the sperm, and determines which sex is required at the time each individual egg is laid, fertilized eggs producing female offspring and unfertilized eggs, males. Tropical bees may have several generations in a year and no diapause stage. The egg is generally oblong, slightly curved and tapering at one end. Solitary bees lay each egg in a separate cell with a supply of mixed pollen and nectar next to it. This may be rolled into a pellet or placed in a pile and is known as mass provisioning. Social bee species provision progressively, that is, they feed the larva regularly while it grows. The nest varies from a hole in the ground or in wood, in solitary bees, to a substantial structure with wax combs in bumblebees and honey bees. In most species, larvae are whitish grubs, roughly oval and bluntly pointed at both ends. They have 15 segments and spiracles in each segment for breathing. They have no legs but move within the cell, helped by tubercles on their sides. They have short horns on the head, jaws for chewing food and an appendage on either side of the mouth tipped with a bristle. There is a gland under the mouth that secretes a viscous liquid which solidifies into the silk they use to produce a cocoon. The cocoon is semi-transparent and the pupa can be seen through it. Over the course of a few days, the larva undergoes metamorphosis into a winged adult. When ready to emerge, the adult splits its skin dorsally and climbs out of the exuviae and breaks out of the cell. Flight Antoine Magnan's 1934 book says that he and André Sainte-Laguë had applied the equations of air resistance to insects and found that their flight could not be explained by fixed-wing calculations, but that "One shouldn't be surprised that the results of the calculations don't square with reality". This has led to a common misconception that bees "violate aerodynamic theory".
In fact it merely confirms that bees do not engage in fixed-wing flight, and that their flight is explained by other mechanics, such as those used by helicopters. In 1996 it was shown that vortices created by many insects' wings helped to provide lift. High-speed cinematography and robotic mock-up of a bee wing showed that lift was generated by "the unconventional combination of short, choppy wing strokes, a rapid rotation of the wing as it flops over and reverses direction, and a very fast wing-beat frequency". Wing-beat frequency normally increases as size decreases, but as the bee's wing beat covers such a small arc, it flaps approximately 230 times per second, faster than a fruitfly (200 times per second) which is 80 times smaller. Navigation, communication, and finding food The ethologist Karl von Frisch studied navigation in the honey bee. He showed that honey bees communicate by the waggle dance, in which a worker indicates the location of a food source to other workers in the hive. He demonstrated that bees can recognize a desired compass direction in three different ways: by the Sun, by the polarization pattern of the blue sky, and by the Earth's magnetic field. He showed that the Sun is the preferred or main compass; the other mechanisms are used under cloudy skies or inside a dark beehive. Bees navigate using spatial memory with a "rich, map-like organization". Digestion The gut of bees is relatively simple, but multiple metabolic strategies exist in the gut microbiota. Pollinating bees consume nectar and pollen, which require different digestion strategies by somewhat specialized bacteria. While nectar is a liquid of mostly monosaccharide sugars and so easily absorbed, pollen contains complex polysaccharides: branching pectin and hemicellulose. Approximately five groups of bacteria are involved in digestion. Three groups specialize in simple sugars (Snodgrassella and two groups of Lactobacillus), and two other groups in complex sugars (Gilliamella and Bifidobacterium). Digestion of pectin and hemicellulose is dominated by bacterial clades Gilliamella and Bifidobacterium respectively. Bacteria that cannot digest polysaccharides obtain enzymes from their neighbors, and bacteria that lack certain amino acids do the same, creating multiple ecological niches. Although most bee species are nectarivorous and palynivorous, some are not. Particularly unusual are vulture bees in the genus Trigona, which consume carrion and wasp brood, turning meat into a honey-like substance. Drinking guttation drops from leaves is also a source of energy and nutrients. Ecology Floral relationships Most bees are polylectic (generalist) meaning they collect pollen from a range of flowering plants, but some are oligoleges (specialists), in that they only gather pollen from one or a few species or genera of closely related plants. In Melittidae and Apidae we also find a few genera that are highly specialized for collecting plant oils both in addition to, and instead of, nectar, which is mixed with pollen as larval food. Male orchid bees in some species gather aromatic compounds from orchids, which is one of the few cases where male bees are effective pollinators. Bees are able to sense the presence of desirable flowers through ultraviolet patterning on flowers, floral odors, and even electromagnetic fields. Once landed, a bee then uses nectar quality and pollen taste to determine whether to continue visiting similar flowers. 
In rare cases, a plant species may only be effectively pollinated by a single bee species, and some plants are endangered at least in part because their pollinator is also threatened. But, there is a pronounced tendency for oligolectic bees to be associated with common, widespread plants visited by multiple pollinator species. For example, the creosote bush in the arid parts of the United States southwest is associated with some 40 oligoleges. As mimics and models Many bees are aposematically colored, typically orange and black, warning of their ability to defend themselves with a powerful sting. As such they are models for Batesian mimicry by non-stinging insects such as bee-flies, robber flies and hoverflies, all of which gain a measure of protection by superficially looking and behaving like bees. Bees are themselves Müllerian mimics of other aposematic insects with the same color scheme, including wasps, lycid and other beetles, and many butterflies and moths (Lepidoptera) which are themselves distasteful, often through acquiring bitter and poisonous chemicals from their plant food. All the Müllerian mimics, including bees, benefit from the reduced risk of predation that results from their easily recognized warning coloration. Bees are also mimicked by plants such as the bee orchid which imitates both the appearance and the scent of a female bee; male bees attempt to mate (pseudocopulation) with the furry lip of the flower, thus pollinating it. As brood parasites Brood parasites occur in several bee families including the apid subfamily Nomadinae. Females of these species lack pollen collecting structures (the scopa) and do not construct their own nests. They typically enter the nests of pollen collecting species, and lay their eggs in cells provisioned by the host bee. When the "cuckoo" bee larva hatches, it consumes the host larva's pollen ball, and often the host egg also. In particular, the Arctic bee species, Bombus hyperboreus is an aggressive species that attacks and enslaves other bees of the same subgenus. However, unlike many other bee brood parasites, they have pollen baskets and often collect pollen. In Southern Africa, hives of African honeybees (A. mellifera scutellata) are being destroyed by parasitic workers of the Cape honeybee, A. m. capensis. These lay diploid eggs ("thelytoky"), escaping normal worker policing, leading to the colony's destruction; the parasites can then move to other hives. The cuckoo bees in the Bombus subgenus Psithyrus are closely related to, and resemble, their hosts in looks and size. This common pattern gave rise to the ecological principle "Emery's rule". Others parasitize bees in different families, like Townsendiella, a nomadine apid, two species of which are cleptoparasites of the dasypodaid genus Hesperapis, while the other species in the same genus attacks halictid bees. Nocturnal bees Four bee families (Andrenidae, Colletidae, Halictidae, and Apidae) contain some species that are crepuscular. Most are tropical or subtropical, but some live in arid regions at higher latitudes. These bees have greatly enlarged ocelli, which are extremely sensitive to light and dark, though incapable of forming images. Some have refracting superposition compound eyes: these combine the output of many elements of their compound eyes to provide enough light for each retinal photoreceptor. Their ability to fly by night enables them to avoid many predators, and to exploit flowers that produce nectar only or also at night. 
Predators, parasites and pathogens Vertebrate predators of bees include bee-eaters, shrikes and flycatchers, which make short sallies to catch insects in flight. Swifts and swallows fly almost continually, catching insects as they go. The honey buzzard attacks bees' nests and eats the larvae. The greater honeyguide interacts with humans by guiding them to the nests of wild bees. The humans break open the nests and take the honey and the bird feeds on the larvae and the wax. Among mammals, predators such as the badger dig up bumblebee nests and eat both the larvae and any stored food. Specialist ambush predators of visitors to flowers include crab spiders, which wait on flowering plants for pollinating insects; predatory bugs, and praying mantises, some of which (the flower mantises of the tropics) wait motionless, aggressive mimics camouflaged as flowers. Beewolves are large wasps that habitually attack bees; the ethologist Niko Tinbergen estimated that a single colony of the beewolf Philanthus triangulum might kill several thousand honeybees in a day: all the prey he observed were honeybees. Other predatory insects that sometimes catch bees include robber flies and dragonflies. Honey bees are affected by parasites including tracheal and Varroa mites. However, some bees are believed to have a mutualistic relationship with mites. Some mites of genus Tarsonemus are associated with bees. They live in bee nests and ride on adult bees for dispersal. They are presumed to feed on fungi, nest materials or pollen. However, the impact they have on bees remains uncertain. Relationship with humans In mythology and folklore Homer's Hymn to Hermes describes three bee-maidens with the power of divination and thus speaking truth, and identifies the food of the gods as honey. Sources associated the bee maidens with Apollo and, until the 1980s, scholars followed Gottfried Hermann (1806) in incorrectly identifying the bee-maidens with the Thriae. Honey, according to a Greek myth, was discovered by a nymph called Melissa ("Bee"); and honey was offered to the Greek gods from Mycenean times. Bees were also associated with the Delphic oracle and the prophetess was sometimes called a bee. The image of a community of honey bees has been used from ancient to modern times, in Aristotle and Plato; in Virgil and Seneca; in Erasmus and Shakespeare; Tolstoy, and by political and social theorists such as Bernard Mandeville and Karl Marx as a model for human society. In English folklore, bees would be told of important events in the household, in a custom known as "Telling the bees". In art and literature Some of the oldest examples of bees in art are rock paintings in Spain which have been dated to 15,000 BC. W. B. Yeats's poem The Lake Isle of Innisfree (1888) contains the couplet "Nine bean rows will I have there, a hive for the honey bee, / And live alone in the bee loud glade." At the time he was living in Bedford Park in the West of London. Beatrix Potter's illustrated book The Tale of Mrs Tittlemouse (1910) features Babbity Bumble and her brood (pictured). Kit Williams' treasure hunt book The Bee on the Comb (1984) uses bees and beekeeping as part of its story and puzzle. Sue Monk Kidd's The Secret Life of Bees (2004), and the 2009 film starring Dakota Fanning, tells the story of a girl who escapes her abusive home and finds her way to live with a family of beekeepers, the Boatwrights. Bees have appeared in movies, such as Jerry Seinfeld's animated Bee Movie, or Dave Goulson's A Sting in the Tale (2014). 
The playwright Laline Paull's fantasy The Bees (2015) tells the tale of a hive bee named Flora 717 from hatching onwards. Beekeeping Humans have kept honey bee colonies, commonly in hives, for millennia. Depictions of humans collecting honey from wild bees date to 15,000 years ago; efforts to domesticate them are shown in Egyptian art around 4,500 years ago. Simple hives and smoke were used. Among Classical Era authors, beekeeping with the use of smoke is described in Aristotle's History of Animals Book 9. The account mentions that bees die after stinging; that workers remove corpses from the hive, and guard it; castes including workers and non-working drones, but "kings" rather than queens; predators including toads and bee-eaters; and the waggle dance, with the "irresistible suggestion" of ("", it waggles) and ("", they watch). Beekeeping is described in detail by Virgil in his Georgics; it is mentioned in his Aeneid, and in Pliny's Natural History. From the 18th century, European understanding of the colonies and biology of bees allowed the construction of the moveable comb hive so that honey could be harvested without destroying the colony. As commercial pollinators Bees play an important role in pollinating flowering plants, and are the major type of pollinator in many ecosystems that contain flowering plants. It is estimated that one third of the human food supply depends on pollination by insects, birds and bats, most of which is accomplished by bees, whether wild or domesticated. Since the 1970s, there has been a general decline in the species richness of wild bees and other pollinators, probably attributable to stress from increased parasites and disease, the use of pesticides, and a decrease in the number of wild flowers. Climate change probably exacerbates the problem. This is a major cause of concern, as it can cause biodiversity loss and ecosystem degradation as well as increase climate change. Contract pollination has overtaken the role of honey production for beekeepers in many countries. After the introduction of Varroa mites, feral honey bees declined dramatically in the US, though their numbers have since recovered. The number of colonies kept by beekeepers declined slightly, through urbanization, systematic pesticide use, tracheal and Varroa mites, and the closure of beekeeping businesses. In 2006 and 2007 the rate of attrition increased, and was described as colony collapse disorder. In 2010 invertebrate iridescent virus and the fungus Nosema ceranae were shown to be in every killed colony, and deadly in combination. Winter losses increased to about 1/3. Varroa mites were thought to be responsible for about half the losses. Apart from colony collapse disorder, losses outside the US have been attributed to causes including pesticide seed dressings, using neonicotinoids such as clothianidin, imidacloprid and thiamethoxam. From 2013 the European Union restricted some pesticides to stop bee populations from declining further. In 2014 the Intergovernmental Panel on Climate Change report warned that bees faced increased risk of extinction because of global warming. In 2018 the European Union decided to ban field use of all three major neonicotinoids; they remain permitted in veterinary, greenhouse, and vehicle transport usage. Farmers have focused on alternative solutions to mitigate these problems. By raising native plants, they provide food for native bee pollinators like Lasioglossum vierecki and L. leucozonium, leading to less reliance on honey bee populations. 
As food producers Honey is a natural product produced by bees and stored for their own use, but its sweetness has always appealed to humans. Before domestication of bees was even attempted, humans were raiding their nests for their honey. Smoke was often used to subdue the bees and such activities are depicted in rock paintings in Spain dated to 15,000 BC. Honey bees are used commercially to produce honey. As food Bees are considered edible insects. People in some countries eat insects, including the larvae and pupae of bees, mostly stingless species. They also gather larvae, pupae and surrounding cells, known as bee brood, for consumption. In the Indonesian dish botok tawon from Central and East Java, bee larvae are eaten as a companion to rice, after being mixed with shredded coconut, wrapped in banana leaves, and steamed. Bee brood (pupae and larvae), although low in calcium, has been found to be high in protein and carbohydrate, and a useful source of phosphorus, magnesium, potassium, and trace minerals iron, zinc, copper, and selenium. In addition, while bee brood was high in fat, it contained no fat-soluble vitamins (such as A, D, and E) but it was a good source of most of the water-soluble B vitamins including choline as well as vitamin C. The fat was composed mostly of saturated and monounsaturated fatty acids with 2.0% being polyunsaturated fatty acids. As alternative medicine Apitherapy is a branch of alternative medicine that uses honey bee products, including raw honey, royal jelly, pollen, propolis, beeswax and apitoxin (bee venom). The claim that apitherapy treats cancer, which some proponents of apitherapy make, remains unsupported by evidence-based medicine. Stings The painful stings of bees are mostly associated with the poison gland and the Dufour's gland, which are abdominal exocrine glands containing various chemicals. In Lasioglossum leucozonium, the Dufour's gland mostly contains octadecanolide as well as some eicosanolide. There is also evidence of n-triscosane, n-heptacosane, and 22-docosanolide.
Biology and health sciences
Hymenoptera
null
15975113
https://en.wikipedia.org/wiki/Hawaiian%20honeycreeper
Hawaiian honeycreeper
Hawaiian honeycreepers are a group of small birds endemic to Hawaii. They are members of the finch family Fringillidae, closely related to the rosefinches (Carpodacus), but many species have evolved features unlike those present in any other finch. Their great morphological diversity is the result of adaptive radiation in an insular environment. Many have been driven to extinction since the first humans arrived in Hawaii, with extinctions increasing over the last two centuries following European discovery of the islands, with habitat destruction and especially invasive species being the main causes. Taxonomy Before the introduction of molecular phylogenetic techniques, the relationship of the Hawaiian honeycreepers to other bird species was controversial. The honeycreepers were sometimes categorized as a family Drepanididae, other authorities considered them a subfamily, Drepanidinae, of Fringillidae, the finch family. The entire group was also called Drepanidini in treatments where buntings and American sparrows (Passerellidae) were included in the finch family; this term is preferred for just one subgroup of the birds today. Most recently, the entire group has been subsumed into the finch subfamily Carduelinae. The Hawaiian honeycreepers are the sister taxon to the Carpodacus rosefinches. Their ancestors are thought to have been from Asia and diverged from Carpodacus about 7.2 million years ago, and they are thought to have first arrived and radiated on the Hawaiian Islands between 5.7-7.2 million years ago, which was roughly the same time that the islands of Ni'ihau and Kauai formed. The lineage of the recently extinct po'ouli (Melamprosops) was the most ancient of the Hawaiian honeycreeper lineages to survive to recent times, diverging about 5.7-5.8 million years ago. The lineage containing Oreomystis and Paroreomyza was the second to diverge, diverging about a million years after the po'ouli's lineage. Most of the other lineages with highly distinctive morphologies are thought to have originated in the mid-late Pliocene, after the formation of Oahu but prior to the formation of Maui. Due to this, Oahu likely played a key role in the formation of diverse morphologies among honeycreepers, allowing for cycles of colonization and speciation between Kauai and Oahu. A phylogenetic tree of the recent Hawaiian honeycreeper lineages is shown here. Genera or clades with question marks (?) are of controversial or uncertain taxonomic placement. The classification of Paroreomyza and Oreomystis as sister genera and forming the second most basal group is based on genetic and molecular evidence, and has been affirmed by numerous studies; however, when morphological evidence only is used, Paroreomyza is instead the second most basal genus, with Oreomystis being the third most basal genus and more closely allied with the derived Hawaiian honeycreepers, as Oreomystis shares traits with the derived honeycreepers, such as a squared-off tongue and a distinct musty odor, that Paroreomyza does not. This does not align with the genetic evidence supporting Paroreomyza and Oreomystis as sister genera, and it would be seemingly impossible for only Paroreomyza to have lost the distinctive traits but Oreomystis and all core honeycreepers to have retained or convergently evolved them, thus presenting a taxonomic conundrum. 
Viridonia (containing the greater ʻamakihi) may be associated with or even synonymous with the genus Aidemedia (containing the prehistoric icterid-like and sickle-billed gapers), and has the most debated taxonomy; it was long classified within the "greater Hemignathus" radiation (a now-paraphyletic grouping containing species formerly lumped within Hemignathus, including Hemignathus, Akialoa, and Chlorodrepanis) and while some sources speculate it as being sister to Chlorodrepanis (containing the lesser ʻamakihis), other sources speculate it may be a sister genus to the genus Loxops (containing the 'akepas, ʻakekeʻe and ʻalawī). Characteristics Nearly all species of Hawaiian honeycreepers have been noted as having a unique odor to their plumage, described by many researchers as "rather like that of old canvas tents". Today, the flowers of the native ōhia (Metrosideros polymorpha) are favored by a number of nectarivorous honeycreepers. The wide range of bill shapes in this group, from thick, finch-like bills to slender, down-curved bills for probing flowers have arisen through adaptive radiation, where an ancestral finch has evolved to fill a large number of ecological niches. Some 20 species of Hawaiian honeycreeper have become extinct in the recent past, and many more in earlier times, following the arrival of humans who introduced non-native animals (ex: rats, pigs, goats, cows) and converted habitat for agriculture. Genera and species The term "prehistoric" indicates species that became extinct between the initial human settlement of Hawaii (i.e., from the late 1st millennium AD on) and European contact in 1778. Subfamily Carduelinae Drepanidini Genus Aidemedia Olson & James, 1991 – straight thin bills, insectivores Aidemedia chascax Olson & James, 1991 – Oahu icterid-like gaper (prehistoric) Aidemedia lutetiae Olson & James, 1991 – Maui Nui icterid-like gaper (prehistoric) Aidemedia zanclops Olson & James, 1991 – sickle-billed gaper (prehistoric) Genus Akialoa Olson & James, 1995 – pointed, long and down-curved bills, insectivorous or nectarivorous Akialoa ellisiana Gray, 1859 – Oʻahu ʻakialoa (extinct, 1940) Akialoa lanaiensis Rothschild, 1893 – Maui Nui ʻakialoa (extinct, 1892) Akialoa stejnegeri Wilson, 1889 – Kauaʻi ʻakialoa (extinct, 1969) Akialoa obscura Cabanis, 1889 – lesser ʻakialoa (extinct, 1940) Akialoa upupirostris – hoopoe-billed ʻakialoa (prehistoric) Genus Chloridops Wilson, 1888 – thick-billed, hard seed (e.g. 
Myoporum sandwicense) specialist Chloridops kona Wilson, 1888 – Kona grosbeak (extinct, 1894) Chloridops regiskongi – King Kong grosbeak (prehistoric) Chloridops wahi – wahi grosbeak (prehistoric) Genus Chlorodrepanis Olson & James, 1995 – pointed bills, insectivorous and nectarivorous Chlorodrepanis stejnegeri Pratt, 1989 – Kauaʻi ʻamakihi Chlorodrepanis flava Bloxam, 1827 – Oʻahu ʻamakihi Chlorodrepanis virens Cabanis, 1851 – Hawaiʻi ʻamakihi Genus Ciridops Newton, 1892 – finch-like, fed on fruit of Pritchardia species Ciridops anna Dole, 1879 – ʻula-ʻai-hāwane (extinct, 1892 or 1937) Ciridops tenax Olson & James, 1991 stout-legged finch (prehistoric) Genus Drepanis Temminck, 1820 – down-curved bills, nectarivores Drepanis funerea Newton, 1894 – black mamo (extinct, 1907) Drepanis pacifica Gmelin, 1788 – Hawaiʻi mamo (extinct, 1898) Drepanis coccinea Forster, 1780 – ʻiʻiwi Genus Dysmorodrepanis Perkins, 1919 – pincer-like bill, possibly snail specialist Dysmorodrepanis munroi Perkins, 1919 – Lanaʻi hookbill (extinct, 1918) Genus Hemignathus Lichtenstein, 1839 – pointed or long and down-curved bills, insectivorous Hemignathus affinis – Maui nukupuʻu (extinct, 1995–1998) Hemignathus hanapepe – Kauaʻi nukupuʻu (extinct, 1998) Hemignathus lucidus – Oʻahu nukupuʻu (extinct, 1837) Hemignathus vorpalis James & Olson, 2003 – giant nukupu'u (prehistoric) Hemignathus wilsoni Rothschild, 1893 – ʻakiapolaʻau Genus Himatione – thin-billed, nectarivorous Himatione sanguinea Gmelin, 1788 – ʻapapane Himatione fraithii – Laysan honeycreeper (extinct, 1923) Genus Loxioides Oustalet, 1877 – finch-like, Fabales seed specialists Loxioides bailleui Oustalet, 1877 – palila Loxioides kikuichi Olson & James, 2006 – Kaua'i palila (prehistoric, possibly survived to the early 18th century) Genus Loxops – small pointed bills with the tips slightly crossed, insectivorous Loxops caeruleirostris Wilson, 1890 – ‘akeke‘e Loxops coccineus Gmelin, 1789 – Hawaiʻi ʻakepa Loxops ochraceus Rothschild, 1893 - Maui ʻakepa (extinct, 1988) Loxops wolstenholmei Rothschild, 1895 – Oʻahu ʻakepa (extinct, 1990s) Loxops mana Wilson, 1891 – Hawaiʻi creeper Genus Magumma - small pointed bills, insectivorous and nectarivorous Magumma parva Stejneger, 1887 - ʻanianiau Genus Melamprosops Casey & Jacobi, 1974 – short pointed bill, insectivorous and snail specialist Melamprosops phaeosoma Casey & Jacobi, 1974 – poʻouli (extinct, 2004) Genus Oreomystis Wilson, 1891 – short pointed bills, insectivorous Oreomystis bairdi Stejneger, 1887 – ʻakikiki Genus Orthiospiza – large weak bill, possibly soft seed or fruit specialist? Orthiospiza howarthi James & Olson, 1991 - highland finch (prehistoric) Genus Palmeria Rothschild, 1893 – thin-billed, nectarivorous, favors Metrosideros polymorpha Palmeria dolei Wilson, 1891 – ʻakohekohe Genus Paroreomyza – short pointed bills, insectivorous Paroreomyza maculata Cabanis, 1850 – Oʻahu ʻalauahio (possibly extinct, early 1990s?) Paroreomyza flammea (Wilson, 1889) – kākāwahie (extinct, 1963) Paroreomyza Paroreomyza Wilson, 1890 – Lana'i 'alauahio (extinct, 1937) Paroreomyza newtoni (Rothschild, 1893) – Maui ‘alauahio Genus Pseudonestor – parrot-like bill, probes wood for insect larvae Pseudonestor xanthophrys Rothschild, 1893 – Maui parrotbill or kiwikiu Genus Psittirostra – slightly hooked bill, Freycinetia arborea fruit specialist Psittirostra psittacea Gmelin, 1789 – ʻōʻū (probably extinct, 1998?) 
Genus Rhodacanthis – large-billed, granivorous, legume specialists Rhodacanthis flaviceps Rothschild, 1892 – lesser koa-finch (extinct, 1891) Rhodacanthis forfex James & Olson, 2005 – scissor-billed koa-finch (prehistoric) Rhodacanthis litotes James & Olson, 2005 – primitive koa-finch (prehistoric) Rhodacanthis palmeri Rothschild, 1892 – greater koa-finch (extinct, 1896) Genus Telespiza Wilson, 1890 – finch-like, granivorous, opportunistic scavengers Telespiza cantans Wilson, 1890 – Laysan finch Telespiza persecutrix James & Olson, 1991 – Kauaʻi finch (prehistoric) Telespiza ultima Bryan, 1917 – Nihoa finch Telespiza ypsilon James & Olson, 1991 – Maui Nui finch (prehistoric) Genus Vangulifer – flat rounded bills, possibly caught flying insects Vangulifer mirandus – strange-billed finch (prehistoric) Vangulifer neophasis – thin-billed finch (prehistoric) Genus Viridonia Viridonia sagittirostris Rothschild, 1892 – greater ʻamakihi (extinct, 1901) Genus Xestospiza James & Olson, 1991 – cone-shaped bills, possibly insectivorous Xestospiza conica James & Olson, 1991 – cone-billed finch (prehistoric) Xestospiza fastigialis James & Olson, 1991 – ridge-billed finch (prehistoric) Hawaiian honeycreepers were formerly classified into three tribes – Hemignathini, Psittirostrini, and Drepanidini – but they are not currently classified as such. Conservation
Biology and health sciences
Passerida
Animals
15982783
https://en.wikipedia.org/wiki/Winchester%20measure
Winchester measure
Winchester measure is a set of legal standards of volume instituted in the late 15th century (1495) by King Henry VII of England and in use, with some modifications, until the present day. It consists of the Winchester bushel and its dependent quantities, the peck, (dry) gallon and (dry) quart. They would later become known as the Winchester Standards, named because the examples were kept in the city of Winchester. Winchester measure may also refer to: the systems of weights and measures used in the Kingdom of Wessex during the Anglo-Saxon period, later adopted as the national standards of England, as well as the physical standards (prototypes) associated with these systems of units a set of avoirdupois weight standards dating to the mid-14th century, in particular, the 56-pound standard commissioned by King Edward III, which served as the prototype for Queen Elizabeth I's reform of the avoirdupois weight system in 1588 a type of glass bottle, usually amber, used in the drug and chemical industry, known variously as the Boston round, Winchester bottle, or Winchester quart bottle History During the 10th century, the capital city of the English king, Edgar, was at Winchester and, at his direction, standards of measurement were instituted. However, nothing is known of these standards except that, following the Norman Conquest, the physical standards (prototypes) were removed to London. In 1496, a law of King Henry VII instituted the bushel that would later come to be known by the name "Winchester". In 1588 Queen Elizabeth I, while reforming the English weight system (which, at the time, included no less than three different pounds going by the name "avoirdupois") based the new Exchequer standard on an ancient set of bronze weights found at Winchester and dating to the reign of Edward III. These incidents have led to the widespread belief that the Winchester units of dry capacity measure, namely, the bushel and its dependent quantities the peck, gallon and quart, must have originated in the time of King Edgar. However, contemporary scholarship can find no evidence for the existence of any these units in Britain prior to the Norman Conquest. Furthermore, all of the units associated with Winchester measure (quarter, bushel, peck, gallon, pottle, quart, pint) have names of French derivation, at least suggestive of Norman origin. Capacity measures in the Anglo-Saxon period Prior to the Norman Conquest, the following units of capacity measure were used: sester, amber, mitta, coomb, and seam. A statute of 1196 (9 Ric. 1. c. 27) decreed: It is established that all measures of the whole of England be of the same amount, as well of corn as of vegetables and of like things, to wit, one good horse load; and that this measure be level as well in cities and boroughs as without. This appears to be a description of the seam, which would later be equated with the quarter. The word seam is of Latin derivation (from the Vulgar Latin sauma = packsaddle). Some of the other units are likewise of Latin derivation, sester from sextarius, amber from amphora. The sester could thus be taken as roughly a pint, the amber a bushel. However, the values of these units, as well as their relationships to one another, varied considerably over the centuries so that no clear definitions are possible except by specifying the time and place in which the units were used. 
After the Norman Conquest One of the earliest documents defining the gallon, bushel and quarter is the Assize of Weights and Measures, also known as the Tractatus de Ponderibus et Mensuris, sometimes attributed to Henry III or Edward I, but nowadays generally listed under Ancient Statutes of Uncertain Date and presumed to be from −1305. It states, By Consent of the whole Realm the King’s Measure was made, so that an English Penny, which is called the Sterling, round without clipping, shall weigh Thirty-two Grains of Wheat dry in the midst of the Ear; Twenty-pence make an Ounce; and Twelve Ounces make a Pound, and Eight Pounds make a Gallon of Wine; and Eight Gallons of Wine make a Bushel of London; which is the Eighth Part of a Quarter. In 1496, An Act for Weights and Measures (12 Hen. 7. c. 5) stated That the Measure of a Bushel contain Gallons of Wheat, and that every Gallon contain of Wheat of Troy Weight, and every Pound contain Ounces of Troy Weight, and every Ounce contain Sterlings, and every Sterling be of the Weight of Corns of Wheat that grew in the Midst of the Ear of Wheat, according to the old Laws of this Land. Even though this bushel does not quite fit the description of the Winchester bushel, the national standard prototype bushel constructed the following year (and still in existence) is near enough to a Winchester bushel that it is generally considered the first, even though it was not known by that name at the time. The Winchester bushel is first mentioned by name in a statute of 1670 entitled An Act for ascertaining the Measures of Corn and Salt (22 Cha. 2. c. 8) which states, And that if any person or persons after the time aforesaid shall sell any sort of corn or grain, ground or unground, or any kind of salt, usually sold by the bushel, either in open market, or any other place, by any other bushel or measure than that which is agreeable to the standard, marked in his Majesty's exchequer, commonly called the Winchester measure, containing eight gallons to the bushel, and no more or less, and the said bushel strucken even by the wood or brim of the same by the seller, and sealed as this act directs, he or they shall forfeit for every such offence the sum of forty shillings. It is first defined in law by a statute of 1696–97 (8 & 9 Will. 3. c. 22 ss. 9 & 45) And to the End all His Majesties Subjects may know the Content of the Winchester Bushell whereunto this Act refers, and that all Disputes and Differences about Measure may be prevented for the future, it is hereby declared that every round Bushel with a plain and even Bottom, being Eighteen Inches and a Halfe wide throughout, & Eight Inches deep, shall be esteemed a legal Winchester Bushel according to the Standard in His Majesty's Exchequer. In 1824 a new Act was passed in which the gallon was defined as the volume of ten pounds of pure water at with the other units of volume changing accordingly. The "Winchester bushel", which was some 3% smaller than the new bushel (eight new gallons), was retained in the English grain trade until formally abolished in 1835. In 1836, the United States Department of the Treasury formally adopted the Winchester bushel as the standard for dealing in grain and, defined as 2,150.42 cubic inches, it remains so today. While the United Kingdom and the British Colonies changed to "Imperial" measures in 1826, the US continued to use Winchester measures and still does. 
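As a quick check on the figures above, the sketch below reproduces the 2,150.42 cubic inch volume implied by the 1696–97 statutory cylinder (18.5 inches wide, 8 inches deep) and the roughly 3% shortfall against the post-1824 imperial bushel. The conversion constants are standard modern values, not figures taken from the statutes themselves.

```python
import math

# Statutory Winchester bushel (1696-97): a plain cylinder 18.5 inches wide and 8 inches deep.
diameter_in, depth_in = 18.5, 8.0
winchester_cuin = math.pi * (diameter_in / 2) ** 2 * depth_in
print(f"Winchester bushel ~= {winchester_cuin:.2f} cubic inches")  # ~2150.42, the US grain standard

# Imperial bushel (after 1824): 8 gallons, each gallon the volume of 10 lb of water.
LITRES_PER_CUBIC_INCH = 0.016387064   # standard modern conversion
IMPERIAL_GALLON_LITRES = 4.54609      # modern definition of the imperial gallon

winchester_litres = winchester_cuin * LITRES_PER_CUBIC_INCH
imperial_bushel_litres = 8 * IMPERIAL_GALLON_LITRES
shortfall = 1 - winchester_litres / imperial_bushel_litres
print(f"about {shortfall:.1%} smaller than the imperial bushel")   # ~3.1%, matching "some 3% smaller"
```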
Measures in the city museum None of Edgar's standard measures, which were probably made of wood, remain, but the city's copy of the standard yard, although stamped with the official mark of Elizabeth I, may date from the early twelfth century, during the reign of Henry I. Preserved standard weights date from 1357, and although the original bushel is lost, a standard bushel, gallon and quart made of bronze, issued in 1497 and stamped with the mark of Henry VII are still held.
Physical sciences
Measurement systems
Basics and measurement
10806718
https://en.wikipedia.org/wiki/Data%20sharing
Data sharing
Data sharing is the practice of making data used for scholarly research available to other investigators. Many funding agencies, institutions, and publication venues have policies regarding data sharing because transparency and openness are considered by many to be part of the scientific method. A number of funding agencies and science journals require authors of peer-reviewed papers to share any supplemental information (raw data, statistical methods or source code) necessary to understand, develop or reproduce published research. A great deal of scientific research is not subject to data sharing requirements, and many of these policies have liberal exceptions. In the absence of any binding requirement, data sharing is at the discretion of the scientists themselves. In addition, in certain situations governments and institutions prohibit or severely limit data sharing to protect proprietary interests, national security, and subject/patient/victim confidentiality. Data sharing may also be restricted to protect institutions and scientists from use of data for political purposes. Data and methods may be requested from an author years after publication. In order to encourage data sharing and prevent the loss or corruption of data, a number of funding agencies and journals established policies on data archiving. Access to publicly archived data is a recent development in the history of science made possible by technological advances in communications and information technology. To take full advantage of modern rapid communication may require consensual agreement on the criteria underlying mutual recognition of respective contributions. Models recognized for improving the timely sharing of data for more effective response to emergent infectious disease threats include the data sharing mechanism introduced by the GISAID Initiative. Despite policies on data sharing and archiving, data withholding still happens. Authors may fail to archive data or they only archive a portion of the data. Failure to archive data alone is not data withholding. When a researcher requests additional information, an author sometimes refuses to provide it. When authors withhold data like this, they run the risk of losing the trust of the science community. A 2022 study identified about 3500 research papers which contained statements that the data was available, but upon request and further seeking the data, found that it was unavailable for 94% of papers. Data sharing may also indicate the sharing of personal information on a social media platform. U.S. government policies Federal law On August 9, 2007, President Bush signed the America COMPETES Act (or the "America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act") requiring civilian federal agencies to provide guidelines, policies and procedures, to facilitate and optimize the open exchange of data and research between agencies, the public and policymakers. See Section 1009. NIH data sharing policy The NIH Final Statement of Sharing of Research Data says: NSF Policy from Grant General Conditions Office of Research Integrity Allegations of misconduct in medical research carry severe consequences. The United States Department of Health and Human Services established an office to oversee investigations of allegations of misconduct, including data withholding. The website defines the mission: Ideals in data sharing Some research organizations feel particularly strongly about data sharing. 
Stanford University's WaveLab has a philosophy about reproducible research and disclosing all algorithms and source code necessary to reproduce the research. In a paper titled "WaveLab and Reproducible Research," the authors describe some of the problems they encountered in trying to reproduce their own research after a period of time. In many cases, it was so difficult they gave up the effort. These experiences are what convinced them of the importance of disclosing source code. The philosophy is described as follows: "An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures." The Data Observation Network for Earth (DataONE) and Data Conservancy are projects supported by the National Science Foundation to encourage and facilitate data sharing among research scientists and better support meta-analysis. In environmental sciences, the research community is recognizing that major scientific advances involving integration of knowledge in and across fields will require that researchers overcome not only the technological barriers to data sharing but also the historically entrenched institutional and sociological barriers. Dr. Richard J. Hodes, director of the National Institute on Aging, has stated, "the old model in which researchers jealously guarded their data is no longer applicable". The Alliance for Taxpayer Access is a group of organizations that support open access to government-sponsored research. The group has expressed a "Statement of Principles" explaining why they believe open access is important. They also list a number of international public access policies. Nowhere is this more important than in the timely communication of essential information needed to respond effectively to health emergencies. While public domain archives have been embraced for depositing data, mainly post formal publication, they have failed to encourage rapid data sharing during health emergencies, among them the Ebola and Zika outbreaks. More clearly defined principles are required to recognize the interests of those generating the data while permitting free, unencumbered access to and use of the data (pre-publication) for research and practical application, such as those adopted by the GISAID Initiative to counter emergent threats from influenza. International policies Australia Austria Europe — Commission of European Communities Germany United Kingdom 'Omic Data Sharing — a list of policies of major science funders FAIRsharing.org Catalogue of Data Policies India – National Data Sharing and Accessibility Policy – Government of India Data sharing problems in academia Genetics Withholding of data has become so commonplace in genetics that researchers at Massachusetts General Hospital published a journal article on the subject. The study found that "Because they were denied access to data, 28% of geneticists reported that they had been unable to confirm published research." Psychology In a 2006 study, it was observed that, of 141 authors of empirical articles published in American Psychological Association (APA) journals, 103 (73%) did not respond with their data over a 6-month period. In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%).
Archaeology A 2018 study of a random sample of 48 articles published during February–May 2017 in the Journal of Archaeological Science found openly available raw data for 18 papers (53%), with compositional and dating data being the most frequently shared types. The same study also emailed authors of articles on experiments with stone artifacts that were published between 2009 and 2015 to request data relating to the publications. They contacted the authors of 23 articles and received 15 replies, resulting in a 70% response rate. They received five responses that included data files, giving an overall sharing rate of 20%. Scientists in training A study of scientists in training indicated many had already experienced data withholding. This study has given rise to the fear that the future generation of scientists will not abide by the established practices. Differing approaches in different fields Requirements for data sharing are more commonly imposed by institutions, funding agencies, and publication venues in the medical and biological sciences than in the physical sciences. Requirements vary widely regarding whether data must be shared at all, with whom the data must be shared, and who must bear the expense of data sharing. Funding agencies such as the NIH and NSF tend to require greater sharing of data, but even these requirements tend to acknowledge the concerns of patient confidentiality, costs incurred in sharing data, and the legitimacy of the request. Private interests and public agencies with national security interests (defense and law enforcement) often discourage sharing of data and methods through non-disclosure agreements. Data sharing poses specific challenges in participatory monitoring initiatives, for example where forest communities collect data on local social and environmental conditions. In this case, a rights-based approach to the development of data-sharing protocols can be based on principles of free, prior and informed consent, and prioritise the protection of the rights of those who generated the data, and/or those potentially affected by data-sharing.
Physical sciences
Science basics
Basics and measurement
10809525
https://en.wikipedia.org/wiki/Port%20of%20Ningbo-Zhoushan
Port of Ningbo-Zhoushan
The Port of Ningbo-Zhoushan is the busiest port in the world in terms of cargo tonnage. It handled 888.96 million tons of cargo in 2015. The port is located in Ningbo and Zhoushan, on the coast of the East China Sea, in Zhejiang province on the southeast end of Hangzhou Bay, across which it faces the municipality of Shanghai. The port is at the crossroads of the north–south inland and coastal shipping route, including canals to the important inland waterway to interior China, the Yangtze River, to the north. The port consists of several ports which are Beilun (seaport), Zhenhai (estuary port), and old Ningbo harbor (inland river port). The operator of the port, Ningbo Zhoushan Port Co., Ltd. (NZP), is a listed company, but it is 76.31% owned by state-owned Ningbo Zhoushan Port Group Co., Ltd., . History Ningbo Port was established in 1738. During the Tang dynasty (618–907), it was known as one of the three major seaports for foreign trade under the name "Mingzhou", along with Yangzhou and Guangzhou. In the Song dynasty, it became one of the three major port cities for foreign trade, together with Guangzhou and Quanzhou. It was designated as one of the "Five Treaty Ports" along with Guangzhou, Xiamen, Fuzhou and Shanghai after the 1842 Treaty of Nanking that ended the First Opium War. In 2006, the Port of Ningbo was merged with the neighboring Port of Zhoushan to form a combined cargo handling center. The combined Ningbo-Zhoushan Port handled a total cargo volume of 744,000,000 metric tons of cargo in 2012, making it the largest port in the world in terms of cargo tonnage, surpassing the Port of Shanghai for the first time. The port is part of the 21st Century Maritime Silk Road that runs from the Chinese coast to Singapore, towards the southern tip of India to Mombasa, from there through the Red Sea via the Suez Canal to the Mediterranean, there to the Upper Adriatic region to the northern Italian hub of Trieste with its connections to Central Europe and the North Sea. Economic trade The Port of Ningbo-Zhoushan is involved in economic trade with cargo shipment, raw materials and manufactured goods from as far as North and South America and Oceania. It has economic trade with over 560 ports from more than 90 countries and regions in the world. It is one of a growing number of ports in China with a cargo throughput volume exceeding 100 million tons annually. The water quality within Ningbo-Zhoushan Port has become badly polluted over the past ten years, due to the massive scale of maritime traffic constantly in operation. Port infrastructure The Port of Ningbo-Zhoushan complex is a modern multi-purpose deep water port, consisting of inland, estuary, and coastal harbors. There are a total of 191 berths including 39 deep water berths with 10,000 and more tonnage. The larger ports include a 250,000 tonnage crude oil terminal and a 200,000+ tonnage ore loading berth. There is also a purpose-built terminal for 6th generation container vessels and a 50,000 tonnage berth dedicated for liquid chemical products. In August 2020, the Ningbo-Zhoushan Port (NZP) Group, together with Brazilian iron ore miner Vale, inaugurated the Shulanghu () grinding hub, after a collaboration that began in 2016. This was followed in November 2020 by an () investment deal. The Zhejiang Free Trade Zone was quoted as stating that an "iron ore storage yard, with a maximum capacity of 4.1 million tonnes, an ore blending and processing facility and two shipping berths" would be built.
Technology
Specific piers and ports
null
9311172
https://en.wikipedia.org/wiki/Drug
Drug
A drug is any chemical substance other than a nutrient or an essential dietary ingredient, which, when administered to a living organism, produces a biological effect. Consumption of drugs can be via inhalation, injection, smoking, ingestion, absorption via a patch on the skin, suppository, or dissolution under the tongue. In pharmacology, a drug is a chemical substance, typically of known structure, which, when administered to a living organism, produces a biological effect. A pharmaceutical drug, also called a medication or medicine, is a chemical substance used to treat, cure, prevent, or diagnose a disease or to promote well-being. Traditionally drugs were obtained through extraction from medicinal plants, but more recently also by organic synthesis. Pharmaceutical drugs may be used for a limited duration, or on a regular basis for chronic disorders. Classification Pharmaceutical drugs are often classified into drug classes—groups of related drugs that have similar chemical structures, the same mechanism of action (binding to the same biological target), a related mode of action, and that are used to treat the same disease. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, which is an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This classifies drugs according to their solubility and permeability or absorption properties. Psychoactive drugs are substances that affect the function of the central nervous system, altering perception, mood or consciousness. These drugs are divided into different groups such as: stimulants, depressants, antidepressants, anxiolytics, antipsychotics, and hallucinogens. These psychoactive drugs have been proven useful in treating a wide range of medical conditions including mental disorders around the world. The most widely used drugs in the world include caffeine, nicotine and alcohol, which are also considered recreational drugs, since they are used for pleasure rather than medicinal purposes. All drugs can have potential side effects. Abuse of several psychoactive drugs can cause addiction or physical dependence. Excessive use of stimulants can promote stimulant psychosis. Many recreational drugs are illicit; international treaties such as the Single Convention on Narcotic Drugs exist for the purpose of their prohibition. Etymology In English, the noun "drug" is thought to originate from Old French "", possibly deriving from " ()" from Middle Dutch meaning "dry (barrels)", referring to medicinal plants preserved as dry matter in barrels. In the 1990s however, Spanish lexicographer Federico Corriente Córdoba documented the possible origin of the word in {ḥṭr} an early romanized form of the Al-Andalus language from the northwestern part of the Iberian peninsula. The term could approximately be transcribed as حطروكة or hatruka. The term "drug" has become a skunked term with negative connotation, being used as a synonym for illegal substances like cocaine or heroin or for drugs used recreationally. In other contexts the terms "drug" and "medicine" are used interchangeably. Efficacy Drug action is highly specific and their effects may only be detected in certain individuals. For instance, the 10 highest-grossing drugs in the US may help only 4-25% of people. Often, the activity of a drug depends on the genotype of a patient. 
For example, Erbitux (cetuximab) increases the survival rate of colorectal cancer patients if they carry a particular mutation in the EGFR gene. Some drugs are specifically approved for certain genotypes. Vemurafenib is one such case; it is used for melanoma patients who carry a mutation in the BRAF gene. The number of people who benefit from a drug determines if drug trials are worth carrying out, given that phase III trials may cost between $100 million and $700 million per drug. This is the motivation behind personalized medicine, that is, to develop drugs that are adapted to individual patients. Medication A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but does not treat any existing or pre-existing diseases or symptoms. Dispensing of medication is often regulated by governments into three categories—over-the-counter medications, which are available in pharmacies and supermarkets without special restrictions; behind-the-counter medicines, which are dispensed by a pharmacist without needing a doctor's prescription; and prescription-only medicines, which must be prescribed by a licensed medical professional, usually a physician. In the United Kingdom, behind-the-counter medicines are called pharmacy medicines, which can only be sold in registered pharmacies, by or under the supervision of a pharmacist. These medications are designated by the letter P on the label. The range of medicines available without a prescription varies from country to country. Medications are typically produced by pharmaceutical companies and are often patented to give the developer exclusive rights to produce them. Those that are not patented (or with expired patents) are called generic drugs since they can be produced by other companies without restrictions or licenses from the patent holder. Pharmaceutical drugs are usually categorised into drug classes. A group of drugs will share a similar chemical structure, have the same mechanism of action or the same related mode of action, or target the same illness or related illnesses. The Anatomical Therapeutic Chemical Classification System (ATC), the most widely used drug classification system, assigns drugs a unique ATC code, which is an alphanumeric code that assigns it to specific drug classes within the ATC system. Another major classification system is the Biopharmaceutics Classification System. This groups drugs according to their solubility and permeability or absorption properties. Spiritual and religious use Some religions, particularly ethnic religions, are based completely on the use of certain drugs, known as entheogens, which are mostly hallucinogens—psychedelics, dissociatives, or deliriants. Some entheogens include kava, which can act as a stimulant, a sedative, a euphoriant and an anesthetic. The roots of the kava plant are used to produce a drink consumed throughout the cultures of the Pacific Ocean. Some shamans from different cultures use entheogens, defined as "generating the divine within," to achieve religious ecstasy. Amazonian shamans use ayahuasca (yagé), a hallucinogenic brew, for this purpose. Mazatec shamans have a long and continuous tradition of religious use of Salvia divinorum, a psychoactive plant. It is used to facilitate visionary states of consciousness during spiritual healing sessions. Silene undulata is regarded by the Xhosa people as a sacred plant and used as an entheogen.
Its roots are traditionally used to induce vivid (and according to the Xhosa, prophetic) lucid dreams during the initiation process of shamans, classifying it as a naturally occurring oneirogen similar to the more well-known dream herb Calea ternifolia. Peyote, a small spineless cactus, has been a major source of psychedelic mescaline and has probably been used by Native Americans for at least five thousand years. Most mescaline is now obtained from a few species of columnar cacti, in particular from San Pedro, and not from the vulnerable peyote. The entheogenic use of cannabis has also been widely practised for centuries. Rastafari use marijuana (ganja) as a sacrament in their religious ceremonies. Psychedelic mushrooms (psilocybin mushrooms), commonly called magic mushrooms or shrooms, have also long been used as entheogens. Smart drugs and designer drugs Nootropics, also commonly referred to as "smart drugs", are drugs that are claimed to improve human cognitive abilities. Nootropics are used to improve memory, concentration, thought, mood, and learning. An increasingly used nootropic among students, also known as a study drug, is methylphenidate, commonly branded as Ritalin and used for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. At high doses methylphenidate can become highly addictive. Serious addiction can lead to psychosis, anxiety and heart problems, and the use of this drug is related to a rise in suicides and overdoses. Evidence for use outside of student settings is limited but suggests that it is commonplace. Intravenous use of methylphenidate can lead to emphysematous damage to the lungs, known as Ritalin lung. Other drugs known as designer drugs are produced. An early example of what today would be labelled a 'designer drug' was LSD, which was synthesised from ergot. Other examples include analogs of performance-enhancing drugs such as designer steroids taken to improve physical capabilities; these are sometimes used (legally or not) for this purpose, often by professional athletes. Other designer drugs mimic the effects of psychoactive drugs. Since the late 1990s, many such synthesised drugs have been identified. In Japan and the United Kingdom this has spurred the addition of many designer drugs into a newer class of controlled substances known as a temporary class drug. Synthetic cannabinoids have been produced for a longer period of time and are used in the designer drug synthetic cannabis. Recreational drug use Recreational drug use is the use of a drug (legal, controlled, or illegal) with the primary intention of altering the state of consciousness through alteration of the central nervous system in order to create positive emotions and feelings. The hallucinogen LSD is a psychoactive drug commonly used as a recreational drug. Ketamine is a drug used for anesthesia, and is also used as a recreational drug, both in powder and liquid form, for its hallucinogenic and dissociative effects. Some national laws prohibit the use of different recreational drugs; medicinal drugs that have the potential for recreational use are often heavily regulated. However, there are many recreational drugs that are legal in many jurisdictions and widely culturally accepted. Cannabis is the most commonly consumed controlled recreational drug in the world (as of 2012). Its use is illegal in many countries, but it can be used legally in several others, usually with the proviso that it is for personal use only.
It can be used in the leaf form of marijuana (grass), or in the resin form of hashish. Marijuana is a milder form of cannabis than hashish. There may be an age restriction on the consumption and purchase of legal recreational drugs. Some recreational drugs that are legal and accepted in many places include alcohol, tobacco, betel nut, and caffeine products, and in some areas of the world the legal use of drugs such as khat is common. There are a number of legal intoxicants commonly called legal highs that are used recreationally. The most widely used of these is alcohol. Administration of drugs All drugs have a route of administration, and many can be administered by more than one. A bolus is the administration of a medication, drug or other compound that is given to raise its concentration in blood rapidly to an effective level, regardless of the route of administration. Control of drugs Numerous governmental offices in many countries deal with the control and supervision of drug manufacture and use, and the implementation of various drug laws. The Single Convention on Narcotic Drugs is an international treaty brought about in 1961 to prohibit the use of narcotics save for those used in medical research and treatment. In 1971, a second treaty, the Convention on Psychotropic Substances, had to be introduced to deal with newer recreational psychoactive and psychedelic drugs. The legal status of Salvia divinorum varies in many countries and even in states within the United States. Where it is legislated against, the degree of prohibition also varies. The Food and Drug Administration (FDA) in the United States is a federal agency responsible for protecting and promoting public health through the regulation and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter medications, vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices, cosmetics, animal foods and veterinary drugs. In India, the Narcotics Control Bureau (NCB), an Indian federal law enforcement and intelligence agency under the Ministry of Home Affairs, is tasked with combating drug trafficking and assisting international efforts against the abuse of illegal substances, under the provisions of the Narcotic Drugs and Psychotropic Substances Act.
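The ATC codes discussed in the classification and medication sections above are hierarchical: by convention a complete code has seven characters arranged as one letter (anatomical main group), two digits (therapeutic subgroup), one letter (pharmacological subgroup), one letter (chemical subgroup) and two digits (chemical substance). The short Python sketch below only illustrates that layout; the helper name and the example code A10BA02 (a common antidiabetic) are illustrative choices, not something defined in this article.

```python
def split_atc(code: str) -> dict:
    """Split a seven-character ATC code into its five hierarchical levels."""
    if len(code) != 7:
        raise ValueError("a complete ATC code has 7 characters, e.g. 'A10BA02'")
    return {
        "anatomical main group": code[:1],   # e.g. 'A'   (alimentary tract and metabolism)
        "therapeutic subgroup": code[:3],    # e.g. 'A10' (drugs used in diabetes)
        "pharmacological subgroup": code[:4],
        "chemical subgroup": code[:5],
        "chemical substance": code,          # the full seven-character code
    }

print(split_atc("A10BA02"))
```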
Biology and health sciences
Drugs and medication
null
1524030
https://en.wikipedia.org/wiki/Cyclohexane%20conformation
Cyclohexane conformation
Cyclohexane conformations are any of several three-dimensional shapes adopted by cyclohexane. Because many compounds feature structurally similar six-membered rings, the structure and dynamics of cyclohexane are important prototypes of a wide range of compounds. The internal angles of a regular, flat hexagon are 120°, while the preferred angle between successive bonds in a carbon chain is about 109.5°, the tetrahedral angle (the arc cosine of −1/3). Therefore, the cyclohexane ring tends to assume non-planar (warped) conformations, which have all angles closer to 109.5° and therefore a lower strain energy than the flat hexagonal shape. Consider the carbon atoms numbered from 1 to 6 around the ring. If we hold carbon atoms 1, 2, and 3 stationary, with the correct bond lengths and the tetrahedral angle between the two bonds, and then continue by adding carbon atoms 4, 5, and 6 with the correct bond length and the tetrahedral angle, we can vary the three dihedral angles for the sequences (2,3,4), (3,4,5), and (4,5,6). The next bond, from atom 6, is also oriented by a dihedral angle, so we have four degrees of freedom. But that last bond has to end at the position of atom 1, which imposes three conditions in three-dimensional space. If the bond angle in the chain (6,1,2) should also be the tetrahedral angle then we have four conditions. In principle this means that there are no degrees of freedom of conformation, assuming all the bond lengths are equal and all the angles between bonds are equal. It turns out that, with atoms 1, 2, and 3 fixed, there are two solutions called chair, depending on whether the dihedral angle for (1,2,3,4) is positive or negative, and these two solutions are the same under a rotation. But there is also a continuum of solutions, a topological circle where angle strain is zero, including the twist boat and the boat conformations. All the conformations on this continuum have a twofold axis of symmetry running through the ring, whereas the chair conformations do not (they have D3d symmetry, with a threefold axis running through the ring). It is because of the symmetry of the conformations on this continuum that it is possible to satisfy all four constraints with a range of dihedral angles at (1,2,3,4). On this continuum the energy varies because of Pitzer strain related to the dihedral angles. The twist boat has a lower energy than the boat. In order to go from the chair conformation to a twist-boat conformation or the other chair conformation, bond angles have to be changed, leading to a high-energy half-chair conformation. So the relative stabilities are: chair > twist-boat > boat > half-chair. At room temperature the molecule can easily move among these conformations, but only chair and twist-boat can be isolated in pure form, because the others are not at local energy minima. The boat and twist-boat conformations, as said, lie along a continuum of zero angle strain. If there are substituents that allow the different carbon atoms to be distinguished, then this continuum is like a circle with six boat conformations and six twist-boat conformations between them, three "right-handed" and three "left-handed". (Which should be called right-handed is unimportant.)
But if the carbon atoms are indistinguishable, as in cyclohexane itself, then moving along the continuum takes the molecule from the boat form to a "right-handed" twist-boat, and then back to the same boat form (with a permutation of the carbon atoms), then to a "left-handed" twist-boat, and then back again to the achiral boat. The passage boat → twist-boat → boat → twist-boat → boat constitutes a pseudorotation. Coplanar carbons Another way to compare the stability within two molecules of cyclohexane in the same conformation is to evaluate the number of coplanar carbons in each molecule. Coplanar carbons are carbons that are all on the same plane. Increasing the number of coplanar carbons increases the number of eclipsing substituents, which try to adopt 120° arrangements that are unattainable because of the overlapping hydrogens. This overlap increases the overall torsional strain and decreases the stability of the conformation. Cyclohexane diminishes the torsional strain from eclipsing substituents through adopting a conformation with a lower number of coplanar carbons. For example, if a half-chair conformation contains four coplanar carbons and another half-chair conformation contains five coplanar carbons, the conformation with four coplanar carbons will be more stable. Principal conformers The different conformations are called "conformers", a blend of the words "conformation" and "isomer". Chair conformation The chair conformation is the most stable conformer. At room temperature, 99.99% of all molecules in a cyclohexane solution adopt this conformation. The C–C ring of the chair conformation has the same shape as the 6-membered rings in the diamond cubic lattice. This can be modeled as follows. Consider a carbon atom to be a point with four half-bonds sticking out towards the vertices of a tetrahedron. Place it on a flat surface with one half-bond pointing straight up. Looking from directly above, the other three half-bonds will appear to point outwards towards the vertices of an equilateral triangle, so the bonds will appear to have an angle of 120° between them. Arrange six such atoms above the surface so that these 120° angles form a regular hexagon. Reflecting three of the atoms to be below the surface yields the desired geometry. All carbon centers are equivalent. They alternate between two parallel planes, one containing C1, C3 and C5, and the other containing C2, C4, and C6. The chair conformation is left unchanged after a rotation of 120° about the symmetry axis perpendicular to these planes, as well as after a rotation of 60° followed by a reflection in the midpoint plane, resulting in a symmetry group of D3d. While all C–C bonds are tilted relative to the plane, diametrically opposite bonds (such as C1–C2 and C4–C5) are parallel to each other. Six of the twelve C–H bonds are axial, pointing upwards or downwards almost parallel to the symmetry axis. The other six C–H bonds are equatorial, oriented radially outwards with an upwards or downwards tilt. Each carbon center has one axial C–H bond (pointed alternately upwards or downwards) and one equatorial C–H bond (tilted alternately downwards or upwards), enabling each X–C–C–Y unit to adopt a staggered conformation with minimal torsional strain. In this model, the dihedral angles for a series of four carbon atoms going around the ring alternate between exactly +60° (gauche+) and −60° (gauche−). The chair conformation cannot be deformed without changing bond angles or lengths.
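These geometric claims can be checked numerically. The short Python sketch below is not part of the article; it simply places six carbons on a circle with alternating vertical offsets, where the offset a/(4·√2) is the value that follows from requiring tetrahedral bond angles, and prints the resulting bond angles (≈109.47°) and ring dihedrals (alternating ±60°) for the ideal chair.

```python
import numpy as np

# Idealized chair: six carbons on a circle of radius a, alternating z = +h / -h.
# h = a / (4 * sqrt(2)) is the offset for which every C-C-C angle is tetrahedral
# (arccos(-1/3) ~ 109.47 deg); the ring dihedrals then come out as +/- 60 deg.
a = 1.0
h = a / (4 * np.sqrt(2))
ring = np.array([[a * np.cos(np.radians(60 * k)),
                  a * np.sin(np.radians(60 * k)),
                  h * (-1) ** k] for k in range(6)])

def bond_angle(p, q, r):
    """Angle p-q-r in degrees."""
    u, v = p - q, r - q
    return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

def dihedral(p, q, r, s):
    """Signed dihedral angle of the chain p-q-r-s in degrees."""
    b1, b2, b3 = q - p, r - q, s - r
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

for k in range(6):
    p, q, r, s = (ring[(k + i) % 6] for i in range(4))
    print(f"bond angle: {bond_angle(p, q, r):6.2f} deg,  "
          f"dihedral:   {dihedral(p, q, r, s):7.2f} deg")
# Every bond angle prints ~109.47 deg and the dihedrals alternate +60 / -60 deg,
# consistent with the chair having no remaining conformational freedom.
```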
It can be represented as two linked chains, C1–C2–C3–C4 and C1–C6–C5–C4, each mirroring the other, with opposite dihedral angles. The C1–C4 distance depends on the absolute value of this dihedral angle, so in a rigid model, changing one angle requires changing the other angle. If both dihedral angles change while remaining opposites of each other, it is not possible to maintain the correct C–C–C bond angles at C1 and C4. The chair geometry is often preserved when the hydrogen atoms are replaced by halogens or other simple groups. However, when these hydrogens are substituted by a larger group, additional strain may occur due to diaxial interactions between pairs of substituents occupying the same-orientation axial position, which are typically repulsive due to steric crowding. Boat and twist-boat conformations The boat conformations have higher energy than the chair conformations. The interaction between the two flagpole hydrogens, in particular, generates steric strain. Torsional strain also exists between the C2–C3 and C5–C6 bonds (carbon number 1 is one of the two on a mirror plane), which are eclipsed — that is, these two bonds are parallel to each other across a mirror plane. Because of this strain, the boat configuration is unstable (i.e. is not a local energy minimum). The molecular symmetry is C2v. The boat conformation spontaneously distorts to twist-boat conformations. Here the symmetry is D2, a purely rotational point group with three twofold axes. This conformation can be derived from the boat conformation by applying a slight twist to the molecule so as to remove eclipsing of two pairs of methylene groups. The twist-boat conformation is chiral, existing in right-handed and left-handed versions. The concentration of the twist-boat conformation at room temperature is less than 0.1%, but at higher temperatures it can reach 30%. Rapid cooling of a hot sample of cyclohexane to a very low temperature will freeze in a large concentration of the twist-boat conformation, which will then slowly convert to the chair conformation upon heating. Dynamics Chair to chair The interconversion of chair conformers is called ring flipping or chair-flipping. Carbon–hydrogen bonds that are axial in one configuration become equatorial in the other, and vice versa. At room temperature the two chair conformations rapidly equilibrate. The proton NMR spectrum of cyclohexane is a singlet at room temperature, with no separation into separate signals for axial and equatorial hydrogens. In one chair form, the dihedral angle of the chain of carbon atoms (1,2,3,4) is positive whereas that of the chain (1,6,5,4) is negative, but in the other chair form, the situation is the opposite. So both these chains have to undergo a reversal of dihedral angle. When one of these two four-atom chains flattens to a dihedral angle of zero, we have the half-chair conformation, at a maximum energy along the conversion path. When the dihedral angle of this chain then becomes equal (in sign as well as magnitude) to that of the other four-atom chain, the molecule has reached the continuum of conformations, including the twist boat and the boat, where the bond angles and lengths can all be at their normal values and the energy is therefore relatively low. After that, the other four-carbon chain has to switch the sign of its dihedral angle in order to attain the target chair form, so again the molecule has to pass through the half-chair as the dihedral angle of this chain goes through zero.
Switching the signs of the two chains sequentially in this way minimizes the maximum energy state along the way (at the half-chair state) — having the dihedral angles of both four-atom chains switch sign simultaneously would mean going through a conformation of even higher energy due to angle strain at carbons 1 and 4. The detailed mechanism of the chair-to-chair interconversion has been the subject of much study and debate. The half-chair state (D, in figure below) is the key transition state in the interconversion between the chair and twist-boat conformations. The half-chair has C2 symmetry. The interconversion between the two chair conformations involves the following sequence: chair → half-chair → twist-boat → half-chair′ → chair′. Twist-boat to twist-boat The boat conformation (C, below) is a transition state, allowing the interconversion between two different twist-boat conformations. While the boat conformation is not necessary for interconversion between the two chair conformations of cyclohexane, it is often included in the reaction coordinate diagram used to describe this interconversion because its energy is considerably lower than that of the half-chair, so any molecule with enough energy to go from twist-boat to chair also has enough energy to go from twist-boat to boat. Thus, there are multiple pathways by which a molecule of cyclohexane in the twist-boat conformation can achieve the chair conformation again. Substituted derivatives In cyclohexane, the two chair conformations have the same energy. The situation becomes more complex with substituted derivatives. Monosubstituted cyclohexanes A monosubstituted cyclohexane is one in which there is one non-hydrogen substituent in the cyclohexane ring. The most energetically favorable conformation for a monosubstituted cyclohexane is the chair conformation with the non-hydrogen substituent in the equatorial position, because it avoids the high steric strain of 1,3-diaxial interactions. In methylcyclohexane the two chair conformers are not isoenergetic. The methyl group prefers the equatorial orientation. The preference of a substituent towards the equatorial conformation is measured in terms of its A value, which is the Gibbs free energy difference between the two chair conformations. A positive A value indicates preference towards the equatorial position. The magnitude of the A values ranges from nearly zero for very small substituents such as deuterium, to about 5 kcal/mol (21 kJ/mol) for very bulky substituents such as the tert-butyl group. Thus, the magnitude of the A value also corresponds to the strength of the preference for the equatorial position. Though an equatorial substituent has no 1,3-diaxial interaction that causes steric strain, it has a gauche interaction in which an equatorial substituent repels the electron density of a neighboring equatorial substituent. Disubstituted cyclohexanes For 1,2- and 1,4-disubstituted cyclohexanes, a cis configuration leads to one axial and one equatorial group. Such species undergo rapid, degenerate chair flipping. For trans-1,2- and trans-1,4-disubstituted cyclohexanes, the diaxial conformation is effectively prevented by its high steric strain. For 1,3-disubstituted cyclohexanes, the cis form is diequatorial and the flipped conformation suffers additional steric interaction between the two axial groups. trans-1,3-Disubstituted cyclohexanes are like cis-1,2- and cis-1,4- and can flip between the two equivalent axial/equatorial forms.
Cis-1,4-Di-tert-butylcyclohexane has an axial tert-butyl group in the chair conformation, and conversion to the twist-boat conformation places both groups in more favorable equatorial positions. As a result, the twist-boat conformation is more stable, as measured by NMR spectroscopy. Also, for a disubstituted cyclohexane, as well as more highly substituted molecules, the aforementioned A values are additive for each substituent. For example, if calculating the A value of a dimethylcyclohexane, any methyl group in the axial position contributes 1.70 kcal/mol; this number is specific to methyl groups and is different for each possible substituent. Therefore, the overall A value for the molecule is 1.70 kcal/mol per methyl group in the axial position. 1,3 diaxial interactions and gauche interactions 1,3 Diaxial interactions occur when the non-hydrogen substituent on a cyclohexane occupies the axial position. This axial substituent is in the eclipsed position with the axial substituents on the 3-carbons relative to itself (there will be two such carbons and thus two 1,3 diaxial interactions). This eclipsed position increases the steric strain on the cyclohexane conformation, and the conformation will shift towards a more energetically favorable equilibrium. Gauche interactions occur when a non-hydrogen substituent on a cyclohexane occupies the equatorial position. The equatorial substituent is in a staggered position with the 2-carbons relative to itself (there will be two such carbons and thus two 1,2 gauche interactions). This creates a dihedral angle of ~60°. This staggered position is generally preferred to the eclipsed positioning. Effects of substituent size on stability Once again, the conformation and position of groups (i.e. substituents) larger than a single hydrogen are critical to the overall stability of the molecule. The larger the group, the less likely it is to occupy the axial position on its respective carbon. Maintaining that position with a larger group costs the molecule more energy overall because of steric repulsion between the large group's nonbonded electron pairs and the electrons of the smaller groups (i.e. hydrogens). Such steric repulsions are absent for equatorial groups. The cyclohexane model thus assesses the steric size of functional groups on the basis of gauche interactions. The gauche interaction will increase in energy as the size of the substituent involved increases. For example, a tert-butyl substituent would sustain a higher-energy gauche interaction than a methyl group and therefore contribute more to the instability of the molecule as a whole. In comparison, a staggered conformation is thus preferred; the larger groups maintain the equatorial position and lower the energy of the entire molecule. This preference for the equatorial position among bulkier groups lowers the energy barriers between different conformations of the ring. When the molecule is activated, there will be a loss in entropy due to the stability of the larger substituents. Therefore, the preference of the equatorial positions by large groups (such as a methyl group) inhibits the reactivity of the molecule and thus makes the molecule more stable as a whole. Effects on conformational equilibrium Conformational equilibrium is the tendency to favor the conformation where cyclohexane is the most stable. This equilibrium depends on the interactions between the molecules in the compound and the solvent.
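Because an A value is a Gibbs free energy difference, it translates directly into an equilibrium population: the equatorial:axial ratio is K = exp(A/RT). The short Python sketch below is only an illustration; it plugs in the roughly 0, 1.70 and 5 kcal/mol figures quoted above for deuterium, methyl and tert-butyl to show how quickly the equatorial chair comes to dominate as the substituent grows.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # room temperature in K

def equatorial_fraction(a_value_kcal_per_mol):
    """Fraction of molecules with the substituent equatorial, from the A value."""
    K = math.exp(a_value_kcal_per_mol / (R * T))   # K = [equatorial] / [axial]
    return K / (1.0 + K)

for name, a in [("deuterium", 0.01), ("methyl", 1.70), ("tert-butyl", 5.0)]:
    print(f"{name:10s}  A = {a:4.2f} kcal/mol  ->  "
          f"{100 * equatorial_fraction(a):5.1f}% equatorial")
# methyl comes out around 95% equatorial; tert-butyl is essentially locked
# equatorial, matching the qualitative picture given in the text.
```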
Polarity and nonpolarity are the main factors in determining how well a solvent interacts with a compound. Cyclohexane is considered nonpolar, meaning that there is no electronegativity difference between its bonds and its overall structure is symmetrical. Due to this, when cyclohexane is immersed in a polar solvent, it will have less solvent distribution, which signifies a poor interaction between the solvent and solute. This produces a limited catalytic effect. Moreover, when cyclohexane comes into contact with a nonpolar solvent, the solvent distribution is much greater, showing a strong interaction between the solvent and solute. This strong interaction yields a heightened catalytic effect. Heterocyclic analogs Heterocyclic analogs of cyclohexane are pervasive in sugars, piperidines, dioxanes, etc. They generally follow the trends seen for cyclohexane, i.e. the chair conformer being most stable. The axial–equatorial equilibria (A values) are however strongly affected by the replacement of a methylene by O or NH. Illustrative are the conformations of the glucosides. 1,2,4,5-Tetrathiane ((CH2S2)2) lacks the unfavorable 1,3-diaxial interactions of cyclohexane. Consequently its twist-boat conformation is populated; in the corresponding tetramethyl structure, 3,3,6,6-tetramethyl-1,2,4,5-tetrathiane, the twist-boat conformation dominates. Historical background In 1890, Hermann Sachse, a 28-year-old assistant in Berlin, published instructions for folding a piece of paper to represent two forms of cyclohexane he called symmetrical and asymmetrical (what we would now call chair and boat). He clearly understood that these forms had two positions for the hydrogen atoms (again, to use modern terminology, axial and equatorial), that two chairs would probably interconvert, and even how certain substituents might favor one of the chair forms. Because he expressed all this in mathematical language, few chemists of the time understood his arguments. He made several attempts at publishing these ideas, but none succeeded in capturing the imagination of chemists. His death in 1893 at the age of 31 meant his ideas sank into obscurity. It was only in 1918 that Ernst Mohr, based on the molecular structure of diamond that had recently been solved using the then very new technique of X-ray crystallography, was able to successfully argue that Sachse's chair was the pivotal motif. Derek Barton and Odd Hassel shared the 1969 Nobel Prize in Chemistry for work on the conformations of cyclohexane and various other molecules. Practical applications Cyclohexane is the most stable of the cycloalkanes, due to the stability of its chair conformer. This conformer stability allows cyclohexane to be used as a standard in lab analyses. More specifically, cyclohexane is used as a standard for pharmaceutical reference in solvent analysis of pharmaceutical compounds and raw materials. This specific standard signifies that cyclohexane is used in quality analysis of food and beverages, pharmaceutical release testing, and pharmaceutical method development; these various methods test for purity, biosafety, and bioavailability of products. The stability of the chair conformer of cyclohexane gives the cycloalkane a versatile and important application when regarding the safety and properties of pharmaceuticals.
Physical sciences
Stereochemistry
Chemistry
1524766
https://en.wikipedia.org/wiki/Low-Frequency%20Array
Low-Frequency Array
The Low-Frequency Array (LOFAR) is a large radio telescope, with an antenna network located mainly in the Netherlands, and spreading across 7 other European countries as of 2019. Originally designed and built by ASTRON, the Netherlands Institute for Radio Astronomy, it was first opened by Queen Beatrix of The Netherlands in 2010, and has since been operated by ASTRON, first on behalf of the International LOFAR Telescope (ILT) partnership and now on behalf of the LOFAR ERIC. LOFAR consists of a vast array of omnidirectional radio antennas using a modern concept in which the signals from the separate antennas are not directly connected electrically to act as a single large antenna, as they are in most array antennas. Instead, the LOFAR dipole antennas (of two types) are distributed in stations, within which the antenna signals can be partly combined in analogue electronics, then digitised, then combined again across the full station. This step-wise approach provides great flexibility in setting and rapidly changing the directional sensitivity on the sky of an antenna station. The data from all stations are then transported over fiber to a central digital processor, and combined in software to emulate a conventional radio telescope dish with a resolving power corresponding to the greatest distance between the antenna stations across Europe. LOFAR is thus an interferometric array, using about 20,000 small antennas concentrated in 52 stations since 2019. Of these, 38 stations are distributed across the Netherlands, built with regional and national funding. There are six stations in Germany, three in Poland, and one each in France, Great Britain, Ireland, Latvia, and Sweden, with various national, regional, and local funding and ownership. Italy officially joined the International LOFAR Telescope (ILT) in 2018; construction at the INAF observatory site in Medicina, near Bologna, is planned as soon as upgraded (so-called LOFAR2.0) hardware becomes available. Further stations in other European countries are in various stages of planning. The total effective collecting area is approximately 300,000 square meters, depending on frequency and antenna configuration. Until 2014, data processing was performed by a Blue Gene/P supercomputer situated in the Netherlands at the University of Groningen. Since 2014, LOFAR has used a GPU-based correlator and beamformer, COBALT, for that task. LOFAR is also a technology and science pathfinder for the Square Kilometre Array. Technical information LOFAR was conceived as an innovative effort to force a breakthrough in sensitivity for astronomical observations at radio frequencies below 250 MHz. Astronomical radio interferometers usually consist either of arrays of parabolic dishes (e.g. the One-Mile Telescope or the Very Large Array), arrays of one-dimensional antennas (e.g. the Molonglo Observatory Synthesis Telescope) or two-dimensional arrays of omnidirectional antennas (e.g. Antony Hewish's Interplanetary Scintillation Array). LOFAR combines aspects of many of these earlier telescopes; in particular, it uses omnidirectional dipole antennas as elements of a phased array at individual stations, and combines those phased arrays using the aperture synthesis technique developed in the 1950s.
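The "electronic pointing" described above, in which a station of fixed dipoles has its directional sensitivity set purely by combining the antenna signals with appropriate phases, can be illustrated with a toy delay-and-sum calculation. The sketch below is not LOFAR's actual signal chain; the antenna count, spacing and observing frequency are invented purely for illustration.

```python
import numpy as np

# Toy phased-array illustration: a 1-D line of fixed antennas is "pointed"
# by applying a per-antenna phase ramp before summing, with no moving parts.
# All numbers here are illustrative and are not taken from the LOFAR design.
c = 3.0e8                               # speed of light, m/s
freq = 60e6                             # observing frequency, Hz
lam = c / freq                          # wavelength, m
n_ant = 16
positions = np.arange(n_ant) * lam / 2  # antennas half a wavelength apart
steer = np.radians(30.0)                # desired pointing angle from zenith

def beam_power(theta):
    """Normalised power of the phased sum toward zenith angle theta."""
    geometric = 2 * np.pi * positions * np.sin(theta) / lam   # arrival phases
    applied = -2 * np.pi * positions * np.sin(steer) / lam    # beamformer phases
    return abs(np.sum(np.exp(1j * (geometric + applied)))) ** 2 / n_ant ** 2

thetas = np.radians(np.linspace(-90.0, 90.0, 721))
powers = np.array([beam_power(t) for t in thetas])
print(f"response peaks at {np.degrees(thetas[powers.argmax()]):.1f} deg")  # ~30 deg
```

Changing the single steering parameter re-points the summed response instantly, which is why a station with no moving parts can switch targets, or form several simultaneous beams, limited mainly by data rate and processing.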
Like the earlier Cambridge Low Frequency Synthesis Telescope (CLFST) low-frequency radio telescope, the design of LOFAR has concentrated on the use of large numbers of relatively cheap antennas without any moving parts, concentrated in stations, with the mapping performed using aperture synthesis software. The direction of observation ("beam") of the stations is chosen electronically by phase delays between the antennas. LOFAR can observe in several directions simultaneously, as long as the aggregated data rate remains under its cap. This in principle allows a multi-user operation. LOFAR makes observations in the 10 MHz to 240 MHz frequency range with two types of antennas: Low Band Antenna (LBA) and High Band Antenna (HBA), optimized for 10–80 MHz and 120–240 MHz respectively. The electric signals from the LOFAR stations are digitised, transported to a central digital processor, and combined in software in order to map the sky. Therefore, LOFAR is a "software telescope". The cost of such telescopes is dominated by the cost of electronics and will therefore mostly follow Moore's law, becoming cheaper with time and allowing increasingly large telescopes to be built. Each antenna is fairly simple, but there are about 20,000 of them in the LOFAR array. LOFAR stations To make radio surveys of the sky with adequate resolution, the antennas are arranged in clusters that are spread out over an area of more than 1000 km in diameter. The LOFAR stations in the Netherlands reach baselines of about 100 km. LOFAR currently receives data from 24 core stations (in Exloo), 14 'remote' stations in The Netherlands, and 14 international stations. Each of the core and remote stations has 48 HBAs and 96 LBAs and a total of 48 digital Receiver Units (RCUs). International stations have 96 LBAs and 96 HBAs and a total of 96 digital Receiver Units (RCUs). The locations of the international LOFAR stations are:
Germany: Effelsberg – run by Max Planck Institute for Radio Astronomy, at the site of the Effelsberg Radio Telescope; Unterweilenbach/Garching – run by Max Planck Institute for Astrophysics; Tautenburg – at the site of the Thüringer Landessternwarte Tautenburg (Thuringian State Observatory); Potsdam-Bornim – run by Astrophysikalisches Institut Potsdam; Jülich – run by the University of Bochum, Jacobs University Bremen, and Forschungszentrum Jülich; Norderstedt – run by Hamburger Sternwarte and Universität Bielefeld
United Kingdom: Chilbolton – at the site of the Chilbolton Observatory
France: Nançay – at the site of the Nançay Radio Telescope
Sweden: Onsala – at the site of the Onsala Space Observatory
Poland: Bałdy – run by the University of Warmia and Mazury in Olsztyn; Borówiec – run by the Space Research Centre of the Polish Academy of Sciences; Łazy – run by Jagiellonian University
Ireland: Birr – run by Trinity College Dublin at the Rosse Observatory on the grounds of Birr Castle
Latvia: Ventspils – at the site of the Ventspils International Radio Astronomy Centre in Irbene
Italy: planned at the site of the Medicina Observatory
Bulgaria: planned at the site of the National Astronomical Observatory Rozhen
NenuFAR The NenuFAR telescope is co-located at the Nançay radio telescope. It is an extension of the Nançay LOFAR station (FR606), adding 96 low frequency tiles, each consisting of a "mini-array" of 19 crossed-dipole antennas, distributed in a circle with a diameter of approximately 400 m. Each tile is a hexagonal cluster of antennas that are phased together in analogue electronics.
The telescope can capture radio frequencies in the 10–85 MHz range, covering the LOFAR Low Band (30–80 MHz) range as well. The NenuFAR array can work as a high-sensitivity LOFAR-compatible super-LBA station (LSS), operating together with the rest of LOFAR to increase the array's global sensitivity by nearly a factor of two and improve its imaging capabilities. It can also function as a second supercore to improve array availability. Due to its dedicated receiver, NenuFAR can also operate as a standalone instrument, known as NenuFAR/Standalone in this mode. Other stations Additionally, a set of LOFAR antennas is deployed at KAIRA (the Kilpisjärvi Atmospheric Imaging Receiver Array) near Kilpisjärvi, Finland. This installation functions as a VHF receiver either in stand-alone mode or as part of a bistatic radar system together with the EISCAT transmitter in Tromsø. Data transfer Data transport requirements are in the range of several gigabits per second per station, and the processing power needed is several tens of teraFLOPS. The data from LOFAR is stored in the LOFAR long-term archive. The archive is implemented as distributed storage, with data spread over the Target data centre located in the Donald Smits Center for Information Technology at the University of Groningen, a data centre in Amsterdam, and the Forschungszentrum Jülich in Germany. Sensitivity The mission of LOFAR is to map the Universe at radio frequencies from ~10–240 MHz with greater resolution and greater sensitivity than previous surveys, such as the 7C and 8C surveys, and surveys by the Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT). LOFAR will be the most sensitive radio observatory at its low observing frequencies until the Square Kilometre Array (SKA) comes online in the late 2020s. Even then, the SKA will only observe at frequencies >50 MHz and LOFAR's angular resolution will remain far superior. Science case The sensitivities and spatial resolutions attainable with LOFAR make possible several fundamental new studies of the Universe as well as facilitating unique practical investigations of the Earth's environment. In the following list the term z is a dimensionless quantity indicating the redshift of the radio sources seen by LOFAR. In the very distant Universe, LOFAR can search for the signature produced by the reionization of neutral hydrogen. This crucial phase change is predicted to occur at the epoch of the formation of the first stars and galaxies, marking the end of the so-called "dark ages". The redshift at which reionization is thought to occur will shift the 21 cm line of neutral hydrogen at 1420.40575 MHz into the LOFAR observing window: the frequency observed today is the rest frequency multiplied by 1/(z+1). In the distant "formative" Universe, LOFAR is capable of detecting the most distant massive galaxies and will study the processes by which the earliest structures in the Universe (galaxies, clusters and active nuclei) form and probe the intergalactic gas. In the magnetic Universe, LOFAR is mapping the distribution of cosmic rays and global magnetic fields in our own and nearby galaxies, in galaxy clusters and in the intergalactic medium. In the high-energy Universe, LOFAR detects ultra-high-energy cosmic rays as they pierce the Earth's atmosphere. A dedicated test station for this purpose, LOPES, has been in operation since 2003.
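The 1/(z+1) scaling quoted above for the redshifted 21 cm line is simple enough to check directly; the few lines of Python below are illustrative only and reproduce the frequencies used in the reionization discussion elsewhere in this article (the z = 20 entry is just an extra example point).

```python
# Rest frequency of the 21 cm hyperfine line of neutral hydrogen, in MHz.
REST_MHZ = 1420.40575

def observed_mhz(z):
    """Frequency at which 21 cm emission from redshift z is observed today."""
    return REST_MHZ / (1.0 + z)

def redshift_of(freq_mhz):
    """Redshift whose 21 cm emission arrives at the given observed frequency."""
    return REST_MHZ / freq_mhz - 1.0

for z in (6.0, 11.4, 20.0):
    print(f"z = {z:4.1f}  ->  observed at {observed_mhz(z):6.1f} MHz")
print(f"115 MHz corresponds to z = {redshift_of(115.0):.1f}")
# z = 6 lands near 200 MHz and z = 11.4 near 115 MHz, inside the high-band
# window quoted for the epoch-of-reionization search; z ~ 20 would fall at
# roughly 68 MHz, in the low band.
```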
Within the Milky Way galaxy, LOFAR has detected many new pulsars within a few kpc from the Sun, has searched for short-lived transient events produced by stellar mergers or black hole accretion, and will search for bursts from Jupiter-like extrasolar planets. Within the Solar System, LOFAR detects coronal mass ejections from the Sun and provides continuous large-scale maps of the solar wind. This crucial information about solar weather and its effect on Earth facilitates predictions of costly and damaging geomagnetic storms. Within the Earth's immediate environment, LOFAR will map irregularities in the ionosphere continuously, detect the ionizing effects of distant gamma-ray bursts and the flashes predicted to arise from the highest energy cosmic rays, the origins of which are unclear. By exploring a new spectral window LOFAR is likely to make serendipitous discoveries. Detections of new classes of objects or new astrophysical phenomena have resulted from almost all previous facilities that opened new regions of the spectrum or pushed instrumental parameters, such as sensitivity, by more than an order of magnitude. Key projects The epoch of reionization One of the most exciting, but technically most challenging, applications of LOFAR will be the search for redshifted 21 cm line emission from the Epoch of Reionization (EoR). It is thought that the 'Dark Ages', the period after recombination when the Universe turned neutral, lasted until around z=20. WMAP polarization results appear to suggest that there may have been extended, or even multiple, phases of reionisation, the start possibly being around z~15–20 and ending at z~6. Using LOFAR, the redshift range from z=11.4 (115 MHz) to z=6 (200 MHz) can be probed. The expected signal is small, and disentangling it from the much stronger foreground emission is challenging. Deep extragalactic surveys One of the most important applications of LOFAR will be to carry out large-sky surveys. Such surveys are well suited to the characteristics of LOFAR and have been designated as one of the key projects that have driven LOFAR since its inception. Such deep LOFAR surveys of the accessible sky at several frequencies will provide unique catalogues of radio sources for investigating several fundamental areas of astrophysics, including the formation of massive black holes, galaxies and clusters of galaxies. Because the LOFAR surveys will probe unexplored regions of parameter space of the Universe, it is likely that they will discover new phenomena. In February 2021, astronomers released, for the first time, a very high-resolution image of 25,000 active supermassive black holes, covering four percent of the Northern celestial hemisphere, based on ultra-low radio wavelengths, as detected by LOFAR. Transient radio phenomena and pulsars The combination of low frequencies, omnidirectional antennae, high-speed data transport and computing means that LOFAR will open a new era in the monitoring of the radio sky. It will be possible to make sensitive radio maps of the entire sky visible from The Netherlands (about 60% of the entire sky) in only one night. Transient radio phenomena, only hinted at by previous narrow-field surveys, will be discovered, rapidly localised with unprecedented accuracy, and automatically compared to data from other facilities (e.g. gamma-ray, optical, and X-ray observatories). Such transient phenomena may be associated with exploding stars, black holes, flares on Sun-like stars, radio bursts from exoplanets or even SETI signals.
In addition, this key science project will make a deep survey for radio pulsars at low radio frequencies, and will attempt to detect giant radio bursts from rotating neutron stars in distant galaxies. Ultra high-energy cosmic rays LOFAR offers a unique possibility in particle physics for studying the origin of high-energy and ultra-high-energy cosmic rays (HECRs and UHECRs) at extremely high energies. Both the sites and processes for accelerating such particles are unknown. Possible candidate sources of these HECRs are shocks in the radio lobes of powerful radio galaxies, intergalactic shocks created during the epoch of galaxy formation, so-called hypernovae, gamma-ray bursts, or decay products of super-massive particles from topological defects left over from phase transitions in the early Universe. The primary observable is the intense radio pulse that is produced when a primary cosmic ray hits the atmosphere and produces an extensive air shower (EAS). An EAS is aligned along the direction of motion of the primary particle, and a substantial part of its particle content consists of electron–positron pairs, which emit radio waves in the Earth's magnetic field (e.g., geosynchrotron emission). Cosmic magnetism LOFAR opens the window to the so far unexplored low-energy synchrotron radio waves, emitted by cosmic-ray electrons in weak magnetic fields. Very little is known about the origin and evolution of cosmic magnetic fields. The space around galaxies and between galaxies may all be magnetic, and LOFAR may be the first to detect weak radio emission from such regions. LOFAR will also measure the Faraday effect, the rotation of the polarization plane of low-frequency radio waves, which gives another tool to detect weak magnetic fields. Solar physics and space weather The Sun is an intense radio source. The already strong thermal radiation of the hot solar corona has superimposed on it intense radio bursts that are associated with phenomena of solar activity, like flares and coronal mass ejections (CMEs). Solar radio radiation in the LOFAR frequency range is emitted in the middle and upper corona, so LOFAR is an ideal instrument for studies of the launch of CMEs heading towards interplanetary space. LOFAR's imaging capabilities will yield information on whether such a CME might hit the Earth. This makes LOFAR a valuable instrument for space weather studies. Solar observations with LOFAR will include routine monitoring of solar activity as the root of space weather. Furthermore, LOFAR's flexibility enables rapid responses to solar radio bursts with follow-up observations. Solar flares produce energetic electrons that not only lead to the emission of non-thermal solar radio radiation; the electrons also emit X-rays and heat the ambient plasma. So joint observation campaigns with other ground- and space-based instruments, e.g. RHESSI, Hinode, the Solar Dynamics Observatory (SDO), and eventually the Advanced Technology Solar Telescope and the Solar Orbiter, provide insights into this fundamental astrophysical process. Timeline In the early 1990s, aperture array technology for radio astronomy was being actively studied by ASTRON – the Netherlands Institute for Radio Astronomy. At the same time, scientific interest in a low-frequency radio telescope began to emerge at ASTRON and at the Dutch universities. A feasibility study was carried out and international partners sought during 1999.
In 2000 the Netherlands LOFAR Steering Committee was set up by the ASTRON Board with representatives from all interested Dutch university departments and ASTRON. In November 2003 the Dutch Government allocated 52 million euro to fund the infrastructure of LOFAR under the Bsik programme. In accordance with Bsik guidelines, LOFAR was funded as a multidisciplinary sensor array to facilitate research in geophysics, computer sciences and agriculture as well as astronomy. In December 2003 LOFAR's Initial Test Station (ITS) became operational. The ITS system consists of 60 inverse V-shaped dipoles; each dipole is connected to a low-noise amplifier (LNA), which provides enough amplification of the incoming signals to transport them over a 110 m long coaxial cable to the receiver unit (RCU). On April 26, 2005, an IBM Blue Gene/L supercomputer was installed at the University of Groningen's math centre for LOFAR's data processing. At that time it was the second most powerful supercomputer in Europe, after the MareNostrum in Barcelona. Since 2014 an even more powerful computing cluster (correlator) called COBALT performs the correlation of signals from all individual stations. In August/September 2006 the first LOFAR station (Core Station CS001, a.k.a. CS1) was put in the field using pre-production hardware. A total of 96 dual-dipole antennas (the equivalent of a full LOFAR station) are grouped in four clusters, the central cluster with 48 dipoles and the other three clusters with 16 dipoles each. Each cluster is about 100 m in size. The clusters are distributed over an area of ~500 m in diameter. In November 2007 the international LOFAR station (DE601) next to the Effelsberg 100 m radio telescope became the first operational station. The first fully complete station, CS302, on the edge of the LOFAR core, was delivered in May 2009, with a total of 40 Dutch stations scheduled for completion in 2013. By 2014, 38 stations in the Netherlands, five stations in Germany (Effelsberg, Tautenburg, Unterweilenbach, Bornim/Potsdam, and Jülich), and one each in the UK (Chilbolton), in France (Nançay) and in Sweden (Onsala) were operational. LOFAR was officially opened on 12 June 2010 by Queen Beatrix of the Netherlands. Regular observations started in December 2012.
Technology
Ground-based observatories
null
1526695
https://en.wikipedia.org/wiki/Blue%20spruce
Blue spruce
The blue spruce (Picea pungens), also commonly known as Colorado spruce or Colorado blue spruce, is a species of spruce tree native to North America in Arizona, Colorado, Idaho, New Mexico, Utah and Wyoming. It is noted for its blue-green colored needles, and has therefore been used as an ornamental tree in many places far beyond its native range. Description In the wild, Picea pungens grows to as much as in height, but more typically tall. When planted in parks and gardens it most often grows tall with a spread of . It has scaly grey-brown bark with a slight amount of a cinnamon-red undertone on its trunk, not as rough as an Engelmann spruce. On older trees the trunk bark will be deeply furrowed and scaly. The diameter of the trunk may reach as much as . Blue spruces are conifers with a pyramidal or conical crown when young, but more open and irregular in shape as they become older. The stout branches grow out horizontally in well defined whorls, but lower branches droop downwards as trees age. Young twigs never hang downwards and are yellow-brown in color. The narrow, needle-like, evergreen leaves are quite sharply pointed and may be dull green, blue, or pale white. Each of the needles is four sided with stomata on every side, stiff, and long. The needles are attached radially to their shoots, but curve upward. The leaf buds are golden brown and cone shaped. The buds may be in size and the tip may either be blunt or pointed. The pollen producing cones, more properly strobili, develop throughout the crown of blue spruce trees, but are more common in the upper half of the crown. Pollen cones are mainly yellow with a touch of red and average long. The seed cones begin growing in May or June and release their mature seeds in the autumn of the same year in which they start to grow. When young they are purple-brown in color. When fully mature they are light brown with thin, papery scales and are often curved. Overall they are longer than they are wide, between long, and circular in cross section. The seed cones are only found at the top of the tree. This helps to facilitate cross-pollination. The seeds are dark brown. They average 4 mm in length with the papery wing extending beyond the tip almost twice this length. Chemistry The phytochemistry of the blue spruce is relatively little studied. The ripe seeds have a 1.17% yield of essential oils while the cones produce only 0.38% when steam distilled for four hours. The main component, over 40%, of the essential oils is limonene with β-Pinene and α-Pinene the next most significant. Taxonomy Picea pungens was given its first valid scientific description by George Engelmann in 1879. He had previously named it Abies menziesii in 1862 and then as Picea menziesii in 1863 after, but both those names had already been used making them illegitimate names. Names Picea, the genus name, is thought to come from the Latin word pix meaning "pitch", a reference to the typical sticky resin in spruce bark. The specific epithet pungens means "sharply pointed", referring to the leaves. The most frequently used common name in English is blue spruce. It was first used for other trees in 1817 and is still used for any spruce tree with a glaucous blue color to their needles, but most frequently meaning Picea pungens. Though this is the most common name, in the wild only part of the population has the waxy blue-gray coating for which the tree is named. Less frequently, but still common, is Colorado blue spruce, a name first used in 1912. 
The usage of Colorado spruce dates to 1881, but it is less frequent than the longer alternative. Occasionally encountered are the names Parry's spruce, prickly spruce, silver spruce, and white spruce. Blue spruces are also rarely called silvertip fir, but this name is also applied to Abies magnifica, especially when sold as Christmas trees. In addition it is sometimes labeled as "Colorado green spruce" or "green spruce" by plant nurseries or tree farms. Similar to the meaning of the scientific name, the Navajo name for this species is a compound, c’ó deniní, with c’ó meaning spruce and deniní meaning "it is sharp". Ecology Blue spruce occurs at high elevations, in the forests of the South Central Rockies and in the Southern Rocky Mountains. It grows in mesic montane conifer forests, often associating with Douglas-fir, ponderosa pine, or white fir. It has a riparian affinity, preferring moist soils such as those along streams or at the edges of wet meadows. The Douglas-fir or ponderosa pine only become associated with streams at lower, warmer elevations. It also may be found alongside the quaking aspen (Populus tremuloides) in the high mountain habitats of desert ranges in the Intermountain West. Climate Blue spruce usually grows in cool and humid climatic zones where the annual precipitation mainly occurs in the summer. Blue spruce is most common in Colorado and the Southwest. The annual average temperature ranges from 3.9 to 6.1 degrees C (39 to 43 degrees F), and from −3.9 to −2.8 degrees C (25 to 27 degrees F) in January. In July, the average temperature ranges from 13.9 to 15.0 degrees C (57 to 59 degrees F). The average minimum temperature in January ranges from −11.1 to −8.9 degrees C (12 to 16 degrees F), and the average maximum temperature in July ranges from 21.1 to 22.2 degrees C (70 to 72 degrees F). There is a frost-free period of about 55 to 60 days from June to August. Annual mean precipitation generally varies from 460 to 610 mm (18 to 24 in). Winter is the season with the least precipitation; usually less than 20 percent of the annual moisture falls from December to March. Fifty percent of the annual precipitation occurs during the growing season of the plants. Blue spruce is generally considered to grow best with abundant moisture. Nevertheless, this species can withstand drought better than any other spruce. It can withstand extremely low temperatures (−40 degrees C) as well. Furthermore, this species is more resistant to high insolation and frost damage compared to other associated species. Distributed soil types and topography Blue spruce generally exists on gentle uplands and subirrigated slopes, in well-watered tributary drainages, extending down intermittent streams, and on lower northerly slopes. Blue spruce grows naturally in soils of the order Mollisols and, to a lesser extent, in Histosols and Inceptisols. Blue spruce is considered a pioneer tree species on moist soils in Utah. Rooting habits Blue spruce seedlings have shallow roots that penetrate only a short distance into the soil during the first year of growth. Although freezing itself does little damage to blue spruce, frost heaving can cause seedling loss. Shade in late spring and early autumn minimizes this frost-heaving loss. Despite the shallow roots, blue spruce is able to resist strong winds. Pruning the roots of 2-meter-high trees five years before transplanting doubles their total root surface area.
It also increases the root concentration along the drip irrigation line from 40% to 60%, which is an advantage in landscaping. Pests and diseases The blue spruce is attacked by two species of Adelges, an aphid-like insect that causes galls to form. Nymphs of the pineapple gall adelgid form galls at the base of twigs which resemble miniature pineapples, and those of Cooley's spruce gall adelgid cause cone-shaped galls at the tips of branches. The larvae of the spruce budworm eat the buds and growing shoots, while the spruce needle miner hollows out the needles and makes them coalesce in a webbed mass. An elongated white scale insect, the pine needle scale, feeds on the needles, causing fluffy white patches on the twigs. Aphids also suck sap from the needles and may cause needle drop and possibly dieback. Mites can also infest the blue spruce, especially in a dry summer, causing yellowing of the oldest needles. Another insect pest is the spruce beetle (Dendroctonus rufipennis), which bores under the bark. It often first attacks trees which have been blown over by the wind; when the larvae mature two years afterwards, a major outbreak occurs and vast numbers of beetles attack nearby standing trees. The blue spruce is susceptible to several needle casting diseases which cause the needles to turn yellow, mottled or brown before they fall off. Various rust diseases also affect the tree, causing yellowing of the needles as well as needle fall. Canker caused by Cytospora attacks one of the lower branches first and progressively makes its way higher up the tree. The first symptom is the needles turning reddish-brown and falling off. Meanwhile, patches of white resin appear on the bark and the branch eventually dies. It is also relatively intolerant of light pollution, and when planted near street lights or other outdoor lighting its preparation for winter can be delayed and parts of the tree may be damaged. Range The native range of the blue spruce is largely in the Central and Southern Rocky Mountains and moist mountain valleys and canyons to the west. In New Mexico it only grows naturally in the higher mountain ranges of the state such as the Sandia–Manzano Mountains, Sangre de Cristo Mountains, and San Juan Mountains, as well as on Sierra Blanca Peak to the south. In Arizona the range is even more limited, growing in just Coconino and Apache counties. In Apache County it is found in the White Mountains in central eastern Arizona and the Lukachukai Mountains in the northeastern corner of the state. In Coconino County they only grow on the Kaibab Plateau. The blue spruce grows in every county in the western two-thirds of Colorado; approximately half of the natural range of the species is in the mountains of Colorado. In Utah they are a locally common part of forests in the Uinta Mountains. West of the Uintas, blue spruces are less frequent in canyons south of Salt Lake City. The blue spruce has become naturalized outside of its native range. In North America it has escaped from cultivation in the states of Minnesota and New York. It has also become established to some extent in many western and northern European countries including Iceland, Norway, Sweden, the United Kingdom, France, and Belgium. In central and southern Europe it is found in Germany, Switzerland, Austria, the former Czechoslovakia, and mainland Italy. To the east it grows in European portions of Russia, the Caucasus, and Bulgaria.
Notable trees The tallest documented blue spruce tree is an individual in the San Juan Mountains of southern Colorado in the Hermosa Creek area. When measured by Matt Markworth in 2015 it was tall. Just three years later in 2018 it was threatened by the 416 Fire. Though the fire killed a shorter American champion tree with a larger trunk and crown spread the tall tree was spared due to being located in a sheltered valley. Cultivation Picea pungens and its many cultivars are often grown as ornamental trees in gardens and parks. It is also grown for the Christmas tree industry. It grows best in USDA growing zones 1 through 7, though it also does well in zones warmer than 7 where summer heat is moderate, as at San Francisco. Common cultivars (those marked have gained the Royal Horticultural Society's Award of Garden Merit): 'Baby Blue Eyes', 'Baby Blueeyes', or 'Baby Blue' – This is a semi-dwarf cultivar that grows slowly, but may eventually reach in height. It has a pyramidal shape and holds its color well. 'Fat Albert' – compact perfect cone to of a silver blue color 'Globosa' – shrub from in height 'Hoopsii' – A full size variety with a dense pyramidal habit known for "excellent" silver-blue color of its foliage. It reaches tall when full grown. 'Koster' – A medium sized cultivar that will reach 'Montgomery' – a slow growing dwarf variety. It will typically only grow tall in eight years, but may eventually reach a height of over . 'Pendula' – drooping branches, spreads to about wide by tall 'Sester's Dwarf' – denser foliage than the species, slowly grows to about tall Culture The Navajo and Keres Native Americans use this tree as a traditional medicinal plant and a ceremonial item, and twigs are given as gifts to bring good fortune. In traditional medicine, an infusion of the needles is used to treat colds and settle the stomach. This liquid is also used externally for rheumatic pains. The blue spruce is the state tree of Colorado. It officially became Colorado's state tree on 7 March 1939 when House Joint Resolution 7 was enacted by the legislature. Previously a vote of the state's school children was taken on Arbor Day in 1892 expressing their preference for the blue spruce as the state tree. From 1933 until 2014 the blue spruce was also the state tree of Utah. It was replaced by the quaking aspen because the aspen is a great deal more common than the blue spruce in Utah, making up 10% of the state's tree cover. Gallery
Biology and health sciences
Pinaceae
Plants
1527059
https://en.wikipedia.org/wiki/Percussion%20%28medicine%29
Percussion (medicine)
Percussion is a technique of clinical examination. Overview Percussion is a method of tapping on a surface to determine the underlying structures, and is used in clinical examinations to assess the condition of the thorax or abdomen. It is one of the five methods of clinical examination, together with inspection, palpation, auscultation, and inquiry. It is done with the middle finger of one hand tapping on the middle finger of the other hand using a wrist action. The nonstriking finger (known as the pleximeter) is placed firmly on the body over tissue. When percussing bony areas such as the clavicle, the pleximeter can be omitted and the bone tapped directly, such as when percussing for an apical cavitary lung lesion typical of tuberculosis. There are two types of percussion: direct, in which one or two fingers tap directly on the body surface, and indirect, in which the flexed middle finger strikes the pleximeter finger of the other hand. Broadly, there are four types of percussion sounds: resonant, hyper-resonant, stony dull, and dull. A dull sound indicates the presence of a solid mass under the surface, while a more resonant sound indicates hollow, air-containing structures. As well as producing notes of different audible quality, these techniques produce different sensations in the pleximeter finger. Percussion was first used to distinguish between empty and filled barrels of liquor, and Leopold Auenbrugger is said to have introduced the technique to modern medicine, although the method had been used by Avicenna about 1,000 years earlier for purposes such as percussing over the stomach to gauge how full it was and to distinguish ascites from tympanites. Of the thorax Percussion is used to diagnose pneumothorax, emphysema, and other diseases. It can also be used to assess the respiratory mobility of the thorax. Of the abdomen Percussion is used to determine whether any organ is enlarged (assessing for organomegaly). It is based on the principle of setting the tissue, and the spaces in between, into vibration; the sound thus generated is used to determine whether the tissue is healthy or pathological.
Biology and health sciences
Diagnostics
Health
3002110
https://en.wikipedia.org/wiki/Language%20binding
Language binding
In programming and software design, a binding is an application programming interface (API) that provides glue code specifically made to allow a programming language to use a foreign library or operating system service (one that is not native to that language). Characteristics Binding generally refers to a mapping of one thing to another. In the context of software libraries, bindings are wrapper libraries that bridge two programming languages, so that a library written for one language can be used in another language. Many software libraries are written in system programming languages such as C or C++. To use such libraries from another, usually higher-level, language such as Java, Common Lisp, Scheme, Python, or Lua, a binding to the library must be created in that language, possibly requiring recompiling the language's code, depending on the amount of modification needed. However, most languages offer a foreign function interface, such as Python's and OCaml's ctypes and Embeddable Common Lisp's cffi and uffi (see the example sketch below). For example, Python bindings are used when an extant C library, written for some purpose, is to be used from Python. Another example is libsvn, which is written in C to provide an API to access the Subversion software repository. To access Subversion from within Java code, libsvnjavahl can be used, which depends on libsvn being installed and acts as a bridge between Java and libsvn, thus providing an API that invokes functions from libsvn to do the work. Major motives for creating library bindings include software reuse (avoiding reimplementing a library in several languages) and the difficulty of implementing some algorithms efficiently in some high-level languages. Runtime environment Object models Common Object Request Broker Architecture (CORBA) – cross-platform-language model Component Object Model (COM) – Microsoft Windows-only cross-language model Distributed Component Object Model (DCOM) – extension enabling COM to work over networks Cross Platform Component Object Model (XPCOM) – Mozilla applications cross-platform model Common Language Infrastructure – .NET Framework cross-platform-language model Freedesktop.org D-Bus – open cross-platform-language model Virtual machines Comparison of application virtual machines Porting Portable object – cross-platform-language object model definition
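The following is a minimal illustrative sketch of the binding idea described above, written with Python's standard ctypes foreign function interface. It is not taken from any project named in this article: the shared library name libexample.so and the C function add_ints are hypothetical placeholders for whatever C library is being wrapped.

```python
# Minimal sketch of a Python binding over a hypothetical C library, using ctypes.
# Assumes a C shared library "libexample.so" exporting:  int add_ints(int a, int b);
import ctypes

# Load the foreign (C) library into the Python process.
_lib = ctypes.CDLL("./libexample.so")

# Declare the argument and return types so ctypes can marshal values correctly.
_lib.add_ints.argtypes = [ctypes.c_int, ctypes.c_int]
_lib.add_ints.restype = ctypes.c_int

def add_ints(a: int, b: int) -> int:
    """Pythonic wrapper (the 'binding') around the foreign C function."""
    return _lib.add_ints(a, b)

if __name__ == "__main__":
    print(add_ints(2, 3))  # calls into the C library and prints 5
```

Hand-written wrappers like this are the simplest form of binding; real bindings such as those for libsvn typically add error handling, memory management, and higher-level object-oriented interfaces on top of the same basic pattern.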
Technology
Programming languages
null
5492915
https://en.wikipedia.org/wiki/Mesosaurus
Mesosaurus
Mesosaurus (meaning "middle lizard") is an extinct genus of reptile from the Early Permian of southern Africa and South America. Along with it, the genera Brazilosaurus and Stereosternum, it is a member of the family Mesosauridae and the order Mesosauria. Mesosaurus was long thought to have been one of the first marine reptiles, although new data suggests that at least those of Uruguay inhabited a hypersaline water body, rather than a typical marine environment. In any case, it had many adaptations to a fully aquatic lifestyle. It is usually considered to have been anapsid, although Friedrich von Huene considered it to be a synapsid. Recent study of Mesosauridae phylogeny places the group as either the basal most clade within Parareptilia or the basal most clade within Sauropsida (with the latter being the less supported position) despite the skull of Mesosaurus possessing the "Synapsid condition" of one temporal fenestra. Discovery and naming The holotype of M. tenuidens, MNHN 1865-77, is nicknamed the "Griqua Mesosaurus" and it was found in a Griqua hut in South Africa, likely in Kimberley, Northern Cape around 1830 and was being used as a pot lid. The circumstances of its discovery and how it was taken from its previous owners in South Africa are unknown, but what is known is that the specimen eventually surfaced in the collection of the French palaeontologist Paul Gervais during the 1860s and he designated it as the holotype of a new genus and species he named Mesosaurus tenuidens in 1865. Since then, Mesosaurus remains have also been identified from South America and were first identified in 1908 as belonging to a second species, M. brasiliensis, by J. H. MacGregor. Later studies have shown that M. brasiliensis was the same animal as M. tenuidens, which remains as the single valid species of Mesosaurus to this day. Two other species of mesosaurids have since been described, which are Stereosternum and Brazilosaurus, which are also considered to be synonyms of Mesosaurus tenuidens according to Piñeiro et al. (2021). Description Mesosaurus had a long skull that was larger than that of Stereosternum and had longer teeth. The teeth are angled outwards, especially those at the tips of the jaws.The bones of the postcranial skeleton are thick, having undergone pachyostosis. Mesosaurus is unusual among reptiles in that it possesses a cleithrum, usually found in more primitive bony fish and tetrapods. The head of the interclavicle of Mesosaurus is triangular, unlike those of other early reptiles, which are diamond-shaped. The nostrils were located at the top, allowing the creature to breathe with only the upper side of its head breaking the surface, in a similar manner to a modern crocodile. Palaeobiology Diet Mesosaurus had a small skull with long jaws. The teeth were originally thought to have been straining devices for the filter feeding of planktonic organisms. However, this idea was based on the assumption that the teeth of Mesosaurus were numerous and close together in the jaws. Newly examined remains of Mesosaurus show that it had fewer teeth and that the dentition was suitable for catching small nektonic prey such as crustaceans. Locomotion Mesosaurus was one of the first reptiles known to have returned to the water after early tetrapods came to land in the Late Devonian or later in the Paleozoic. It was around in length, with webbed feet, a streamlined body, and a long tail that may have supported a fin. 
It probably propelled itself through the water with its long hind legs and flexible tail. Its body was also flexible and could easily move sideways, but it had heavily thickened ribs, which would have prevented it from twisting its body. The pachyostosis seen in the bones of Mesosaurus may have enabled it to reach neutral buoyancy in the upper few meters of the water column. The additional weight may have stabilized the animal at the water's surface. Alternatively, it could have given Mesosaurus greater momentum when gliding underwater. While many features suggest a wholly aquatic lifestyle, Mesosaurus may have been able to move onto land for short periods of time. Its elbows and ankles were restricted in their movement, making walking appear impossible. It is more likely that if Mesosaurus moved onto land, it would push itself forward in a similar way to living female sea turtles when nesting on beaches. A study on vertebral column proportions suggested that, while young Mesosaurus might have been fully aquatic, adult animals spent some time on land. This is supported by the rarity of adult animals in aquatic settings, and a coprolite possessing drying fractures. However, how terrestrial these animals were is difficult to say, as their pachyostosis and other adaptations for an aquatic lifestyle would have made foraging on land difficult. Reproduction Clearly amniote-type fossil embryos of Mesosaurus in an advanced stage of development (i.e. fetuses) have been discovered in Uruguay and Brazil. These fossils are the earliest record of amniote fetuses, although amniotes are inferred to have had their typical reproductive strategy since their first appearance in the Late Carboniferous. Prior to their description, the oldest known amniote fetuses were from the Triassic. One isolated coiled fetus called FC-DPV 2504 is not surrounded by calcareous eggshells, suggesting that the glands in the oviduct of Mesosaurus and probably all Paleozoic amniotes were not able to secrete calcium carbonate, in contrast to post-paleozoic archosaurs. This would explain the scarcity of egg fossils in the paleozoic amniote fossil record. One Mesosaurus specimen called MCN-PV 2214 comprises a medium-size adult with a small individual in its rib cage which is interpreted as a fetus ‘in utero’, even suggesting that Mesosaurus like many other marine reptiles, gave live birth. If this interpretation is correct, this specimen would represent the earliest known example of viviparity in the fossil record. The isolated fetus FC-DPV 2504, however, rather points to an ovoviviparous reproduction strategy in Mesosaurus. Distribution Mesosaurus was significant in providing evidence for the theory of continental drift, because its remains were found in southern Africa, Whitehill Formation, and eastern South America (Melo Formation, Uruguay and Irati Formation, Brazil), two widely separated regions. As Mesosaurus was a coastal animal, and therefore less likely to have crossed the Atlantic Ocean, this distribution indicated that the two continents used to be joined together. Gallery
Biology and health sciences
Prehistoric marine reptiles
Animals
5493795
https://en.wikipedia.org/wiki/Stability%20theory
Stability theory
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance. In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point in a small enough neighborhood of it stays in a small (but perhaps, larger) neighborhood. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria is applied. Overview in dynamical systems Many parts of the qualitative theory of differential equations and dynamical systems deal with asymptotic properties of solutions and the trajectories—what happens with the system after a long period of time. The simplest kind of behavior is exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular orbit is well understood, it is natural to ask next whether a small change in the initial condition will lead to similar behavior. Stability theory addresses the following questions: Will a nearby orbit indefinitely stay close to a given orbit? Will it converge to the given orbit? In the former case, the orbit is called stable; in the latter case, it is called asymptotically stable and the given orbit is said to be attracting. An equilibrium solution $f_e$ to an autonomous system of first-order ordinary differential equations is called: stable if for every (small) $\epsilon > 0$ there exists a $\delta > 0$ such that every solution $f(t)$ having initial conditions within distance $\delta$ of the equilibrium, i.e. $\|f(t_0) - f_e\| < \delta$, remains within distance $\epsilon$, i.e. $\|f(t) - f_e\| < \epsilon$, for all $t \geq t_0$; asymptotically stable if it is stable and, in addition, there exists $\delta_0 > 0$ such that whenever $\|f(t_0) - f_e\| < \delta_0$ then $f(t) \to f_e$ as $t \to \infty$. For example, for the scalar equation $\dot{x} = -x$ the equilibrium $x_e = 0$ is asymptotically stable: every solution satisfies $x(t) = x(t_0)e^{-(t - t_0)}$, so the stability condition holds with $\delta = \epsilon$ and $x(t) \to 0$ as $t \to \infty$. Stability means that the trajectories do not change too much under small perturbations. The opposite situation, where a nearby orbit is getting repelled from the given orbit, is also of interest. In general, perturbing the initial state in some directions results in the trajectory asymptotically approaching the given one and in other directions in the trajectory getting away from it. There may also be directions for which the behavior of the perturbed orbit is more complicated (neither converging nor escaping completely), and then stability theory does not give sufficient information about the dynamics. One of the key ideas in stability theory is that the qualitative behavior of an orbit under perturbations can be analyzed using the linearization of the system near the orbit. In particular, at each equilibrium of a smooth dynamical system with an n-dimensional phase space, there is a certain n×n matrix A whose eigenvalues characterize the behavior of the nearby points (Hartman–Grobman theorem).
More precisely, if all eigenvalues are negative real numbers or complex numbers with negative real parts then the point is a stable attracting fixed point, and the nearby points converge to it at an exponential rate, cf Lyapunov stability and exponential stability. If none of the eigenvalues are purely imaginary (or zero) then the attracting and repelling directions are related to the eigenspaces of the matrix A with eigenvalues whose real part is negative and, respectively, positive. Analogous statements are known for perturbations of more complicated orbits. Stability of fixed points in 2D The paradigmatic case is the stability of the origin under the linear autonomous differential equation where and is a 2-by-2 matrix. We would sometimes perform change-of-basis by for some invertible matrix , which gives . We say is " in the new basis". Since and , we can classify the stability of origin using and , while freely using change-of-basis. Classification of stability types If , then the rank of is zero or one. If the rank is zero, then , and there is no flow. If the rank is one, then and are both one-dimensional. If , then let span , and let be a preimage of , then in basis, , and so the flow is a shearing along the direction. In this case, . If , then let span and let span , then in basis, for some nonzero real number . If , then it is unstable, diverging at a rate of from along parallel translates of . If , then it is stable, converging at a rate of to along parallel translates of . If , we first find the Jordan normal form of the matrix, to obtain a basis in which is one of three possible forms: where . If , then . The origin is a source, with integral curves of form Similarly for . The origin is a sink. If or , then , and the origin is a saddle point. with integral curves of form . where . This can be further simplified by a change-of-basis with , after which . We can explicitly solve for with . The solution is with . This case is called the "degenerate node". The integral curves in this basis are central dilations of , plus the x-axis. If , then the origin is an degenerate source. Otherwise it is a degenerate sink. In both cases, where . In this case, . If , then this is a spiral sink. In this case, . The integral lines are logarithmic spirals. If , then this is a spiral source. In this case, . The integral lines are logarithmic spirals. If , then this is a rotation ("neutral stability") at a rate of , moving neither towards nor away from origin. In this case, . The integral lines are circles. The summary is shown in the stability diagram on the right. In each case, except the case of , the values allows unique classification of the type of flow. For the special case of , there are two cases that cannot be distinguished by . In both cases, has only one eigenvalue, with algebraic multiplicity 2. If the eigenvalue has a two-dimensional eigenspace (geometric multiplicity 2), then the system is a central node (sometimes called a "star", or "dicritical node") which is either a source (when ) or a sink (when ). If it has a one-dimensional eigenspace (geometric multiplicity 1), then the system is a degenerate node (if ) or a shearing flow (if ). Area-preserving flow When , we have , so the flow is area-preserving. In this case, the type of flow is classified by . If , then it is a rotation ("neutral stability") around the origin. If , then it is a shearing flow. If , then the origin is a saddle point. Stability of fixed points The simplest kind of an orbit is a fixed point, or an equilibrium. 
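The two-dimensional classification sketched above can be summarized computationally using only the determinant and trace of $A$. The following is an illustrative sketch (added here; the function name, tolerance, and printed labels are my own, following the standard trace–determinant classification rather than quoting the article):

```python
# Illustrative sketch: classify the origin of x' = A x (A a real 2x2 matrix)
# from det(A) and tr(A), following the standard trace-determinant classification.
import numpy as np

def classify_origin(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    det = float(np.linalg.det(A))
    tr = float(np.trace(A))
    disc = tr ** 2 - 4.0 * det        # discriminant of the characteristic polynomial

    if abs(det) < tol:
        return "degenerate: det(A) = 0 (no flow, a shear, or a line of equilibria)"
    if det < 0:
        return "saddle point (unstable)"
    if abs(tr) < tol:
        return "center (neutral stability)"
    stability = "stable" if tr < 0 else "unstable"
    if disc < 0:
        return f"spiral ({stability})"
    if abs(disc) < tol:
        return f"degenerate or star node ({stability})"
    return f"node ({stability})"

print(classify_origin([[0.0, 1.0], [-1.0, -0.5]]))  # spiral (stable)
print(classify_origin([[2.0, 0.0], [0.0, -1.0]]))   # saddle point (unstable)
```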
If a mechanical system is in a stable equilibrium state then a small push will result in a localized motion, for example, small oscillations as in the case of a pendulum. In a system with damping, a stable equilibrium state is moreover asymptotically stable. On the other hand, for an unstable equilibrium, such as a ball resting on top of a hill, certain small pushes will result in a motion with a large amplitude that may or may not converge to the original state. There are useful tests of stability for the case of a linear system. Stability of a nonlinear system can often be inferred from the stability of its linearization. Maps Let $f : \mathbb{R} \to \mathbb{R}$ be a continuously differentiable function with a fixed point $a$, $f(a) = a$. Consider the dynamical system obtained by iterating the function $f$: $x_{n+1} = f(x_n), \quad n = 0, 1, 2, \ldots$ The fixed point $a$ is stable if the absolute value of the derivative of $f$ at $a$ is strictly less than 1, and unstable if it is strictly greater than 1. This is because near the point $a$, the function $f$ has a linear approximation with slope $f'(a)$: $f(x) \approx f(a) + f'(a)(x - a).$ Thus $\frac{x_{n+1} - a}{x_n - a} = \frac{f(x_n) - f(a)}{x_n - a} \approx f'(a),$ which means that the derivative measures the rate at which the successive iterates approach the fixed point $a$ or diverge from it. If the derivative at $a$ is exactly 1 or −1, then more information is needed in order to decide stability. There is an analogous criterion for a continuously differentiable map $f : \mathbb{R}^n \to \mathbb{R}^n$ with a fixed point $a$, expressed in terms of its Jacobian matrix at $a$, $J_a(f)$. If all eigenvalues of $J_a(f)$ are real or complex numbers with absolute value strictly less than 1 then $a$ is a stable fixed point; if at least one of them has absolute value strictly greater than 1 then $a$ is unstable. Just as for $n = 1$, the case of the largest absolute value being 1 needs to be investigated further — the Jacobian matrix test is inconclusive. The same criterion holds more generally for diffeomorphisms of a smooth manifold. Linear autonomous systems The stability of fixed points of a system of constant-coefficient linear differential equations of first order can be analyzed using the eigenvalues of the corresponding matrix. An autonomous system $x' = Ax,$ where $x(t) \in \mathbb{R}^n$ and $A$ is an $n \times n$ matrix with real entries, has a constant solution $x(t) = 0.$ (In a different language, the origin $0 \in \mathbb{R}^n$ is an equilibrium point of the corresponding dynamical system.) This solution is asymptotically stable as $t \to \infty$ ("in the future") if and only if for all eigenvalues $\lambda$ of $A$, $\operatorname{Re}(\lambda) < 0$. Similarly, it is asymptotically stable as $t \to -\infty$ ("in the past") if and only if for all eigenvalues $\lambda$ of $A$, $\operatorname{Re}(\lambda) > 0$. If there exists an eigenvalue $\lambda$ of $A$ with $\operatorname{Re}(\lambda) > 0$ then the solution is unstable for $t \to \infty$. Application of this result in practice, in order to decide the stability of the origin for a linear system, is facilitated by the Routh–Hurwitz stability criterion. The eigenvalues of a matrix are the roots of its characteristic polynomial. A polynomial in one variable with real coefficients is called a Hurwitz polynomial if the real parts of all roots are strictly negative. The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by means of an algorithm that avoids computing the roots. Non-linear autonomous systems Asymptotic stability of fixed points of a non-linear system can often be established using the Hartman–Grobman theorem. Suppose that $v$ is a $C^1$-vector field in $\mathbb{R}^n$ which vanishes at a point $p$, $v(p) = 0$. Then the corresponding autonomous system $x' = v(x)$ has a constant solution $x(t) = p$. Let $J_p(v)$ be the Jacobian matrix of the vector field $v$ at the point $p$. If all eigenvalues of $J_p(v)$ have strictly negative real part then the solution is asymptotically stable. This condition can be tested using the Routh–Hurwitz criterion.
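The eigenvalue criteria above translate directly into a short numerical check. The following is an illustrative sketch (added here, with a made-up example system rather than one from the article): it linearizes the damped pendulum, written as a first-order system, at its two equilibria and tests whether all Jacobian eigenvalues have strictly negative real part.

```python
# Illustrative sketch: test asymptotic stability of an equilibrium by checking
# that all eigenvalues of the Jacobian have strictly negative real part.
import numpy as np

def is_asymptotically_stable(J):
    """Linearized test: Re(lambda) < 0 for every eigenvalue of J."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

# Damped pendulum as a first-order system with x = (theta, omega):
#   theta' = omega
#   omega' = -sin(theta) - 0.5 * omega
# Jacobian at the downward equilibrium (theta, omega) = (0, 0):
J_down = np.array([[0.0, 1.0],
                   [-1.0, -0.5]])   # d/dtheta of -sin(theta) at 0 is -cos(0) = -1

# Jacobian at the inverted equilibrium (theta, omega) = (pi, 0):
J_up = np.array([[0.0, 1.0],
                 [1.0, -0.5]])      # -cos(pi) = +1

print(is_asymptotically_stable(J_down))  # True  (stable spiral)
print(is_asymptotically_stable(J_up))    # False (saddle, unstable)
```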
Lyapunov function for general dynamical systems A general way to establish Lyapunov stability or asymptotic stability of a dynamical system is by means of Lyapunov functions.
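As a brief added illustration of this method (an example of my own, not drawn from the article): for the scalar system $\dot{x} = -x^3$, the linearization at the origin is identically zero, so the eigenvalue tests above are inconclusive. Taking the candidate Lyapunov function $V(x) = x^2$, one has $V(0) = 0$, $V(x) > 0$ for $x \neq 0$, and along trajectories $\dot{V} = 2x\dot{x} = -2x^4 \leq 0$, with equality only at $x = 0$; hence the origin is asymptotically stable even though linearization gives no information.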
Mathematics
Dynamical systems
null
5498706
https://en.wikipedia.org/wiki/Current%20density
Current density
In electromagnetism, current density is the amount of charge per unit time that flows through a unit area of a chosen cross section. The current density vector is defined as a vector whose magnitude is the electric current per cross-sectional area at a given point in space, its direction being that of the motion of the positive charges at this point. In SI base units, the electric current density is measured in amperes per square metre. Definition Assume that (SI unit: m2) is a small surface centered at a given point and orthogonal to the motion of the charges at . If (SI unit: A) is the electric current flowing through , then electric current density at is given by the limit: with surface remaining centered at and orthogonal to the motion of the charges during the limit process. The current density vector is the vector whose magnitude is the electric current density, and whose direction is the same as the motion of the positive charges at . At a given time , if is the velocity of the charges at , and is an infinitesimal surface centred at and orthogonal to , then during an amount of time , only the charge contained in the volume formed by and will flow through . This charge is equal to where is the charge density at . The electric current is , it follows that the current density vector is the vector normal (i.e. parallel to ) and of magnitude The surface integral of over a surface , followed by an integral over the time duration to , gives the total amount of charge flowing through the surface in that time (): More concisely, this is the integral of the flux of across between and . The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for charge carriers passing through an electrical conductor, the area is the cross-section of the conductor, at the section considered. The vector area is a combination of the magnitude of the area through which the charge carriers pass, , and a unit vector normal to the area, The relation is The differential vector area similarly follows from the definition given above: If the current density passes through the area at an angle to the area normal then where is the dot product of the unit vectors. That is, the component of current density passing through the surface (i.e. normal to it) is , while the component of current density passing tangential to the area is , but there is no current density actually passing through the area in the tangential direction. The only component of current density passing normal to the area is the cosine component. Importance Current density is important to the design of electrical and electronic systems. Circuit performance depends strongly upon the designed current level, and the current density then is determined by the dimensions of the conducting elements. For example, as integrated circuits are reduced in size, despite the lower current demanded by smaller devices, there is a trend toward higher current densities to achieve higher device numbers in ever smaller chip areas. See Moore's law. At high frequencies, the conducting region in a wire becomes confined near its surface which increases the current density in this region. This is known as the skin effect. High current densities have undesirable consequences. Most electrical conductors have a finite, positive resistance, making them dissipate power in the form of heat. 
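As a brief added numerical illustration of the definition above (an assumed example, not a figure from this article): a steady current of $I = 5\ \mathrm{A}$ distributed uniformly over a wire of cross-sectional area $A = 2\ \mathrm{mm}^2 = 2 \times 10^{-6}\ \mathrm{m}^2$ corresponds to a current density of magnitude $j = I/A = 2.5 \times 10^{6}\ \mathrm{A\,m^{-2}}$ (2.5 A per square millimetre), directed along the wire; integrating $\mathbf{j} \cdot \mathrm{d}\mathbf{A}$ back over the cross section recovers the original 5 A.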
The current density must be kept sufficiently low to prevent the conductor from melting or burning up, the insulating material failing, or the desired electrical properties changing. At high current densities the material forming the interconnections actually moves, a phenomenon called electromigration. In superconductors excessive current density may generate a strong enough magnetic field to cause spontaneous loss of the superconductive property. The analysis and observation of current density also is used to probe the physics underlying the nature of solids, including not only metals, but also semiconductors and insulators. An elaborate theoretical formalism has developed to explain many fundamental observations. The current density is an important parameter in Ampère's circuital law (one of Maxwell's equations), which relates current density to magnetic field. In special relativity theory, charge and current are combined into a 4-vector. Calculation of current densities in matter Free currents Charge carriers which are free to move constitute a free current density, which are given by expressions such as those in this section. Electric current is a coarse, average quantity that tells what is happening in an entire wire. At position at time , the distribution of charge flowing is described by the current density: where is the current density vector; is the particles' average drift velocity (SI unit: m∙s−1); is the charge density (SI unit: coulombs per cubic metre), in which is the number of particles per unit volume ("number density") (SI unit: m−3); is the charge of the individual particles with density (SI unit: coulombs). A common approximation to the current density assumes the current simply is proportional to the electric field, as expressed by: where is the electric field and is the electrical conductivity. Conductivity is the reciprocal (inverse) of electrical resistivity and has the SI units of siemens per metre (S⋅m−1), and has the SI units of newtons per coulomb (N⋅C−1) or, equivalently, volts per metre (V⋅m−1). A more fundamental approach to calculation of current density is based upon: indicating the lag in response by the time dependence of , and the non-local nature of response to the field by the spatial dependence of , both calculated in principle from an underlying microscopic analysis, for example, in the case of small enough fields, the linear response function for the conductive behaviour in the material. See, for example, Giuliani & Vignale (2005) or Rammer (2007). The integral extends over the entire past history up to the present time. The above conductivity and its associated current density reflect the fundamental mechanisms underlying charge transport in the medium, both in time and over distance. A Fourier transform in space and time then results in: where is now a complex function. In many materials, for example, in crystalline materials, the conductivity is a tensor, and the current is not necessarily in the same direction as the applied field. Aside from the material properties themselves, the application of magnetic fields can alter conductive behaviour. Polarization and magnetization currents Currents arise in materials when there is a non-uniform distribution of charge. In dielectric materials, there is a current density corresponding to the net movement of electric dipole moments per unit volume, i.e. the polarization : Similarly with magnetic materials, circulations of the magnetic dipole moments per unit volume, i.e. 
the magnetization , lead to magnetization currents: Together, these terms add up to form the bound current density in the material (resultant current due to movements of electric and magnetic dipole moments per unit volume): Total current in materials The total current is simply the sum of the free and bound currents: Displacement current There is also a displacement current corresponding to the time-varying electric displacement field : which is an important term in Ampere's circuital law, one of Maxwell's equations, since absence of this term would not predict electromagnetic waves to propagate, or the time evolution of electric fields in general. Continuity equation Since charge is conserved, current density must satisfy a continuity equation. Here is a derivation from first principles. The net flow out of some volume (which can have an arbitrary shape but fixed for the calculation) must equal the net change in charge held inside the volume: where is the charge density, and is a surface element of the surface enclosing the volume . The surface integral on the left expresses the current outflow from the volume, and the negatively signed volume integral on the right expresses the decrease in the total charge inside the volume. From the divergence theorem: Hence: This relation is valid for any volume, independent of size or location, which implies that: and this relation is called the continuity equation. In practice In electrical wiring, the maximum current density (for a given temperature rating) can vary from 4 A⋅mm−2 for a wire with no air circulation around it, to over 6 A⋅mm−2 for a wire in free air. Regulations for building wiring list the maximum allowed current of each size of cable in differing conditions. For compact designs, such as windings of SMPS transformers, the value might be as low as 2 A⋅mm−2. If the wire is carrying high-frequency alternating currents, the skin effect may affect the distribution of the current across the section by concentrating the current on the surface of the conductor. In transformers designed for high frequencies, loss is reduced if Litz wire is used for the windings. This is made of multiple isolated wires in parallel with a diameter twice the skin depth. The isolated strands are twisted together to increase the total skin area and to reduce the resistance due to skin effects. For the top and bottom layers of printed circuit boards, the maximum current density can be as high as 35 A⋅mm−2 with a copper thickness of 35 μm. Inner layers cannot dissipate as much heat as outer layers; designers of circuit boards avoid putting high-current traces on inner layers. In the semiconductors field, the maximum current densities for different elements are given by the manufacturer. Exceeding those limits raises the following problems: The Joule effect which increases the temperature of the component. The electromigration effect which will erode the interconnection and eventually cause an open circuit. The slow diffusion effect which, if exposed to high temperatures continuously, will move metallic ions and dopants away from where they should be. This effect is also synonymous with ageing. The following table gives an idea of the maximum current density for various materials. Even if manufacturers add some margin to their numbers, it is recommended to, at least, double the calculated section to improve the reliability, especially for high-quality electronics. 
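The wiring and printed-circuit limits quoted above lend themselves to a simple design check. The following is an illustrative sketch (added here; the function names are my own, and the limit values are simply the example figures mentioned above, not a standard):

```python
# Illustrative sketch: compare a conductor's average current density with a design limit.
def current_density(amps: float, area_mm2: float) -> float:
    """Average current density in A/mm^2 for a uniform cross-section."""
    return amps / area_mm2

def within_limit(amps: float, area_mm2: float, limit_a_per_mm2: float) -> bool:
    return current_density(amps, area_mm2) <= limit_a_per_mm2

# A 2.5 mm^2 building wire carrying 16 A, checked against a 6 A/mm^2 free-air figure:
print(current_density(16, 2.5))    # 6.4 A/mm^2
print(within_limit(16, 2.5, 6.0))  # False -> a larger cross-section is needed

# An SMPS transformer winding sized at 2 A/mm^2 for a 4 A winding current:
print(within_limit(4, 2.0, 2.0))   # True (2.0 A/mm^2 is exactly at the limit)
```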
One can also notice the importance of keeping electronic devices cool to avoid exposing them to electromigration and slow diffusion. In biological organisms, ion channels regulate the flow of ions (for example, sodium, calcium, potassium) across the membrane in all cells. The membrane of a cell is assumed to act like a capacitor. Current densities are usually expressed in pA⋅pF−1 (picoamperes per picofarad) (i.e., current divided by capacitance). Techniques exist to empirically measure capacitance and surface area of cells, which enables calculation of current densities for different cells. This enables researchers to compare ionic currents in cells of different sizes. In gas discharge lamps, such as flashlamps, current density plays an important role in the output spectrum produced. Low current densities produce spectral line emission and tend to favour longer wavelengths. High current densities produce continuum emission and tend to favour shorter wavelengths. Low current densities for flash lamps are generally around 10 A⋅mm−2. High current densities can be more than 40 A⋅mm−2.
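Connecting back to the microscopic free-current expression given earlier ($\mathbf{j} = nq\mathbf{v}_d$), the following added sketch estimates the electron drift speed in a copper conductor; the carrier density used is a typical textbook value for copper, not a figure from this article:

```python
# Illustrative sketch: drift speed implied by j = n * q * v_d for a copper conductor.
E_CHARGE = 1.602e-19   # elementary charge in coulombs
N_COPPER = 8.5e28      # typical free-electron number density of copper, per m^3

def drift_speed(j_amps_per_m2: float, n: float = N_COPPER, q: float = E_CHARGE) -> float:
    """Average carrier drift speed in m/s for a given current density magnitude."""
    return j_amps_per_m2 / (n * q)

j = 5.0 / 2e-6            # 5 A through a 2 mm^2 wire -> 2.5e6 A/m^2
print(drift_speed(j))     # roughly 1.8e-4 m/s: the carriers themselves drift very slowly
```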
Physical sciences
Electrodynamics
Physics
11794177
https://en.wikipedia.org/wiki/Carya%20laciniosa
Carya laciniosa
Carya laciniosa, the shellbark hickory, in the Juglandaceae or walnut family is also called kingnut, big, bottom, thick, or western shellbark, attesting to some of its characteristics. It is a slow-growing, long-lived tree, hard to transplant because of its long taproot, and subject to insect damage. The nuts, largest of all hickory nuts, are sweet and edible. Wildlife and people harvest most of them; those remaining produce seedling trees readily. The wood is hard, heavy, strong, and very flexible, making it a favored wood for tool handles. A specimen tree has been reported in Missouri with diameter at breast height, tall, and a spread of . Description Sapling and pole stages to maturity Growth and yield: The hickories as a group grow slowly in diameter, and shellbark hickory is no exception. Sapling size trees average per year in diameter growth, increasing to per year as poles and sawtimber. Second-growth trees show growth rates of per year. Shellbark hickory occasionally grows to a height of and a diameter of . Rooting habit: Shellbark hickory develops a large taproot that penetrates deeply into the soil. Lateral roots emerge at nearly right angles to the taproot, spreading horizontally through the soil. Major distinct lateral roots usually develop 12 inches or more below ground level and appear only after taproot is well formed. In Illinois, root growth was rapid in April, slowed during July and August, increased again in September, and ended in late November. Mycorrhizal associations are formed when trees are young. The only specific fungus identified from shellbark hickory roots is an ectotrophic mycorrhiza, Laccaria ochropurpurea. Reaction to competition: Shellbark hickory is very shade-tolerant, exceeded only by sugar maple (Acer saccharum) and beech (Fagus grandifolia). It grows slowly under a dense canopy, however. In stands with only partial shade, it reproduces well. It is a very strong competitor in most of the species associations in which it is found. Under forest conditions, shellbark hickory often develops a clear bole for half its length and has a narrow, oblong crown. Open-grown trees have egg-shaped crowns. Heavy release sometimes results in epicormic branching. On mature trees, the bark peels away from the trunk in long, sometimes broad, strips. This gives the trees a “shaggy” appearance that is easily confused with that of the Shagbark hickory (Carya ovata). That close similarity is the reason Shellbark hickories are frequently misidentified. A closer examination of other traits is usually needed to distinguish the two species. Damaging agents: Although numerous insects and diseases affect hickories, shellbark hickory has no enemies that seriously threaten its development or perpetuation as a species. Seed production can be reduced significantly, however, through attack by several insects. Two of the most important are the pecan weevil (Curculio caryae) and the hickory shuckworm (Laspeyresia caryana). The hickory bark beetle (Scolytus quadrispinosus) feeds in the cambium and seriously weakens or even kills some trees. Adults of the hickory spiral borer (Agrilus arcuatus torquatus) feed on leaves, but the larvae feed beneath the bark and can be very destructive to hickory seedlings. The flatheaded appletree borer (Chrysobothris femorata) likewise is a foliage-feeder as an adult, but its larvae feed on the phloem and outer sapwood. The living-hickory borer (Goes pulcher) feeds in the trunks and branches of trees. 
A twig girdler (Oncideres cingulata) can seriously affect reproduction by killing back the tops of seedlings and sprouts. Both standing dead trees and freshly cut logs are highly susceptible to attacks by numerous species of wood borers. A large number of insect species feed on hickory foliage. None of them causes serious problems for shellbark hickory, although they may be responsible for some stem deformity and growth loss. Shellbark hickory is free of serious diseases, but it is a host species for a variety of fungi. More than 130 fungi have been identified from species of Carya. These include leaf disease, stem canker, wood rot, and root rot-causing fungi. Specific information for shellbark hickory is not available. Shellbark hickory is susceptible to bole injury from fire, and fire injuries are often invaded by wood rot fungi. It is resistant to snow and ice damage, but is susceptible to frost damage. Distribution and habitat Shellbark hickory is widely distributed, but is common nowhere. The range extends from western New York through southern Michigan to southeast Iowa, south through eastern Kansas into northern Oklahoma, and eastward through Tennessee into Pennsylvania. This species is most prominent in the lower Ohio River region and south along the Mississippi River to central Arkansas. It is frequently found in the great river swamps of central Missouri and the Wabash River region in Indiana and Ohio. It's also found scattered in the Hudson valley in New York state In part due to the activities of humans, shellbark hickory has become rare in its natural range. The heavy seeds do not travel far from the parent tree and many stands have been lost to forest clearing and lumber harvesting. It is also not planted much as an ornamental due to its slow growth and difficulty of transplanting. Climate The mean length of the frost-free period within the range of shellbark hickory is from 150 to 210 days. The average January temperature is between , and for July the mean temperature is from . An average minimum temperature of occurs in the northern part of the range, and an average maximum temperature of is found throughout the range. Precipitation varies between per year including of snow. Soils and topography Shellbark hickory grows best on deep, fertile, moist soils, most typical of the order Alfisols. It does not thrive in heavy clay soils, but grows well on heavy loams or silt loams. Shellbark hickory requires moister situations than do pignut, mockernut, or shagbark hickories (Carya glabra, C. alba, or C. ovata), although it is sometimes found on dry, sandy soils. Specific nutrient requirements are not known, but generally the hickories grow best on neutral or slightly alkaline soils. The species is essentially a bottomland species and is often found on river terraces and second bottoms. Land subject to shallow inundations for a few weeks early in the growing season is favorable for shellbark. However, the tree will grow on a wide range of topographic and physiographic sites. Associated forest cover Shellbark hickory may be found in pure groups of several trees but is more frequent singly in association with other hardwoods. The species is a minor component of the forest cover types bur oak (Society of American Foresters type 42), pin oak–sweetgum (type 65), and swamp chestnut oak–cherrybark oak (type 91). It may also be found in one or more of the types in which hickories are included, but it is not identified at the species level. 
Shellbark hickory commonly grows in association with American elm (Ulmus americana), slippery (U. rubra), and winged elms (U. alata), white (Fraxinus americana) and green ash (F. pennsylvanica), basswood (Tilia americana), American hornbeam (Carpinus caroliniana), red maple (Acer rubrum), blackgum (Nyssa sylvatica), sweetgum (Liquidambar styraciflua), and cottonwood (Populus deltoides). It is found in association with four other hickories–shagbark, mockernut, bitternut (Carya cordiformis), and water (C. aquatica), and numerous oak species, including swamp white (Quercus bicolor), pin (Q. palustris), white (Q. alba), Shumard (Q. shumardii), water (Q. nigra), Delta post (Q. stellata var. paludosa), swamp chestnut (Q. michauxii), and Nuttall (Q. nuttallii). The herbaceous stratum includes numerous sedges and grasses. The shrub and small tree layer may be composed of painted buckeye (Aesculus sylvatica), pawpaw (Asimina triloba), flowering dogwood (Cornus florida), eastern redbud (Cercis canadensis), possumhaw (Ilex decidua), poison ivy (Toxicodendron radicans), and trumpet-creeper (Campsis radicans). Uses The seeds within shellbark hickory nuts are edible and consumed by ducks, quail, wild turkeys, squirrels, chipmunks, deer, foxes, raccoons, and white-footed mice. A few plantations of shellbark hickory have been established for nut production, but the nuts are difficult to crack, though the kernel is sweet. The wood is used for furniture, tool handles, sporting goods, veneer, fuelwood, charcoal, and drum sticks. Genetics Shellbark hickory hybridizes with the pecan, Carya illinoensis (C. x nussbaumeri Sarg.), and shagbark hickory, C. ovata (C. x dunbarii Sarg.). Shellbark hickory has 32 chromosomes. In general, species within the genus with the same chromosome number are able to cross. Numerous hybrids among the Carya species with 32 chromosomes (pecan, bitternut, shellbark, and shagbark) have been described. Gallery
Biology and health sciences
Fagales
Plants
11794330
https://en.wikipedia.org/wiki/Carya%20tomentosa
Carya tomentosa
Carya tomentosa, commonly known as mockernut hickory, mockernut, white hickory, whiteheart hickory, hognut, bullnut, is a species of tree in the walnut family Juglandaceae. The most abundant of the hickories, and common in the eastern half of the United States, it is long lived, sometimes reaching the age of 500 years. A straight-growing hickory, a high percentage of its wood is used for products where strength, hardness, and flexibility are needed. The wood makes excellent fuel wood, as well. Description Reproduction and early growth Flowering and fruiting Mockernut hickory is monoecious - male and female flowers are produced on the same tree. Mockernut male flowers are catkins about long and may be produced on branches from axils of leaves of the previous season or from the inner scales of the terminal buds at the base of the current growth. The female flowers appear in short spikes on peduncles terminating in shoots of the current year. Flowers bloom in the spring from April to May, depending on latitude and weather. Usually the male flowers emerge before the female flowers. Hickories produce very large amounts of pollen that is dispersed by the wind. Fruits are solitary or paired and globose, ripening in September and October, and are about long with a short necklike base. The fruit has a thick, four-ribbed husk thick that usually splits from the middle to the base. The nut is distinctly four-angled with a reddish-brown, very hard shell thick containing a small edible kernel. Seed production and dissemination The seed is dispersed from September through December. Mockernut hickory requires a minimum of 25 years to reach commercial seed-bearing age. Optimum seed production occurs from 40 to 125 years, and the maximum age listed for commercial seed production is 200 years. Good seed crops occur every two to three years with light seed crops in intervening years. Around 50 to 75% of fresh seed will germinate. Fourteen mockernut hickory trees in southeastern Ohio produced an average annual crop of 6,285 nuts for 6 years; about 39% were sound, 48% aborted, and 13% had insect damage. Hickory shuckworm (Laspeyresia caryana) is probably a major factor in reducing germination. Mockernut hickory produces one of the heaviest seeds of the hickory species; cleaned seeds range from 70 to 250 seeds/kg (32 to 113/lb). Seed is disseminated mainly by gravity and wildlife, particularly squirrels. Birds also help disperse seed. Wildlife such as squirrels and chipmunks often bury the seed at some distance from the seed-bearing tree. Seedling development Hickory seeds show embryo dormancy that can be overcome by stratification in a moist medium at for 30 to 150 days. When stored for a year or more, seed may require stratification for only 30 to 60 days. Hickory nuts seldom remain viable in the ground for more than one year. Hickory species normally require a moderately moist seedbed for satisfactory seed germination, and mockernut hickory seems to reproduce best in moist duff. Germination is hypogeal. Mockernut seedlings are not fast-growing. The height growth of mockernut seedlings observed in the Ohio Valley in the open or under light shade on red clay soil was: Vegetative reproduction True hickories sprout prolifically from stumps after cutting and fire. As the stumps increase in size, the number of stumps that produce sprouts decreases; age is probably directly correlated to stump size and sprouting. Coppice management is a possibility with true hickories. 
True hickories are difficult to reproduce from cuttings. Madden discussed the techniques for selecting, packing, and storing hickory propagation wood. Reed indicated that the most tested hickory species for root stock for pecan hickory grafts were mockernut and water hickory (Carya aquatica). However, mockernut root stock grew slowly and reduced the growth of pecan tops. Also, this graft seldom produced a tree that bore well or yielded large nuts. Sapling and pole stages to maturity Growth and yield Mockernut hickory is a large, true hickory with a dense crown. This species occasionally grows to about tall and in diameter at breast height (dbh), but heights and diameters usually range from about , respectively. The relation of height to age is: The current annual growth of mockernut hickory on dry sites is estimated at 1.0 m3/ha (15 ft3/acre). In fully stocked stands on moderately fertile soil2.1 m3 /ha (30 ft3 /acre) is estimated, though annual growth rates of 3.1 m3/ha (44 ft3/acre) were reported in Ohio (26). Greenwood and bark weights for commercial-size mockernut trees from mixed hardwoods in Georgia are available for total tree and saw-log stems to a 4-inch top for trees 5 to 22 inches d.b.h.. Available growth data and other research information are summarized for hickory species, not for individual species. Trimble compared growth rates of various Appalachian hardwoods including a hickory species category dominant-codominant hickory trees in dbh on good oak sites grew slowly compared to northern red oak, yellow-poplar, black cherry (Prunus serotina), and sugar maple (Acer saccharum). Hickories were in the white oak, sweet birch (Betula lenta), and American beech (Fagus grandifolia) growth-rate category. Dominant-codominant hickory trees grew about dbh per year compared to for the moderate-growth species (black cherry) and for the faster-growing species (yellow-poplar and red oak). Equations are available for predicting merchantable gross volumes from hickory stump diameters in Ohio. Also, procedures are described for predicting diameters and heights and for developing volume tables to any merchantable top diameter for hickory species in southern Illinois and West Virginia. Generally, epicormic branching is not a problem with hickory species, but a few branches do occur. The leaves turn yellow in Autumn. Rooting habit True hickories such as mockernut develop a long taproot with few laterals. Early root growth is primarily into the taproot, which typically reaches a depth of during the first year. Small laterals originate along the taproot, but many die back during the fall. During the second year, the taproot may reach a depth of , and the laterals grow rapidly. After 5 years, the root system attains its maximum depth, and the horizontal spread of the roots is about double that of the crown. By age 10, the height is four times the depth of the taproot. Etymology The species' name comes from the Latin word tomentum, meaning "stuffing", referring to the underside of the leaves, which are covered with dense, short hairs, which help identify the species. Also called the white hickory due to the light color of the wood, the common name mockernut likely refers to the would-be nut eater, who would struggle to crack the thick shell only to find a small, unrewarding nut inside. 
Distribution and habitat Native range Mockernut hickory, a true hickory, grows from Massachusetts and New York west to southern Ontario, and northern Illinois; then to southeastern Iowa, Missouri, and eastern Kansas, south to eastern Texas and east to northern Florida. This species is not present in Michigan, New Hampshire and Vermont as previously mapped by Little. Mockernut hickory is most abundant southward through Virginia, North Carolina, and Florida, where it is the most common of the hickories. It is also abundant in the lower Mississippi Valley and grows largest in the lower Ohio River Basin and in Missouri and Arkansas. Climate The climate where mockernut hickory grows is usually humid. Within its range, the mean annual precipitation measures from in the north to in the south. During the growing season (April through September), annual precipitation varies from . About of annual snowfall is common in the northern part of the range, but snow is rare in the southern portion. Annual temperatures range from . Monthly average temperatures range from in July and from in January. Temperature extremes are well above and below . The growing season is about 160 days in the northern part of the range and up to 320 days in the southern part of the range. Soils and topography In the north, mockernut hickory is found on drier soils of ridges and hillsides and less frequently on moist woodlands and alluvial bottoms. The species grows and develops best on deep, fertile soils. In the Cumberland Mountains and hills of southern Indiana, it grows on dry sites such as south and west slopes or dry ridges. In Alabama and Mississippi, it grows on sandy soils with shortleaf pine (Pinus echinata) and loblolly pine (P taeda). However, most of the merchantable mockernut grows on moderately fertile upland soils. Mockernut hickory grows primarily on ultisols occurring on an estimated 65% of its range, including much of the southern to northeastern United States. These soils are low in nutrients and usually moist, but during the warm season, they are dry part of the time. Along the mid-Atlantic and in the southern and western range, mockernut hickory grows on a variety of soils on slopes of 25% or less, including combinations of fine to coarse loams, clays, and well-drained quartz sands. On slopes steeper than 25%, mockernut often grows on coarse loams. Mockernut grows on inceptisols in an estimated 15% of its range. These clayey soils are moderate to high in nutrients and are primarily in the Appalachians on gentle to moderate slopes, where water is available to plants during the growing season. In the northern Appalachians on slopes of 25% or less, mockernut hickory grows on poorly drained loams with a fragipan. In the central and southern Appalachians on slopes 25% or less, mockernut hickory grows on fine loams. On steeper slopes, it grows on coarse loams. In the northwestern part of the range, mockernut grows on mollisols. These soils have a deep, fertile surface horizon greater than thick. Mollisols characteristically form under grass in climates with moderate to high seasonal precipitation. Mockernut grows on a variety of soils including wet, fine loams, sandy textured soils that often have been burned, plowed, and pastured. Alfisols are also present in these areas and contain a medium to high supply of nutrients. Water is available to plants more than half the year or more than three consecutive months during the growing season. 
On slopes of 25% or less, mockernut grows on wet to moist, fine loam soils with a high carbonate content. Ecology Mockernuts are preferred mast for wildlife, particularly squirrels, which eat green nuts. Black bears, foxes, rabbits, beavers, and white-footed mice feed on the nuts, and sometimes the bark. The white-tailed deer browse on foliage and twigs and also feed on nuts. Hickory nuts are a minor source of food for ducks, quail, and turkey. Mockernut hickory nuts are consumed by many species of birds and other animals, including wood duck, red-bellied woodpecker, red fox, squirrels, beaver, eastern cottontail, eastern chipmunk, turkey, white-tailed deer, white-footed mice, and others. Many insect pests eat hickory leaves and bark. Mockernut hickories also provide cavities for animals to live in, such as woodpeckers, black rat snakes, raccoons, Carolina chickadees, and more. They are also good nesting trees, providing cover for birds with their thick foliage. Animals help disperse seeds so that new hickories can grow elsewhere. Chipmunks, squirrels, and birds do this best. Some fungi grow on mockernut hickory roots, sharing nutrients from the soil. Reaction to competition At certain times during its life, mockernut hickory may be variously classified as tolerant to intolerant. Overall it is classified as intolerant of shade. It recovers rapidly from suppression and is probably a climax species on moist sites. Silvicultural practices for managing the oak-hickory type have been summarized. Establishing the seedling origin of hickory trees is difficult because of seed predators. Although infrequent bumper seed crops usually provide some seedlings, seedling survival is poor under a dense canopy. Because of prolific sprouting ability, hickory reproduction can survive browsing, breakage, drought, and fire. Top dieback and resprouting may occur several times, each successive shoot reaching a larger size and developing a stronger root system than its predecessors. By this process, hickory reproduction gradually accumulates and grows under moderately dense canopies, especially on sites dry enough to restrict reproduction of more tolerant, but more fire or drought-sensitive species. Wherever adequate hickory advance reproduction occurs, clearcutting results in new sapling stands containing some hickories. Reproduction is difficult to attain if advance hickory regeneration is inadequate, though; then clearcutting will eliminate hickories except for stump sprouts. In theory, light thinnings or shelterwood cuts can be used to create advance hickory regeneration, but this has not been demonstrated. Damaging agents Mockernut hickory is extremely sensitive to fire because of the low insulating capacity of the hard, flinty bark. It is not subject to severe loss from disease. The main fungus of hickory is Poria spiculosa, a trunk rot. This fungus kills the bark, which produces a canker, causes heart rot and decay, and can seriously degrade the tree. Mineral streaks and sapsucker-induced streaks also degrade the lumber. In general, the hard, strong, and durable wood of hickories makes them relatively resistant to decay fungi. Most fungi cause little, if any, decay in small, young trees. Common foliage diseases include leaf mildew and witches' broom (Microstroma juglandis), leaf blotch (Mycosphaerella dendroides), and pecan scab (Cladosporium effusum). Mockernut hickory is host to anthracnose (Gnomonia caryae). 
Nuts of all hickory species are susceptible to attack by the hickory nut weevil (Curculio caryae). Another weevil (Conotrachelus aratus) attacks young shoots and leaf petioles. The Curculio species are the most damaging and can destroy 65% of the hickory nut crop. Hickory shuckworms also damage nuts. The bark beetle (Scolytus quadrispinosus) attacks mockernut hickory, especially in drought years and where hickory species are growing rapidly. The hickory spiral borer (Agrilus arcuatus torquatus) and the pecan carpenterworm (Cossula magnifica) are also serious insect enemies of mockernut. The hickory bark beetle probably destroys more sawtimber-size mockernut trees than any other insect. The hickory spiral borer kills many seedlings and young trees, and the pecan carpenterworm degrades both trees and logs. The twig girdler (Oncideres cingulata) attacks both small and large trees; it seriously deforms trees by severing branches. Sometimes, these girdlers cut hickory seedlings near ground level. Two casebearers (Acrobasis caryivorella and A. juglandis) feed on buds and leaves; later, they bore into succulent hickory shoots. Larvae of A. caryivorella may destroy entire nut sets. The living-hickory borer (Goes pulcher) feeds on hickory boles and branches throughout the East. Borers commonly found on dying or dead hickory trees or cut logs include: Banded hickory borer (Knulliana cincta) Long-horned beetle (Saperda discoidea) Apple twig borer (Amphicerus bicaudatus) Flatheaded ambrosia beetle (Platypus compositus) Redheaded ash borer (Neoclytus acuminatus) False powderpost beetle (Scobicia bidentata) Severe damage to hickory lumber and manufactured hickory products is caused by powderpost beetles (Lyctus spp. and Polycaon stoutii). Gall insects (Caryomyia spp.) commonly infest leaves. The fruit-tree leafroller (Archips argyrospila) and the hickory leafroller (Argyrotaenia juglandana) are the most common leaf feeders. The giant bark aphid (Longistigma caryae) is common on hickory bark. This aphid usually feeds on twigs and can cause branch mortality. The European fruit lecanium (Parthenolecanium corni) is common on hickories. Mockernut is not easily injured by ice glaze or snow, but young seedlings are very susceptible to frost damage. Many birds and animals feed on the nuts of mockernut hickory. This feeding, combined with insect and disease problems, eliminates the annual nut production except during bumper seed crop years. Associated forest cover Mockernut hickory is associated with the eastern oak-hickory forest and the beech-maple forest. The species does not exist in sufficient numbers to be included as a title species in the Society of American Foresters forest cover types. Nevertheless, it is identified as an associated species in eight cover types. Three of the upland oak types and the bottom land type are subclimax to climax. In the central forest upland oak types, mockernut is commonly associated with: pignut hickory (Carya glabra) shagbark hickory (C. ovata) bitternut hickory (C. cordiformis) black oak (Quercus velutina) scarlet oak (Q. coccinea) post oak (Q. stellata) bur oak (Q. macrocarpa) blackgum (Nyssa sylvatica) yellow-poplar (Liriodendron tulipifera) maples (Acer spp.) white ash (Fraxinus americana) eastern white pine (Pinus strobus) eastern hemlock (Tsuga canadensis) Common understory vegetation includes: flowering dogwood (Cornus florida) sumac (Rhus spp.) sassafras (Sassafras albidum) sourwood (Oxydendrum arboreum) downy serviceberry (Amelanchier spp.) 
redbud (Cercis canadensis) eastern hophornbeam (Ostrya virginiana) American hornbeam (Carpinus caroliniana) Mockernut is also associated with: wild grapes (Vitis spp.) rosebay rhododendron (Rhododendron maximum) mountain-laurel (Kalmia latifolia) greenbriers (Smilax spp.) blueberries (Vaccinium spp.) witch-hazel (Hamamelis virginiana) spicebush (Lindera benzoin) New Jersey tea (Ceanothus americanus) wild hydrangea (Hydrangea arborescens) tick-trefoil (Desmodium spp.) bluestem (Andropogon spp.) poverty oatgrass (Danthonia spicata) sedges (Carex spp.) pussytoes (Antennaria spp.) goldenrod (Solidago spp.) asters (Aster or other genera, depending on the classification). In the southern forest, mockernut grows with: shortleaf pine loblolly pine pignut hickory gums oaks sourwood winged elm (Ulmus alata) flowering dogwood redbud sourwood persimmon (Diospyros virginiana) eastern redcedar (Juniperus virginiana) sumacs hawthorns (Crataegus spp.) blueberries honeysuckle (Lonicera spp.) mountain-laurel viburnums greenbriers grapes In the loblolly pine-hardwood type in the southern forest, mockernut commonly grows in the upland and drier sites with: white oak (Quercus alba) post oak northern red oak (Q. rubra) southern red oak (Q. falcata) scarlet oak shagbark and pignut hickories blackgum flowering dogwood hawthorn sourwood greenbrier grape honeysuckle blueberry In the southern bottom lands, mockernut occurs in the swamp chestnut oak-cherrybark oak type along with: green ash (Fraxinus pennsylvanica) white ash shagbark shellbark hickory (Carya laciniosa) bitternut hickories white oak delta post oak (Quercus stellata var. paludosa) Shumard oak (Q. shumardii) blackgum. Understory trees include: American pawpaw (Asimina triloba) flowering dogwood painted buckeye (Aesculus sylvatica) American hornbeam devils-walking stick (Aralia spinosa) redbud American holly (Ilex opaca) Dwarf palmetto (Sabal minor) Coastal plain willow (Salix caroliniana) Uses True hickories provide a large portion of the high-grade hickory used by industry. Mockernut is used for lumber, pulpwood, charcoal, and other fuelwood products. Hickory species are preferred species for fuelwood consumption. Mockernut has the second-highest heating value among the species of hickories. It can be used for veneer, but the low supply of logs of veneer quality is a limiting factor. Mockernut hickory is used for tool handles requiring high shock resistance. It is used for ladder rungs, athletic goods, agricultural implements, dowels, gymnasium apparatus, poles, shafts, well pumps, and furniture. Lower-grade lumber is used for pallets, blocking, etc. Hickory sawdust, chips, and some solid wood are often used by packing companies to smoke meats; mockernut is the preferred wood for smoking hams. Though mockernut kernels are edible, they are rarely eaten by humans because of their size and because they are eaten by squirrels and other wildlife. Genetics Mockernut is a 64-chromosome species, so rarely crosses with 32-chromosome species such as pecan or shellbark hickory. No published information exists concerning population or other genetic studies of this species. Efforts are currently underway to map the genome of pecan in a collaborative effort. The genome map at some point may expand to cover other hickory species. Hickories are noted for their variability, with many natural hybrids known among North American Carya species. Hickories usually can be crossed successfully within the genus. 
Geneticists recognize that mockernut hickory hybridizes naturally with C. illinoensis (Carya x schneckii Sarg.) and C. ovata (Carya x collina Laughlin). Mockernut readily hybridizes with tetraploid C. texana. Hybrids generally are shy nut producers or produce nuts that are not filled with a kernel. Numerous exceptions to this rule are known. Gallery
Biology and health sciences
Fagales
Plants
11795855
https://en.wikipedia.org/wiki/Caninae
Caninae
Caninae (whose members are known as canines) is the only living subfamily within Canidae; the other two subfamilies, Borophaginae and Hesperocyoninae, are extinct. Caninae first appeared in North America during the Oligocene, around 35 million years ago, and subsequently spread to Asia and elsewhere in the Old World at the end of the Miocene, some 7 million to 8 million years ago. Taxonomy and lineage The genus Leptocyon (Greek: leptos slender + cyon dog) includes 11 species and comprises the first primitive canines. They were small, weighing around 2 kg. They first appeared in Sioux County, Nebraska, during the Orellan, 34-32 million years ago, at the beginning of the Oligocene. This was the same time as the appearance of the Borophaginae, with whom they share features, indicating that the two were sister groups. The skull and dentition of the Borophaginae were adapted for a powerful killing bite, whereas those of Leptocyon were adapted for snatching small, fast-moving prey. The species L. delicatus is the smallest canid to have existed. By the close of the genus, 9 million years ago, one Leptocyon lineage resembled the modern fox. The various species of Leptocyon branched 11.9 Mya into the Vulpini (foxes) and the Canini (canines). The canines spent two-thirds of their history in North America before dispersing 7 million years ago into Asia, Europe, and Africa. One of the characteristics that distinguished them from the Borophaginae and Hesperocyoninae was their lighter limbs and longer legs, which may have aided their dispersal. The first canine to arrive in Eurasia was the coyote-sized Canis cipio, whose scant fossils were found in Spain. However, whether C. cipio belongs to the genus Canis or to the genus Eucyon within the canines is not clear. Phylogenetic relationships The results of allozyme and chromosome analyses have previously suggested several phylogenetic divisions. DNA analysis shows that the first three of these form monophyletic clades. The wolf-like canines and the South American canines together form the tribe Canini. Molecular data imply a North American origin of living Canidae some 10 Mya and an African origin of the wolf-like canines (Canis, Cuon, and Lycaon), with the jackals being the most basal of this group. The South American clade is rooted by the maned wolf and bush dog, and the fox-like canines by the fennec fox and Blanford's fox. The gray fox and island fox are basal to the other clades; however, this topological difference is not strongly supported. The cladogram below is based on the phylogeny of Lindblad-Toh (2005), modified to incorporate recent findings on Canis, Vulpes, Lycalopex species, and Dusicyon.
Biology and health sciences
Canines
Animals
4117384
https://en.wikipedia.org/wiki/NGC%206302
NGC 6302
NGC 6302 (also known as the Bug Nebula, Butterfly Nebula, or Caldwell 69) is a bipolar planetary nebula in the constellation Scorpius. The structure in the nebula is among the most complex ever seen in planetary nebulae. The spectrum of the Butterfly Nebula shows that its central star is one of the hottest stars known, with a surface temperature in excess of 250,000 degrees Celsius, implying that the star from which it formed must have been very large. The central star, a white dwarf, was identified in 2009, using the upgraded Wide Field Camera 3 on board the Hubble Space Telescope. The star has a current mass of around 0.64 solar masses. It is surrounded by a dense equatorial disc composed of gas and dust. This dense disc is postulated to have caused the star's outflows to form a bipolar structure similar to an hourglass. This bipolar structure shows features such as ionization walls, knots, and sharp edges to the lobes. Observation history As it is included in the New General Catalogue, this object has been known since at least 1888. The earliest-known study of NGC 6302 is by Edward Emerson Barnard, who drew and described it in 1907. The nebula featured in some of the first images released after the final servicing mission of the Hubble Space Telescope in September 2009. Characteristics NGC 6302 has a complex structure, which may be approximated as bipolar with two primary lobes, though there is evidence for a second pair of lobes that may have belonged to a previous phase of mass loss. A dark lane runs through the waist of the nebula, obscuring the central star at all wavelengths. The nebula contains a prominent northwest lobe which extends up to 3.0′ away from the central star and is estimated to have formed from an eruptive event around 1,900 years ago. It has a circular part whose walls expand such that the speed of each part is proportional to its distance from the central star. At an angular distance of 1.71′ from the central star, the flow velocity of this lobe is measured to be 263 km/s. At the extreme periphery of the lobe, the outward velocity exceeds 600 km/s. The western edge of the lobe displays characteristics suggestive of a collision with pre-existing globules of gas, which modified the outflow in that region. Central star The central star, among the hottest stars known, had escaped detection because of a combination of its high temperature (meaning that it radiates mainly in the ultraviolet), the dusty torus (which absorbs a large fraction of the light from the central regions, especially in the ultraviolet), and the bright background from the nebula. It was not seen in the first Hubble Space Telescope images; the improved resolution and sensitivity of the new Wide Field Camera 3 of the same telescope later revealed the faint star at the centre. A temperature of about 200,000 kelvins and a mass of 0.64 solar masses are indicated. The original mass of the star was much higher, but most of it was ejected in the event which created the planetary nebula. The luminosity and temperature of the star indicate it has ceased nuclear burning and is on its way to becoming a white dwarf, fading at a predicted rate of 1% per year. Dust chemistry The prominent dark lane that runs through the centre of the nebula has been shown to have an unusual composition, showing evidence for multiple crystalline silicates, crystalline water ice, and quartz, with other features which have been interpreted as the first extra-solar detection of carbonates. 
This detection has been disputed, due to the difficulties in forming carbonates in a non-aqueous environment. The dispute remains unresolved. One of the characteristics of the dust detected in NGC 6302 is the existence of both oxygen-bearing silicate molecules and carbon-bearing polycyclic aromatic hydrocarbons (PAHs). Stars are usually either oxygen-rich or carbon-rich, the change from the former to the latter occurring late in the evolution of the star due to nuclear and chemical changes in the star's atmosphere. NGC 6302 belongs to a group of objects where hydrocarbon molecules formed in an oxygen-rich environment.
Physical sciences
Notable nebulae
Astronomy
4122592
https://en.wikipedia.org/wiki/Computer%20network
Computer network
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies. The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol. Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent. Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications. History Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones. In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s). In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963). In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users. In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric. 
Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks. In 1969, the first four nodes of the ARPANET were connected using circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services were first deployed on experimental public data networks in Europe. In 1973, the French CYCLADES network, directed by Louis Pouzin was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system that was based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. Metcalfe, with John Shoch, Yogen Dalal, Ed Taft, and Butler Lampson also developed the PARC Universal Packet for internetworking. In 1974, Vint Cerf and Bob Kahn published their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, , coining the term Internet as a shorthand for internetworking. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978. Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California. In 1979, Robert Metcalfe pursued making Ethernet an open standard. 
In 1980, Ethernet was upgraded from the original protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (). The scaling of Ethernet has been a contributing factor to its continued use. Use Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively. Network packet Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network. Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free. The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message. Network topology The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is; but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts. Common topologies are: Bus network: all nodes are connected to a common medium along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree. Star network: all nodes are connected to a special central node. 
This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point. Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology. Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other. Fully connected network: each node is connected to every other node in the network. Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing. The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding. Overlay network An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed. The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network. Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys. Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination. 
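To make the distributed hash table idea above a little more concrete, the following Python sketch assigns each key to the first node clockwise from the key's position on a hash ring (a simple form of consistent hashing). The node names and the 16-bit ring size are hypothetical choices for illustration, not part of any particular DHT design.

# Toy consistent-hashing sketch for a distributed hash table (DHT) overlay.
# Node names and ring size are illustrative assumptions.
import bisect
import hashlib

RING_BITS = 16  # small identifier ring, kept short for readability

def ring_hash(value: str) -> int:
    """Map a string onto the identifier ring."""
    digest = hashlib.sha256(value.encode()).hexdigest()
    return int(digest, 16) % (1 << RING_BITS)

class ToyDHT:
    def __init__(self, nodes):
        # Each node gets a position on the ring derived from its name.
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        """Return the first node at or after the key's position (wrapping around)."""
        positions = [pos for pos, _ in self.ring]
        idx = bisect.bisect_left(positions, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

if __name__ == "__main__":
    dht = ToyDHT(["node-a", "node-b", "node-c"])
    for key in ("alpha.txt", "beta.txt", "gamma.txt"):
        print(key, "->", dht.node_for(key))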
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others. Network links The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer. A widely adopted family that uses copper and fiber media in local area network (LAN) technology are collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves, others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data. Wired The following classes of wired technologies are used in computer networking. Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network. Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios. An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent to up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozens of meters, depending on the data rate and cable grade. Wireless Network connections can be established wirelessly using radio or other electromagnetic means of communication. Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately apart. Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit above the equator. 
These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals. Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver. Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. Extending the Internet to interplanetary dimensions via radio waves and optical means, the Interplanetary Internet. IP over Avian Carriers was a humorous April fool's Request for Comments, issued as . It was implemented in real life in 2001. The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput). Network nodes Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions. Network interfaces A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry. In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. Repeaters and hubs A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of obstruction so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule. An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. 
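The structure of Ethernet MAC addresses described above, with six octets of which the three most significant identify the manufacturer, can be illustrated with a short Python sketch. The sample address and the one-entry vendor table are made-up placeholders rather than real IEEE registry data.

# Split a MAC address into its manufacturer prefix (OUI) and device-specific part.
# The sample address and the tiny vendor table are illustrative, not real registry data.

def split_mac(mac: str):
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    oui = ":".join(octets[:3])       # prefix assigned to the manufacturer by the IEEE
    device = ":".join(octets[3:])    # remainder assigned by the manufacturer itself
    return oui, device

if __name__ == "__main__":
    example_vendors = {"00:1a:2b": "Example NIC Maker"}  # hypothetical entry
    oui, device = split_mac("00:1A:2B:3C:4D:5E")
    print("OUI:", oui, "->", example_vendors.get(oui, "unknown vendor"))
    print("device part:", device)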
Hubs and repeaters in LANs have been largely obsoleted by modern network switches. Bridges and switches Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication whereas a hub forwards to all ports. Bridges only have two ports but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches. Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame. They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply. Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks. Routers A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting packets which is inefficient for very big networks. Modems Modems (modulator-demodulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using a digital subscriber line technology and cable television systems using DOCSIS technology. Firewalls A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. Communication protocols A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing. In a protocol stack, often constructed per the OSI model, communications functions are divided up into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. 
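A minimal sketch of the MAC-learning behaviour of bridges and switches described above: the device records which port each source address was seen on, forwards frames for known destinations out of that port only, and floods frames for unknown destinations to every other port. The port numbers and addresses are hypothetical.

# Toy learning switch: builds a MAC-address table from source addresses and
# forwards frames accordingly. Ports and addresses are illustrative assumptions.

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port: int, src: str, dst: str):
        # Learn (or refresh) the association of the source address with its port.
        self.mac_table[src] = in_port
        if dst in self.mac_table:
            out_ports = [self.mac_table[dst]]          # forward only where needed
        else:
            out_ports = [p for p in range(self.num_ports) if p != in_port]  # flood
        return out_ports

if __name__ == "__main__":
    sw = LearningSwitch(num_ports=4)
    print(sw.handle_frame(0, src="aa:aa", dst="bb:bb"))  # unknown destination -> flood [1, 2, 3]
    print(sw.handle_frame(2, src="bb:bb", dst="aa:aa"))  # aa:aa already learned -> [0]
    print(sw.handle_frame(0, src="aa:aa", dst="bb:bb"))  # bb:bb now known -> [2]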
An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web. There are many communication protocols, a few of which are described below. Common protocols Internet protocol suite The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connection-less and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet. IEEE 802 IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model. For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key". Ethernet Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers. Wireless LAN Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet. SONET/SDH Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames. Asynchronous Transfer Mode Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user. 
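The protocol-stack example given earlier in this section, with HTTP carried over TCP over IP, can be seen directly in code. The sketch below writes an HTTP/1.1 request by hand and sends it over a TCP socket, with the IP and link layers handled by the operating system; the host name example.com is simply a placeholder for any reachable web server.

# Minimal illustration of protocol layering: an HTTP request written by hand,
# carried over a TCP socket, with IP and the link layer handled by the OS.
# The host name is a placeholder; any reachable web server would do.
import socket

HOST = "example.com"
REQUEST = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode()

def fetch_status_line(host: str) -> str:
    with socket.create_connection((host, 80), timeout=10) as sock:  # TCP over IP
        sock.sendall(REQUEST)                                       # HTTP on top of TCP
        data = b""
        while b"\r\n" not in data:
            chunk = sock.recv(1024)
            if not chunk:
                break
            data += chunk
    return data.split(b"\r\n", 1)[0].decode(errors="replace")

if __name__ == "__main__":
    print(fetch_status_line(HOST))  # e.g. "HTTP/1.1 200 OK"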
Cellular standards There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). Routing Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks. In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though because they lack specialized hardware, may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths. Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks. Geographic scale Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale. Nanoscale network A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques. Personal area network A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN. Local area network A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines. 
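The remark above that a single routing table entry can represent the route to a whole group of devices can be sketched with Python's ipaddress module: the router chooses the most specific (longest) prefix that contains the destination address. The prefixes and next-hop labels below are hypothetical.

# Longest-prefix-match lookup over a tiny routing table.
# Prefixes and next hops are illustrative assumptions, not real routes.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",
}

def lookup(destination: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTING_TABLE[best]

if __name__ == "__main__":
    print(lookup("10.1.2.3"))    # matched by the /16 entry -> "next-hop B"
    print(lookup("10.200.0.1"))  # matched by the /8 entry  -> "next-hop A"
    print(lookup("192.0.2.1"))   # only the default route matches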
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010. Home area network A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider. Storage area network A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments. Campus area network A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.). For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls. Backbone network A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it. For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carry the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks. Metropolitan area network A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area. Wide area network A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. 
A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer. Enterprise private network An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources. Virtual private network A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. VPN may have best-effort performance or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Global area network A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs. Organizational scope Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity. Intranet An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. Extranet An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology. Internet An internetwork is the connection of multiple different types of computer networks to form a single computer network using higher-layer network protocols and connecting them together using routers. The Internet is the largest example of internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. 
The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services. Participants on the Internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths. Darknet A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports. Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference. Network service Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. The World Wide Web, E-mail, printing and network file sharing are examples of well-known network services. Network services such as Domain Name System (DNS) give names for IP and MAC addresses (people remember names like nm.lan better than numbers like ), and Dynamic Host Configuration Protocol (DHCP) to ensure that the equipment on the network has a valid IP address. Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service. Network performance Bandwidth Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation). Network delay Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay: Processing delay time it takes a router to process the packet header Queuing delay time the packet spends in routing queues Transmission delay time it takes to push the packet's bits onto the link Propagation delay time for a signal to propagate through the media A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds. Performance metrics The parameters that affect performance typically can include throughput, jitter, bit error rate and latency. 
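The delay components listed above sum to the total one-way delay. The short calculation below illustrates this for a single link using made-up but plausible figures: a 1,500-byte packet, a 100 Mbit/s link, a 1,000 km fiber path, and assumed processing and queuing delays.

# Back-of-the-envelope total delay for one packet crossing one link.
# All input values are illustrative assumptions.

PACKET_BITS = 1500 * 8          # a 1,500-byte packet
LINK_RATE = 100e6               # 100 Mbit/s link
DISTANCE_M = 1_000_000          # 1,000 km of fiber
PROPAGATION_SPEED = 2e8         # roughly 2/3 of the speed of light in fiber (m/s)

processing_delay = 50e-6        # assumed router processing time (s)
queuing_delay = 200e-6          # assumed time waiting in the router queue (s)
transmission_delay = PACKET_BITS / LINK_RATE        # time to push the bits onto the link
propagation_delay = DISTANCE_M / PROPAGATION_SPEED  # time for the signal to travel

total = processing_delay + queuing_delay + transmission_delay + propagation_delay
print(f"transmission: {transmission_delay * 1e3:.3f} ms")
print(f"propagation:  {propagation_delay * 1e3:.3f} ms")
print(f"total one-way delay: {total * 1e3:.3f} ms")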
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo. In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements. There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed. Network congestion Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput. Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse. Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard. For the Internet, addresses the subject of congestion control in detail. Network resilience Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Security Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack. 
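Returning to the congestion control techniques described above, exponential backoff can be sketched in a few lines: after each failed attempt the sender waits a random time drawn from a window that doubles with every retry. The slot time and retry cap below are arbitrary illustrative values, not those of any specific standard.

# Sketch of (truncated) binary exponential backoff of the kind used in classic
# Ethernet and CSMA/CA. The slot time and retry cap are illustrative assumptions.
import random

SLOT_TIME = 1e-3   # assumed contention slot, in seconds
MAX_EXPONENT = 10  # cap the window so waits do not grow without bound

def backoff_delay(attempt: int) -> float:
    """Return a random wait before retry number `attempt` (starting at 1)."""
    window = 2 ** min(attempt, MAX_EXPONENT)       # window doubles with each attempt
    return random.randint(0, window - 1) * SLOT_TIME

if __name__ == "__main__":
    random.seed(42)  # reproducible output for the example
    for attempt in range(1, 6):
        print(f"attempt {attempt}: wait {backoff_delay(attempt) * 1e3:.1f} ms")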
Network security Network Security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals. Network surveillance Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency. Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity. Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". End to end encryption End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity. Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio. Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent. SSL/TLS The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. 
Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client. Views of networks Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection of being in a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies. Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs. Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers). Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS). Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
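To make the SSL/TLS handshake described at the start of this section concrete, here is a small illustrative client using Python's standard-library ssl module; the host name is an arbitrary example, and modern libraries negotiate TLS, the successor of SSL, rather than SSL itself.

```python
import socket
import ssl

HOST = "example.org"   # arbitrary example host
PORT = 443

# The default context loads the system's trusted root certificates and
# verifies the server certificate and host name, as in the handshake
# described above.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # At this point the server has been authenticated and a symmetric
        # session key negotiated; application data is now encrypted.
        print("negotiated protocol:", tls_sock.version())
        print("cipher suite:", tls_sock.cipher())
        request = b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request)
        print(tls_sock.recv(200).decode(errors="replace"))
```

If the certificate does not check out (expired, untrusted issuer, or wrong host name), wrap_socket raises an ssl.SSLCertVerificationError instead of returning an encrypted socket.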
Technology
Networks
null
577876
https://en.wikipedia.org/wiki/Redox%20indicator
Redox indicator
A redox indicator (also called an oxidation-reduction indicator) is an indicator that undergoes a definite color change at a specific electrode potential. The requirement for a fast and reversible color change means that the oxidation-reduction equilibrium of an indicator redox system needs to be established very quickly. Therefore, only a few classes of redox systems can be used for indicator purposes. There are two common classes of redox indicators: metal complexes of phenanthroline and bipyridine, in which the metal changes oxidation state; and organic redox systems such as methylene blue, in which a proton participates in the redox reaction. Accordingly, redox indicators are sometimes also divided into two general groups: those whose color change is independent of pH and those whose color change depends on pH. The most common redox indicators are organic compounds. For example, 2,2'-bipyridine is a redox indicator; in solution, it changes from light blue to red at an electrode potential of 0.97 V.
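As a sketch of the underlying relationship (standard electrochemistry, not stated explicitly in the article), the potential at which an indicator couple changes color follows the Nernst equation, and the visible transition is commonly taken to span about one order of magnitude of the concentration ratio on either side of the indicator's standard potential:

```latex
% Nernst relation for an indicator couple  In_ox + n e^- <=> In_red
E = E^{\circ}_{\mathrm{ind}} - \frac{RT}{nF}\,\ln\frac{[\mathrm{In_{red}}]}{[\mathrm{In_{ox}}]}

% Approximate visible transition range at 25 °C,
% taken between concentration ratios of 10:1 and 1:10:
E \approx E^{\circ}_{\mathrm{ind}} \pm \frac{0.059\ \mathrm{V}}{n}
```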
Physical sciences
Chemical methods
Chemistry
577881
https://en.wikipedia.org/wiki/Chromate%20and%20dichromate
Chromate and dichromate
Chromate salts contain the chromate anion, CrO42−. Dichromate salts contain the dichromate anion, Cr2O72−. They are oxyanions of chromium in the +6 oxidation state and are moderately strong oxidizing agents. In aqueous solution, chromate and dichromate ions are interconvertible. Chemical properties Chromates react with hydrogen peroxide, giving products in which peroxide, O22−, replaces one or more oxygen atoms. In acid solution the unstable blue peroxo complex chromium(VI) oxide peroxide, CrO(O2)2, is formed; it is an uncharged covalent molecule, which may be extracted into ether. Addition of pyridine results in the formation of the more stable complex CrO(O2)2py. Acid–base properties In aqueous solution, chromate and dichromate anions exist in a chemical equilibrium. The predominance diagram shows that the position of the equilibrium depends on both pH and the analytical concentration of chromium. The chromate ion is the predominant species in alkaline solutions, but dichromate can become the predominant ion in acidic solutions. Further condensation reactions can occur in strongly acidic solution with the formation of trichromates, Cr3O102−, and tetrachromates, Cr4O132−. All polyoxyanions of chromium(VI) have structures made up of tetrahedral CrO4 units sharing corners. The hydrogen chromate ion, HCrO4−, is a weak acid:
HCrO4− ⇌ CrO42− + H+; pKa ≈ 5.9
It is also in equilibrium with the dichromate ion:
2 HCrO4− ⇌ Cr2O72− + H2O
This equilibrium does not involve a change in hydrogen ion concentration, which would predict that the equilibrium is independent of pH. The red line on the predominance diagram is not quite horizontal due to the simultaneous equilibrium with the chromate ion. The hydrogen chromate ion may be protonated, with the formation of molecular chromic acid, H2CrO4, but the pKa for that equilibrium is not well characterized. Reported values vary between about −0.8 and 1.6. The dichromate ion is a somewhat weaker base than the chromate ion:
HCr2O7− ⇌ Cr2O72− + H+, pKa = 1.18
The pKa value for this reaction shows that it can be ignored at pH > 4. Oxidation–reduction properties The chromate and dichromate ions are fairly strong oxidizing agents. Commonly three electrons are added to a chromium atom, reducing it to oxidation state +3. In acid solution the aquated Cr3+ ion is produced:
Cr2O72− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O, ε0 = 1.33 V
In alkaline solution chromium(III) hydroxide is produced. The redox potential shows that chromates are weaker oxidizing agents in alkaline solution than in acid solution:
CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH−, ε0 = −0.13 V
Applications Large amounts of hexavalent chromium, mainly sodium dichromate, were produced in 1985. Chromates and dichromates are used in chrome plating to protect metals from corrosion and to improve paint adhesion. Chromate and dichromate salts of heavy metals, lanthanides and alkaline earth metals are only very slightly soluble in water and are thus used as pigments. The lead-containing pigment chrome yellow was used for a very long time before environmental regulations discouraged its use. When used as oxidizing agents or titrants in a redox chemical reaction, chromates and dichromates convert into trivalent chromium, Cr3+, salts of which typically have a distinctively different blue-green color. Natural occurrence and production The primary chromium ore is the mixed metal oxide chromite, FeCr2O4, found as brittle metallic black crystals or granules. Chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air.
The chromium is oxidized to the hexavalent form, while the iron forms iron(III) oxide, Fe2O3: 4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2 Subsequent leaching of this material at higher temperatures dissolves the chromates, leaving a residue of insoluble iron oxide. Normally the chromate solution is further processed to make chromium metal, but a chromate salt may be obtained directly from the liquor. Chromate containing minerals are rare. Crocoite, PbCrO4, which can occur as spectacular long red crystals, is the most commonly found chromate mineral. Rare potassium chromate minerals and related compounds are found in the Atacama Desert. Among them is lópezite – the only known dichromate mineral. Toxicity Hexavalent chromium compounds can be toxic and carcinogenic (IARC Group 1). Inhaling particles of hexavalent chromium compounds can cause lung cancer. Also positive associations have been observed between exposure to chromium (VI) compounds and cancer of the nose and nasal sinuses. The use of chromate compounds in manufactured goods is restricted in the EU (and by market commonality the rest of the world) by EU Parliament directive on the Restriction of Hazardous Substances (RoHS) Directive (2002/95/EC).
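As an illustrative calculation (a sketch based on the pKa of about 5.9 quoted above, not part of the article), the HCrO4−/CrO42− speciation in dilute solution can be estimated as a function of pH; the concentration-dependent dimerization to dichromate is deliberately neglected here.

```python
PKA_HCRO4 = 5.9   # approximate pKa of HCrO4- quoted in the text

def chromate_fraction(pH: float) -> float:
    """Fraction of Cr(VI) present as CrO4^2- (versus HCrO4-) in dilute solution.

    Assumes total chromium is low enough that dimerization of HCrO4- to
    dichromate (Cr2O7^2-) can be neglected; that equilibrium depends on
    concentration and is not modeled here.
    """
    ratio = 10 ** (pH - PKA_HCRO4)    # [CrO4^2-] / [HCrO4-]
    return ratio / (1.0 + ratio)

for pH in (3, 5, 5.9, 7, 9):
    print(f"pH {pH:>4}: {chromate_fraction(pH):.1%} chromate")
```

The output reproduces the qualitative statement above: chromate dominates in alkaline solution, while the protonated (and, at higher concentration, dimerized) forms take over as the solution becomes acidic.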
Physical sciences
Metallic oxyanions
Chemistry
577962
https://en.wikipedia.org/wiki/Half-cell
Half-cell
In electrochemistry, a half-cell is a structure that contains a conductive electrode and a surrounding conductive electrolyte separated by a naturally occurring Helmholtz double layer. Chemical reactions within this layer momentarily pump electric charges between the electrode and the electrolyte, resulting in a potential difference between the electrode and the electrolyte. The typical anode reaction involves a metal atom in the electrode being dissolved and transported as a positive ion across the double layer, causing the electrolyte to acquire a net positive charge while the electrode acquires a net negative charge. The growing potential difference creates an intense electric field within the double layer, and the potential rises in value until the field halts the net charge-pumping reactions. This self-limiting action occurs almost instantly in an isolated half-cell; in applications two dissimilar half-cells are appropriately connected to constitute a galvanic cell. A standard half-cell consists of a metal electrode in an aqueous solution where the concentration of the metal ions is 1 molar (1 mol/L) at 298 kelvins (25 °C). In the case of the standard hydrogen electrode (SHE), a platinum electrode is used and is immersed in an acidic solution where the concentration of hydrogen ions is 1 M, with hydrogen gas at 1 atm being bubbled through the solution. The electrochemical series, which consists of standard electrode potentials and is closely related to the reactivity series, was generated by measuring the potential difference between a metal half-cell and a standard hydrogen half-cell in a circuit, connected by a salt bridge.
The standard hydrogen half-cell: 2 H+(aq) + 2 e− → H2(g)
The half-cells of a Daniell cell:
Overall reaction: Zn + Cu2+ → Zn2+ + Cu
Half-cell (anode) of Zn: Zn → Zn2+ + 2 e−
Half-cell (cathode) of Cu: Cu2+ + 2 e− → Cu
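A short worked example (using textbook standard potentials that are assumed here rather than quoted in the article): combining the two Daniell-cell half-cells above gives the standard cell potential as the difference of the electrode potentials measured against the standard hydrogen electrode.

```latex
% Textbook standard reduction potentials vs SHE (assumed values)
\mathrm{Cu^{2+} + 2\,e^- \rightarrow Cu} \qquad E^{\circ} = +0.34\ \mathrm{V}
\mathrm{Zn^{2+} + 2\,e^- \rightarrow Zn} \qquad E^{\circ} = -0.76\ \mathrm{V}

% Cell potential = cathode (reduction) minus anode (oxidation)
E^{\circ}_{\mathrm{cell}} = E^{\circ}_{\mathrm{Cu}} - E^{\circ}_{\mathrm{Zn}}
                          = 0.34\ \mathrm{V} - (-0.76\ \mathrm{V}) = 1.10\ \mathrm{V}
```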
Physical sciences
Electrochemistry
Chemistry
578099
https://en.wikipedia.org/wiki/Hypochlorous%20acid
Hypochlorous acid
Hypochlorous acid is an inorganic compound with the chemical formula , also written as HClO, HOCl, or ClHO. Its structure is . It is an acid that forms when chlorine dissolves in water, and itself partially dissociates, forming a hypochlorite anion, . HClO and are oxidizers, and the primary disinfection agents of chlorine solutions. HClO cannot be isolated from these solutions due to rapid equilibration with its precursor, chlorine. Because of its strong antimicrobial properties, the related compounds sodium hypochlorite (NaOCl) and calcium hypochlorite () are ingredients in many commercial bleaches, deodorants, and disinfectants. The white blood cells of mammals, such as humans, also contain hypochlorous acid as a tool against foreign bodies. In living organisms, HOCl is generated by the reaction of hydrogen peroxide with chloride ions under the catalysis of the heme enzyme myeloperoxidase (MPO). Like many other disinfectants, hypochlorous acid solutions will destroy pathogens, such as COVID-19, absorbed on surfaces. In low concentrations, such solutions can serve to disinfect open wounds. History Hypochlorous acid was discovered in 1834 by the French chemist Antoine Jérôme Balard (1802–1876) by adding, to a flask of chlorine gas, a dilute suspension of mercury(II) oxide in water. He also named the acid and its compounds. Despite being relatively easy to make, it is difficult to maintain a stable hypochlorous acid solution. It is not until recent years that scientists have been able to cost-effectively produce and maintain hypochlorous acid water for stable commercial use. Uses In organic synthesis, HClO converts alkenes to chlorohydrins. In biology, hypochlorous acid is generated in activated neutrophils by myeloperoxidase-mediated peroxidation of chloride ions, and contributes to the destruction of bacteria. In medicine, hypochlorous acid water has been used as a disinfectant and sanitiser. In wound care, and as of early 2016 the U.S. Food and Drug Administration has approved products whose main active ingredient is hypochlorous acid for use in treating wounds and various infections in humans and pets. It is also FDA-approved as a preservative for saline solutions. In disinfection, it has been used in the form of liquid spray, wet wipes and aerosolised application. Recent studies have shown hypochlorous acid water to be suitable for fog and aerosolised application for disinfection chambers and suitable for disinfecting indoor settings such as offices, hospitals and healthcare clinics. In food service and water distribution, specialized equipment to generate weak solutions of HClO from water and salt is sometimes used to generate adequate quantities of safe (unstable) disinfectant to treat food preparation surfaces and water supplies. It is also commonly used in restaurants due to its non-flammable and nontoxic characteristics. In water treatment, hypochlorous acid is the active sanitizer in hypochlorite-based products (e.g. used in swimming pools). Similarly, in ships and yachts, marine sanitation devices use electricity to convert seawater into hypochlorous acid to disinfect macerated faecal waste before discharge into the sea. In deodorization, hypochlorous acid has been tested to remove up to 99% of foul odours including garbage, rotten meat, toilet, stool, and urine odours. 
Formation, stability and reactions Addition of chlorine to water gives both hydrochloric acid (HCl) and hypochlorous acid (HClO):
Cl2 + H2O ⇌ HClO + HCl
When acids are added to aqueous salts of hypochlorous acid (such as sodium hypochlorite in commercial bleach solution), the resultant reaction is driven to the left, and chlorine gas is formed. Thus, the formation of stable hypochlorite bleaches is facilitated by dissolving chlorine gas into basic water solutions, such as sodium hydroxide. The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid is currently impossible to prepare due to the readily reversible equilibrium between it and its anhydride:
2 HClO ⇌ Cl2O + H2O, K = 3.55 × 10−3 dm3/mol (at 0 °C)
The presence of light or transition metal oxides of copper, nickel, or cobalt accelerates the exothermic decomposition into hydrochloric acid and oxygen:
2 HClO → 2 HCl + O2
Fundamental reactions In aqueous solution, hypochlorous acid partially dissociates into the hypochlorite anion, ClO−:
HClO ⇌ H+ + ClO−
Salts of hypochlorous acid are called hypochlorites. One of the best-known hypochlorites is NaClO, the active ingredient in bleach. HClO is a stronger oxidant than chlorine under standard conditions:
2 HClO + 2 H+ + 2 e− ⇌ Cl2 + 2 H2O, E = +1.63 V
HClO reacts with HCl to form chlorine:
HClO + HCl → Cl2 + H2O
HClO reacts with ammonia to form monochloramine:
NH3 + HClO → NH2Cl + H2O
HClO can also react with organic amines, forming N-chloroamines. Hypochlorous acid exists in equilibrium with its anhydride, dichlorine monoxide:
2 HClO ⇌ Cl2O + H2O, K = 3.55 × 10−3 dm3/mol (at 0 °C)
Reactivity of HClO with biomolecules Hypochlorous acid reacts with a wide variety of biomolecules, including DNA, RNA, fatty acid groups, cholesterol and proteins. Reaction with protein sulfhydryl groups Knox et al. first noted that HClO is a sulfhydryl inhibitor that, in sufficient quantity, could completely inactivate proteins containing sulfhydryl groups. This is because HClO oxidises sulfhydryl groups, leading to the formation of disulfide bonds that can result in crosslinking of proteins. The HClO mechanism of sulfhydryl oxidation is similar to that of monochloramine, and may only be bacteriostatic, because once the residual chlorine is dissipated, some sulfhydryl function can be restored. One sulfhydryl-containing amino acid can scavenge up to four molecules of HClO. Consistent with this, it has been proposed that sulfhydryl groups of sulfur-containing amino acids can be oxidized a total of three times by three HClO molecules, with the fourth reacting with the α-amino group. The first reaction yields sulfenic acid (R-SOH), then sulfinic acid (R-SO2H) and finally R-SO3H. Sulfenic acids form disulfides with another protein sulfhydryl group, causing cross-linking and aggregation of proteins. Sulfinic acid and its derivatives are produced only at high molar excesses of HClO, and disulfides are formed primarily at bactericidal levels. Disulfide bonds can also be oxidized by HClO to sulfinic acid. Because the oxidation of sulfhydryls and disulfides evolves hydrochloric acid, this process results in the depletion of HClO. Reaction with protein amino groups Hypochlorous acid reacts readily with amino acids that have amino group side-chains, with the chlorine from HClO displacing a hydrogen, resulting in an organic chloramine. Chlorinated amino acids rapidly decompose, but protein chloramines are longer-lived and retain some oxidative capacity. Thomas et al.
concluded from their results that most organic chloramines decayed by internal rearrangement and that fewer available NH2 groups promoted attack on the peptide bond, resulting in cleavage of the protein. McKenna and Davies found that 10 mM or greater HClO is necessary to fragment proteins in vivo. Consistent with these results, it was later proposed that the chloramine undergoes a molecular rearrangement, releasing HCl and ammonia to form an aldehyde. The aldehyde group can further react with another amino group to form a Schiff base, causing cross-linking and aggregation of proteins. Reaction with DNA and nucleotides Hypochlorous acid reacts slowly with DNA and RNA as well as all nucleotides in vitro. GMP is the most reactive because HClO reacts with both the heterocyclic NH group and the amino group. In similar manner, TMP with only a heterocyclic NH group that is reactive with HClO is the second-most reactive. AMP and CMP, which have only a slowly reactive amino group, are less reactive with HClO. UMP has been reported to be reactive only at a very slow rate. The heterocyclic NH groups are more reactive than amino groups, and their secondary chloramines are able to donate the chlorine. These reactions likely interfere with DNA base pairing, and, consistent with this, Prütz has reported a decrease in viscosity of DNA exposed to HClO similar to that seen with heat denaturation. The sugar moieties are nonreactive and the DNA backbone is not broken. NADH can react with chlorinated TMP and UMP as well as HClO. This reaction can regenerate UMP and TMP and results in the 5-hydroxy derivative of NADH. The reaction with TMP or UMP is slowly reversible to regenerate HClO. A second slower reaction that results in cleavage of the pyridine ring occurs when excess HClO is present. is inert to HClO. Reaction with lipids Hypochlorous acid reacts with unsaturated bonds in lipids, but not saturated bonds, and the ion does not participate in this reaction. This reaction occurs by hydrolysis with addition of chlorine to one of the carbons and a hydroxyl to the other. The resulting compound is a chlorohydrin. The polar chlorine disrupts lipid bilayers and could increase permeability. When chlorohydrin formation occurs in lipid bilayers of red blood cells, increased permeability occurs. Disruption could occur if enough chlorohydrin is formed. The addition of preformed chlorohydrin to red blood cells can affect permeability as well. Cholesterol chlorohydrin have also been observed, but do not greatly affect permeability, and it is believed that is responsible for this reaction. Hypochlorous acid also reacts with a subclass of glycerophospholipids called plasmalogens, yielding chlorinated fatty aldehydes which are capable of protein modification and may play a role in inflammatory processes such as platelet aggregation and the formation of neutrophil extracellular traps. Mode of disinfectant action E. coli exposed to hypochlorous acid lose viability in less than 0.1 seconds due to inactivation of many vital systems. Hypochlorous acid has a reported of 0.0104–0.156 ppm and 2.6 ppm caused 100% growth inhibition in 5 minutes. However, the concentration required for bactericidal activity is also highly dependent on bacterial concentration. Inhibition of glucose oxidation In 1948, Knox et al. proposed the idea that inhibition of glucose oxidation is a major factor in the bacteriocidal nature of chlorine solutions. 
They proposed that the active agent or agents diffuse across the cytoplasmic membrane to inactivate key sulfhydryl-containing enzymes in the glycolytic pathway. This group was also the first to note that chlorine solutions (HClO) inhibit sulfhydryl enzymes. Later studies have shown that, at bacteriocidal levels, the cytosol components do not react with HClO. In agreement with this, McFeters and Camper found that aldolase, an enzyme that Knox et al. proposes would be inactivated, was unaffected by HClO in vivo. It has been further shown that loss of sulfhydryls does not correlate with inactivation. That leaves the question concerning what causes inhibition of glucose oxidation. The discovery that HClO blocks induction of β-galactosidase by added lactose led to a possible answer to this question. The uptake of radiolabeled substrates by both ATP hydrolysis and proton co-transport may be blocked by exposure to HClO preceding loss of viability. From this observation, it proposed that HClO blocks uptake of nutrients by inactivating transport proteins. The question of loss of glucose oxidation has been further explored in terms of loss of respiration. Venkobachar et al. found that succinic dehydrogenase was inhibited in vitro by HClO, which led to the investigation of the possibility that disruption of electron transport could be the cause of bacterial inactivation. Albrich et al. subsequently found that HClO destroys cytochromes and iron-sulfur clusters and observed that oxygen uptake is abolished by HClO and adenine nucleotides are lost. It was also observed that irreversible oxidation of cytochromes paralleled the loss of respiratory activity. One way of addressing the loss of oxygen uptake was by studying the effects of HClO on succinate-dependent electron transport. Rosen et al. found that levels of reductable cytochromes in HClO-treated cells were normal, and these cells were unable to reduce them. Succinate dehydrogenase was also inhibited by HClO, stopping the flow of electrons to oxygen. Later studies revealed that Ubiquinol oxidase activity ceases first, and the still-active cytochromes reduce the remaining quinone. The cytochromes then pass the electrons to oxygen, which explains why the cytochromes cannot be reoxidized, as observed by Rosen et al. However, this line of inquiry was ended when Albrich et al. found that cellular inactivation precedes loss of respiration by using a flow mixing system that allowed evaluation of viability on much smaller time scales. This group found that cells capable of respiring could not divide after exposure to HClO. Depletion of adenine nucleotides Having eliminated loss of respiration, Albrich et al. proposes that the cause of death may be due to metabolic dysfunction caused by depletion of adenine nucleotides. Barrette et al. studied the loss of adenine nucleotides by studying the energy charge of HClO-exposed cells and found that cells exposed to HClO were unable to step up their energy charge after addition of nutrients. The conclusion was that exposed cells have lost the ability to regulate their adenylate pool, based on the fact that metabolite uptake was only 45% deficient after exposure to HClO and the observation that HClO causes intracellular ATP hydrolysis. It was also confirmed that, at bacteriocidal levels of HClO, cytosolic components are unaffected. 
So it was proposed that modification of some membrane-bound protein results in extensive ATP hydrolysis, and this, coupled with the cells inability to remove AMP from the cytosol, depresses metabolic function. One protein involved in loss of ability to regenerate ATP has been found to be ATP synthetase. Much of this research on respiration reconfirms the observation that relevant bacteriocidal reactions take place at the cell membrane. Inhibition of DNA replication Recently it has been proposed that bacterial inactivation by HClO is the result of inhibition of DNA replication. When bacteria are exposed to HClO, there is a precipitous decline in DNA synthesis that precedes inhibition of protein synthesis, and closely parallels loss of viability. During bacterial genome replication, the origin of replication (oriC in E. coli) binds to proteins that are associated with the cell membrane, and it was observed that HClO treatment decreases the affinity of extracted membranes for oriC, and this decreased affinity also parallels loss of viability. A study by Rosen et al. compared the rate of HClO inhibition of DNA replication of plasmids with different replication origins and found that certain plasmids exhibited a delay in the inhibition of replication when compared to plasmids containing oriC. Rosen's group proposed that inactivation of membrane proteins involved in DNA replication are the mechanism of action of HClO. Protein unfolding and aggregation HClO is known to cause post-translational modifications to proteins, the notable ones being cysteine and methionine oxidation. A recent examination of HClO's bactericidal role revealed it to be a potent inducer of protein aggregation. Hsp33, a chaperone known to be activated by oxidative heat stress, protects bacteria from the effects of HClO by acting as a holdase, effectively preventing protein aggregation. Strains of Escherichia coli and Vibrio cholerae lacking Hsp33 were rendered especially sensitive to HClO. Hsp33 protected many essential proteins from aggregation and inactivation due to HClO, which is a probable mediator of HClO's bactericidal effects. Hypochlorites Hypochlorites are the salts of hypochlorous acid; commercially important hypochlorites are calcium hypochlorite and sodium hypochlorite. Production of hypochlorites using electrolysis Solutions of hypochlorites can be produced in-situ by electrolysis of an aqueous sodium chloride solution in both batch and flow processes. The composition of the resulting solution depends on the pH at the anode. In acid conditions the solution produced will have a high hypochlorous acid concentration, but will also contain dissolved gaseous chlorine, which can be corrosive, at a neutral pH the solution will be around 75% hypochlorous acid and 25% hypochlorite. Some of the chlorine gas produced will dissolve forming hypochlorite ions. Hypochlorites are also produced by the disproportionation of chlorine gas in alkaline solutions. Safety HClO is classified as non-hazardous by the Environmental Protection Agency in the US. As an oxidising agent, it can be corrosive or irritant depending on its concentration and pH. In a clinical test, hypochlorous acid water was tested for eye irritation, skin irritation, and toxicity. The test concluded that it was non-toxic and non-irritating to the eye and skin. In a 2017 study, a saline hygiene solution preserved with pure hypochlorous acid was shown to reduce the bacterial load significantly without altering the diversity of bacterial species on the eyelids. 
After 20 minutes of treatment, there was more than 99% reduction of the Staphylococci bacteria. Commercialisation Commercial disinfection applications remained elusive for a long time after the discovery of hypochlorous acid because the stability of its solution in water is difficult to maintain. The active compounds quickly deteriorate back into salt water, losing the solution its disinfecting capability, which makes it difficult to transport for wide use. It is less commonly used as a disinfectant compared to bleach and alcohol due to cost, despite its stronger disinfecting capabilities. Technological developments have reduced manufacturing costs and allow for manufacturing and bottling of hypochlorous acid water for home and commercial use. However, most hypochlorous acid water has a short shelf life. Storing away from heat and direct sunlight can help slow the deterioration. The further development of continuous flow electrochemical cells has been implemented in new products, allowing the commercialisation of domestic and industrial continuous flow devices for the in-situ generation of hypochlorous acid for disinfection purposes.
Physical sciences
Specific acids
Chemistry
578150
https://en.wikipedia.org/wiki/Standard%20hydrogen%20electrode
Standard hydrogen electrode
In electrochemistry, the standard hydrogen electrode (abbreviated SHE) is a redox electrode which forms the basis of the thermodynamic scale of oxidation-reduction potentials. Its absolute electrode potential is estimated to be 4.44 ± 0.02 V at 25 °C, but to form a basis for comparison with all other electrochemical reactions, hydrogen's standard electrode potential (E°) is declared to be zero volts at any temperature. Potentials of all other electrodes are compared with that of the standard hydrogen electrode at the same temperature. Nernst equation for SHE The hydrogen electrode is based on the redox half-cell corresponding to the reduction of two hydrated protons, 2 H+(aq), into one gaseous hydrogen molecule, H2(g). General equation for a reduction reaction:
Ox + z e− → Red
The reaction quotient (Qr) of the half-reaction is the ratio between the chemical activities (a) of the reduced form (the reductant, a(Red)) and the oxidized form (the oxidant, a(Ox)). Considering the redox couple
2 H+(aq) + 2 e− ⇌ H2(g)
at chemical equilibrium, the ratio of the reaction products to the reagents is equal to the equilibrium constant of the half-reaction:
K = a(H2) / a(H+)2
where a(Red) and a(Ox) correspond to the chemical activities of the reduced and oxidized species involved in the redox reaction; a(H+) represents the activity of H+; a(H2) denotes the chemical activity of gaseous hydrogen (H2), which is approximated here by its fugacity; p(H2) denotes the partial pressure of gaseous hydrogen, expressed without unit, p(H2) = x(H2) P / P°, where x(H2) is the mole fraction of hydrogen, P is the total gas pressure in the system, and P° is the standard pressure (1 bar = 10^5 pascal), introduced here simply to overcome the pressure unit and to obtain an equilibrium constant without unit. More details on managing gas fugacity to get rid of the pressure unit in thermodynamic calculations can be found at thermodynamic activity#Gases. The approach followed is the same as for chemical activity and molar concentration of solutes in solution. In the SHE, pure hydrogen gas (H2) at the standard pressure of 1 bar is engaged in the system. Meanwhile, the general SHE equation can also be applied to other thermodynamic systems with a different mole fraction or total pressure of hydrogen. This redox reaction occurs at a platinized platinum electrode. The electrode is immersed in the acidic solution and pure hydrogen gas is bubbled over its surface. The concentrations of both the reduced and oxidised forms of hydrogen are maintained at unity. That implies that the pressure of hydrogen gas is 1 bar (100 kPa) and the activity coefficient of hydrogen ions in the solution is unity. The activity of hydrogen ions is their effective concentration, which is equal to the formal concentration times the activity coefficient. These unit-less activity coefficients are close to 1.00 for very dilute water solutions, but usually lower for more concentrated solutions. The general form of the Nernst equation at equilibrium is:
E = E° − (RT / zF) ln Qr = E° − (RT / zF) ln (a(Red) / a(Ox))
and as, by definition, E° = 0 in the case of the SHE, the Nernst equation for the SHE becomes:
E = −(RT / 2F) ln (a(H2) / a(H+)2)
Simply neglecting the pressure unit present in p(H2), this last equation can often be directly written as:
E = −(RT / 2F) ln (p(H2) / a(H+)2)
And by solving the numerical value of the term (RT ln 10) / F, the practical formula commonly used in calculations with this Nernst equation is:
E = −0.0591 (pH + ½ log10 p(H2)) (unit: volt)
As p(H2) = 1 under standard conditions, the equation simplifies to:
E = −0.0591 pH (unit: volt)
This last equation describes the straight line with a negative slope of −0.0591 volt/pH unit delimiting the lower stability region of water in a Pourbaix diagram, where gaseous hydrogen is evolving because of water decomposition.
where: a(H+) is the activity of the hydrogen ions (H+) in aqueous solution, with a(H+) = γ(H+) [H+] / C°, where: γ(H+) is the activity coefficient of hydrogen ions (H+) in aqueous solution, [H+] is the molar concentration of hydrogen ions (H+) in aqueous solution, and C° is the standard concentration (1 M) used to overcome the concentration unit; p(H2) is the partial pressure of the hydrogen gas, in bar, divided by the standard pressure P°; R is the universal gas constant: 8.3145 J⋅K−1⋅mol−1 (rounded here to 4 decimals); T is the absolute temperature, in kelvin (at 25 °C: 298.15 K); F is the Faraday constant (the charge per mole of electrons), equal to 96485 C⋅mol−1; P° is the standard pressure: 1 bar (10^5 Pa). Note: as the system is at chemical equilibrium, hydrogen gas, H2(g), is also in equilibrium with dissolved hydrogen, H2(aq), and the Nernst equation implicitly takes into account Henry's law for gas dissolution. Therefore, there is no need to independently consider the gas dissolution process in the system, as it is already de facto included. SHE vs NHE vs RHE During the early development of electrochemistry, researchers used the normal hydrogen electrode as their standard for zero potential. This was convenient because it could actually be constructed by "[immersing] a platinum electrode into a solution of 1 N strong acid and [bubbling] hydrogen gas through the solution at about 1 atm pressure". However, this electrode/solution interface was later changed. What replaced it was a theoretical electrode/solution interface, where the concentration of H+ was 1 M, but the H+ ions were assumed to have no interaction with other ions (a condition not physically attainable at those concentrations). To differentiate this new standard from the previous one, it was given the name 'standard hydrogen electrode'. Finally, there are also reversible hydrogen electrodes (RHEs), which are practical hydrogen electrodes whose potential depends on the pH of the solution. In summary:
NHE (normal hydrogen electrode): potential of a platinum electrode in 1 M acid solution with 1 bar of hydrogen bubbled through
SHE (standard hydrogen electrode): potential of a platinum electrode in a theoretical ideal solution (the current standard for zero potential at all temperatures)
RHE (reversible hydrogen electrode): a practical hydrogen electrode whose potential depends on the pH of the solution
Choice of platinum The choice of platinum for the hydrogen electrode is due to several factors: the inertness of platinum (it does not corrode); the capability of platinum to catalyze the reaction of proton reduction; a high intrinsic exchange current density for proton reduction on platinum; and the excellent reproducibility of the potential (bias of less than 10 μV when two well-made hydrogen electrodes are compared with one another). The surface of platinum is platinized (i.e., covered with a layer of fine powdered platinum, also known as platinum black) to increase the total surface area, which improves reaction kinetics and the maximum possible current, and to provide a surface material that adsorbs hydrogen well at its interface, which also improves reaction kinetics. Other metals can be used for fabricating electrodes with a similar function, such as the palladium-hydrogen electrode. Interference Because of the high adsorption activity of the platinized platinum electrode, it is very important to protect the electrode surface and the solution from the presence of organic substances as well as from atmospheric oxygen. Inorganic ions that can be reduced to a lower valency state at the electrode also have to be avoided.
A number of organic substances are also reduced by hydrogen on a platinum surface, and these also have to be avoided. Cations that can be reduced and deposited on the platinum can be source of interference: silver, mercury, copper, lead, cadmium and thallium. Substances that can inactivate ("poison") the catalytic sites include arsenic, sulfides and other sulfur compounds, colloidal substances, alkaloids, and material found in biological systems. Isotopic effect The standard redox potential of the deuterium couple is slightly different from that of the proton couple (ca. −0.0044 V vs SHE). Various values in this range have been obtained: −0.0061 V, −0.00431 V, −0.0074 V. 2 D_{(aq)}+ + 2 e- -> D2_{(g)} Also difference occurs when hydrogen deuteride (HD, or deuterated hydrogen, DH) is used instead of hydrogen in the electrode. Experimental setup The scheme of the standard hydrogen electrode: platinized platinum electrode hydrogen gas solution of the acid with activity of H+ = 1 mol dm−3 hydroseal for preventing oxygen interference reservoir through which the second half-element of the galvanic cell should be attached. The connection can be direct, through a narrow tube to reduce mixing, or through a salt bridge, depending on the other electrode and solution. This creates an ionically conductive path to the working electrode of interest.
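As a numerical sketch of the practical Nernst formula discussed above (illustrative only; the specific pH and pressure values are arbitrary examples):

```python
import math

R = 8.3145      # universal gas constant, J K^-1 mol^-1
F = 96485.0     # Faraday constant, C mol^-1
T = 298.15      # absolute temperature, K (25 °C)

def hydrogen_electrode_potential(pH: float, p_h2_bar: float = 1.0) -> float:
    """Potential of the hydrogen electrode vs SHE, in volts.

    E = -(RT ln10 / F) * (pH + 0.5 * log10(p_H2 / P°)), with P° = 1 bar.
    """
    nernst_slope = R * T * math.log(10) / F   # about 0.0591 V per pH unit at 25 °C
    return -nernst_slope * (pH + 0.5 * math.log10(p_h2_bar))

print(f"pH 0, 1 bar H2  : {hydrogen_electrode_potential(0.0):+.4f} V")      # 0 V by definition
print(f"pH 7, 1 bar H2  : {hydrogen_electrode_potential(7.0):+.4f} V")      # about -0.414 V
print(f"pH 0, 0.5 bar H2: {hydrogen_electrode_potential(0.0, 0.5):+.4f} V")
```

The pH 7 value reproduces the slope of roughly −0.0591 V per pH unit that bounds the lower water-stability line in a Pourbaix diagram.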
Physical sciences
Electrochemistry
Chemistry
578550
https://en.wikipedia.org/wiki/Bush%20dog
Bush dog
The bush dog (Speothos venaticus) is a canine found in Central and South America. In spite of its extensive range, it is very rare in most areas except in Suriname, Guyana and Peru; it was first described by Peter Wilhelm Lund from fossils in Brazilian caves and was believed to be extinct. The bush dog is the only extant species in the genus Speothos, and genetic evidence suggests that its closest living relative is the maned wolf of central South America or the African wild dog. The species is listed as Near Threatened by the IUCN. In Brazil, it is called ('vinegar dog') and ('bush dog'). In Spanish-speaking countries, it is called ('vinegar dog'), ('vinegar fox'), ('water dog'), and ('shrub or woodland dog'). Description Adult bush dogs have soft long brownish-tan fur, with a lighter reddish tinge on the head, neck and back and a bushy tail, while the underside is dark, sometimes with a lighter throat patch. Younger individuals, however, have black fur over their entire bodies. Adults typically have a head-body length of , with a tail. They have a shoulder height of and weigh . They have short legs relative to their body, as well as a short snout and relatively small ears. The teeth are adapted for its carnivorous habits. Uniquely for an American canid, the dental formula is for a total of 38 teeth. The bush dog is one of three canid species (the other two being the dhole and the African wild dog) with trenchant heel dentition, having a single cusp on the talonid of the lower carnassial tooth that increases the cutting blade length. Females have four pairs of teats and both sexes have large scent glands on either side of the anus. Bush dogs have partially webbed toes, which allow them to swim more efficiently. Genetics Speothos has a diploid chromosome number of 74, and so it is unable to produce fertile hybrids with other canids. Distribution and habitat Bush dogs are found from Costa Rica in Central America and through much of South America east of the Andes, as far south as central Bolivia, Paraguay, and southern Brazil. They primarily inhabit lowland forests up to elevation, wet savannas and other habitats near rivers, but may also be found in drier cerrado and open pasture. The historic range of this species may have extended as far north as Costa Rica where the species may still be found in suitable habitat. New, repeated observations of bush dog groups have been recorded in east-central (Barbilla National Park) and south-eastern (La Amistad International Park) Costa Rica, and a substantial portion of the Talamanca Mountains up to to the north-northwest and at elevations up to . Very recent fossils dating from 300 AD to 900 AD (the Late Ceramic Age) have been found in the Manzanilla site on the eastern coast of Trinidad. There are three recognised subspecies: The South American bush dog (Speothos venaticus venaticus), with a range including southern Colombia and Venezuela, the Guyanas, most of Brazil, eastern Ecuador and Peru, Bolivia, and northern Paraguay. The Panamanian bush dog (Speothos venaticus panamensis), with a range including Panama, northern Colombia and Venezuela, western Ecuador. The southern bush dog (Speothos venaticus wingei), with a range including southern Brazil and Paraguay, as well as extreme northeastern Argentina. The first camera trap photos of this species in Argentina were obtained in April 2016 from the Selva Paranaense Don Otto Ecological Private Reserve, located in Eldorado Department of the Misiones province of Argentina. 
Behavior Bush dogs are carnivores and hunt during the day. Their typical prey are pacas, agoutis, acouchis and capybaras, all large rodents. Although they can hunt alone, bush dogs are usually found in small packs. The dogs can bring down much larger prey, including peccaries and rheas, and a pack of six dogs has even been reported hunting a tapir, where they trailed the animal and nipped at its legs until it was felled. When hunting paca, part of the pack chases it on land and part wait for it in the water, where it often retreats. Bush dogs appear to be the most gregarious South American canid species. They use hollow logs and cavities such as armadillo burrows for shelter. Packs consist of a single mated pair and their immediate relations, and have a home range of . Only the adult pair breed, while the other members of the pack are subordinate, and help with rearing and guarding any pups. Packmates keep in contact with frequent whines, perhaps because visibility is poor in the undergrowth where they typically hunt. While eating large prey, parents position themselves at either ends of the animal, making it easier for the pups to disembowel it. Reproduction Bush dogs mate throughout the year; oestrus lasts up to twelve days and occurs every 15 to 44 days. Like many other canids, bush dog mating includes a copulatory tie, during which the animals are locked together. Urine-marking plays a significant role in their pre-copulatory behavior. Gestation lasts from 65 to 83 days and normally results in the birth of a litter of three to six pups, although larger litters of up to 10 have been reported. The young are born blind and helpless and initially weigh . The eyes open after 14 to 19 days and the pups first emerge from the nativity den shortly thereafter. The young are weaned at around four weeks and reach sexual maturity at one year. They can live for up to 10 years in captivity. Conservation Bush dogs are among the least-studied canines, and their conservation efforts are still in early stages. Due to their rarity, when bush dog bones were discovered in a cave in 1839, paleontologist Peter Wilhelm Lund mistakenly believed they were extinct. Living individuals were later found. Research shows they are generalists capable of thriving in diverse habitats. However, conservation is challenging due to their dense habitats and sparse, scattered populations, making them difficult to locate. Bush dogs require large, undisturbed territories to support their pack-based lifestyle, and they are notably shy. The International Union for Conservation of Nature (IUCN) lists bush dogs as Near Threatened due to a population decline of approximately 20-25% over the past 12 years. The main threats include habitat loss (particularly from deforestation for wood, cattle farming, and palm oil production), loss of prey due to human hunting, and diseases contracted from domestic dogs. Habitat loss, especially through Amazonian clear-cutting, is the most significant threat, while disease transmission from unvaccinated domestic dogs has also become a growing concern due to human encroachment. Hunting bush dogs is illegal in most of their range, including countries like Colombia, Ecuador, Brazil, French Guiana, Paraguay, Peru, Bolivia, Panama, and Argentina. However, Guyana and Suriname lack explicit hunting bans for bush dogs, and many countries in the bush dog’s range have limited resources to enforce existing wildlife laws. 
To better understand and protect bush dogs, scientists are experimenting with various monitoring methods. Traditional camera traps have proven ineffective due to bush dogs' elusive nature, so researchers are now using scent-detection dogs to locate bush dog burrows. This approach aims to provide valuable insights into their habitat use, prey preferences, and pack dynamics, including when cubs leave the pack. Protected areas such as the Yasuni Biosphere Reserve may support stable populations. In a positive development, bush dogs were recently captured on camera traps in Costa Rica's Talamanca Mountains in 2020, suggesting they may be expanding their range northward and into higher elevations. This could indicate that with dedicated conservation efforts, bush dogs may stabilize or even increase in numbers.
Biology and health sciences
Canines
Animals
578923
https://en.wikipedia.org/wiki/Anterior%20cruciate%20ligament
Anterior cruciate ligament
The anterior cruciate ligament (ACL) is one of a pair of cruciate ligaments (the other being the posterior cruciate ligament) in the human knee. The two ligaments are called "cruciform" ligaments, as they are arranged in a crossed formation. In the quadruped stifle joint (analogous to the knee), based on its anatomical position, it is also referred to as the cranial cruciate ligament. The term cruciate is Latin for cross. This name is fitting because the ACL crosses the posterior cruciate ligament to form an "X". It is composed of strong, fibrous material and assists in controlling excessive motion by limiting mobility of the joint. The anterior cruciate ligament is one of the four main ligaments of the knee, providing 85% of the restraining force to anterior tibial displacement at 30 and 90° of knee flexion. The ACL is the most frequently injured ligament in the knee. Structure The ACL originates from deep within the notch of the distal femur. Its proximal fibers fan out along the medial wall of the lateral femoral condyle. The two bundles of the ACL are the anteromedial and the posterolateral, named according to where the bundles insert into the tibial plateau. The tibial plateau is a critical weight-bearing region on the upper extremity of the tibia. The ACL attaches in front of the intercondyloid eminence of the tibia, where it blends with the anterior horn of the medial meniscus. Purpose The purpose of the ACL is to resist the motions of anterior tibial translation and internal tibial rotation; this is important to have rotational stability. This function prevents anterior tibial subluxation of the lateral and medial tibiofemoral joints, which is important for the pivot-shift phenomenon. The ACL has mechanoreceptors that detect changes in direction of movement, position of the knee joint, and changes in acceleration, speed, and tension. A key factor in instability after ACL injuries is having altered neuromuscular function secondary to diminished somatosensory information. For athletes who participate in sports involving cutting, jumping, and rapid deceleration, the knee must be stable in terminal extension, which is the screw-home mechanism. Clinical significance Injury An ACL tear is one of the most common knee injuries, with over 100,000 tears occurring annually in the US. Most ACL tears are a result of a non-contact mechanism such as a sudden change in a direction causing the knee to rotate inward. As the knee rotates inward, additional strain is placed on the ACL, since the femur and tibia, which are the two bones that articulate together forming the knee joint, move in opposite directions, causing the ACL to tear. Most athletes require reconstructive surgery on the ACL, in which the torn or ruptured ACL is completely removed and replaced with a piece of tendon or ligament tissue from the patient (autograft) or from a donor (allograft). Conservative treatment has poor outcomes in ACL injury, since the ACL is unable to form a fibrous clot, as it receives most of its nutrients from synovial fluid; this washes away the reparative cells, making the formation of fibrous tissue difficult. The two most common sources for tissue are the patellar ligament and the hamstrings tendon. The patellar ligament is often used, since bone plugs on each end of the graft are extracted, which helps integrate the graft into the bone tunnels during reconstruction. The surgery is arthroscopic, meaning that a tiny camera is inserted through a small surgical cut. 
The camera sends video to a large monitor so the surgeon can see any damage to the ligaments. In the event of an autograft, the surgeon makes a larger cut to get the needed tissue. In the event of an allograft, in which material is donated, this is not necessary, since no tissue is taken directly from the patient's own body. The surgeon drills a hole forming the tibial bone tunnel and femoral bone tunnel, allowing for the patient's new ACL graft to be guided through. Once the graft is pulled through the bone tunnels, two screws are placed into the tibial and femoral bone tunnel. Recovery time usually ranges between one and two years, but is sometimes longer, depending if the patient chose an autograft or allograft. A week or so after the occurrence of the injury, the athlete is usually deceived by the fact that he/she is walking normally and not feeling much pain. This is dangerous, as some athletes start resuming some of their activities such as jogging, which with a wrong move or twist, could damage the bones, as the graft has not completely become integrated into the bone tunnels. Injured athletes must understand the significance of each step of an ACL injury to avoid complications and ensure a proper recovery. Nonoperative treatment of the ACL ACL reconstruction is the most common treatment for an ACL tear, but it is not the only treatment available for individuals. Some may find it more beneficial to complete a nonoperative rehabilitation program. Individuals who are going to continue with physical activity that involves cutting and pivoting, and individuals who are no longer participating in those specific activities both are candidates for the nonoperative route. In comparing operative and nonoperative approaches to ACL tears, few differences were noted between surgical and nonsurgical groups, with no significant differences in regard to knee function or muscle strength reported by the patients. The main goals to achieve during rehabilitation (rehab) of an ACL tear is to regain sufficient functional stability, maximize full muscle strength, and decrease risk of reinjury. Typically, three phases are involved in nonoperative treatment - the acute phase, the neuromuscular training phase, and the return to sport phase. During the acute phase, the rehab is focusing on the acute symptoms that occur right after the injury and are causing an impairment. The use of therapeutic exercises and appropriate therapeutic modalities is crucial during this phase to assist in repairing the impairments from the injury. The neuromuscular training phase is used to focus on the patient regaining full strength in both the lower extremity and the core muscles. This phase begins when the patient regains full range of motion, no effusion, and adequate lower extremity strength. During this phase, the patient completes advanced balance, proprioception, cardiovascular conditioning, and neuromuscular interventions. In the final, return to sport phase, the patient focuses on sport-specific activities and agility. A functional performance brace is suggested to be used during the phase to assist with stability during pivoting and cutting activities. Operative treatment of the ACL Anterior cruciate ligament surgery is a complex operation that requires expertise in the field of orthopedic and sports medicine. Many factors should be considered when discussing surgery, including the athlete's level of competition, age, previous knee injury, other injuries sustained, leg alignment, and graft choice. 
Typically, four graft types are possible, the bone-patella tendon-bone graft, the semitendinosus and gracilis tendons (quadrupled hamstring tendon), quadriceps tendon, and an allograft. Although extensive research has been conducted on which grafts are the best, the surgeon typically chooses the type of graft with which he or she is most comfortable. If rehabilitated correctly, the reconstruction should last. In fact, 92.9% of patients are happy with graft choice. Prehabilitation has become an integral part of the ACL reconstruction process. This means that the patient exercises before getting surgery to maintain factors such as range of motion and strength. Based on a single leg hop test and self-reported assessment, prehab improved function; these effects were sustained 12 weeks postoperatively. Postsurgical rehabilitation is essential in the recovery from the reconstruction. This typically takes a patient 6 to 12 months to return to life as it was prior to the injury. The rehab can be divided into protection of the graft, improving range of motion, decrease swelling, and regaining muscle control. Each phase has different exercises based on the patients' needs. For example, while the ligament is healing, a patient's joint should not be used for full weight-bearing, but the patient should strengthen the quadriceps and hamstrings by doing quad sets and weight shifting drills. Phase two would require full weight-bearing and correcting gait patterns, so exercises such as core strengthening and balance exercises would be appropriate. In phase three, the patient begins running, and can do aquatic workouts to help with reducing joint stresses and cardiorespiratory endurance. Phase four includes multiplanar movements, thus enhancing a running program and beginning agility and plyometric drills. Lastly, phase five focuses on sport- or life-specific motions, depending on the patient. A 2010 Los Angeles Times review of two medical studies discussed whether ACL reconstruction was advisable. One study found that children under 14 who had ACL reconstruction fared better after early surgery than those who underwent a delayed surgery. For adults 18 to 35, though, patients who underwent early surgery followed by rehabilitation fared no better than those who had rehabilitative therapy and a later surgery. The first report focused on children and the timing of an ACL reconstruction. ACL injuries in children are a challenge because children have open growth plates in the bottom of the femur or thigh bone and on the top of the tibia or shin. An ACL reconstruction typically crosses the growth plates, posing a theoretical risk of injury to the growth plate, stunting leg growth, or causing the leg to grow at an unusual angle. The second study noted focused on adults. It found no significant statistical difference in performance and pain outcomes for patients who receive early ACL reconstruction vs. those who receive physical therapy with an option for later surgery. This would suggest that many patients without instability, buckling, or giving way after a course of rehabilitation can be managed nonoperatively, but was limited to outcomes after two years and did not involve patients who were serious athletes. Patients involved in sports requiring significant cutting, pivoting, twisting, or rapid acceleration or deceleration may not be able to participate in these activities without ACL reconstruction. 
ACL injuries in women Differences in risk between men and women can be attributed to a combination of factors, including anatomical, hormonal, genetic, positional, neuromuscular, and environmental factors. The size of the anterior cruciate ligament is the most frequently reported difference. Studies look at the length, cross-sectional area, and volume of ACLs. Researchers use cadaver and in vivo studies to examine these factors, and most studies confirm that women have smaller anterior cruciate ligaments. Other factors that could contribute to the higher risk of ACL tears in women include patient weight and height, the size and depth of the intercondylar notch, the diameter of the ACL, the magnitude of the tibial slope, the volume of the tibial spines, the convexity of the lateral tibiofemoral articular surfaces, and the concavity of the medial tibial plateau. While anatomical factors are discussed most often, extrinsic factors, including dynamic movement patterns, may be the most important risk factors for ACL injury.
Biology and health sciences
Human anatomy
Health
579019
https://en.wikipedia.org/wiki/Agouti
Agouti
The agouti or common agouti is any of several rodent species of the genus Dasyprocta. They are native to Central America, northern and central South America, and the southern Lesser Antilles. Some species have also been introduced elsewhere in the West Indies. They are related to guinea pigs and look quite similar, but they are larger and have longer legs. The species vary considerably in colour, being brown, reddish, dull orange, greyish, or blackish, but typically with lighter underparts. Their bodies are covered with coarse hair, which is raised when alarmed. They have short, hairless tails. The related pacas were formerly included in the genus Agouti, but these animals were reclassified in 1998 as the genus Cuniculus. The Spanish term is agutí, and local names are also used in Mexico, Panama, and eastern Ecuador. Etymology The name "agouti" is derived from either Guarani or Tupi, both South American indigenous languages, in which the name is written variously as agutí, agoutí, acutí, akuti and akuri. The Portuguese term for these animals, cutia, is derived from this original naming. Description Agoutis have five toes on their front feet and three toes on their hind feet; the first toe is very small. The tail is very short or nonexistent and hairless. The molar teeth have cylindrical crowns, with several islands and a single lateral fold of enamel. Most species are brown on their backs and whitish or buff on their bellies; the fur may have a glossy appearance and glimmer orange. Reports differ as to whether they are diurnal or nocturnal animals. Behaviour and habits In the wild, they are shy animals and flee from humans, while in captivity they may become trusting. In Trinidad, they are renowned for being very fast runners, able to keep hunting dogs occupied with chasing them for hours. Agoutis are found in forested and wooded areas in Central and South America. Their habitats include rainforests, savannas, and cultivated fields. They conceal themselves at night in hollow tree trunks or in burrows among roots. Active and graceful in their movements, their pace is either a kind of trot or a series of springs following one another so rapidly as to look like a gallop. They take readily to water, in which they swim well. When feeding, agoutis sit on their hind legs and hold food between their forepaws. They may gather in groups of up to 100 to feed. They eat fallen fruit, leaves and roots, although they may sometimes climb trees to eat green fruit. They hoard food in small, buried stores. They sometimes eat the eggs of ground-nesting birds and even shellfish on the seashore. They may cause damage to sugarcane and banana plantations. They are regarded as one of the few species (along with macaws) that can open Brazil nuts without tools, mainly thanks to their strength and exceptionally sharp teeth. In southern Brazil, their main source of energy is the nut of Araucaria angustifolia. Breeding Agoutis give birth to litters of two to four young (pups) after a gestation period of three months. Some species have two litters a year in May and October, while others breed year round. The pups are born in burrows lined with leaves, roots and hair. They are well developed at birth and may be up and eating within an hour. Fathers are barred from the nest while the young are very small, but the parents pair bond for the rest of their lives. 
They can live for as long as 20 years, a remarkably long time for a rodent.
Species
Azara's agouti, Dasyprocta azarae
Coiban agouti, Dasyprocta coibae
Crested agouti, Dasyprocta cristata
Black agouti, Dasyprocta fuliginosa
Orinoco agouti, Dasyprocta guamara
Kalinowski's agouti, Dasyprocta kalinowskii
Red-rumped agouti, Dasyprocta leporina
Mexican agouti, Dasyprocta mexicana
Black-rumped agouti, Dasyprocta prymnolopha
Central American agouti, Dasyprocta punctata
Ruatan Island agouti, Dasyprocta ruatanica
Brown agouti, Dasyprocta variegata (previously lumped with D. punctata)
Biology and health sciences
Rodents
Animals
579026
https://en.wikipedia.org/wiki/Gravitational%20potential
Gravitational potential
In classical mechanics, the gravitational potential is a scalar potential associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point in the conservative gravitational field. It is analogous to the electric potential with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance. The analogy holds because both associated fields are conservative. Mathematically, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies. Potential energy The gravitational potential (V) at a location is the gravitational potential energy (U) at that location per unit mass: V = U/m, where m is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity. In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, g, can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height: ΔU ≈ mgΔh. Mathematical form The gravitational potential V at a distance x from a point mass of mass M can be defined as the work W that needs to be done by an external agent to bring a unit mass in from infinity to that point: V(x) = W/m = (1/m) ∫ F · dx (integrated from infinity to x) = −GM/x, where G is the gravitational constant, and F is the gravitational force. The product GM is the standard gravitational parameter and is often known to higher precision than G or M separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero. The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Thus the negative of a negative gradient yields positive acceleration toward a massive object. Because the potential has no angular components, its gradient gives the acceleration a = −(GM/x³) x = −(GM/x²) x̂, where x is a vector of length x pointing from the point mass toward the small body and x̂ is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law: |a| = GM/x². The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., xn and have masses m1, ..., mn, then the potential of the distribution at the point x is V(x) = −G m1/|x − x1| − ... − G mn/|x − xn|. If the mass distribution is given as a mass measure dm on three-dimensional Euclidean space R3, then the potential is the convolution of −G/|r| with dm. 
In good cases this equals the integral V(x) = −∫ G/|x − r| dm(r), where |x − r| is the distance between the points x and r. If there is a function ρ(r) representing the density of the distribution at r, so that dm(r) = ρ(r) dv(r), where dv(r) is the Euclidean volume element, then the gravitational potential is the volume integral V(x) = −∫ G ρ(r)/|x − r| dv(r). If V is a potential function coming from a continuous mass distribution ρ(r), then ρ can be recovered using the Laplace operator, Δ: ρ(x) = (1/(4πG)) ΔV(x). This holds pointwise whenever ρ is continuous and is zero outside of a bounded set. In general, the mass measure dm can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation, ΔV = 4πGρ.
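As an illustration of the point-mass superposition formula above, the following short sketch (not part of the original article; the masses, positions, and evaluation point are arbitrary example values) computes the potential and the corresponding acceleration numerically with NumPy:

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potential(x, positions, masses):
    # V(x) = -sum_i G*m_i / |x - x_i|
    r = np.linalg.norm(x - positions, axis=1)
    return -G * np.sum(masses / r)

def acceleration(x, positions, masses):
    # a(x) = -grad V(x) = -sum_i G*m_i (x - x_i) / |x - x_i|^3
    d = x - positions
    r = np.linalg.norm(d, axis=1)
    return -G * np.sum((masses / r**3)[:, None] * d, axis=0)

# Two example point masses and an evaluation point (all values hypothetical)
positions = np.array([[0.0, 0.0, 0.0], [1.0e7, 0.0, 0.0]])
masses = np.array([5.0e24, 7.0e22])
x = np.array([3.0e6, 4.0e6, 0.0])

print(potential(x, positions, masses))     # negative, in J/kg
print(acceleration(x, positions, masses))  # in m/s^2, directed toward the masses

The sign conventions match the article: the potential is negative everywhere it is defined, and the acceleration points from the evaluation point toward the masses.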
Physical sciences
Classical mechanics
Physics
579219
https://en.wikipedia.org/wiki/%CE%92-Carotene
Β-Carotene
β-Carotene (beta-carotene) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. It is a member of the carotenes, which are terpenoids (isoprenoids), synthesized biochemically from eight isoprene units and thus having 40 carbons. Dietary β-carotene is a provitamin A compound, converting in the body to retinol (vitamin A). Foods rich in β-carotene include carrots, pumpkin, spinach, and sweet potato. It is used as a dietary supplement and may be prescribed to treat erythropoietic protoporphyria, an inherited condition of sunlight sensitivity. β-carotene is the most common carotenoid in plants. When used as a food coloring, it has the E number E160a. The structure was deduced in 1930. Isolation of β-carotene from fruits abundant in carotenoids is commonly done using column chromatography. It is industrially extracted from richer sources such as the algae Dunaliella salina. The separation of β-carotene from the mixture of other carotenoids is based on the polarity of the compounds. β-Carotene is a non-polar compound, so it is separated with a non-polar solvent such as hexane. Being highly conjugated, it is deeply colored, and as a hydrocarbon lacking functional groups, it is lipophilic. Provitamin A activity Plant carotenoids are the primary dietary source of provitamin A worldwide, with β-carotene as the best-known provitamin A carotenoid. Others include α-carotene and β-cryptoxanthin. Carotenoid absorption is restricted to the duodenum of the small intestine. One molecule of β-carotene can be cleaved by the intestinal enzyme β,β-carotene 15,15'-monooxygenase into two molecules of vitamin A. Absorption, metabolism and excretion As part of the digestive process, food-sourced carotenoids must be separated from plant cells and incorporated into lipid-containing micelles to be bioaccessible to intestinal enterocytes. β-Carotene that has already been extracted (or synthesized) and is presented in an oil-filled dietary supplement capsule has greater bioavailability than that from foods. At the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is then either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to retinol binding protein 2, before being incorporated into chylomicrons. The conversion process consists of one molecule of β-carotene cleaved by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene, into two molecules of retinal. When plasma retinol is in the normal range, gene expression for SCARB1 and BCO1 is suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion. The majority of chylomicrons are taken up by the liver, then secreted into the blood repackaged as low-density lipoproteins (LDLs). From these circulating lipoproteins and the chylomicrons that bypassed the liver, β-carotene is taken into cells via receptor SCARB1. Human tissues differ in expression of SCARB1, and hence β-carotene content. Examples expressed as ng/g, wet weight: liver=479, lung=226, prostate=163 and skin=26. Once taken up by peripheral tissue cells, the major usage of absorbed β-carotene is as a precursor to retinal via symmetric cleavage by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene. A lesser amount is metabolized by the mitochondrial enzyme beta-carotene 9',10'-dioxygenase, which is encoded by the BCO2 gene. 
The products of this asymmetric cleavage are two beta-ionone molecules and rosafluene. BCO2 appears to be involved in preventing excessive accumulation of carotenoids; a BCO2 defect in chickens results in yellow skin color due to accumulation in subcutaneous fat. Conversion factors For counting dietary vitamin A intake, β-carotene may be converted either using the newer retinol activity equivalents (RAE) or the older international unit (IU). Retinol activity equivalents (RAEs) Since 2001, the US Institute of Medicine uses retinol activity equivalents (RAE) for its Dietary Reference Intakes, defined as follows:
1 μg RAE = 1 μg retinol from food or supplements
1 μg RAE = 2 μg all-trans-β-carotene from supplements
1 μg RAE = 12 μg all-trans-β-carotene from food
1 μg RAE = 24 μg α-carotene or β-cryptoxanthin from food
RAE accounts for carotenoids' variable absorption and conversion to vitamin A by humans better than the older retinol equivalent (RE), which it replaces (1 μg RE = 1 μg retinol, 6 μg β-carotene, or 12 μg α-carotene or β-cryptoxanthin). RE was developed in 1967 by the United Nations Food and Agriculture Organization and the World Health Organization (FAO/WHO). International Units Another older unit of vitamin A activity is the international unit (IU). Like the retinol equivalent, the international unit does not account for carotenoids' variable absorption and conversion to vitamin A by humans as well as the more modern retinol activity equivalent does. Food and supplement labels still generally use IU, but IU can be converted to the more useful retinol activity equivalent as follows:
1 μg RAE = 3.33 IU retinol
1 IU retinol = 0.3 μg RAE
1 IU β-carotene from supplements = 0.3 μg RAE
1 IU β-carotene from food = 0.05 μg RAE
1 IU α-carotene or β-cryptoxanthin from food = 0.025 μg RAE
Dietary sources The average daily intake of β-carotene is in the range 2–7 mg, as estimated from a pooled analysis of 500,000 women living in the US, Canada, and some European countries. Beta-carotene is found in many foods and is sold as a dietary supplement. β-Carotene contributes to the orange color of many different fruits and vegetables. Vietnamese gac (Momordica cochinchinensis Spreng.) and crude palm oil are particularly rich sources, as are yellow and orange fruits, such as cantaloupe, mangoes, pumpkin, and papayas, and orange root vegetables such as carrots and sweet potatoes. The color of β-carotene is masked by chlorophyll in green leaf vegetables such as spinach, kale, sweet potato leaves, and sweet gourd leaves. The U.S. Department of Agriculture lists foods high in β-carotene content. No dietary requirement Government and non-government organizations have not set a dietary requirement for β-carotene. Side effects Excess β-carotene is predominantly stored in the fat tissues of the body. The most common side effect of excessive β-carotene consumption is carotenodermia, a physically harmless condition that presents as a conspicuous orange skin tint arising from deposition of the carotenoid in the outermost layer of the epidermis. Carotenosis Carotenoderma, also referred to as carotenemia, is a benign and reversible medical condition in which an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of beta-carotene-rich foods, such as carrots, carrot juice, tangerine juice, mangos, or, in Africa, red palm oil. β-Carotene dietary supplements can have the same effect. 
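The RAE conversion factors listed above amount to simple arithmetic. As a minimal illustrative sketch (not part of the original article; the intake figures in the example are hypothetical), total vitamin A activity can be computed as:

# Micrograms of each source that correspond to 1 microgram RAE, per the factors above
UG_PER_UG_RAE = {
    "retinol": 1.0,
    "beta_carotene_supplement": 2.0,
    "beta_carotene_food": 12.0,
    "alpha_carotene_or_cryptoxanthin_food": 24.0,
}

def total_rae(intakes_ug):
    """Sum vitamin A activity in micrograms RAE over {source: micrograms consumed}."""
    return sum(ug / UG_PER_UG_RAE[source] for source, ug in intakes_ug.items())

# Example: 300 ug retinol plus 6,000 ug beta-carotene from food
print(total_rae({"retinol": 300, "beta_carotene_food": 6000}))  # 800.0 ug RAE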
The discoloration extends to the palms and the soles of the feet, but not to the whites of the eyes, which helps distinguish the condition from jaundice. Carotenodermia is reversible upon cessation of excessive intake. Consumption of greater than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia. No risk for hypervitaminosis A As described above, uptake of β-carotene by the enterocyte transporter SCARB1 and its conversion to retinal by the BCO1-encoded enzyme are both suppressed when plasma retinol is in the normal range, creating a feedback loop that limits β-carotene absorption and conversion. Because of these two mechanisms, high intake will not lead to hypervitaminosis A. Drug interactions β-Carotene can interact with medication used for lowering cholesterol. Taking them together can lower the effectiveness of these medications and is considered only a moderate interaction. Bile acid sequestrants and proton-pump inhibitors can decrease absorption of β-carotene. Consuming alcohol with β-carotene can decrease its ability to convert to retinol and could possibly result in hepatotoxicity. β-Carotene and lung cancer in smokers Chronic high-dose β-carotene supplementation increases the probability of lung cancer in smokers, while its natural vitamer, retinol, increases lung cancer risk in smokers and nonsmokers. The effect is specific to supplementation dose, as no lung damage has been detected in those who are exposed to cigarette smoke and who ingest a physiological dose of β-carotene (6 mg), in contrast to a high pharmacological dose (30 mg). Increases in lung cancer have been attributed to the tendency of β-carotene to oxidize, yet based on the pharmacokinetics of β-carotene absorption and transport through the intestine and the lack of specific β-carotene transporters, it is unlikely that β-carotene reaches the lungs of smokers in sufficient quantities. Additional research is required to understand the link between the increased risk of cancer and all-cause mortality following β-carotene supplementation. Additionally, supplemental, high-dose β-carotene may increase the risk of prostate cancer, intracerebral hemorrhage, and cardiovascular and total mortality irrespective of smoking status. Industrial sources β-Carotene is industrially made either by total synthesis or by extraction from biological sources such as vegetables, microalgae (especially Dunaliella salina), and genetically engineered microbes. The synthetic path is low-cost and high-yield. Research Medical authorities generally recommend obtaining beta-carotene from food rather than dietary supplements. A 2013 meta-analysis of randomized controlled trials concluded that high-dosage (≥9.6 mg/day) beta-carotene supplementation is associated with a 6% increase in the risk of all-cause mortality, while low-dosage (<9.6 mg/day) supplementation does not have a significant effect on mortality. Research is insufficient to determine whether a minimum level of beta-carotene consumption is necessary for human health and to identify what problems might arise from insufficient beta-carotene intake. 
However, a 2018 meta-analysis, mostly of prospective cohort studies, found that both dietary and circulating beta-carotene are associated with a lower risk of all-cause mortality. The highest circulating beta-carotene category, compared to the lowest, correlated with a 37% reduction in the risk of all-cause mortality, while the highest dietary beta-carotene intake category, compared to the lowest, was linked to an 18% decrease in the risk of all-cause mortality. Macular degeneration Age-related macular degeneration (AMD) represents the leading cause of irreversible blindness in elderly people. AMD is a retinal disease involving oxidative stress that affects the macula, causing progressive loss of central vision. β-Carotene has been confirmed to be present in the human retinal pigment epithelium. Reviews reported mixed results for observational studies, with some reporting that diets higher in β-carotene correlated with a decreased risk of AMD, whereas others reported no benefit. Reviews reported that for intervention trials using only β-carotene, there was no change in the risk of developing AMD. Cancer A meta-analysis concluded that supplementation with β-carotene does not appear to decrease the risk of cancer overall, nor of specific cancers including pancreatic, colorectal, prostate, breast, melanoma, or skin cancer generally. High levels of β-carotene may increase the risk of lung cancer in current and former smokers. Results are not clear for thyroid cancer. Cataract A Cochrane review looked at supplementation of β-carotene, vitamin C, and vitamin E, independently and combined, in people to examine differences in the risk of cataract, cataract extraction, progression of cataract, and slowing the loss of visual acuity. These studies found no evidence of any protective effects afforded by β-carotene supplementation in preventing or slowing age-related cataract. A second meta-analysis compiled data from studies that measured diet-derived serum beta-carotene and reported a statistically non-significant 10% decrease in cataract risk. Erythropoietic protoporphyria High doses of β-carotene (up to 180 mg per day) may be used as a treatment for erythropoietic protoporphyria, a rare inherited disorder of sunlight sensitivity, without toxic effects. Food drying Foods rich in carotenoid dyes show discoloration upon drying. This is due to thermal degradation of carotenoids, possibly via isomerization and oxidation reactions.
Biology and health sciences
Biological pigments
Biology
579223
https://en.wikipedia.org/wiki/Land%20reclamation
Land reclamation
Land reclamation, often known as reclamation, and also known as land fill (not to be confused with a waste landfill), is the process of creating new land from oceans, seas, riverbeds or lake beds. The land reclaimed is known as reclamation ground, reclaimed land, or land fill. History In Ancient Egypt, the rulers of the Twelfth Dynasty (c. 2000–1800 BC) undertook a far-sighted land reclamation scheme to increase agricultural output. They constructed levees and canals to connect the Faiyum with the Bahr Yussef waterway, diverting water that would have flowed into Lake Moeris and causing gradual evaporation around the lake's edges, creating new farmland from the reclaimed land. A similar land reclamation system using dams and drainage canals was used in the Greek Copaic Basin during the Middle Helladic Period (c. 1900–1600 BC). Another early large-scale project was the Beemster Polder in the Netherlands, realized in 1612. In Hong Kong the Praya Reclamation Scheme added land in 1890 during the second phase of construction. It was one of the most ambitious projects ever undertaken during the Colonial Hong Kong era. Some 20% of land in the Tokyo Bay area has been reclaimed, most notably Odaiba artificial island. The city of Rio de Janeiro was largely built on reclaimed land, as was Wellington, New Zealand. Methods Land reclamation can be achieved by a number of different methods. The simplest method involves filling the area with large amounts of heavy rock and/or cement, then filling with clay and dirt until the desired height is reached. The process is called "infilling" and the material used to fill the space is generally called "infill". Draining of submerged wetlands is often used to reclaim land for agricultural use. Deep cement mixing is used typically in situations in which the material displaced by either dredging or draining may be contaminated and hence needs to be contained. Dredging is another method of land reclamation. It is the removal of sediments and debris from the bottom of a body of water. It is commonly used for maintaining reclaimed land masses, as sedimentation, a natural process, fills channels and harbors. Notable instances Africa The Hassan II Mosque is built on reclaimed land. The Eko Atlantic in Lagos. Gracefield Island in Lekki, Lagos. The Foreshore in Cape Town. Stone Town in Zanzibar. Asia Parts of the coastlines of Mainland China, Hong Kong, North Korea and South Korea. It is estimated that nearly 65% of tidal flats around the Yellow Sea have been reclaimed. The north of Bahrain. Inland lowlands in the Yangtze valley, China, including the areas of important cities like Wuhan. Nanhui New City in Shanghai. Haikou Bay, Hainan Province, China, where the west side of Haidian Island is being extended, and off the coast of Haikou, where new land for a marina is being created. The Cotai area of Macau, where many casinos are located. Parts of Shekou in Shenzhen, Guangdong province. Much of the coastline of Mumbai, India. It took over 150 years to join the original Seven Islands of Bombay. These seven islands were lush, green, thickly wooded, and dotted with 22 hills, with the Arabian Sea washing through them at high tide. The original Isle of Bombay stretched from Dongri to Malabar Hill at its broadest point; the other six were Colaba, Old Woman's Island, Mahim, Parel, Worli and Mazgaon.
Physical sciences
Artificial landforms
null
579730
https://en.wikipedia.org/wiki/Data%20center
Data center
A data center is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a medium town. Estimated global data center electricity consumption in 2022 was 240–340 TWh, or roughly 1–1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. The IEA projects that data center electric use could double between 2022 and 2026. High demand for electricity from data centers, including by cryptomining and artificial intelligence, has also increased strain on local electric grids and increased electricity prices in some markets. Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised. During the microcomputer industry boom of the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time. A boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage." The term cloud data centers (CDCs) has been used. Increasingly, the division of these terms has almost disappeared and they are being integrated into the term data center. 
The global data center market saw steady growth in the 2010s, with a notable acceleration in the latter half of the decade. According to Gartner, worldwide data center infrastructure spending reached $200 billion in 2021, representing a 6% increase from 2020 despite the economic challenges posed by the COVID-19 pandemic. The latter part of the 2010s and early 2020s saw a significant shift towards AI and machine learning applications, generating a global boom for more powerful and efficient data center infrastructure. As of March 2021, global data creation was projected to grow to more than 180 zettabytes by 2025, up from 64.2 zettabytes in 2020. The United States is currently the foremost leader in data center infrastructure, hosting 5,381 data centers as of March 2024, the highest number of any country worldwide. According to global consultancy McKinsey & Co., U.S. market demand is expected to double to 35 gigawatts (GW) by 2030, up from 17 GW in 2022. As of 2023, the U.S. accounts for roughly 40 percent of the global market. A study published by the Electric Power Research Institute (EPRI) in May 2024 estimates U.S. data center power consumption could range from 4.6% to 9.1% of the country’s generation by 2030. As of 2023, about 80% of U.S. data center load was concentrated in 15 states, led by Virginia and Texas. Requirements for modern data centers Modernization and data center transformation enhances performance and energy efficiency. Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize. Focus on modernization is not new: concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment." Meeting standards for data centers The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center. Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. 
The equipment may be used to: operate and manage a carrier's telecommunication network; provide data center based applications directly to the carrier's customers; provide hosted applications for a third party to provide services to their customers; or provide a combination of these and similar data center applications. Data center transformation Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security. Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization. Virtualization: Lowers capital and operational expenses, reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization. Automation: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not only when facing a shortage of skilled IT workers. Security: Protection of virtual systems is integrated with the existing security of physical infrastructures. Raised floor A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson. Although the first raised-floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that it became common for computer centers to use them to allow cool air to circulate more efficiently. The first purpose of the raised floor was to allow access for wiring. Lights out The lights-out data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure. Noise levels Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence." OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92–96 dB(A). Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying "It's like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves." External sources of noise include HVAC equipment and energy generators. 
Data center design The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers. A 65-story data center has already been proposed, and the number of data centers as of 2016 had grown beyond 3 million USA-wide, with more than triple that number worldwide. Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are: Size - one room of a building, one or more floors, or an entire building. Capacity - can hold up to or past 1,000 servers. Other considerations - space, power, cooling, and costs in the data center. Mechanical engineering infrastructure - heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization. Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more. Design criteria and trade-offs Availability expectations: The costs of avoiding downtime should not exceed the cost of the downtime itself. Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Other considerations should include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs). Often, power availability is the hardest to change. High availability Various metrics exist for measuring the data-availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many nines can be placed after 99%. Modularity and flexibility Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed. A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized, which facilitates moving if needed. Environmental control Temperature and humidity are controlled via air conditioning and indirect cooling, such as using outside air, Indirect Evaporative Cooling (IDEC) units, and sea water. It is important that computers do not get humid or overheat, as high humidity can lead to dust clogging the fans, which leads to overheating, or can cause components to malfunction, ruining the board and creating a fire hazard. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, loosening connections and creating fire hazards. Electrical power Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators. To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the A-side and B-side power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure. 
Low-voltage cable routing Options include: Data cabling can be routed through overhead cable trays Raised floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks. Smaller/less expensive data centers may use anti-static tiles instead for a flooring surface. Air flow Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units. Aisle containment Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products. Computer cabinets/Server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts don't mix air, which would severely reduce cooling efficiency. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained. Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a Plenum space above a Dropped ceiling and back to the cooling units or to outside vents. With this configuration, traditional hot/cold aisle configuration is not a requirement. Fire protection Data centers feature fire protection systems, including passive and Active Design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. Although the main room usually does not allow Wet Pipe-based Systems due to the fragile nature of Circuit-boards, there still exist systems that can be used in the rest of the facility or in cold/hot aisle air circulation systems that are closed systems, such as: Sprinkler systems Misting, using high pressure to create extremely small water droplets, which can be used in sensitive rooms due to the nature of the droplets. However, there also exist other means to put out fires, especially in Sensitive areas, usually using Gaseous fire suppression, of which Halon gas was the most popular, until the negative effects of producing and using it were discovered. Security Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are starting to be commonplace. Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance. Energy use Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. 
Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center. Greenhouse gas emissions In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. Although some of this electricity was low carbon, the IEA called for more "government and industry efforts on energy efficiency, renewables procurement and RD&D", as some data centers still use electricity generated by fossil fuels. They also said that lifecycle emissions should be considered, that is including embodied emissions, such as in buildings. Data centers are estimated to have been responsible for 0.5% of US greenhouse gas emissions in 2018. Some Chinese companies, such as Tencent, have pledged to be carbon neutral by 2030, while others such as Alibaba have been criticized by Greenpeace for not committing to become carbon neutral. Google and Microsoft now each consume more power than some fairly big countries, surpassing the consumption of more than 100 countries. Energy efficiency and overhead The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment. PUE measures the percentage of power used by overhead devices (cooling, lighting, etc.). The average USA data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. PUEs of as low as 1.01 have been achieved with two phase immersion cooling. The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities — including data centers — to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency. The European Union also has a similar initiative: EU Code of Conduct for Data Centres. Energy use analysis and projects The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy. In 2011, server racks in data centers were designed for more than 25 kW and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware. Research in 2018 has shown that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization. 
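To make the PUE definition above concrete, here is a minimal illustrative sketch (not part of the original article; the power figures are hypothetical):

def pue(total_facility_kw, it_equipment_kw):
    # Power usage effectiveness: total power entering the data center
    # divided by the power delivered to IT equipment.
    return total_facility_kw / it_equipment_kw

def overhead_fraction(total_facility_kw, it_equipment_kw):
    # Fraction of total power consumed by cooling, lighting and other overhead.
    return (total_facility_kw - it_equipment_kw) / total_facility_kw

# Hypothetical facility: 1,200 kW entering the building, 1,000 kW reaching IT gear
print(pue(1200.0, 1000.0))                # 1.2, close to a state-of-the-art facility
print(overhead_fraction(1200.0, 1000.0))  # about 0.167, i.e. roughly 17% overhead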
In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency. In 2017, sales for data center hardware built to OCP designs topped $1.2 billion and are expected to reach $6 billion by 2021. Power and cooling analysis Power is the largest recurring cost to the user of a data center. Cooling it at or below wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry. A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. The cooling energy varies from 10% of the total energy consumption in the most efficient data centers and goes up to 45% in standard air-cooled data centers. Energy efficiency analysis An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's Power Use Effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise. Computational Fluid Dynamics (CFD) analysis This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center—predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance. 
Thermal zone mapping Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center. This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units. Green data centers Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment. Power efficiency reduces the first category. Cooling cost reduction through natural means includes location decisions: when there is no need to be near good fiber connectivity, power grid connections, and concentrations of people to manage the equipment, a data center can be miles away from the users. Mass data centers like those of Google or Facebook don't need to be near population centers. Arctic locations that can use outside air, which provides cooling, are becoming more popular. Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract cloud computing data centers. A major data center hub for the Asia-Pacific region, Singapore lifted its three-year moratorium on new data center projects in April 2022, granting four new projects but rejecting more than 16 of the over 20 applications received. Singapore's new data centers must meet strict green technology criteria, including a Water Usage Effectiveness (WUE) of 2.0/MWh, a Power Usage Effectiveness (PUE) of less than 1.3, and Platinum certification under Singapore's BCA-IMDA Green Mark for New Data Centre scheme, criteria that clearly address decarbonization and the use of hydrogen cells or solar panels. Direct current data centers Direct current data centers are data centers that produce direct current on site with solar panels and store the electricity on site in a battery storage power station. Computers run on direct current, so the need for inverting the AC power from the grid would be eliminated. The data center site could still use AC power from the grid as a backup solution. DC data centers could be 10% more efficient and use less floor space for inverting components. Energy reuse It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat with water. The different liquid technologies are categorized into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid; see server immersion cooling). This combination of technologies allows the creation of a thermal cascade as part of temperature chaining scenarios to create high-temperature water outputs from the data center. Impact on electricity prices Cryptomining and the artificial intelligence boom of the 2020s have also led to increased demand for electricity, which the IEA expects could double global data center electricity demand between 2022 and 2026. 
The US could see the share of its electricity going to data centers increase from 4% to 6% over those four years. Bitcoin used 2% of US electricity in 2023. This has led to increased electricity prices in some regions, particularly in regions with many data centers, such as Santa Clara, California, and upstate New York. Data centers have also generated concerns in Northern Virginia about whether residents will have to foot the bill for future power lines. It has also made it harder to develop housing in London. A Bank of America Institute report in July 2024 found that the increase in demand for electricity due in part to AI has been pushing electricity prices higher and is a significant contributor to electricity inflation. Dynamic infrastructure Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center anytime, anywhere, for migration, provisioning, performance enhancement, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems, all while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed. Side benefits include reducing cost, facilitating business continuity and high availability, and enabling cloud and grid computing. Network infrastructure Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world, connected according to the data center network architecture. Redundancy of the internet connection is often provided by using two or more upstream service providers (see Multihoming). Some of the servers at the data center are used for running the basic internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center. Software/data backup Non-mutually exclusive options for data backup are onsite and offsite. Onsite backup is traditional, and one of its major advantages is immediate availability. Offsite backup storage Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are: having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere; directly transferring the data to another site during the backup, using appropriate links; or uploading the data "into the cloud". Modular data center For quick deployment or IT disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time. Micro data center Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front end applications. 
They are commonly used in edge computing and other areas where low-latency data processing is needed.
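As a simple illustration of the efficiency metrics cited above, the following sketch (in Python) computes PUE and WUE as the ratios they are defined to be. The facility figures and function names are illustrative assumptions, not data from any report cited in this article.

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT equipment energy."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def wue(annual_water_use_liters: float, it_equipment_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water use per unit of IT energy
    (liters per kWh, equivalently cubic meters per MWh)."""
    return annual_water_use_liters / it_equipment_energy_kwh

# Illustrative (made-up) annual figures for a small facility.
total_energy = 13_000_000   # kWh consumed by the whole site
it_energy = 10_000_000      # kWh consumed by servers, storage, and network gear
water = 18_000_000          # liters of water used, mostly for cooling

print(f"PUE = {pue(total_energy, it_energy):.2f}")   # 1.30 -> at the threshold cited above
print(f"WUE = {wue(water, it_energy):.2f} L/kWh")    # 1.80 -> below the 2.0 target
```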
Technology
Commercial buildings
null
580252
https://en.wikipedia.org/wiki/Reuleaux%20triangle
Reuleaux triangle
A Reuleaux triangle is a curved triangle with constant width, the simplest and best known curve of constant width other than the circle. It is formed from the intersection of three circular disks, each having its center on the boundary of the other two. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because its width is constant, the Reuleaux triangle is one answer to the question "Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?" They are named after Franz Reuleaux, a 19th-century German engineer who pioneered the study of machines for translating one type of motion into another, and who used Reuleaux triangles in his designs. However, these shapes were known before his time, for instance by the designers of Gothic church windows, by Leonardo da Vinci, who used it for a map projection, and by Leonhard Euler in his study of constant-width shapes. Other applications of the Reuleaux triangle include giving the shape to guitar picks, fire hydrant nuts, pencils, and drill bits for drilling filleted square holes, as well as in graphic design in the shapes of some signs and corporate logos. Among constant-width shapes with a given width, the Reuleaux triangle has the minimum area and the sharpest (smallest) possible angle (120°) at its corners. By several numerical measures it is the farthest from being centrally symmetric. It provides the largest constant-width shape avoiding the points of an integer lattice, and is closely related to the shape of the quadrilateral maximizing the ratio of perimeter to diameter. It can perform a complete rotation within a square while at all times touching all four sides of the square, and has the smallest possible area of shapes with this property. However, although it covers most of the square in this rotation process, it fails to cover a small fraction of the square's area, near its corners. Because of this property of rotating within a square, the Reuleaux triangle is also sometimes known as the Reuleaux rotor. The Reuleaux triangle is the first of a sequence of Reuleaux polygons whose boundaries are curves of constant width formed from regular polygons with an odd number of sides. Some of these curves have been used as the shapes of coins. The Reuleaux triangle can also be generalized into three dimensions in multiple ways: the Reuleaux tetrahedron (the intersection of four balls whose centers lie on a regular tetrahedron) does not have constant width, but can be modified by rounding its edges to form the Meissner tetrahedron, which does. Alternatively, the surface of revolution of the Reuleaux triangle also has constant width. Construction The Reuleaux triangle may be constructed either directly from three circles, or by rounding the sides of an equilateral triangle. The three-circle construction may be performed with a compass alone, not even needing a straightedge. By the Mohr–Mascheroni theorem the same is true more generally of any compass-and-straightedge construction, but the construction for the Reuleaux triangle is particularly simple. The first step is to mark two arbitrary points of the plane (which will eventually become vertices of the triangle), and use the compass to draw a circle centered at one of the marked points, through the other marked point. Next, one draws a second circle, of the same radius, centered at the other marked point and passing through the first marked point. 
Finally, one draws a third circle, again of the same radius, with its center at one of the two crossing points of the two previous circles, passing through both marked points. The central region in the resulting arrangement of three circles will be a Reuleaux triangle. Alternatively, a Reuleaux triangle may be constructed from an equilateral triangle T by drawing three arcs of circles, each centered at one vertex of T and connecting the other two vertices. Or, equivalently, it may be constructed as the intersection of three disks centered at the vertices of T, with radius equal to the side length of T. Mathematical properties The most basic property of the Reuleaux triangle is that it has constant width, meaning that for every pair of parallel supporting lines (two lines of the same slope that both touch the shape without crossing through it) the two lines have the same Euclidean distance from each other, regardless of the orientation of these lines. In any pair of parallel supporting lines, one of the two lines will necessarily touch the triangle at one of its vertices. The other supporting line may touch the triangle at any point on the opposite arc, and their distance (the width of the Reuleaux triangle) equals the radius of this arc. The first mathematician to discover the existence of curves of constant width, and to observe that the Reuleaux triangle has constant width, may have been Leonhard Euler. In a paper that he presented in 1771 and published in 1781 entitled De curvis triangularibus, Euler studied curvilinear triangles as well as the curves of constant width, which he called orbiforms. Extremal measures By many different measures, the Reuleaux triangle is one of the most extreme curves of constant width. By the Blaschke–Lebesgue theorem, the Reuleaux triangle has the smallest possible area of any curve of given constant width. This area is (π − √3)s²/2 ≈ 0.705s², where s is the constant width. One method for deriving this area formula is to partition the Reuleaux triangle into an inner equilateral triangle and three curvilinear regions between this inner triangle and the arcs forming the Reuleaux triangle, and then add the areas of these four sets. At the other extreme, the curve of constant width that has the maximum possible area is a circular disk, which has area πs²/4. The angles made by each pair of arcs at the corners of a Reuleaux triangle are all equal to 120°. This is the sharpest possible angle at any vertex of any curve of constant width. Additionally, among the curves of constant width, the Reuleaux triangle is the one with both the largest and the smallest inscribed equilateral triangles. The largest equilateral triangle inscribed in a Reuleaux triangle is the one connecting its three corners, and the smallest one is the one connecting the three midpoints of its sides. The subset of the Reuleaux triangle consisting of points belonging to three or more diameters is the interior of the larger of these two triangles; it has a larger area than the set of three-diameter points of any other curve of constant width. Although the Reuleaux triangle has sixfold dihedral symmetry, the same as an equilateral triangle, it does not have central symmetry. The Reuleaux triangle is the least symmetric curve of constant width according to two different measures of central asymmetry, the Kovner–Besicovitch measure (ratio of area to the largest centrally symmetric shape enclosed by the curve) and the Estermann measure (ratio of area to the smallest centrally symmetric shape enclosing the curve). 
For the Reuleaux triangle, the two centrally symmetric shapes that determine the measures of asymmetry are both hexagonal, although the inner one has curved sides. The Reuleaux triangle has diameters that split its area more unevenly than any other curve of constant width. That is, the maximum ratio of areas on either side of a diameter, another measure of asymmetry, is bigger for the Reuleaux triangle than for other curves of constant width. Among all shapes of constant width that avoid all points of an integer lattice, the one with the largest width is a Reuleaux triangle. It has one of its axes of symmetry parallel to the coordinate axes on a half-integer line. Its width, approximately 1.54, is the root of a degree-6 polynomial with integer coefficients. Just as it is possible for a circle to be surrounded by six congruent circles that touch it, it is also possible to arrange seven congruent Reuleaux triangles so that they all make contact with a central Reuleaux triangle of the same size. This is the maximum number possible for any curve of constant width. Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter is an equidiagonal kite that can be inscribed into a Reuleaux triangle. Other measures By Barbier's theorem all curves of the same constant width, including the Reuleaux triangle, have equal perimeters. In particular this perimeter equals the perimeter of the circle with the same width, which is πs. The radii of the largest inscribed circle of a Reuleaux triangle with width s, and of the circumscribed circle of the same triangle, are (1 − 1/√3)s ≈ 0.423s and s/√3 ≈ 0.577s respectively; the sum of these radii equals the width of the Reuleaux triangle. More generally, for every curve of constant width, the largest inscribed circle and the smallest circumscribed circle are concentric, and their radii sum to the constant width of the curve. The optimal packing density of the Reuleaux triangle in the plane remains unproven, but is conjectured to equal the density of one possible double lattice packing for these shapes. The best proven upper bound on the packing density is approximately 0.947. It has also been conjectured, but not proven, that the Reuleaux triangles have the highest packing density of any curve of constant width. Rotation within a square Any curve of constant width can form a rotor within a square, a shape that can perform a complete rotation while staying within the square and at all times touching all four sides of the square. However, the Reuleaux triangle is the rotor with the minimum possible area. As it rotates, its axis does not stay fixed at a single point, but instead follows a curve formed by the pieces of four ellipses. Because of its 120° angles, the rotating Reuleaux triangle cannot reach some points near the sharper angles at the square's vertices, but rather covers a shape with slightly rounded corners, also formed by elliptical arcs. At any point during this rotation, two of the corners of the Reuleaux triangle touch two adjacent sides of the square, while the third corner of the triangle traces out a curve near the opposite vertex of the square. The shape traced out by the rotating Reuleaux triangle covers approximately 98.8% of the area of the square. As a counterexample Reuleaux's original motivation for studying the Reuleaux triangle was as a counterexample, showing that three single-point contacts may not be enough to fix a planar object into a single position. 
The existence of Reuleaux triangles and other curves of constant width shows that diameter measurements alone cannot verify that an object has a circular cross-section. In connection with the inscribed square problem, it has been observed that the Reuleaux triangle provides an example of a constant-width shape in which no regular polygon with more than four sides can be inscribed, except the regular hexagon, and a small modification to this shape has been described that preserves its constant width but also prevents regular hexagons from being inscribed in it. This result generalizes to three dimensions using a cylinder with the same shape as its cross section. Applications Reaching into corners Several types of machinery take the shape of the Reuleaux triangle, based on its property of being able to rotate within a square. The Watts Brothers Tool Works square drill bit has the shape of a Reuleaux triangle, modified with concavities to form cutting surfaces. When mounted in a special chuck that does not constrain the bit to a fixed centre of rotation, it can drill a hole that is nearly square. Although patented by Henry Watts in 1914, similar drills invented by others were used earlier. Other Reuleaux polygons are used to drill pentagonal, hexagonal, and octagonal holes. Panasonic's RULO robotic vacuum cleaner has its shape based on the Reuleaux triangle in order to ease cleaning up dust in the corners of rooms. Rolling cylinders Another class of applications of the Reuleaux triangle involves cylindrical objects with a Reuleaux triangle cross section. Several pencils are manufactured in this shape, rather than the more traditional round or hexagonal barrels. They are usually promoted as being more comfortable or encouraging proper grip, as well as being less likely to roll off tables (since the center of gravity moves up and down more than a rolling hexagon). A Reuleaux triangle (along with all other curves of constant width) can roll but makes a poor wheel because it does not roll about a fixed center of rotation. An object on top of rollers that have Reuleaux triangle cross-sections would roll smoothly and flatly, but an axle attached to Reuleaux triangle wheels would bounce up and down three times per revolution. This concept was used in a science fiction short story by Poul Anderson titled "The Three-Cornered Wheel". A bicycle with floating axles and a frame supported by the rim of its Reuleaux triangle shaped wheel was built and demonstrated in 2009 by Chinese inventor Guan Baihua, who was inspired by pencils with the same shape. Mechanism design Another class of applications of the Reuleaux triangle involves using it as a part of a mechanical linkage that can convert rotation around a fixed axis into reciprocating motion. These mechanisms were studied by Franz Reuleaux. With the assistance of the Gustav Voigt company, Reuleaux built approximately 800 models of mechanisms, several of which involved the Reuleaux triangle. Reuleaux used these models in his pioneering scientific investigations of their motion. Although most of the Reuleaux–Voigt models have been lost, 219 of them have been collected at Cornell University, including nine based on the Reuleaux triangle. However, the use of Reuleaux triangles in mechanism design predates the work of Reuleaux; for instance, some steam engines from as early as 1830 had a cam in the shape of a Reuleaux triangle. One application of this principle arises in a film projector. 
In this application, it is necessary to advance the film in a jerky, stepwise motion, in which each frame of film stops for a fraction of a second in front of the projector lens, and then much more quickly the film is moved to the next frame. This can be done using a mechanism in which the rotation of a Reuleaux triangle within a square is used to create a motion pattern for an actuator that pulls the film quickly to each new frame and then pauses the film's motion while the frame is projected. The rotor of the Wankel engine is shaped as a curvilinear triangle that is often cited as an example of a Reuleaux triangle. However, its curved sides are somewhat flatter than those of a Reuleaux triangle and so it does not have constant width. Architecture In Gothic architecture, beginning in the late 13th century or early 14th century, the Reuleaux triangle became one of several curvilinear forms frequently used for windows, window tracery, and other architectural decorations. For instance, in English Gothic architecture, this shape was associated with the decorated period, both in its geometric style of 1250–1290 and continuing into its curvilinear style of 1290–1350. It also appears in some of the windows of the Milan Cathedral. In this context, the shape is sometimes called a spherical triangle, which should not be confused with spherical triangle meaning a triangle on the surface of a sphere. In its use in Gothic church architecture, the three-cornered shape of the Reuleaux triangle may be seen both as a symbol of the Trinity, and as "an act of opposition to the form of the circle". The Reuleaux triangle has also been used in other styles of architecture. For instance, Leonardo da Vinci sketched this shape as the plan for a fortification. Modern buildings that have been claimed to use a Reuleaux triangle shaped floorplan include the MIT Kresge Auditorium, the Kölntriangle, the Donauturm, the Torre de Collserola, and the Mercedes-Benz Museum. However, in many cases these are merely rounded triangles, with geometry different from that of the Reuleaux triangle. Mapmaking Another early application of the Reuleaux triangle was da Vinci's world map of circa 1514, in which the spherical surface of the earth was divided into eight octants, each flattened into the shape of a Reuleaux triangle. Similar maps also based on the Reuleaux triangle were published by Oronce Finé in 1551 and by John Dee in 1580. Other objects Many guitar picks employ the Reuleaux triangle, as its shape combines a sharp point to provide strong articulation, with a wide tip to produce a warm timbre. Because all three points of the shape are usable, it is easier to orient and wears less quickly compared to a pick with a single tip. The Reuleaux triangle has been used as the shape for the cross section of a fire hydrant valve nut. The constant width of this shape makes it difficult to open the fire hydrant using standard parallel-jawed wrenches; instead, a wrench with a special shape is needed. This property allows the fire hydrants to be opened only by firefighters (who have the special wrench) and not by other people trying to use the hydrant as a source of water for other activities. Following an earlier suggestion, the antennae of the Submillimeter Array, a radio-wave astronomical observatory on Mauna Kea in Hawaii, are arranged on four nested Reuleaux triangles. 
Placing the antennae on a curve of constant width causes the observatory to have the same spatial resolution in all directions, and provides a circular observation beam. As the most asymmetric curve of constant width, the Reuleaux triangle leads to the most uniform coverage of the plane for the Fourier transform of the signal from the array. The antennae may be moved from one Reuleaux triangle to another for different observations, according to the desired angular resolution of each observation. The precise placement of the antennae on these Reuleaux triangles was optimized using a neural network. In some places the constructed observatory departs from the preferred Reuleaux triangle shape because that shape was not possible within the given site. Signs and logos The shield shapes used for many signs and corporate logos feature rounded triangles. However, only some of these are Reuleaux triangles. The corporate logo of Petrofina (Fina), a Belgian oil company with major operations in Europe, North America and Africa, used a Reuleaux triangle with the Fina name from 1950 until Petrofina's merger with Total S.A. (today TotalEnergies) in 2000. Another corporate logo framed in the Reuleaux triangle, the south-pointing compass of Bavaria Brewery, was part of a makeover by design company Total Identity that won the SAN 2010 Advertiser of the Year award. The Reuleaux triangle is also used in the logo of Colorado School of Mines. In the United States, the National Trails System and United States Bicycle Route System both mark routes with Reuleaux triangles on signage. In nature According to Plateau's laws, the circular arcs in two-dimensional soap bubble clusters meet at 120° angles, the same angle found at the corners of a Reuleaux triangle. Based on this fact, it is possible to construct clusters in which some of the bubbles take the form of a Reuleaux triangle. The shape was first isolated in crystal form in 2014 as Reuleaux triangle disks. Basic bismuth nitrate disks with the Reuleaux triangle shape were formed from the hydrolysis and precipitation of bismuth nitrate in an ethanol–water system in the presence of 2,3-bis(2-pyridyl)pyrazine. Generalizations Triangular curves of constant width with smooth rather than sharp corners may be obtained as the locus of points at a fixed distance from the Reuleaux triangle. Other generalizations of the Reuleaux triangle include surfaces in three dimensions, curves of constant width with more than three sides, and the Yanmouti sets which provide extreme examples of an inequality between width, diameter, and inradius. Three-dimensional version The intersection of four balls of radius s centered at the vertices of a regular tetrahedron with side length s is called the Reuleaux tetrahedron, but its surface is not a surface of constant width. It can, however, be made into a surface of constant width, called Meissner's tetrahedron, by replacing three of its edge arcs by curved surfaces, the surfaces of rotation of a circular arc. Alternatively, the surface of revolution of a Reuleaux triangle through one of its symmetry axes forms a surface of constant width, with minimum volume among all known surfaces of revolution of given constant width. Reuleaux polygons The Reuleaux triangle can be generalized to regular or irregular polygons with an odd number of sides, yielding a Reuleaux polygon, a curve of constant width formed from circular arcs of constant radius. 
The constant width of these shapes allows their use as coins that can be used in coin-operated machines. Although coins of this type in general circulation usually have more than three sides, a Reuleaux triangle has been used for a commemorative coin from Bermuda. Similar methods can be used to enclose an arbitrary simple polygon within a curve of constant width, whose width equals the diameter of the given polygon. The resulting shape consists of circular arcs (at most as many as sides of the polygon), can be constructed algorithmically in linear time, and can be drawn with compass and straightedge. Although the Reuleaux polygons all have an odd number of circular-arc sides, it is possible to construct constant-width shapes with an even number of circular-arc sides of varying radii. Yanmouti sets The Yanmouti sets are defined as the convex hulls of an equilateral triangle together with three circular arcs, centered at the triangle vertices and spanning the same angle as the triangle, with equal radii that are at most equal to the side length of the triangle. Thus, when the radius is small enough, these sets degenerate to the equilateral triangle itself, but when the radius is as large as possible they equal the corresponding Reuleaux triangle. Every shape with width w, diameter d, and inradius r (the radius of the largest possible circle contained in the shape) obeys the inequality w ≤ r + d/√3, and this inequality becomes an equality for the Yanmouti sets, showing that it cannot be improved. Related figures In the classical presentation of a three-set Venn diagram as three overlapping circles, the central region (representing elements belonging to all three sets) takes the shape of a Reuleaux triangle. The same three circles form one of the standard drawings of the Borromean rings, three mutually linked rings that cannot, however, be realized as geometric circles. Parts of these same circles are used to form the triquetra, a figure of three overlapping semicircles (each two of which form a vesica piscis symbol) that again has a Reuleaux triangle at its center; just as the three circles of the Venn diagram may be interlaced to form the Borromean rings, the three circular arcs of the triquetra may be interlaced to form a trefoil knot. Relatives of the Reuleaux triangle arise in the problem of finding the minimum perimeter shape that encloses a fixed amount of area and includes three specified points in the plane. For a wide range of choices of the area parameter, the optimal solution to this problem will be a curved triangle whose three sides are circular arcs with equal radii. In particular, when the three points are equidistant from each other and the area is that of the Reuleaux triangle, the Reuleaux triangle is the optimal enclosure. Circular triangles are triangles with circular-arc edges, including the Reuleaux triangle as well as other shapes. The deltoid curve is another type of curvilinear triangle, but one in which the curves replacing each side of an equilateral triangle are concave rather than convex. It is not composed of circular arcs, but may be formed by rolling one circle within another of three times the radius. Other planar shapes with three curved sides include the arbelos, which is formed from three semicircles with collinear endpoints, and the Bézier triangle. 
The Reuleaux triangle may also be interpreted as the stereographic projection of one triangular face of a spherical tetrahedron: the Schwarz triangle of parameters (3/2, 3/2, 3/2), with spherical angles of measure 2π/3 and sides of spherical length arccos(−1/3).
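As an illustrative aside, the three-disk construction and the constant-width and area properties described above can be checked numerically. The following is a minimal sketch assuming Python with NumPy; the grid resolution and bounding box are arbitrary choices, and the computation is an approximation rather than a proof.

```python
import numpy as np

s = 1.0  # the constant width
# Vertices of the underlying equilateral triangle with side length s.
V = np.array([[0.0, 0.0],
              [s, 0.0],
              [s / 2.0, s * np.sqrt(3) / 2.0]])

# Sample a grid over a bounding box and keep the points lying in all three disks
# of radius s centered on the vertices; their intersection is the Reuleaux triangle.
n = 800
xs = np.linspace(-0.05, 1.05, n)
ys = np.linspace(-0.20, 0.90, n)
X, Y = np.meshgrid(xs, ys)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
dists = np.linalg.norm(pts[:, None, :] - V[None, :, :], axis=2)
inside = np.all(dists <= s, axis=1)
cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])

print("estimated area:", inside.sum() * cell_area)            # ~0.705
print("exact (pi - sqrt(3))/2:", (np.pi - np.sqrt(3)) / 2.0)  # 0.7048...

# Constant width: in every direction, the extent of the shape should be s.
body = pts[inside]
for theta in np.linspace(0.0, np.pi, 7):
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = body @ u
    print(f"width at {np.degrees(theta):5.1f} deg: {proj.max() - proj.min():.3f}")
```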
Mathematics
Two-dimensional space
null
580863
https://en.wikipedia.org/wiki/Trowel
Trowel
A trowel is a small hand tool used for digging, applying, smoothing, or moving small amounts of viscous or particulate material. Common varieties include the masonry trowel, garden trowel, and float trowel. A power trowel is a much larger gasoline or electrically powered walk-behind device with rotating paddles used to finish concrete floors. Hand trowel Numerous forms of trowel are used in masonry, concrete, and drywall construction, as well as applying adhesives such as those used in tiling and laying synthetic flooring. Masonry trowels are traditionally made of forged carbon steel, but some newer versions are made of cast stainless steel, which has longer wear and is rust-free. These include: Bricklayer's trowel has an elongated triangular-shaped flat metal blade, used by masons for leveling, spreading, and shaping cement, plaster, and mortar. Pointing trowel, a scaled-down version of a bricklayer's trowel, for small jobs and repair work. Tuck pointing trowel is long and thin, designed for packing mortar between bricks. Float trowel or finishing trowel is usually rectangular, used to smooth, level, or texture the top layer of hardening concrete. A flooring trowel has one rectangular end and one pointed end, made to fit corners. A grout float is used for applying and working grout into gaps in floor and wall tile. Gauging trowel has a rounded tip, used to mix measured proportions of the different ingredients for quick set plaster. Pool trowel is a flat-bladed tool with rounded ends used to apply coatings to concrete, especially on swimming pool decks. Margin trowel is a small rectangular bladed tool used to move, apply, and smooth small amounts of masonry or adhesive material. Notched trowel is a rectangular shaped tool with regularly spaced notches along one or more sides used to apply adhesive when adhering tile, or laying synthetic floor surfaces. Other forms of trowel include: Garden trowel, a hand tool with a pointed, scoop-shaped metal blade and wooden, metal, or plastic handle. It is comparable to a spade or shovel, but is generally much smaller, being designed for use with one hand. It is used for breaking up earth, digging small holes, especially for planting and weeding, mixing in fertilizer or other additives, and transferring plants to pots. Camping trowel, a hand tool used in the outdoors to securely stake and prop up a tent, channel a small stream of water, level a sleeping surface, dig a cathole so that no trace of waste is left, and do many other outdoor survival chores. Camping trowels are sometimes made of lighter-weight materials than gardening trowels, to make them easier to carry in a backpack, or of heavier materials for chopping kindling or shoveling soil without having to awkwardly reach or bend over. Camping trowels may incorporate a secondary side ruler to measure ground surface depth; however, the ruler might prematurely become defaced by coarse soil particulates. Camping trowels sometimes have tip and side features, such as a pointed tip and a serrated side edge, to cut easily through tree roots or frozen soil. These serrated camping trowels may include a cover guard to protect the user from cuts as well as to save backpacks from puncture holes and tears. They may also fold up for added protection and easy storage. 
A few allow for items such as toilet paper to be stored upon or inside the handle.[2] In archaeology, brick or pointing trowels (usually 4" or 5" steel trowels) are used to scratch the strata in an excavation and allow the colors of the soil to be seen clearly, so that the different strata can be identified, processed, and excavated. In the United States, there are several preferred brands of pointing trowels, including the Marshalltown trowel, while in the British Isles the WHS 4" pointing trowel is the traditional tool.
Technology
Agricultural tools
null
580880
https://en.wikipedia.org/wiki/Tonkinese%20cat
Tonkinese cat
Tonkinese is a domestic cat breed produced by crossbreeding between the Siamese and Burmese. Members of the breed are distinguished by a pointed coat pattern in a variety of colors. In addition to the modified coat colors of the "mink" pattern, which is a dilution of the point color, the breed is now being shown in the foundation-like Siamese and Burmese colors: pointed with white and solid overall (sepia). The best known variety is the short-haired Tonkinese, but there is a semi-longhaired (sometimes called Tibetan) which tends to be more popular in Europe, mainly in the Netherlands, Germany, Belgium, Luxembourg, and France. History Origin The modern Tonkinese breed is a reconstruction of a breed brought to the West in the 19th century. These cats were originally known as 'chocolate Siamese'. Breeders working with imported cats from Malaysia noticed some cats have aquamarine eyes and darker coats than the Siamese. In 1901 the Siamese Cat Club recognised them as a Siamese of the 'chocolate' type. Many of the cats used to found the Siamese and Burmese in the West are believed to be Tonkinese, including Wong Mau. Tonkinese would be bred still but registered as either Burmese or Siamese, it was not until the 1950s that breeders would take interest in the cat. These breeders worked together on developing breeding lines with these cats and by 1965 the Tonkinese was recognised in Canada as a distinct breed — whence the name originated. More modern Tonkinese cats are the result of the crossbreeding programs of two breeders working independently of each other. Margaret Conroy, of Canada, and Jane Barletta, of the United States, crossed the Siamese and Burmese breeds, with the aim of creating the ideal combination of both parent breeds' distinctive appearance and lively personalities. The cats thus produced were moved from crossbreed classification to an established breed in 2001. The name is a reference to the Tonkin region of Indochina, though it is suggestive only, as the cats have no connection with the area. In the West, Tonkinese cats under the age of sixth months have historically been referred to as "small-cats" rather than "kittens" to reflect a more direct translation from Burmese, although this term has become almost obsolete since the mid-20th century. Breed recognition The breed received championship status with the Cat Fanciers' Association in 1984. The Governing Council of the Cat Fancy (GCCF) recognised the breed in 1991. Today the breed is recognised in most of Europe, Australia, New Zealand, Hong Kong, Japan, and South Africa. Over 30 countries have Tonkinese cats featured on postage stamps. Description Appearance Tonkinese are a medium-sized cat, considered an intermediate type between the slender, long-bodied modern Siamese and European Burmese and the more "cobby", or substantially-built American Burmese. Like their Burmese ancestors, they are deceptively muscular and typically seem much heavier than expected when picked up. Tail and legs are slim but proportionate to the body, with distinctive oval paws. They have a gently rounded, slightly wedge-shaped head and blunted muzzle, with moderately almond-shaped eyes and ears set towards the outside of their head. The American style is a rounder but sculpted head with a shorter body and sturdier appearance to reflect the old-fashioned Siamese and rounded Burmese from which it was originally bred in the United States. 
While many American breeders avoided using the extreme "contemporary" Burmese in favor of the more moderate "traditional" Burmese, the original Tonkinese breed standard was based on the extreme spherical style of the Burmese descended from Wong Mau. Coat and color The Tonkinese comes in the several colours listed below. Black (also referred to as “brown”, “seal”, or “natural” by different fanciers or organizations) Blue Chocolate (also called “champagne”) Lilac (also called “platinum”) Cinnamon Fawn Red Cream Additional dilute modifiers (including “caramel”, “apricot”) Each color also has three variations of colorpoint coat pattern: "point", the classic Siamese-style dark face, ears, legs and tail on a contrasting white or cream base, and blue eyes; "solid" or "sepia", similar to the Burmese, in which the color is essentially uniform over the body with only faintly visible points and golden-amber or green eyes; and "mink", a unique intermediate between the other two, in which the base is a lighter shade but still harmonious with the point color, and the eyes are a lighter blue-green, called aquamarine. They can be anywhere on the entire blue-green to green-blue spectrum. Additionally, all colors can present in the tortoiseshell or tabby patterns. Color and pattern recognition Depending on the cat registry, not all colors and patterns are allowed for the Tonkinese cat breed. Tonkineses are currently officially recognized by the Cat Fanciers' Association (CFA) and World Cat Federation (WCF) in only four base colors: black (brown, seal, natural), blue, chocolate (champagne), and lilac (platinum). All four base colors are allowed in the three colorpoint patterns. The GCCF accepts brown, blue, chocolate, lilac, cinnamon, and fawn, red, cream, plus caramel and apricot. These colors are allowed in the tortoiseshell and tabby patterns, and additionally the three colorpoint patterns. Similar to the GCCF, The International Cat Association (TICA) accepts all of the genetically possible colors and patterns. Temperament Like both parent breeds, Tonkinese are active, vocal and generally people-oriented cats, playful and interested in everything going on around them; however, this also means they are easily susceptible to becoming lonesome or bored. Their voice is similar in tone to the Burmese, persistent but softer and sweeter than the Siamese, similar to the gentle quacking of a duck. Like Burmese, Tonkinese are reputed to sometimes engage in such dog-like behaviors as fetching, and to enjoy jumping to great heights. Health In a 2012 review of over 5,000 cases of urate urolithiasis the Tonkinese was significantly under-represented, with only one of the recorded cases belonging to the breed against a population of 365. Genetics Tonkin is a crossbreed type, with coat color and pattern wholly dependent on whether individuals carry the Siamese or Burmese gene. Breeding two mink Tonkinese cats does not usually yield a full litter of mink kittens, as this intermediate pattern is the result of having one gene for the Burmese solid pattern and one for the Siamese pointed pattern. Colors and patterns in any litter depend both on statistical chance and the color genetics and patterns of the parents. Breeding between two mink-patterned cats will, on average, produce half mink kittens and one quarter each pointed and sepia kittens. A pointed and a sepia bred together will always produce all mink patterned kittens. 
A pointed bred to a mink will produce half pointed and half mink kittens, and a sepia bred to a mink will produce half sepia and half mink kittens.
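The inheritance ratios described above follow from a single pair of co-dominant colorpoint alleles and can be illustrated with a small Punnett-square enumeration. The Python sketch below is only an illustration: the allele labels "cs" and "cb" follow common feline-genetics notation and are assumptions, not breed-registry terminology, and real litters only approach these proportions on average.

```python
from itertools import product
from collections import Counter

# One gene with two alleles at the colorpoint locus, as described above:
#   "cs" = Siamese pointed allele, "cb" = Burmese sepia allele.
# A cat carrying one of each shows the intermediate mink pattern.
def phenotype(genotype: tuple) -> str:
    alleles = set(genotype)
    if alleles == {"cs"}:
        return "pointed"
    if alleles == {"cb"}:
        return "sepia"
    return "mink"

def cross(parent1: tuple, parent2: tuple) -> Counter:
    """Enumerate all equally likely allele combinations (a Punnett square)."""
    return Counter(phenotype((a, b)) for a, b in product(parent1, parent2))

mink = ("cs", "cb")
pointed = ("cs", "cs")
sepia = ("cb", "cb")

print("mink x mink    :", cross(mink, mink))       # 2/4 mink, 1/4 pointed, 1/4 sepia
print("pointed x sepia:", cross(pointed, sepia))   # all mink
print("pointed x mink :", cross(pointed, mink))    # half pointed, half mink
```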
Biology and health sciences
Cats
Animals
580936
https://en.wikipedia.org/wiki/Sulfur%20trioxide
Sulfur trioxide
Sulfur trioxide (alternative spelling sulphur trioxide) is the chemical compound with the formula SO3. It has been described as "unquestionably the most [economically] important sulfur oxide". It is prepared on an industrial scale as a precursor to sulfuric acid. Sulfur trioxide exists in several forms: gaseous monomer, crystalline trimer, and solid polymer. Sulfur trioxide is a solid at just below room temperature with a relatively narrow liquid range. Gaseous SO3 is the primary precursor to acid rain. Molecular structure and bonding Monomer The molecule SO3 is trigonal planar. As predicted by VSEPR theory, its structure belongs to the D3h point group. The sulfur atom has an oxidation state of +6 and may be assigned a formal charge value as low as 0 (if all three sulfur-oxygen bonds are assumed to be double bonds) or as high as +2 (if the Octet Rule is assumed). When the formal charge is non-zero, the S-O bonding is assumed to be delocalized. In any case the three S-O bond lengths are equal to one another, at 1.42 Å. The electrical dipole moment of gaseous sulfur trioxide is zero. Trimer Both liquid and gaseous SO3 exists in an equilibrium between the monomer and the cyclic trimer. The nature of solid SO3 is complex and at least 3 polymorphs are known, with conversion between them being dependent on traces of water. Absolutely pure SO3 freezes at 16.8 °C to give the γ-SO3 form, which adopts the cyclic trimer configuration [S(=O)2(μ-O)]3. Polymer If SO3 is condensed above 27 °C, then α-SO3 forms, which has a melting point of 62.3 °C. α-SO3 is fibrous in appearance. Structurally, it is the polymer [S(=O)2(μ-O)]n. Each end of the polymer is terminated with OH groups. β-SO3, like the alpha form, is fibrous but of different molecular weight, consisting of an hydroxyl-capped polymer, but melts at 32.5 °C. Both the gamma and the beta forms are metastable, eventually converting to the stable alpha form if left standing for sufficient time. This conversion is caused by traces of water. Relative vapor pressures of solid SO3 are alpha < beta < gamma at identical temperatures, indicative of their relative molecular weights. Liquid sulfur trioxide has a vapor pressure consistent with the gamma form. Thus heating a crystal of α-SO3 to its melting point results in a sudden increase in vapor pressure, which can be forceful enough to shatter a glass vessel in which it is heated. This effect is known as the "alpha explosion". Chemical reactions Sulfur trioxide undergoes many reactions. Hydration and hydrofluorination SO3 is the anhydride of H2SO4. Thus, it is susceptible to hydration: SO3 + H2O → H2SO4(ΔfH = −200 kJ/mol) Gaseous sulfur trioxide fumes profusely even in a relatively dry atmosphere owing to formation of a sulfuric acid mist. SO3 is aggressively hygroscopic. The heat of hydration is sufficient that mixtures of SO3 and wood or cotton can ignite. In such cases, SO3 dehydrates these carbohydrates. Akin to the behavior of H2O, hydrogen fluoride adds to give fluorosulfuric acid: SO3 + HF → FSO3H Deoxygenation SO3 reacts with dinitrogen pentoxide to give the nitronium salt of pyrosulfate: 2 SO3 + N2O5 → [NO2]2S2O7 Oxidant Sulfur trioxide is an oxidant. It oxidizes sulfur dichloride to thionyl chloride. SO3 + SCl2 → SOCl2 + SO2 Lewis acid SO3 is a strong Lewis acid readily forming adducts with Lewis bases. With pyridine, it gives the sulfur trioxide pyridine complex. Related adducts form from dioxane and trimethylamine. Sulfonating agent Sulfur trioxide is a potent sulfonating agent, i.e. 
it adds SO3 groups to substrates. Often the substrates are organic, as in aromatic sulfonation. For activated substrates, Lewis base adducts of sulfur trioxide are effective sulfonating agents. Preparation The direct oxidation of sulfur dioxide to sulfur trioxide in air proceeds very slowly: 2 SO2 + O2 → 2 SO3 (ΔH = −198.4 kJ/mol) Industrial Industrially SO3 is made by the contact process. Sulfur dioxide is produced by the burning of sulfur or iron pyrite (a sulfide ore of iron). After being purified by electrostatic precipitation, the SO2 is then oxidised by atmospheric oxygen at between 400 and 600 °C over a catalyst. A typical catalyst consists of vanadium pentoxide (V2O5) activated with potassium oxide K2O on kieselguhr or silica support. Platinum also works very well but is too expensive and is poisoned (rendered ineffective) much more easily by impurities. The majority of sulfur trioxide made in this way is converted into sulfuric acid. Laboratory Sulfur trioxide can be prepared in the laboratory by the two-stage pyrolysis of sodium bisulfate. Sodium pyrosulfate is an intermediate product: Dehydration at 315 °C: 2 NaHSO4 → Na2S2O7 + H2O Cracking at 460 °C: Na2S2O7 → Na2SO4 + SO3 The latter occurs at much lower temperatures (45–60 °C) in the presence of catalytic H2SO4. In contrast, KHSO4 undergoes the same reactions at a higher temperature. Another two-step method involving a salt pyrolysis starts with concentrated sulfuric acid and anhydrous tin tetrachloride: Reaction between tin tetrachloride and sulfuric acid in a 1:2 molar mixture at near reflux (114 °C): SnCl4 + 2 H2SO4 → Sn(SO4)2 + 4 HCl Pyrolysis of anhydrous tin(IV) sulfate at 150–200 °C: Sn(SO4)2 → SnO2 + 2 SO3 The advantage of this method over the sodium bisulfate one is that it requires much lower temperatures and can be done using normal borosilicate laboratory glassware without the risk of shattering. A disadvantage is that it generates significant quantities of hydrogen chloride gas which needs to be captured as well. SO3 may also be prepared by dehydrating sulfuric acid with phosphorus pentoxide. Applications Sulfur trioxide is a reagent in sulfonation reactions. Dimethyl sulfate is produced commercially by the reaction of dimethyl ether with sulfur trioxide: (CH3)2O + SO3 → (CH3)2SO4 Sulfate esters are used as detergents, dyes, and pharmaceuticals. Sulfur trioxide is generated in situ from sulfuric acid or is used as a solution in the acid. B2O3-stabilized sulfur trioxide was traded by Baker & Adamson under the tradename "Sulfan" in the 20th century. Safety Along with being an oxidizing agent, sulfur trioxide is highly corrosive. It reacts violently with water to produce highly corrosive sulfuric acid.
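To put the hydration enthalpy quoted above (about −200 kJ per mole of SO3) into practical terms, the following Python sketch estimates the sulfuric acid produced and the heat released when a given mass of sulfur trioxide is fully hydrated. The molar masses are standard values; the 1 kg input quantity is an arbitrary illustrative choice.

```python
# Molar masses in g/mol (standard atomic weights).
M_SO3 = 32.06 + 3 * 16.00      # 80.06
M_H2O = 2 * 1.008 + 16.00      # 18.016
M_H2SO4 = M_SO3 + M_H2O        # 98.08, since SO3 + H2O -> H2SO4

DELTA_H = -200.0               # kJ per mol of SO3 hydrated (value quoted above)

def hydrate(mass_so3_g: float) -> tuple[float, float]:
    """Return (mass of H2SO4 formed in g, heat released in kJ) for full hydration."""
    mol = mass_so3_g / M_SO3
    return mol * M_H2SO4, -mol * DELTA_H

acid_g, heat_kj = hydrate(1000.0)          # hydrating 1 kg of SO3
print(f"H2SO4 produced: {acid_g:.0f} g")   # ~1225 g
print(f"heat released : {heat_kj:.0f} kJ") # ~2500 kJ, enough to ignite dry cellulose
```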
Physical sciences
Covalent oxides
Chemistry
581091
https://en.wikipedia.org/wiki/Auscultation
Auscultation
Auscultation (based on the Latin verb auscultare "to listen") is listening to the internal sounds of the body, usually using a stethoscope. Auscultation is performed for the purposes of examining the circulatory and respiratory systems (heart and breath sounds), as well as the alimentary canal. The term was introduced by René Laennec. The act of listening to body sounds for diagnostic purposes has its origin further back in history, possibly as early as Ancient Egypt. Auscultation and palpation go together in physical examination and are alike in that both have ancient roots, both require skill, and both are still important today. Laënnec's contributions were refining the procedure, linking sounds with specific pathological changes in the chest, and inventing a suitable instrument (the stethoscope) to mediate between the patient's body and the clinician's ear. Auscultation is a skill that requires substantial clinical experience, a fine stethoscope and good listening skills. Health professionals (doctors, nurses, etc.) listen to three main organs and organ systems during auscultation: the heart, the lungs, and the gastrointestinal system. When auscultating the heart, doctors listen for abnormal sounds, including heart murmurs, gallops, and other extra sounds coinciding with heartbeats. Heart rate is also noted. When listening to lungs, breath sounds such as wheezes, crepitations and crackles are identified. The gastrointestinal system is auscultated to note the presence of bowel sounds. Electronic stethoscopes can be recording devices, and can provide noise reduction and signal enhancement. This is helpful for purposes of telemedicine (remote diagnosis) and teaching. This opened the field to computer-aided auscultation. Ultrasonography (US) inherently provides capability for computer-aided auscultation, and portable US, especially portable echocardiography, replaces some stethoscope auscultation (especially in cardiology), although not nearly all of it (stethoscopes are still essential in basic checkups, listening to bowel sounds, and other primary care contexts). Auscultogram The sounds of auscultation can be depicted using symbols to produce an auscultogram. It is used in cardiology training. Mediate and immediate auscultation Mediate auscultation is an antiquated medical term for listening (auscultation) to the internal sounds of the body using an instrument (mediate), usually a stethoscope. It is opposed to immediate auscultation, directly placing the ear on the body. Doppler auscultation It was demonstrated in the 2000s that Doppler auscultation using a handheld ultrasound transducer enables the auscultation of valvular movements and blood flow sounds that are undetected during cardiac examination with a stethoscope. The Doppler auscultation presented a sensitivity of 84% for the detection of aortic regurgitations, while classic stethoscope auscultation presented a sensitivity of 58%. Moreover, Doppler auscultation was superior in the detection of impaired ventricular relaxation. Since the physics of Doppler auscultation and classic auscultation are different, it has been suggested that both methods could complement each other.
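The sensitivity figures quoted above (84% for Doppler auscultation versus 58% for classic stethoscope auscultation in detecting aortic regurgitation) are simply true-positive rates. The short Python sketch below shows the calculation; the patient counts are hypothetical values chosen only to reproduce those percentages, not the raw data of the cited comparison.

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual cases that a test detects (the true-positive rate)."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts for 100 patients with aortic regurgitation.
print(f"Doppler auscultation:     {sensitivity(84, 16):.0%}")   # 84%
print(f"Stethoscope auscultation: {sensitivity(58, 42):.0%}")   # 58%
```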
Biology and health sciences
Diagnostics
Health
581326
https://en.wikipedia.org/wiki/Littoral%20zone
Littoral zone
The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. In coastal ecology, the littoral zone includes the intertidal zone extending from the high water mark (which is rarely inundated), to coastal areas that are permanently submerged — known as the foreshore — and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves. Etymology The word littoral may be used both as a noun and as an adjective. It derives from the Latin noun litus, litoris, meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral.) Description The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists. The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes, and estuaries. The natural movement of the littoral along the coast is called the littoral drift. Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms. In oceanography and marine biology In oceanography and marine biology, the idea of the littoral zone is extended roughly to the edge of the continental shelf. Starting from the shoreline, the littoral zone begins at the spray region just above the high tide mark. From here, it moves to the intertidal region between the high and low water marks, and then out as far as the edge of the continental shelf. These three subregions are called, in order, the supralittoral zone, the eulittoral zone, and the sublittoral zone. Supralittoral zone The supralittoral zone (also called the splash, spray or supratidal zone) is the area above the spring high tide line that is regularly splashed, but not submerged by ocean water. Seawater penetrates these elevated areas only during storms with high tides. Organisms that live here must cope with exposure to fresh water from rain, cold, heat, dryness and predation by land animals and seabirds. At the top of this area, patches of dark lichens can appear as crusts on rocks. Some types of periwinkles, Neritidae and detritus feeding Isopoda commonly inhabit the lower supralittoral. Eulittoral zone The eulittoral zone (also called the midlittoral or mediolittoral zone) is the intertidal zone, known also as the foreshore. It extends from the spring high tide line, which is rarely inundated, to the spring low tide line, which is rarely not inundated. It is alternately exposed and submerged once or twice daily. Organisms living here must be able to withstand the varying conditions of temperature, light, and salinity. 
Despite this, productivity is high in this zone. The wave action and turbulence of recurring tides shape and reform cliffs, gaps and caves, offering a huge range of habitats for sedentary organisms. Protected rocky shorelines usually show a narrow, almost homogenous, eulittoral strip, often marked by the presence of barnacles. Exposed sites show a wider extension and are often divided into further zones. For more on this, see intertidal ecology. Sublittoral zone The sublittoral zone starts immediately below the eulittoral zone. This zone is permanently covered with seawater and is approximately equivalent to the neritic zone. In physical oceanography, the sublittoral zone refers to coastal regions with significant tidal flows and energy dissipation, including non-linear flows, internal waves, river outflows and oceanic fronts. In practice, this typically extends to the edge of the continental shelf, with depths around 200 meters. In marine biology, the sublittoral zone refers to the areas where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone. This results in high primary production and makes the sublittoral zone the location of the majority of sea life. As in physical oceanography, this zone typically extends to the edge of the continental shelf. The benthic zone in the sublittoral is much more stable than in the intertidal zone; temperature, water pressure, and the amount of sunlight remain fairly constant. Sublittoral corals do not have to deal with as much change as intertidal corals. Corals can live in both zones, but they are more common in the sublittoral zone. Within the sublittoral, marine biologists also identify the following: The infralittoral zone is the algal dominated zone, which may extend to five metres below the low water mark. The circalittoral zone is the region beyond the infralittoral, that is, below the algal zone and dominated by sessile animals such as mussels and oysters. Shallower regions of the sublittoral zone, extending not far from the shore, are sometimes referred to as the subtidal zone. Habitats in littoral zones Many vertebrates (e.g., mammals, waterfowl, reptiles) and invertebrates (insects, etc.) use both the littoral zone as well as the terrestrial ecosystem for food and habitat. Biota that are commonly assumed to reside in the pelagic zone often rely heavily on resources from the littoral zone. Littoral areas of ponds and lakes are typically better oxygenated, structurally more complex, and afford more abundant and diverse food resources than do profundal sediments. All these factors lead to a high diversity of insects and very complex trophic interactions. The great lakes of the world represent a global heritage of surface freshwater and aquatic biodiversity. Species lists for 14 of the world's largest lakes reveal that 15% of the global diversity (the total number of species) of freshwater fishes, 9% of non-insect freshwater invertebrate diversity, and 2% of aquatic insect diversity live in this handful of lakes. The vast majority (more than 93%) of species inhabit the shallow, nearshore littoral zone, and 72% are completely restricted to the littoral zone, even though littoral habitats are a small fraction of total lake areas. 
Because the littoral zone is important for many recreational and industrial purposes, it is often severely affected by many human activities that increase nutrient loading, spread invasive species, cause acidification and climate change, and produce increased fluctuations in water level. Littoral zones are both more negatively affected by human activity and less intensively studied than offshore waters. Conservation of the remarkable biodiversity and biotic integrity of large lakes will require better integration of littoral zones into our understanding of lake ecosystem functioning and focused efforts to alleviate human impacts along the shoreline. In freshwater ecosystems In freshwater situations, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. Sometimes other definitions are used. For example, the Minnesota Department of Natural Resources defines littoral as that portion of the lake that is less than 15 feet in depth. Such fixed-depth definitions often do not accurately represent the true ecological zonation, but are sometimes used because they are simple measurements to make bathymetric maps or when there are no measurements of light penetration. The littoral zone comprises an estimated 78% of Earth's total lake area. The littoral zone may form a narrow or broad fringing wetland, with extensive areas of aquatic plants sorted by their tolerance to different water depths. Typically, four zones are recognized, from higher to lower on the shore: wooded wetland, wet meadow, marsh and aquatic vegetation. The relative areas of these four types depends not only on the profile of the shoreline, but upon past water levels. The area of wet meadow is particularly dependent upon past water levels; in general, the area of wet meadows along lakes and rivers increases with natural water level fluctuations. Many of the animals in lakes and rivers are dependent upon the wetlands of littoral zones, since the rooted plants provide habitat and food. Hence, a large and productive littoral zone is considered an important characteristic of a healthy lake or river. Littoral zones are at particular risk for two reasons. First, human settlement is often attracted to shorelines, and settlement often disrupts breeding habitats for littoral zone species. For example, many turtles are killed on roads when they leave the water to lay their eggs in upland sites. Fish can be negatively affected by docks and retaining walls which remove breeding habitat in shallow water. Some shoreline communities even deliberately try to remove wetlands since they may interfere with activities like swimming. Overall, the presence of human settlement has a demonstrated negative impact upon adjoining wetlands. An equally serious problem is the tendency to stabilize lake or river levels with dams. Dams removed the spring flood, which carries nutrients into littoral zones and reduces the natural fluctuation of water levels upon which many wetland plants and animals depend. Hence, over time, dams can reduce the area of wetland from a broad littoral zone to a narrow band of vegetation. Marshes and wet meadows are at particular risk. Other definitions For the purposes of naval operations, the US Navy divides the littoral zone in the ways shown on the diagram at the top of this article. The US Army Corps of Engineers and the US Environmental Protection Agency have their own definitions, which have legal implications. 
The UK Ministry of Defence defines the littoral as those land areas (and their adjacent areas and associated air space) that are susceptible to engagement and influence from the sea.
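In the freshwater definition used earlier in this article, the littoral zone is the nearshore area where enough light reaches the bottom to support photosynthesis. A rough way to operationalize that definition is to estimate the euphotic depth from a light-attenuation coefficient and compare it with the bottom depth, as in the Python sketch below; the Beer–Lambert decay model, the 1% light threshold, and the attenuation value are conventional limnological assumptions rather than definitions taken from this article.

```python
import math

def euphotic_depth(kd_per_m: float, light_fraction: float = 0.01) -> float:
    """Depth at which surface light is attenuated to `light_fraction`,
    assuming Beer-Lambert decay: I(z) = I0 * exp(-kd * z)."""
    return -math.log(light_fraction) / kd_per_m

def zone(bottom_depth_m: float, kd_per_m: float) -> str:
    """Label a point littoral if enough light reaches the lake bottom there."""
    return "littoral" if bottom_depth_m <= euphotic_depth(kd_per_m) else "profundal"

kd = 0.5  # assumed attenuation coefficient (1/m) for a moderately clear lake
print(f"euphotic depth: {euphotic_depth(kd):.1f} m")   # ~9.2 m
for depth in (2.0, 8.0, 15.0):
    print(f"bottom at {depth:4.1f} m -> {zone(depth, kd)}")
```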
Physical sciences
Oceanography
Earth science
581471
https://en.wikipedia.org/wiki/Oviraptor
Oviraptor
Oviraptor (; ) is a genus of oviraptorid dinosaur that lived in Asia during the Late Cretaceous period. The first remains were collected from the Djadokhta Formation of Mongolia in 1923 during a paleontological expedition led by Roy Chapman Andrews, and in the following year the genus and type species Oviraptor philoceratops were named by Henry Fairfield Osborn. The genus name refers to the initial thought of egg-stealing habits, and the specific name was intended to reinforce this view indicating a preference over ceratopsian eggs. Despite the fact that numerous specimens have been referred to the genus, Oviraptor is only known from a single partial skeleton regarded as the holotype, as well as a nest of about fifteen eggs and several small fragments from a juvenile. Oviraptor was a rather small feathered oviraptorid, estimated at long with a weight between . It had a wide lower jaw with a skull that likely had a crest. Both upper and lower jaws were toothless and developed a horny beak, which was used during feeding along the robust morphology of the lower jaws. The arms were well-developed and elongated ending in three fingers with curved claws. Like other oviraptorids, Oviraptor had long hindlimbs that had four-toed feet, with the first toe reduced. The tail was likely not very elongated, and ended in a pygostyle that supported large feathers. The initial relationships of Oviraptor were poorly understood at the time and was assigned to the unrelated Ornithomimidae by the original describer, Henry Osborn. However, re-examinations made by Rinchen Barsbold proved that Oviraptor was distinct enough to warrant a separate family, the Oviraptoridae. When first described, Oviraptor was interpreted as an egg-thief, egg-eating dinosaur given the close association of the holotype with a dinosaur nest. However, findings of numerous oviraptorosaurs in nesting poses have demonstrated that this specimen was actually brooding the nest and not stealing nor feeding on the eggs. Moreover, the discovery of remains of a small juvenile or nestling have been reported in association with the holotype specimen, further supporting parental care. History of discovery The first remains of Oviraptor were discovered on reddish sandstones of the Late Cretaceous Djadokhta Formation of Mongolia, at the Bayn Dzak locality (also known as Flaming Cliffs), during the Third Central Asiatic expedition in 1923. This expedition was led by the North American naturalist Roy Chapman Andrews and ended in the discovery of three new-to-science theropod fossil remains—including those of Oviraptor. These were formally described by the North American paleontologist Henry Fairfield Osborn in 1924, who in the basis of the new material, named the genera Oviraptor, Saurornithoides and Velociraptor. The particular genus Oviraptor was erected with the type species O. philoceratops based on the holotype AMNH 6517, a partial individual lacking the back of the skeleton but including a badly crushed skull, partial cervical and dorsal vertebrae, pectoral elements including the furcula with the left arm and partial hands, the left ilium and some ribs. Accordingly, this specimen was found lying over a nest of approximately 15 eggs—a nest that has been catalogued as AMNH 6508—with the skull separated from the eggs by only of sediment. 
Given the close proximity of both specimens, Osborn interpreted Oviraptor as a dinosaur with egg-eating habits, and explained that the generic name, Oviraptor, is Latin for "egg seizer" or "egg thief", due to the association of the fossils. The specific name, philoceratops, is intended to mean "fondness for ceratopsian eggs", reflecting the initial assumption that the nest pertained to Protoceratops or another ceratopsian. However, Osborn suggested that the name Oviraptor could reflect an incorrect perception of this dinosaur. Furthermore, Osborn found Oviraptor to be similar to the fast-running ornithomimids—considered related at the time, though now known to be unrelated—based on the toothless jaws, and assigned Oviraptor to the Ornithomimidae. Osborn had previously reported the taxon as "Fenestrosaurus philoceratops", but this name was not retained. In 1976, the Mongolian paleontologist Rinchen Barsbold noted some inconsistencies regarding the taxonomic placement of Oviraptor and concluded that this taxon was quite distinct from ornithomimids based on anatomical traits. Under this consideration, he erected the Oviraptoridae to contain Oviraptor and close relatives. After Osborn's initial description of Oviraptor, the egg nest associated with the holotype was accepted to have belonged to Protoceratops, and oviraptorids were largely considered to have been egg-eating theropods. Nevertheless, in the 1990s, the discovery of numerous nesting and nestling oviraptorid specimens proved that Osborn was correct in his caution regarding the name of Oviraptor. These findings showed that oviraptorids brooded and protected their nests by crouching on them. This new line of evidence showed that the nest associated with the holotype of Oviraptor belonged to it and the specimen was actually brooding the eggs at the time of death, not preying on them. Referred specimens After the naming of Oviraptoridae in 1976, Barsbold referred six additional specimens to Oviraptor, including two particular specimens under the numbers MPC-D 100/20 and 100/21. In 1986, Barsbold realized that the latter two did not belong to the genus and instead they represented a new oviraptorid: Conchoraptor. Most of the other specimens are also unlikely to belong to Oviraptor itself, and they have been assigned to other oviraptorids. A partial individual also associated with eggs, the specimen IVPP V9608 from the Bayan Mandahu Formation of Inner Mongolia, China, was referred to the genus in 1996 by Dong Zhiming and Philip J. Currie. However, in 2010 Nicholas R. Longrich and the two latter paleontologists expressed uncertainty regarding this referral, as there are several anatomical differences, such as the phalangeal proportions of the hand. They concluded that this specimen was a different and indeterminate species not referable to this taxon. In 1981, Barsbold referred the specimen MPC-D 100/42 to Oviraptor, a very well-preserved and rather complete individual from the Djadokhta Formation. Since the known elements of Oviraptor were so fragmentary compared to those of other members, MPC-D 100/42 became the primary reference for depictions of this taxon, being prominently labelled as Oviraptor philoceratops in the scientific literature. This conception was refuted by James M. Clark and colleagues in 2002, who noted that this tall-crested specimen has more features of the skull in common with Citipati than it does with Oviraptor—which in fact does not preserve a crest—and that it may represent a second species of the former or an entirely new genus. 
In 1986, Barsbold described a second species of Oviraptor, "O. mongoliensis", based on specimen MPC-D 100/32a, which hails from the Nemegt Formation. However, a re-examination by Barsbold in 1997 found enough differences in this specimen to name the new genus Rinchenia, but he did not formally describe it, and this new oviraptorid name remained a nomen nudum. This was amended by the Polish paleontologist Halszka Osmólska and team in 2004 by formally naming the taxon Rinchenia mongoliensis. The North American paleontologist Mark A. Norell and colleagues in 2018 reported a new specimen of Oviraptor: AMNH 33092, which is composed of a tibia and two metatarsals of a nestling or very small juvenile. AMNH 33092 was found in association with the holotype and it was likely part of the nest. Oviraptor is now known from the holotype with associated eggs, and a juvenile/nestling. Description The holotype specimen has been estimated at in length with a weight ranging from . Though the holotype largely lacks the posterior region of the skeleton, it is likely that Oviraptor had two well-developed hindlimbs that ended in three functional toes with the first one being vestigial, as well as a relatively reduced tail. As evidenced in related oviraptorids, the arms were covered by elongated feathers, and the tail ended in a pygostyle, which is known to support a fan of feathers. Skull The skull of Oviraptor was deep and shortened, with large fenestrae (openings) compared to those of other dinosaurs, and measures about long as preserved. The actual length may have been greater, however, given that the holotype skull lacks several regions, such as the premaxilla. The holotype skull lacks a crest almost in its entirety; however, the top surfaces of the fused parietal and frontal bones indicate that it likely had a well-developed crest, supported by the nasal and premaxillary bones (mainly the latter) of the rostrum. Oviraptor had an elongated maxilla and dentary, which may have resulted in a more extended snout compared to the stockier jaws of other oviraptorids. The palate is rigid, extended below the jaw line and formed by the premaxillae, vomers, and maxillae. As in other oviraptorids, it may have had a pair of tooth-like projections on the palate that were directed downwards. As in other oviraptorids, the nares (external nostrils) would have been relatively small and placed high on the skull. Oviraptor had toothless jaws that ended in a robust, parrot-like rhamphotheca (horny beak). The curvature of the dentary tip was down-turned but less pronounced than in other oviraptorids, such as Citipati. As a whole, the lower jaw is a short and deep bone that covers . Postcranial skeleton As in most oviraptorids, the neural spines of the holotype cervical vertebrae were short, and the neural arches were X-shaped. However, the spines become more pronounced in the posterior vertebrae. The zygapophyses of the first cervical vertebrae are configured parallel to each other, and the postzygapophyses appear not to diverge significantly from the midline, mostly similar to Citipati. The cervical ribs are fused to the vertebrae in the holotype. The neural spines are rectangular in the anterior series of the dorsal vertebrae when seen in a lateral view and larger than the spines of the cervicals. On the anteriormost dorsal vertebra several pleurocoels (small air-filled openings) can be found, which are similar to those of Khaan. 
The furcula of Oviraptor is very distinct from that of other oviraptorids in having a midline keel on the anterior surface of the hypocleidium (a downward-directed projection at the center of the furcula). This bone is V-shaped, rounded in cross-section, preserves an elongate spike-like hypocleidium, and the interclavicular angle is about 90°. The scapulocoracoid is fused in the holotype; however, the coracoid is badly damaged. The scapula is slightly bowed and measures in length. Oviraptor had a relatively elongated arm composed of the humerus, radius, ulna, and manus. The phalangeal formula of Oviraptor was 2-3-4, as seen in most other theropods and oviraptorids. The hand of Oviraptor had three slender, bird-like fingers, each ending in a laterally flattened and recurved ungual (claw bone). Unlike some oviraptorids, Oviraptor did not show a reduction of the second and third fingers relative to the first. The referred juvenile Oviraptor AMNH 33092 preserves hindlimb material, comprising a right tibia with metatarsals III and IV. Its tibia is long, indicating a substantially smaller individual than the holotype. The nest AMNH 6508 preserves elongatoolithid eggs, with each egg being long (some are incomplete). Nevertheless, there is the possibility that taphonomic crushing may have compressed them by up to . Classification Oviraptor was originally allied with the ornithomimids by Osborn due to its toothless beak. Osborn also found similarities with Chirostenotes, which is still considered a close relative of Oviraptor. In 1976, Barsbold erected a new family to contain Oviraptor and its close kin, making Oviraptor the type genus of the Oviraptoridae. During the redescription of the holotype skull in 2002 by Clark and colleagues, they noted that Oviraptor had a relatively elongated maxilla and dentary. These traits are less pronounced in derived oviraptorids and suggest that Oviraptor belongs near the base of the Oviraptoridae. The cladogram below follows an analysis by Gregory F. Funston and colleagues in 2020: Paleobiology Feeding When first described in 1924 by Osborn, Oviraptor was originally presumed to have been ovivorous—an organism that has an egg-based diet—based on the association of the holotype with a nest thought to belong to Protoceratops. In 1977, Barsbold proposed a crushing jaw hypothesis. He argued that the robust lower jaws and likely rhamphotheca (horny beak) were strong enough to break the shells of mollusks such as clams, which are found in the same geological formation as Oviraptor. These bones form part of the main upper jaw bone, or maxilla, and converge in the middle to form a pair of prongs. The rhamphotheca and lower jaws, together with the extension of several bones from the palate, would have formed a piercing tool. Barsbold also suggested that oviraptorids could have had a semiaquatic lifestyle based on the mollusk-based diet, the high location of nasal cavities, an augmented musculature of the tail, and the greater size of the first manual digit. In a 1990 conference abstract, David K. Smith presented an osteological reevaluation of Oviraptor in which he rejected the statements made by Barsbold. He found no evidence indicating a forelimb specialized in aquatic locomotion, and found that the jaws, rather than preserving a crushing mechanism, preserve shearing surfaces. As the skull is toothless, lightly built and lacks several strong muscle insertion areas, Smith suggested that leaves may have been an important part of the diet of Oviraptor. 
However, in 1995, Norell and colleagues reported the fragmented remains of a lizard in the body cavity of the holotype specimen, suggesting that Oviraptor was partially carnivorous. In 2008, Stig Olav K. Jansen compared the skulls of several oviraptorid species to those of birds and turtles to investigate which properties can predict a rhamphotheca. He found the lower jaws of oviraptorids to be very similar to those of parrots, and the upper jaws to be more similar to those of turtles. Based on these observations, Jansen suggested that oviraptorids were omnivorous, as the sharply developed rhamphotheca, together with the prominent forelimbs, would have been adapted to catch and tear apart small prey. Moreover, the pointed projections of the palate would have contributed to holding prey. Jansen pointed out that a fully herbivorous diet in oviraptorids seems unlikely, as they lacked flat and wide tomia (cutting edges of the mandibles) to chew, and were unable to move the lower jaws sideways. However, he considered the lower jaws strong enough to have at least crushed items like eggs, nuts, or other hard seeds. Longrich and colleagues in 2010 also rejected a durophagous (shell-crushing) diet hypothesis, given that such animals typically develop teeth with broad crushing surfaces. The pointed shape of the dentary bones in the lower jaws suggests that oviraptorids had a sharp-edged rhamphotheca used for shearing food instead. The symphyseal (bone union) region at the front of the dentary may have given some ability for crushing, but as this was a relatively small area, it was probably not the main function of the jaws. Another argument against them having been eaters of mollusks is the fact that most oviraptorids have been found in sediments that are interpreted to represent mostly arid or semi-arid environments, such as Oviraptor in the Djadokhta Formation. The team also found that oviraptorids and dicynodonts share cranial features such as short, deep, and toothless mandibles; elongated dentary symphyses; elongated mandibular openings; and a pointed palate. Modern animals with jaws that resemble those of oviraptorids include parrots and tortoises; the latter group also has tooth-like projections on the palatal region. Longrich and colleagues concluded that due to the similarities between oviraptorids and herbivorous animals, the bulk of their diet would most likely have been formed by plant material. The jaws of oviraptorids may have been specialised for processing food, such as xerophytic vegetation (vegetation adapted to environments with little water) that would have grown in their arid habitats, but this is not possible to demonstrate, as little is known about the paleoflora of the Gobi Desert. In 2018, however, Funston and colleagues supported the crushing jaw hypothesis. They pointed out that the stocky rostrum and robust lower jaws of oviraptorids suggest a strong, nipping bite rather similar to that of parrots. Funston and colleagues considered these anatomical traits of oviraptorids to be consistent with a frugivorous diet that incorporated nuts and seeds. Reproduction Since the description of the embryonic Citipati specimen in 1994, oviraptorids have become better understood: instead of having been egg-eating animals, they actually brooded and cared for their nests. This specimen showed that the holotype of Oviraptor was likely a sexually mature individual that perished while incubating the associated nest of eggs. 
This nesting behavior in oviraptorids became clearer with the report and short description of an adult nesting specimen of Citipati in 1995 by Norell and colleagues. The specimen was found on top of egg clutches, with its hindlimbs crouched symmetrically on each side of the nest and the forelimbs covering the nest perimeter. This brooding posture is found today only in modern avian dinosaurs and supports a behavioral link between the latter group and non-avian dinosaurs. In 1996, Dong and Currie described a new nesting oviraptorid specimen from the Bayan Mandahu Formation. It was found lying atop a nest composed of approximately six eggs as preserved, and these were laid in a mound-shaped structure with a circular pattern. As the specimen was found over the nest with its forelimbs covering the eggs and the partially preserved hindfoot near the center of the nest, Dong and Currie suggested that it was caught and buried by a sandstorm during incubation. They ruled out the possibility of oviraptorids being egg-thieves, as an egg-thief would have either consumed the eggs or instinctively abandoned the nest long before it was buried by a sandstorm or another meteorological phenomenon. In 1999 Clark and team described in detail the previously reported Citipati nesting specimen and briefly discussed the holotype specimen of Oviraptor and its association with the nest AMNH 6508. They pointed out that the exact position in which the holotype was found over the nest is unclear, as the two were separated during preparation, and that the nest appears to be incomplete, with about 15 eggs preserved, two of which are damaged. Moreover, the semicircular arrangement of the nest indicates that the eggs were laid in pairs and in at least three rings, and this nest was originally circular, similar to a mound. Thomas P. Hopp and Mark J. Orsen in 2004 analyzed the brooding behavior of extinct and extant dinosaur species, including oviraptorids, in order to evaluate the reason for the elongation and development of wing and tail feathers. Given that the most complete oviraptorid nesting specimen—at the time, the 1995 Citipati nesting specimen—was found in a very avian-like posture, with the forelimbs in a near-folded posture and the pectoral region, belly, and feet in contact with the eggs, Hopp and Orsen indicated that long pennaceous feathers and a feather covering were most likely present in life. The "wings" and tail of oviraptorids would have granted protection for the eggs and hatchlings against climatic factors like sunlight, wind, and rainfall. However, the arms of this specimen were not tightly folded as in some modern birds; instead, they are more extended, resembling the posture of large flightless birds like the ostrich. The extended position of the arms is also similar to the brooding behavior of the ostrich, which, like oviraptorids, is known to nest with large clutches. Based on the forelimb position of nesting oviraptorids, Hopp and Orsen proposed brooding as the ancestral reason behind wing and tail feather elongation, as there was a greater need to provide optimal protection for eggs and juveniles. In 2005, Tamaki Sato and team reported an unusual oviraptorid specimen from the Nanxiong Formation. This new specimen was found preserving mainly the pelvic region with two eggs inside, thereby indicating a female. The size and position of the eggs suggest that oviraptorids retained two functional oviducts, but had reduced the number of eggs ovulated to one per oviduct. David J. 
Varricchio and colleagues in 2008 found that the relatively large egg clutch sizes of oviraptorids and troodontids are most similar to those of modern birds that practice polygamous mating and extensive male parental care, such as ratite birds, suggesting similar habits. This reproductive system is most likely to represent the ancestral condition for modern birds, with biparental care (where both parents participate) being a later development. In 2014, W. Scott Persons and colleagues suggested that oviraptorosaurs were secondarily flightless and that several traits of their tails may indicate a propensity for display behaviour, such as courtship displays. The tails of several oviraptorosaurs and oviraptorids ended in pygostyles, a bony structure at the end of the tail that, at least in modern birds, is used to support a feather fan. Furthermore, the tail was notably muscular and had a pronounced flexibility, which may have aided in courtship movements. In 2018, Tzu-Ruei Yang and colleagues identified cuticle layers on several eggshells of maniraptoran dinosaurs, including those of oviraptorids. These particular layers are composed of proteins, polysaccharides and pigments, but mainly of lipids and hydroxyapatite. In modern birds they serve to protect the eggs from dehydration and invasion by microorganisms. As most oviraptorid specimens have been found in formations of caliche-based sedimentation, Yang and colleagues suggested that the cuticle-coated eggs would have been a reproductive strategy adapted for enhancing their hatching success in such arid climates and environments. In 2019 Yang and colleagues re-evaluated the hypothesis of thermoregulatory contact incubation using complete oviraptorid nests from the Nanxiong Formation, and provided a detailed reconstruction of the architecture of the oviraptorid clutch. They noted that adult oviraptorid specimens found in association with nests were not necessarily incubating the eggs, as they could represent females in the process of laying, and that the multi-ring clutch prevented sufficient heat transfer from the parent to the inner rings of eggs. An average oviraptorid nest was built as a gently-inclined mound with a highly organized architecture: the eggs were likely pigmented and arranged in pairs, with the pairs arranged in three to four elliptical rings. As the parent was likely operating from the nest center, this region was devoid of eggs. Yang and colleagues concluded that the oviraptorid nesting style is so distinctive that it lacks modern analogs; therefore, oviraptorid reproduction may not be the best model for understanding the evolution of bird reproductive strategies. However, the team was unable to determine whether the juvenile Oviraptor AMNH 33092 had hatched from the nest associated with the holotype. Paleoenvironment Oviraptor is known from the Bayn Dzak locality of the Djadokhta Formation in Mongolia, a formation that dates back to the Late Cretaceous about 71 million to 75 million years ago. The paleoenvironment of the Djadokhta Formation is interpreted as having a semiarid climate, with sand dune and alluvial settings similar to the modern Gobi Desert. The semiarid steppe landscape was drained by intermittent streams and was sometimes affected by dust and sandstorms, and moisture was seasonal. Though this formation is largely considered to preserve highly arid environments, several short-lived water bodies have been reported from the Ukhaa Tolgod locality, based on fluvial sedimentation. 
Furthermore, it is thought that later in the Campanian age and into the Maastrichtian, the climate would shift to the more humid fluvial environment seen in the Nemegt Formation. The Djadokhta Formation is separated into a lower Bayn Dzak Member and an upper Turgrugyin Member. The known remains of Oviraptor have been produced by the Bayn Dzak Member, which has also yielded the dinosaurs Bainoceratops, Pinacosaurus, Protoceratops, Saurornithoides, Velociraptor, and Halszkaraptor. Further dinosaur fauna from this member includes that of the Ukhaa Tolgod locality, composed of Apsaravis, Byronosaurus, Citipati, Gobipteryx, Khaan, Kol, Shuvuuia, Tsaagan, and Minotaurasaurus. Taphonomy The pose of the holotype of Oviraptor, along with the associated eggs, suggests that it was trapped over the nest during a sandstorm, and that burial was relatively rapid, given that the body had no opportunity to become fully disarticulated or scavenged by predators. The paleontologist Kenneth Carpenter also agreed that sandstorms were the most likely events by which the eggs found in these deposits were buried. Among the preserved elements, the skull has become particularly flattened and distorted during the fossilization process.
Biology and health sciences
Theropods
Animals
581543
https://en.wikipedia.org/wiki/Pit%20viper
Pit viper
The Crotalinae, commonly known as pit vipers, or pit adders, are a subfamily of vipers found in Asia and the Americas. Like all other vipers, they are venomous. They are distinguished by the presence of a heat-sensing pit organ located between the eye and the nostril on both sides of the head. Currently, 23 genera and 155 species are recognized. These are also the only viperids found in the Americas. The groups of snakes represented here include rattlesnakes, lanceheads, and Asian pit vipers. The type genus for this subfamily is Crotalus, of which the type species is the timber rattlesnake, C. horridus. These snakes range in size from the diminutive hump-nosed viper, Hypnale hypnale, which grows to a typical total length (including tail) of only , to the bushmaster, Lachesis muta, a species known to reach a maximum total length of . This subfamily is unique in that all member species share a common characteristic – a deep pit, or fossa, in the loreal area between the eye and the nostril on either side of the head. These loreal pits are the external openings to a pair of extremely sensitive infrared-detecting organs, which in effect give the snakes a sixth sense to help them find and perhaps even judge the size of the small, warm-blooded prey on which they feed. The pit organ is complex in structure and is similar to the thermoreceptive labial pits found in boas and pythons. It is deep and located in a maxillary cavity. The membrane is like an eardrum that divides the pit into two sections of unequal size, with the larger of the two facing forwards and exposed to the environment. The two sections are connected via a narrow tube, or duct, that can be opened or closed by a group of surrounding muscles. By controlling this tube, the snake can balance the air pressure on either side of the membrane. The membrane has many nerve endings packed with mitochondria. Succinic dehydrogenase, lactic dehydrogenase, adenosine triphosphate, monoamine oxidase, generalized esterases, and acetylcholine esterase have also been found in it. When prey comes into range, infrared radiation falling onto the membrane allows the snake to determine its direction. Having one of these organs on either side of the head produces a stereo effect that indicates distance, as well as direction. Experiments have shown that, when deprived of their senses of sight and smell, these snakes can strike accurately at moving objects less than warmer than the background. The paired pit organs provide the snake with thermal rangefinder capabilities. Clearly, these organs are of great value to a predator that hunts at night, as well as for avoiding the snake's own predators. Among vipers, these snakes are also unique in that they have a specialized muscle, called the muscularis pterigoidius glandulae, between the venom gland and the head of the ectopterygoid. Contraction of this muscle, together with that of the muscularis compressor glandulae, forces venom out of the gland. Evolution The earliest known fossil pit viper remains are from the Early Miocene of Nebraska. As pit vipers are thought to have had an Asian origin before eventually colonizing the Americas, this suggests that they must have originated and diversified even earlier. During the Late Miocene, they reached as far west as eastern Europe, where they are no longer found; it is thought that they did not expand further west into Europe. 
Geographic range The subfamily Crotalinae is found from Central Asia eastward and southward to Japan, China, Indonesia, peninsular India, Nepal, Bangladesh and Sri Lanka. In the Americas, they range from southern Canada southward to Central America to southern South America. Habitat Crotalines are a versatile subfamily, with members found in habitats ranging from parched desert (e.g., the sidewinder, Crotalus cerastes) to rainforests (e.g., the bushmaster, Lachesis muta). They may be either arboreal or terrestrial, and at least one species (the cottonmouth, Agkistrodon piscivorus) is semiaquatic. The altitude record is held jointly by Crotalus triseriatus in Mexico and Gloydius strauchi in China, both of which have been found above the treeline at over 4,000 m above sea level. Behavior Although a few species of crotalines are highly active by day, such as Trimeresurus trigonocephalus, a bright green pit viper endemic to Sri Lanka, most are nocturnal, preferring to avoid high daytime temperatures and to hunt when their favored prey are also active. The snakes' heat-sensitive pits are also thought to aid in locating cooler areas in which to rest. As ambush predators, crotalines typically wait patiently somewhere for unsuspecting prey to wander by. At least one species, the arboreal Gloydius shedaoensis of China, is known to select a specific ambush site and return to it every year in time for the spring migration of birds. Studies have indicated these snakes learn to improve their strike accuracy over time. Many temperate species of pit vipers (e.g. most rattlesnakes) congregate in sheltered areas or "dens" to overwinter (brumate, see hibernation), the snakes benefiting from the combined heat. In cool temperatures and while pregnant, pit vipers also bask on sunny ledges. Some species do not mass together in this way, for example the copperhead, Agkistrodon contortrix, or the Mojave rattlesnake, Crotalus scutulatus. Like most snakes, crotalines keep to themselves and strike only if cornered or threatened. Smaller snakes are less likely to stand their ground than larger specimens. Pollution and the destruction of rainforests have caused many pit viper populations to decline. Humans also threaten pit vipers, as many are hunted for their skins or killed by cars when they wander onto roads. Reproduction With few exceptions, crotalines are ovoviviparous, meaning that the embryos develop within eggs that remain inside the mother's body until the offspring are ready to hatch, when the hatchlings emerge as functionally free-living young. In such species, the eggshells are reduced to soft membranes that the young shed, either within the reproductive tract, or immediately after emerging. Among the oviparous (egg-laying) pit vipers are Lachesis, Calloselasma, and some Trimeresurus species. All egg-laying crotalines are believed to guard their eggs. Brood sizes range from two for very small species, to as many as 86 for the fer-de-lance, Bothrops atrox, which is among the most prolific of all live-bearing snakes. Many young crotalines have brightly coloured tails that contrast dramatically with the rest of their bodies. These tails are known to be used by a number of species in a behavior known as caudal luring; the young snakes make worm-like movements with their tails to lure unsuspecting prey within striking distance. Taxonomy In the past, the pit vipers were usually classed as a separate family: the Crotalidae. 
Today, however, the monophyly of the viperines and the crotalines as a whole is undisputed, which is why they are treated here as a subfamily of the Viperidae. Genera
Biology and health sciences
Snakes
Animals
581632
https://en.wikipedia.org/wiki/Mountain%20gorilla
Mountain gorilla
The mountain gorilla (Gorilla beringei beringei) is one of the two subspecies of the eastern gorilla. It is listed as endangered by the IUCN. There are two populations: One is found in the Virunga volcanic mountains of Central/East Africa, within three National Parks: Mgahinga, in southwest Uganda; Volcanoes, in northwest Rwanda; and Virunga, in the eastern Democratic Republic of Congo (DRC). The other population is found in Uganda's Bwindi Impenetrable National Park. Some primatologists speculate the Bwindi population is a separate subspecies, though no description has been finalized. The latest population count, released in 2019, revealed there to be approximately 1,060 mountain gorillas in the wild. Evolution, taxonomy, and classification Mountain gorillas are descendants of ancestral monkeys and apes found in Africa and Arabia during the start of the Oligocene epoch (34–24 million years ago). The fossil record provides evidence of the hominoid primates (apes) found in East Africa approximately 22–32 million years ago. The fossil record of the area where mountain gorillas live is particularly poor and so its evolutionary history is not clear. It was about 8.8 to 12 million years ago that the group of primates who were to evolve into gorillas split from their common ancestor with humans and chimps; this is when the genus Gorilla emerged. Mountain gorillas have been isolated from eastern lowland gorillas for approximately 10,000 years and these two taxa separated from their western counterparts approximately 1.2 to 3 million years ago. The genus was first referenced as Troglodytes in 1847, but renamed Gorilla in 1852. It was not until 1967 that the taxonomist Colin Groves proposed that all gorillas be regarded as one species (Gorilla gorilla) with three subspecies: Gorilla gorilla gorilla (western lowland gorilla), Gorilla gorilla graueri (lowland gorillas found west of the Virungas) and Gorilla gorilla beringei (mountain gorillas, including Gorilla beringei, found in the Virungas and Bwindi). In 2003, after a review, they were divided into two species (Gorilla gorilla and Gorilla beringei) by The World Conservation Union (IUCN). There is now agreement that there are two species, each with two subspecies. Characteristics The fur of the mountain gorilla, often thicker and longer than that of other gorilla species, enables them to live in colder temperatures. Gorillas can be identified by nose prints unique to each individual. Males reach a standing height of , a girth of , an arm span of and a weight of . Females are smaller with a weight of . This subspecies is smaller than the eastern lowland gorilla, the other subspecies of eastern gorilla. Adult males have more pronounced bony crests on the top and back of their skulls, giving their heads a more conical shape. These crests anchor the powerful temporalis muscles, which attach to the lower jaw (mandible). Adult females also have these crests, but they are less pronounced. Like all gorillas, they feature dark brown eyes framed by a black ring around the iris. Adult males are called silverbacks because a saddle of gray or silver-colored hair develops on their backs with age. The hair on their backs is shorter than on most other body parts, and their arm hair is especially long. Fully erect males average in height, with an arm span of and weigh . The tallest silverback recorded was tall with an arm span of , a chest of , and a weight of , shot in Alimbongo, northern Kivu in May 1938. 
There is an unconfirmed record of another individual, shot in 1932, that was and weighed . The heaviest silverback recorded was a tall specimen shot in Ambam, Cameroon. The mountain gorilla is primarily terrestrial and quadrupedal. However, it will climb into fruiting trees if the branches can carry its weight. Like all great apes other than humans, its arms are longer than its legs. It moves by knuckle-walking, supporting its weight on the backs of its curved fingers rather than its palms. The mountain gorilla is diurnal, spending most of the day eating, as large quantities of food are needed to sustain its massive bulk. It forages in the early morning, rests during the late morning and around midday, and in the afternoon it forages again before resting at night. Each gorilla builds a nest from surrounding vegetation to sleep in, constructing a new one every evening. Only infants sleep in the same nest as their mothers. They leave their sleeping sites when the sun rises at around 6 am, except when it is cold and overcast; then they often stay longer in their nests. Distribution and habitat The mountain gorilla inhabits the Albertine Rift montane cloud forest, including the Virunga Mountains, ranging in elevation from . Most groups live on the slopes of three of the dormant volcanoes: Karisimbi, Mikeno, and Visoke. The vegetation is very dense at the bottom of the mountains, becoming more sparse at higher elevations, and the forests are often cloudy, misty and cold. The mountain gorilla also occasionally uses the border habitat with the Rwenzori-Virunga montane moorlands, at elevations higher than the Albertine Rift montane cloud forest. Behaviour and ecology The home range used by one group of gorillas during one year is influenced by availability of food sources and usually includes several vegetation zones. George Schaller identified ten distinct zones, including: bamboo forest at ; Hagenia forest at ; and the giant senecio zone at . The mountain gorilla spends most of its time in Hagenia forest, where galium vines are found year-round. All parts of this vine are consumed: leaves, stems, flowers, and berries. It travels to the bamboo forest during the few months of the year when fresh shoots are available, and it climbs into subalpine regions to eat the soft centers of giant senecio trees. Diet The mountain gorilla is primarily a herbivore; the majority of its diet is composed of the leaves, shoots, and stems (85.8%) of 142 plant species. It also feeds on bark (6.9%), roots (3.3%), flowers (2.3%), and fruit (1.7%), as well as small invertebrates (0.1%). In a year-long study in Bwindi Impenetrable Forest, adult males ate an average of of food a day, while females ate . Social structure The mountain gorilla is highly social, and lives in relatively stable, cohesive groups held together by long-term bonds between adult males and females. Relationships among females are relatively weak. These groups are nonterritorial; the silverback generally defends his group rather than his territory. In the Virunga mountain gorillas, the average length of tenure for a dominant silverback is 4.7 years. 61% of groups are composed of one adult male and a number of females, and 36% contain more than one adult male. The remaining gorillas are either lone males or exclusively male groups, usually made up of one mature male and a few younger males. Group sizes vary from five to thirty, with an average of ten individuals. 
A typical group contains: one dominant silverback, who is the group's undisputed leader; another subordinate silverback (usually a younger brother, half-brother, or even an adult son of the dominant silverback); one or two blackbacks, who act as sentries; three to four sexually mature females, who have bonded for life to the dominant silverback; and from three to six juveniles and infants. Most males and approximately 60% of females leave their natal group. Males leave when they are about eleven years old, and often the separation process is slow: they spend more and more time on the edge of the group until they leave altogether. They may travel alone or with an all-male group for two–five years before they can attract females to join them and form a new group. Females typically emigrate when they are about eight years old, either transferring directly to an established group or beginning a new one with a lone male. Females often transfer to a new group several times before they choose to settle down with a certain silverback male. The dominant silverback generally determines the movements of the group, leading it to appropriate feeding sites throughout the year. He also mediates conflicts within the group and protects it from external threats. When the group is attacked by humans, leopards, or other gorillas, the silverback will protect them, even at the cost of his own life. He is the center of attention during rest sessions, and young gorillas frequently stay close to him and include him in their games. If a mother dies or leaves the group, the silverback is usually the one who looks after her abandoned offspring, even allowing them to sleep in his nest. Young mountain gorillas have been observed searching for and dismantling poachers' snares. When the silverback dies or is killed by disease, accident, or poachers, the family group may be disrupted. Unless there is an accepted male descendant capable of taking over his position, the group will either split up or adopt an unrelated male. When a new silverback joins the family group, he may kill all of the infants of the dead silverback. Infanticide has not been observed in stable groups. Analysis of mountain gorilla genomes by whole genome sequencing indicates that a recent decline in their population size has led to extensive inbreeding. As an apparent result, individuals are typically homozygous for 34% of their genome sequence. Furthermore, homozygosity and the expression of deleterious recessive mutations as consequences of inbreeding have likely resulted in the purging of severely deleterious mutations from the population. Aggression Although strong and powerful, mountain gorillas are generally gentle and very shy. Severe aggression is rare in stable groups, but when two mountain gorilla groups meet, sometimes the two silverbacks can engage in a fight to the death, using their canines to cause deep, gaping injuries. Conflicts are most often resolved by displays and other threat behaviors that are intended to intimidate without becoming physical. A ritualized charge display is unique to gorillas. The entire sequence has nine steps: (1) progressively quickening hooting, (2) symbolic feeding, (3) rising bipedally, (4) throwing vegetation, (5) chest-beating with cupped hands, (6) one leg kick, (7) sideways running four-legged, (8) slapping and tearing vegetation, and (9) thumping the ground with palms. Jill Donisthorpe has stated that a male charged at her twice. In both cases, the gorilla turned away when she stood her ground. 
Affiliation The midday rest period is an important time for establishing and reinforcing relationships within the group. Mutual grooming reinforces social bonds, and helps keep hair free from dirt and parasites. It is not so common among gorillas as in other primates, although females groom their offspring regularly. Young gorillas play often and are more arboreal than the large adults. Playing helps them learn how to communicate and behave within the group. Activities include wrestling, chasing, and somersaults. The silverback and his females tolerate and, if encouraged, even participate. Vocalization Twenty-five distinct vocalizations are recognized, many of which are used primarily for group communication within dense vegetation. Sounds classified as grunts and barks are heard most frequently while traveling, and indicate the whereabouts of individual group members. They also may be used during social interactions when discipline is required. Screams and roars signal alarm or warning, and are produced most often by silverbacks. Deep, rumbling belches suggest contentment and are heard frequently during feeding and resting periods. They are the most common form of intragroup communication. Aversions Mountain gorillas generally demonstrate aversion to certain reptiles and insects. Infants, whose typical behavior is to chase anything that moves, will go out of their way to avoid chameleons and caterpillars. The gorillas also demonstrate an aversion to water bodies in the environment and will cross streams only if they can do so without getting wet, such as by using fallen logs to cross the stream. They also dislike rain. Research In October 1902, Captain Robert von Beringe (1865–1940) shot two large apes during an expedition to establish the boundaries of German East Africa. One of the apes was recovered and sent to the Berlin Zoological Museum, where Professor Paul Matschie (1861–1926) classified the animal as a new form of gorilla and named it Gorilla beringei after the man who shot it. In 1925, Carl Akeley, a hunter from the American Museum of Natural History who wished to study the gorillas, convinced Albert I of Belgium to establish the Albert National Park to protect the animals of the Virunga mountains. George Schaller began his 20-month observation of the mountain gorillas in 1959, subsequently publishing two books: The Mountain Gorilla and The Year of the Gorilla. Little was known about the life of the mountain gorilla before his research, which described its social organization, life history, and ecology. Dian Fossey began what would become an 18-year study in 1967. Fossey made new observations, completed the first accurate census, and established active conservation practices, such as anti-poaching patrols. The Digit Fund, which Fossey started, continued her work and was later renamed the Dian Fossey Gorilla Fund International. The Fund's Karisoke Research Center monitors and protects the mountain gorillas of the Virungas. Close monitoring and research of the Bwindi mountain gorillas began in the 1990s. Conservation As of 2018, the mountain gorilla was listed as endangered on the IUCN Red List. Conservation efforts have led to an increase in the overall population of the mountain gorilla (Gorilla beringei beringei) in the Virungas and at Bwindi. The overall population is now believed to be at more than 1,000 individuals. 
In December 2010, the official website of Virunga National Park announced that "the number of mountain gorillas living in the tri-national forested area of which Virunga forms a part, has increased by 26.3% during the last seven years - an average growth rate of 3.7% per annum." The 2010 census estimated that 480 mountain gorillas inhabited the region. The 2003 census had estimated the Virunga gorilla population to be 380 individuals, which represented a 17% increase in the total population since 1989, when there were 320 individuals. The population has almost doubled since its lowest point in 1981, when a census estimated that only 254 gorillas remained. The 2006 census at Bwindi indicated a population of 340 gorillas, representing a 6% increase in total population size since 2002 and a 12% increase from 320 individuals in 1997. All of those estimates were based on traditional census methods using dung samples collected at night nests. Conversely, genetic analyses of the entire population during the 2006 census indicated there only were approximately 300 individuals in Bwindi. The discrepancy highlights the difficulty in using imprecise census data to estimate population growth. According to computer modeling of their population dynamics in both Bwindi and the Virungas, groups of gorillas who were habituated for research and ecotourism have higher growth rates than unhabituated gorillas. Habituation means that through repeated, neutral contact with humans, gorillas exhibit normal behavior when people are in proximity. Habituated gorillas are more closely guarded by field staff and they receive veterinary treatment for snares, respiratory disease, and other life-threatening conditions. Nonetheless, researchers recommended that some gorillas remain unhabituated as a bet-hedging strategy against the risk of human pathogens being transmitted throughout the population. The main international non-governmental organization involved in conservation of mountain gorillas is the International Gorilla Conservation Programme, which was established in 1991 as a joint effort of the African Wildlife Foundation, Fauna & Flora International, and the World Wide Fund for Nature. Conservation requires work at many levels, from local to international, and involves protection and law enforcement as well as research and education. Dian Fossey broke down conservation efforts into the following three categories: Active conservation includes frequent patrols in wildlife areas to destroy poacher equipment and weapons, firm and prompt law enforcement, census counts in regions of breeding and ranging concentration, and strong safeguards for the limited habitat the animals occupy. Theoretical conservation seeks to encourage growth in tourism by improving existing roads that circle the mountains, by renovating the park headquarters and tourist lodging, and by the habituation of gorillas near the park boundaries for tourists to visit and photograph. Community-based conservation management involves biodiversity protection by, for, and with the local community. A collaborative management process has had some success in the Bwindi Impenetrable National Park. The forest was designated a national park in 1991; this occurred with little community consultation and the new status prohibited local people from accessing resources within the park as well as reducing economic opportunities. Subsequently, a number of forest fires were deliberately lit and threats were made to the gorillas. 
To counteract this, three schemes were developed to provide local communities with benefits from the forest's existence and to involve them in park management. They included agreements allowing the controlled harvesting of resources in the park, receipt of some revenue from tourism, and establishment of a trust fund partly for community development. Tension between people and the park has thus been reduced and now there is more willingness to take part in gorilla protection. Surveys of community attitudes conducted by CARE show a steadily increasing proportion of people in favour of the park. Moreover, there have been no cases of deliberate burning and the problem of snares in these areas has been reduced. While community-based conservation warrants analysis in its own right, there is significant overlap between active and theoretical conservation, and a discussion of the two as halves of a whole seems more constructive. For example, in 2002, Rwanda's national parks went through a restructuring process. The director of the IGCP, Eugène Rutagarama, stated that "They got more rangers on better salaries, more radios, more patrol cars and better training in wildlife conservation. They also built more shelters in the park, from which rangers could protect the gorillas". The funding for these types of improvements usually comes from tourism: in 2008, approximately 20,000 tourists visited gorilla populations in Rwanda, generating around $8 million in revenue for the parks. According to the Director of UNESCO, Audrey Azoulay, "As we have seen in Rwanda, species conservation succeeds when local communities are placed at the heart of the conservation strategy. Biodiversity protection measures must go hand in hand with measures that meet the needs of these local communities". In Rwanda, it costs $1,500 per person to come and see the gorillas. Under Rwandan law, 10% of this revenue must be returned to the community, which represents around €10 million invested in building schools, roads and drinking water supplies. As Audrey Azoulay explains, in 1980 there were just 250 mountain gorillas; today there are 1,063, and 80% of them are in Rwanda. In Uganda too, tourism is seen as a "high value activity that generates enough revenue to cover park management costs and contribute to the national budget of the Uganda Wildlife Authority." Furthermore, tourist visits, which are conducted by park rangers, also allow censuses of gorilla sub-populations to be undertaken concurrently. In addition to tourism, other measures for conservation of the sub-population can be taken, such as ensuring connecting corridors between isolated areas to make movement between them easier and safer. Threats The mountain gorilla is threatened by habitat loss and poaching. Habitat loss Loss of habitat is one of the most severe threats to gorilla populations. The forests where mountain gorillas live are surrounded by rapidly increasing human settlement. Through shifting (slash-and-burn) agriculture, pastoral expansion, and logging, villages in forest zones cause fragmentation and degradation of habitat. The late 1960s saw the Virunga Conservation Area (VCA) of Rwanda's national park reduced by more than half of its original size to support the cultivation of pyrethrum. This led to a massive reduction in mountain gorilla population numbers by the mid-1970s. The resulting deforestation confines the gorillas to isolated deserts. Some groups may raid crops for food, creating further animosity and retaliation. 
The impact of habitat loss extends beyond the reduction of suitable living space for gorillas. As gorilla groups are increasingly isolated from one another geographically due to human settlements, the genetic diversity of each group is reduced. Some signs of inbreeding are already appearing in younger gorillas, including webbed hands and feet. Poaching Mountain gorillas are not usually hunted for bushmeat, but they are frequently maimed or killed by traps and snares intended for other animals. They have been killed for their heads, hands, and feet, which are sold to collectors. Infants are sold to zoos, researchers, and people who want them as pets. The abduction of infants generally involves the loss of at least one adult, as members of a group will fight to the death to protect their young. The Virunga gorillas are particularly susceptible to animal trafficking for the illegal pet trade. With young gorillas worth from $1,000 to $5,000 on the black market, poachers seeking infant and juvenile specimens will kill and wound other members of the group in the process. Those of the group that survive often disband. One well-documented case is known as the "Taiping 4". In this situation, a Malaysian zoo received four wild-born infant gorillas from Nigeria at a cost of US$1.6 million using falsified export documents. Poaching for meat is also particularly threatening in regions of political unrest. Most of the African great apes survive in areas of chronic insecurity, where there is a breakdown of law and order. The killing of mountain gorillas at Bikenge in Virunga National Park in January 2007 was a well-documented case. Disease Despite the protection garnered from being located in national parks, the mountain gorilla is also at risk from people of a more well-meaning nature. Groups subjected to regular visits from tourists and locals are at a continued risk of disease cross-transmission (Lilly et al., 2002), in spite of attempts to enforce a rule that humans and gorillas be separated by a distance of seven metres at all times. Because gorillas have a genetic makeup similar to that of humans and an immune system that has not evolved to cope with human diseases, such cross-transmission poses a serious conservation threat. Indeed, according to some researchers, infectious diseases (predominantly respiratory) are responsible for approximately 20% of sudden deaths in mountain gorilla populations. With the implementation of a successful ecotourism program in which human-gorilla interaction was minimised, four sub-populations in Rwanda experienced an increase of 76% during the period 1989–2000. By contrast, seven of the commonly visited sub-populations in the Democratic Republic of Congo (DRC) saw a decline of almost 20% over only four years (1996–2000). The risk of disease transmission is not limited to pathogens of human origin; pathogens from domestic animals and livestock, transmitted through contaminated water, are also a concern. Studies have found that waterborne, gastrointestinal parasites such as Cryptosporidium sp., Microsporidia sp., and Giardia sp. are genetically identical when found in livestock, humans, and gorillas, particularly along the border of the Bwindi Impenetrable Forest, Uganda. War and civil unrest Rwanda, Uganda, and the Democratic Republic of Congo have been politically unstable and beleaguered by war and civil unrest during the last decades. Using simulation modeling, Byers et al. 
(2003) have suggested that times of war and unrest have negative impacts on the habitat and populations of mountain gorillas. The increase in human encounters, both aggressive and passive, has resulted in a rise in mortality rates and a decrease in reproductive success. More direct impacts from conflict can also be seen. Kanyamibwa notes that there were reports that mines were placed along trails in the Volcanoes National Park, and that many gorillas were killed as a result. Pressure from habitat destruction in the form of logging also increased as refugees fled the cities and cut down trees for wood. During the Rwandan genocide, some poaching activity was also linked to the general breakdown of law and order and the lack of any consequences.
Biology and health sciences
Apes
Animals
581797
https://en.wikipedia.org/wiki/Matching%20%28graph%20theory%29
Matching (graph theory)
In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem. Definitions Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices. A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated). A maximal matching is a matching M of a graph G that is not a subset of any other matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs. A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number of a graph is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. A perfect matching is a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph is incident to an edge of the matching. A matching is perfect if |M| = |V|/2. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover. Thus, the size of a maximum matching is no larger than the size of a minimum edge cover: ν(G) ≤ ρ(G), where ν(G) denotes the matching number and ρ(G) the edge covering number of G. A graph can only contain a perfect matching when the graph has an even number of vertices. A near-perfect matching is one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has an odd number of vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is called factor-critical. Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices. Berge's lemma states that a matching M is maximum if and only if there is no augmenting path with respect to M. An induced matching is a matching that is the edge set of an induced subgraph. Properties In any graph without isolated vertices, the sum of the matching number and the edge covering number equals the number of vertices. If there is a perfect matching, then both the matching number and the edge cover number are |V|/2. If M1 and M2 are two maximal matchings, then |M1| ≤ 2|M2| and |M2| ≤ 2|M1|. To see this, observe that each edge in M2 \ M1 can be adjacent to at most two edges in M1 \ M2 because M1 is a matching; moreover each edge in M1 \ M2 is adjacent to an edge in M2 \ M1 by maximality of M2, hence |M1 \ M2| ≤ 2|M2 \ M1|. Further we deduce that |M1| = |M1 ∩ M2| + |M1 \ M2| ≤ 2|M2 ∩ M1| + 2|M2 \ M1| = 2|M2|. In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. 
This inequality is tight: for example, if G is a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2. A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: Let G be a graph on n vertices, and let λ1, λ2, ..., λk be k distinct nonzero purely imaginary numbers, where 2k ≤ n. Then the matching number of G is k if and only if (a) there is a real skew-symmetric matrix A with graph G whose eigenvalues are ±λ1, ±λ2, ..., ±λk together with n − 2k zeros, and (b) all real skew-symmetric matrices with graph G have at most 2k nonzero eigenvalues. Note that the (simple) graph of a real symmetric or skew-symmetric matrix A of order n has n vertices and edges given by the nonzero off-diagonal entries of A. Matching polynomials A generating function of the number of k-edge matchings in a graph is called a matching polynomial. Let G be a graph and mk be the number of k-edge matchings. One matching polynomial of G is the generating function Σ_k m_k x^k. Another definition gives the matching polynomial as Σ_k (−1)^k m_k x^(n−2k), where n is the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials. Algorithms and computational complexity Maximum-cardinality matching A fundamental problem in combinatorial optimization is finding a maximum matching. This problem has various algorithms for different classes of graphs. In an unweighted bipartite graph, the optimization problem is to find a maximum cardinality matching. The problem is solved by the Hopcroft–Karp algorithm in O(√V · E) time, and there are more efficient randomized algorithms, approximation algorithms, and algorithms for special classes of graphs such as bipartite planar graphs, as described in the main article. Maximum-weight matching In a weighted bipartite graph, the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often called maximum weighted bipartite matching, or the assignment problem. The Hungarian algorithm solves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used for this step, the running time of the Hungarian algorithm becomes O(V²E), or the edge costs can be shifted with a potential to achieve O(V² log V + VE) running time with the Dijkstra algorithm and a Fibonacci heap. In a non-bipartite weighted graph, the problem of maximum weight matching can be solved in O(V²E) time using Edmonds' blossom algorithm. Maximal matchings A maximal matching can be found with a simple greedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find a largest maximal matching in polynomial time. However, no polynomial-time algorithm is known for finding a minimum maximal matching, that is, a maximal matching that contains the smallest possible number of edges. A maximal matching with k edges is an edge dominating set with k edges. Conversely, if we are given a minimum edge dominating set with k edges, we can construct a maximal matching with k edges in polynomial time. Therefore, the problem of finding a minimum maximal matching is essentially equal to the problem of finding a minimum edge dominating set. Both of these two optimization problems are known to be NP-hard; the decision versions of these problems are classical examples of NP-complete problems. Both problems can be approximated within factor 2 in polynomial time: simply find an arbitrary maximal matching M.
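A minimal sketch of the simple greedy algorithm mentioned above: scan the edges once, adding each edge whose endpoints are still unmatched. The result is a maximal matching and therefore, by the property derived earlier, at least half the size of a maximum matching. The graph data are illustrative.

```python
def greedy_maximal_matching(edges):
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path a-b-c-d again: scanning in this order picks (a, b) and then (c, d), which is
# a maximum matching; scanning (b, c) first stops at one edge, still within the
# factor-2 guarantee.
print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))  # [('a', 'b'), ('c', 'd')]
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))  # [('b', 'c')]
```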
Counting problems The number of matchings in a graph is known as the Hosoya index of the graph. It is #P-complete to compute this quantity, even for bipartite graphs. It is also #P-complete to count perfect matchings, even in bipartite graphs, because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix. However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings. A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm. The number of perfect matchings in a complete graph Kn (with n even) is given by the double factorial (n − 1)!!. The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by the telephone numbers. The number of perfect matchings in a graph is also known as the hafnian of its adjacency matrix. Finding all maximally matchable edges One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are called maximally matchable edges, or allowed edges). Algorithms for this problem include: For general graphs, a deterministic algorithm in O(VE) time and a randomized algorithm in time Õ(V^2.376). For bipartite graphs, if a single maximum matching is found, a deterministic algorithm runs in O(V + E) time. Online bipartite matching The problem of developing an online algorithm for matching was first considered by Richard M. Karp, Umesh Vazirani, and Vijay Vazirani in 1990. In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of the secretary problem and has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains a competitive ratio of about 0.696. Characterizations Kőnig's theorem states that, in bipartite graphs, the maximum matching is equal in size to the minimum vertex cover. Via this result, the minimum vertex cover, maximum independent set, and maximum vertex biclique problems may be solved in polynomial time for bipartite graphs. Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching, and the Tutte theorem provides a characterization for arbitrary graphs. Applications Matching in general graphs A Kekulé structure of an aromatic compound consists of a perfect matching of its carbon skeleton, showing the locations of double bonds in the chemical structure. These structures are named after Friedrich August Kekulé von Stradonitz, who showed that benzene (in graph theoretical terms, a 6-vertex cycle) can be given such a structure. The Hosoya index is the number of non-empty matchings plus one; it is used in computational chemistry and mathematical chemistry investigations of organic compounds. The Chinese postman problem involves finding a minimum-weight perfect matching as a subproblem. Matching in bipartite graphs The graduation problem is about choosing a minimum set of classes from given requirements for graduation. The Hitchcock transportation problem involves bipartite matching as a sub-problem. The subtree isomorphism problem involves bipartite matching as a sub-problem.
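A minimal sketch illustrating the counting statements above: a brute-force recursive count of perfect matchings (exponential time, as expected for a problem that is #P-hard in general), checked against the (n − 1)!! formula for the complete graph Kn. The function names are illustrative, not from any library.

```python
from itertools import combinations

def count_perfect_matchings(vertices, adj):
    """adj is a set of frozenset({u, v}) edges; count matchings covering all vertices."""
    if not vertices:
        return 1
    v = vertices[0]
    total = 0
    for u in vertices[1:]:
        if frozenset((v, u)) in adj:
            rest = [w for w in vertices if w not in (v, u)]
            total += count_perfect_matchings(rest, adj)
    return total

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

n = 6
verts = list(range(n))
edges = {frozenset(e) for e in combinations(verts, 2)}   # complete graph K_6
print(count_perfect_matchings(verts, edges), double_factorial(n - 1))   # 15 15
```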
Mathematics
Graph theory
null
2174399
https://en.wikipedia.org/wiki/Tehran%20Metro
Tehran Metro
Tehran Metro () is a rapid transit system serving Tehran, the capital of Iran. It is the largest metro system in the Middle East. The system is owned and operated by Tehran Urban and Suburban Railway. It consists of six operational metro lines (and an additional commuter rail line), with construction under way on seven lines, including the northwestern extension of Line 4, the southern extension of Line 6, the northwestern and eastern extensions of Line 7, the eastern extension of Line 2, and Lines 8, 9, and 10. The Tehran Metro carries more than 3 million passengers a day. In 2018, 820 million trips were made on the Tehran Metro. Most of the system's total length is metro-grade rail. It is planned to have eleven lines once all construction is complete by 2040. On all days of the week, the Metro service runs from 04:30 to 22:00. The line uses standard gauge and is mostly underground. The ticket price is 5,300 Iranian Toman for each journey (about US$0.05), regardless of the distance traveled, but using prepaid tickets costs much less. Seniors may travel for free on the metro. On all Tehran metro trains the first and half of the second carriages from each end are reserved for women. Women can still ride other cars freely. History Initial plans for the metro system were laid in the late 1960s but could not be executed until 1982 because of socio-political issues such as the Iranian Revolution and the Iran–Iraq War. In 1970, the Plan and Budget Organization and the Municipality of Tehran announced an international tender for the construction of a metro in Tehran. The French company SOFRETU, affiliated with the state-owned Paris transportation authority RATP, won the tender and in the same year began to conduct preliminary studies on the project. In 1974, a final report with a so-called "street-metro" proposal was tendered. The street-metro proposal recommended a road network with a loop expressway in the central area, two highways for new urban areas, and an 8-line metro network complemented by a bus network and taxi services. Geological surveys commenced in 1976. In 1978, construction on the line was started in northern Tehran by the French company; however, this development was short-lived with the advent of the Iranian Revolution and the Iran–Iraq War in 1979 and 1980 respectively. SOFRETU ceased operations in Iran in December 1980. On March 3, 1982, the Iranian Cabinet ministers formally announced the halt of the French company's work on the Tehran Metro. In 1985, the "Tehran Metro Execution Plan" was re-approved by the Majles, the Iranian Parliament, on the basis of the "Amendment of the Law of Establishment of the Tehran Urban and Suburban Railway Company", the company having been founded in Farvardin 1364 (April 1985). This was a literal continuation of exactly the same project that had been laid out before the revolution. Work proceeded slowly because of the continuing Iran–Iraq War and often ground to a halt. By the summer of 1985, urban pressure from the rapidly urbanising population and the lack of a developed public transport system prompted the work to be resumed in earnest. "Line 1" (from Blvd. Shahid Ayatollah Haghani to the City of Rey) and its extension to Behesht-e-Zahra Cemetery was made a priority. "Line 2" (from Dardasht in the Tehran Pars district to Sadeghiyeh Second Square) and an extension towards the City of Karaj and the Mehrshahr district was made a secondary priority. Studies were also made to establish the previously designed Lines 3 & 4.
It was decided that an organisation by the name of the Metro Company should be established in order to handle the future development of the system. The Metro Company was then managed by Asghar Ebrahimi Asl for eleven years. During that time, hundreds of millions of dollars were spent on the system, and the Metro Company was given government concessions for the exploitation of iron ore mines in Bandar Abbas (Hormuzgan Province), the exploitation and sale of the Moghan diatomite mine in the Iranian region of Azarbaijan, and the export of refinery residues from the Isfahan oil refinery as well as tar from the Isfahan steel mill. The year after Asghar Ebrahimi Asl left the management of the Metro Company and Mohsen Hashemi succeeded him, the first line of the Tehran Metro was launched between Tehran and Karaj. On 7 March 1999, an overland Tehran–Karaj express electric train started a limited service between Azadi Square (Tehran) and Malard (Karaj) that called at one intermediate station at Vardavard. Line 5 of the Tehran metro began operating in 1999. The first metro line in Iran, it was constructed by the Chinese company NORINCO. From 2000 onwards, commercial operation began on Lines 1 and 2. The wagons on these lines are provided by CRV via CNTIC. The railway tracks and points on these lines are provided by the Austrian company Voestalpine. The Metro uses equipment manufactured by a wide range of international companies: double-deck passenger cars for the Tehran–Karaj regional line are supplied by CRV (although some trains are from SEGC) via CNTIC and assembled by the Wagon Pars factory in Arak. Approximately $2 billion has been spent on the Metro project. The Tehran Metro transports about 2.5 million passengers daily through its 7 operational lines (Lines 1, 2, 3, 4, 5, 7, and 8). It also has one additional line under construction (Line 6) and another two lines in the engineering phase. Eighty new wagons were added to the system in September 2012 to ease transportation and reduce rush-hour congestion. Iran is able to produce the wagons and trains it needs independently. A branch line of Line 4 began running to Mehrabad International Airport on 15 March 2016. An express line to Imam Khomeini International Airport was opened in August 2017. Amid rising COVID-19 cases in Iran, Tehran Metro made wearing masks a requirement to enter the metro network at any station. Law enforcement officers located in every station were ordered to prevent passengers from entering without masks; such passengers would be directed to purchase masks from mask-selling desks located at every metro station. Lines Line 1 Line 1, coloured red on system maps, runs underground from Tajrish station to the Shoush–Khayyam crossing, with the rest of the route at surface level. Of the stations along this line, 23 are located underground and 8 above ground. The total capacity of Line 1 is 650,000 passengers per day, with trains stopping at each station for 20 seconds. The trains are each made up of seven wagons, with a nominal capacity of 1,300 seated and standing passengers. The maximum speed of the trains is tempered to a lower average speed by stoppages at stations along the route. Line 1 runs mostly north–south. A three-station extension of the line from Mirdamad station to Qolhak station opened on May 20, 2009. The four-station second phase of this extension, from Qolhak station to Tajrish Square, was completed in 2011.
Construction was to be completed by March 2007 but faced major issues due to large boulders and bedrock in part of the tunnels, as well as water drainage problems. It has also faced major financing issues, as the government has refused to release funds earmarked for the project to the municipality. Since August 2017, one of Line 1's stations, Darvazeh Dowlat, has been open 24 hours a day, in order to accommodate passengers traveling to and from Imam Khomeini Airport via Line 1. Line 1 connects Tehran to Imam Khomeini International Airport. Its first phase, to Shahr-e-Aftab station, opened in 2016, and the airport station opened in August 2017. It is the only metro line in Tehran that is completely open 24 hours a day (even if the overnight frequency is only one train every 80 minutes), in order to accommodate passengers from late-night and early-morning flights (Line 1's Darvazeh Dowlat station is the only other metro station with that classification). A third phase, which is currently operational, extends Line 1 to the satellite city of Parand and increases the total length of the line. Its operating speeds classify it as an express subway line, the first of its kind on the Tehran Metro. Line 2 This line opened between Sadeghieh and Imam Khomeini in February 2000. Line 2 includes both underground and elevated sections. There are 22 stations along the line, of which Imam Khomeini station is shared with Line 1. Line 2 is coloured blue on system maps and runs mostly east–west through the city. The line was extended from Imam Khomeini to Baharestan Metro Station in 2004, and to Shahid Madani, Sarsabz and Elm-o-Sanat University in March 2006, with the intermediate stations, Darvazeh Shemiran and Sabalan, opening in July 2006. It was extended further from Elm-o-Sanat University to Tehran Pars in February 2009, and to Farhangsara in June 2010. The extension phase to the new east terminal is under construction. Line 3 Line 3 travels from northeast to southwest. Line 3 is one of the most important lines, as it connects southwest Tehran to the northeast, crosses busy parts of the capital, and can help to alleviate traffic problems. A first section of Line 3 became operational in December 2012, followed by another in April 2014, and finally the last section of the line, which opened on September 22, 2015, increasing the length of the line and bringing it to 25 stations. Line 4 The line has 23 stations and connects the western part of Tehran to the eastern part. This line initially ran from Ekbatan (western Tehran) to Kolahdooz (eastern Tehran). The construction of a western extension to Line 4, connecting Ekbatan to Chaharbagh Sq., began in 2012. This extension will include 3 stations. A sub-line of this line connects Bimeh station to Mehrabad Airport. This sub-line has 3 stations, at Bimeh, Terminal 1&2 and Terminal 4&6. Section 1, from Ferdowsi Square to Darvazeh Shemiran, opened in April 2008. Section 2, from Darvazeh Shemiran to Shohada Square, opened in February 2009. On May 24, 2009, Section 3, from Ferdowsi Square to Enghelab Square, opened. On July 23, 2012, two more stations were inaugurated, connecting Line 4 with Line 5. Currently there are 23 stations in operation on Line 4, coloured yellow on the system maps. Line 5 Line 5 is coloured green on system maps; it is a commuter rail line with 13 stations. It enters the area of Karaj, with main stations at Karaj, Golshahr and Hashtgerd.
It connects with the western end of Line 2 at Tehran (Sadeghiyeh) station, and with the western end of Line 4 at Eram-e Sabz Metro Station. Line 6 Line 6 is coloured pink on system maps. An initial section between Shohada Square and Dowlat Abad opened on April 7, 2019. The line currently has 13 stations. When completed, it will have 31 stations, connecting southeast Tehran to the northwest. A tunnel boring machine (TBM) is used to construct the tunnel. The TBM uses the earth pressure balance method to pass safely beneath urban areas without considerable settlement. Line 7 This line, like Line 6 and in contrast with Line 3, runs from northwest to southeast and was constructed with modern TBM machines. Its first phase, comprising 7 stations, opened in June 2017. The line currently has 20 stations. Future plans There are several plans to expand Tehran's metro network substantially. Some plans only concern additional infill stations, like Vavan on Line 1 in the south or Aghdasiyeh on Line 3 in the north. Some extensions and completely new lines are under construction, while other extensions or new lines remain proposals for the moment. Under construction Line 3 (formerly named the Eslamshahr line) In the south, Line 3 will continue from the terminus Azadegan with five new stations to Eslamshahr. Originally, the plan was to build a commuter rail link like Line 5 with a new interchange platform at Azadegan under the name "Eslamshahr Line". But by the time construction began in 2016, the plans had been changed into a transfer-free extension of the existing route. The opening is scheduled for 2025. Line 6 A Line 6 extension is under way in the northwest, where three new stations are being built, and at the east end, where one additional station is under construction. Line 7 An extension of one station from each recent terminus, in the north and in the southeast, is under construction. Line 10 The completely new Line 10, coloured dark blue on the system map, with 35 stations, will run along a west–east corridor from Vardavard metro station on Line 5 in the west of Tehran towards the area of the Kosar aqueduct in the east, with an interchange to the extended Line 4. Construction started in September 2020. Further plans Line 8 Line 8 of Tehran's Metro, coloured brown on the system map, is a planned circular line surrounding the city centre, running from Fadak station (Line 2) in the north, over the west, and ending in the southeastern borough of Shahrak-e-Valfajr. It might have 34 stations, 21 of them newly built, while the others will be expansions of existing stations into interchange stations with other lines. Line 9 The planned Line 9 of the metro network, coloured gold on the system map, is another circular line, starting further west at Line 5 station Chitgar, passing the city centre in the north, turning south and ending at Line 6 station Dowlat Abad. It might have 39 stations altogether, 27 of them newly constructed, while the others will be expansions of existing stations to become interchanges with other lines. Line 11 Line 11, coloured light green on the system map, is another planned tangent line, starting from Chitgar station on Line 5, connecting the southern parts of Tehran, and ending in the southeast in the borough of Eslam Abad. It might have 18 stations, most of them newly built, with just five being expansions of existing stations into interchanges with other lines. LRT Lines Three LRT (tram) lines are proposed alongside the Metro lines.
Express Commuter Railway Three other commuter rail lines are planned in addition to Line 5 (the Tehran–Karaj–Hashtgerd commuter rail), bringing the total number of commuter rail lines to four. Interchange stations 1- Darvazeh Shemiran; Lines 2 & 4 2- Shahid Beheshti; Lines 1 & 3 3- Darvazeh Dowlat; Lines 1 & 4 4- Imam Khomeini; Lines 1 & 2 5- Theatr-e Shahr; Lines 3 & 4 6- Shademan; Lines 2 & 4 7- (Tehran) Sadeghiyeh; Lines 2 & 5 8- Eram-e Sabz; Lines 4 & 5 9- Shahid Navvab-e Safavi; Lines 2 & 7 10- Mahdiyeh; Lines 3 & 7 11- Meydan-e Shohada; Lines 4 & 6 12- Meydan-e Mohammadiyeh; Lines 1 & 7 13- Imam Hossein; Lines 2 & 6 14- Daneshgah-e Tarbiat Modares; Lines 6 & 7 15- Towhid; Lines 4 & 7 16- Shohada-ye Haftom-e Tir; Lines 1 & 6 17- Meydan-e Vali Asr; Lines 3 & 6 18- Shohada-ye Hefdah-e Shahrivar; Lines 6 & 7 (under construction on Line 6, operational on Line 7) 19- Daneshgah-e Emam Ali; Lines 2 & 3 (operational on Line 2, planned on Line 3) 20- Ayatollah Kashani; Lines 4 & 6 (under construction on Line 4, operational on Line 6) 21- Shahr-e-Rey; Lines 1 & 6 (operational on Line 1, under construction on Line 6) Network map Safety All routes have been equipped with automatic train protection (ATP), automatic train stop (ATS), centralized traffic control (CTC), and SCADA. More and more residents use the metro due to the improvement in peak-hour headways, the opening of more stations, and overall improvements such as new escalators, elevators, and air-conditioning in the trains. On 18 July 2007, a twenty-square-metre area immediately adjacent to the entrance of the Toupkhaneh metro station caved in. There were no casualties, but the station had to undergo numerous repairs. On 15 April 2012, the safety walls of the Mianrood River broke due to heavy rain in Tehran, and consequently 300,000 cubic metres of water entered the metro tunnel of Line 4. The two nearest stations were still under construction, so Metro operators had enough time to evacuate passengers from the other stations. Nobody was killed, but the water depth in Habib-o-llah station, the deepest station on Line 4, was estimated to be nearly 18 metres. It took nearly two weeks to reopen the flooded stations that had previously been in operation. Complaints The Cultural Heritage Organization of Iran has complained that the vibrations caused by the Metro were having a significant and highly adverse effect on the Masoudieh Palace in the Baharestan neighbourhood of central Tehran. The Cultural Heritage Organisation has also complained about vibrations near other historic sites such as the Golestan Palace and the National Museum of Iran. Tickets Regular single-trip tickets This ticket allows a single ride on the subway and costs 12,000 Rials; a round trip requires two single tickets. Suburban single-trip tickets This is the ticket for the section of Line 5 between Karaj station and Sadeghieh station. It costs 12,000 Rials. International airport single ticket This ticket is used for the subway line to Imam Khomeini Airport and costs 90,000 Rials. Electronic ticket A rechargeable card that allows any number of trips. Each of these e-cards costs 30,000 or 50,000 Rials and can be charged with up to 500,000 Rials after purchase. E-cards can be charged at various booths and wall-mounted electronic charging devices at bus and subway stations, either by cash or by bank card, as well as through non-attendance methods such as the My Tehran app.
Technology
Asia_2
null
14469976
https://en.wikipedia.org/wiki/Microbial%20cyst
Microbial cyst
A microbial cyst is a resting or dormant stage of a microorganism that can be thought of as a state of suspended animation in which the metabolic processes of the cell are slowed and the cell ceases all activities like feeding and locomotion. Many groups of single-celled, microscopic organisms, or microbes, possess the ability to enter this dormant state. Encystment, the process of cyst formation, can function as a method for dispersal and as a way for an organism to survive in unfavorable environmental conditions. These two functions can be combined when a microbe needs to be able to survive harsh conditions between habitable environments (such as between hosts) in order to disperse. Cysts can also be sites for nuclear reorganization and cell division, and in parasitic species they are often the infectious stage between hosts. When the encysted microbe reaches an environment favorable to its growth and survival, the cyst wall breaks down by a process known as excystation. Environmental conditions that may trigger encystment include, but are not limited to: lack of nutrients or oxygen, extreme temperatures, desiccation, adverse pH, and the presence of toxic chemicals that are not conducive to the growth of the microbe. History and terminology The idea that microbes could temporarily assume an alternate state of being to withstand changes in environmental conditions began with Antonie van Leeuwenhoek's 1702 study on animalcules, currently known as rotifers: "I have often placed the Animalcules I have before described out of the water, not leaving the quantity of a grain of sand adjoining to them, in order to see whether when all the water about them was evaporated and they were exposed to the air their bodies would burst, as I had often seen in other Animalcules. But now I found that when almost all the water was evaporated, so that the creature could no longer be covered with water, nor move itself as usual, it then contracted itself into an oval figure, and in that state it remained, nor could I perceive that the moisture evaporated from its body, for it preserved its oval and round shape, unhurt." Leeuwenhoek later continued his work with rotifers to discover that when he returned the dried bodies to their preferred aquatic conditions, they resumed their original shape and began swimming again. These observations did not gain traction with the general microbiological community of the time, and the phenomenon as Leeuwenhoek observed it was never given a name. In 1743, John Turberville Needham observed the revival of the encysted larval stage of the wheat parasite Anguillulina tritici, and later published these findings in New Microscopal Discoveries (1745). Several others repeated and expanded upon this work, informally referring to their studies on the "phenomenon of reviviscence." In the late 1850s, reviviscence became embroiled in the debate surrounding the theory of spontaneous generation of life, leading two highly involved scientists on either side of the issue to call upon the Biological Society of France for an independent review of their opposing conclusions on the matter. Doyere, who believed rotifers could be desiccated and revitalized, and Pouchet, who believed they could not, allowed independent observers of various scientific backgrounds to observe and attempt to replicate their findings.
The resulting report leaned toward the arguments made by Pouchet, with notable dissension from the main author, who blamed his framing of the issue in the report on fear of religious retribution. Despite the attempt by Doyere and Pouchet to conclude debate on the topic of resurrection, investigations continued. In 1872, Wilhelm Preyer introduced the term 'anabiosis' (return to life) to describe the revitalization of viable, lifeless organisms to an active state. This was followed by Schmidt's 1948 proposal of the term 'abiosis,' leading to some confusion between terms describing the beginning of life from non-living elements, viable lifelessness, and nonliving components that are necessary for life. As part of his 1959 review of Leeuwenhoek's original findings and the evolution of the science surrounding microbial cysts and other forms of metabolic suspension, D. Keilin proposed the term 'cryptobiosis' (latent life) to describe: "...the state of an organism when it shows no visible signs of life and when its metabolic activity becomes hardly measurable, or comes reversibly to a standstill." As microbial research rapidly gained popularity, details about ciliated protist physiology and cyst formation led to increased curiosity about the role of encystment in the life cycle of ciliates and other microbes. The realization that no one category of microscopic organism 'owns' the ability to form metabolically dormant cysts necessitated the term 'microbial cyst' to describe the physical object as it exists in all its forms. Also important in the generation of the term is the delineation of endospores and microbial cysts as different forms of cryptobiosis or dormancy. Endospores exhibit more extreme isolation from their environment in terms of cell wall thickness, impermeability to substrates, and the presence of dipicolinic acid, a compound known to confer resistance to extreme heat. Microbial cysts have been likened to modified vegetative cells with the addition of a specialized capsule. Importantly, encystment is a process observed to precede cell division, while the formation of an endospore involves non-reproductive cellular division. The study of the encystment process was mostly confined to the 1970s and '80s, resulting in a lack of understanding of genetic mechanisms and additional defining characteristics, though cysts are generally thought to follow a different formation sequence than endospores. Formation and composition of the cyst wall Indicators of cyst formation in ciliated protists include varying degrees of ciliature resorption, with some ciliates losing both cilia and the membranous structures supporting them while others maintain kinetosomes and/or microtubular structures. De novo synthesis of cyst wall precursors in the endoplasmic reticulum also frequently indicates that a ciliate is undergoing encystment. The composition of the cyst wall varies between organisms. The cyst walls of bacteria are formed by the thickening of the normal cell wall with added peptidoglycan layers. The walls of protozoan cysts are made of chitin, a type of glycopolymer. The cyst wall of some ciliated protists is composed of four layers: the ectocyst, mesocyst, endocyst, and granular layer. The ectocyst is the outer layer and contains a plug-like structure through which the vegetative cell reemerges during excystation. Interior to the ectocyst, the thick mesocyst is compact yet stratified in density.
Chitinase treatments indicate the presence of chitin in the mesocyst of some ciliate species, but this compositional characteristic appears to be highly heterogeneous. The thin endocyst, interior to the mesocyst, is less dense than the ectocyst and is believed to be composed of proteins. The innermost granular layer lies directly outside the pellicle and is composed of de novo synthesized precursors of granular material. Cyst formation across species In bacteria In bacteria (for instance, Azotobacter sp.), encystment occurs through changes in the cell wall; the cytoplasm contracts and the cell wall thickens. Various members of the family Azotobacteraceae have been shown to survive in an encysted form for up to 24 years. The extremophile Rhodospirillum centenum, an anoxygenic, photosynthetic, nitrogen-fixing bacterium that grows in hot springs, was found to form cysts in response to desiccation as well. Bacteria do not always form a single cyst; a variety of cyst formation patterns is known. Rhodospirillum centenum can change the number of cells per cyst, usually ranging from four to ten cells per cyst depending on the environment. Some species of filamentous cyanobacteria are known to form heterocysts to escape levels of oxygen concentration detrimental to their nitrogen-fixing processes. This process is distinct from other types of microbial cysts in that heterocysts are often produced in a repeating pattern within a filament composed of several vegetative cells, and once formed, heterocysts cannot return to a vegetative state. In protists Protists, especially protozoan parasites, are often exposed to very harsh conditions at various stages in their life cycle. For example, Entamoeba histolytica, a common intestinal parasite that causes dysentery, has to endure the highly acidic environment of the stomach before it reaches the intestine, and various unpredictable conditions like desiccation and lack of nutrients while it is outside the host. An encysted form is well suited to survive such extreme conditions, although protozoan cysts are less resistant to adverse conditions than bacterial cysts. Cytoplasmic dehydration, high autophagic activity, nuclear condensation, and a decrease in cell volume are all indicators of encystment initiation in ciliated protists. In addition to survival, the chemical composition of certain protozoan cyst walls may play a role in their dispersal. The sialyl groups present in the cyst wall of Entamoeba histolytica confer a net negative charge on the cyst, which prevents its attachment to the intestinal wall, thus causing its elimination in the feces. Other protozoan intestinal parasites like Giardia lamblia and Cryptosporidium also produce cysts as part of their life cycle (see oocyst). Due to the hard outer shell of the cyst, Cryptosporidium and Giardia are resistant to common disinfectants used by water treatment facilities, such as chlorine. In some protozoans, the unicellular organism multiplies during or after encystment and releases multiple trophozoites upon excystation. Many additional species of protists have been shown to exhibit encystment when confronted with unfavorable environmental conditions. In rotifers Rotifers also produce diapause cysts, which differ from quiescent (environmentally triggered) cysts in that the process of their formation begins before environmental conditions have deteriorated to unfavorable levels, and the dormant state may extend past the restoration of ideal conditions for microbial life.
Food-limited females of some Synchaeta pectinata strains produce unfertilized diapausing eggs with a thicker shell. Fertilized diapausing eggs can be produced in both food-limited and non-food-limited conditions, indicative of a bet-hedging mechanism for food availability, or perhaps an adaptation to variation in food levels throughout a growing season. Pathology While the cyst itself is not pathogenic, the formation of a cyst is what gives Giardia its primary tool of survival and its ability to spread from host to host. Ingestion of contaminated water, food, or fecal matter gives rise to the most commonly diagnosed intestinal disease, giardiasis. Whereas it was previously believed that encystment only served a purpose for the organism itself, it has been found that protozoan cysts have a harboring effect: common pathogenic bacteria can be found taking refuge in the cysts of free-living protozoa. Survival times for bacteria in these cysts range from a few days to a few months in harsh environments. Not all bacteria are guaranteed to survive within the cyst of a protozoan; many species of bacteria are digested by the protozoan as it undergoes cystic growth.
Biology and health sciences
Biological reproduction
null
13309470
https://en.wikipedia.org/wiki/Hawaii%20hotspot
Hawaii hotspot
The Hawaii hotspot is a volcanic hotspot located near the namesake Hawaiian Islands, in the northern Pacific Ocean. One of the best known and most intensively studied hotspots in the world, the Hawaii plume is responsible for the creation of the Hawaiian–Emperor seamount chain, a mostly undersea volcanic mountain range. Four of these volcanoes are active, two are dormant; more than 123 are extinct, most now preserved as atolls or seamounts. The chain extends from south of the island of Hawaii to the edge of the Aleutian Trench, near the eastern coast of Russia. While some volcanoes are created by geologic processes near tectonic plate convergence and subduction zones, the Hawaii hotspot is located far from plate boundaries. The classic hotspot theory, first proposed in 1963 by John Tuzo Wilson, holds that a single, fixed mantle plume builds volcanoes that are then cut off from their source by the movement of the Pacific Plate; starved of lava, they become inactive and eventually erode below sea level over millions of years. According to this theory, the nearly 60° bend where the Emperor and Hawaiian segments of the chain meet was caused by a shift in the movement of the Pacific Plate. Studies of tectonic movement have shown that several plates have changed their direction of motion because of differential subduction rates, the breaking off of subducting slabs, and drag forces. In 2003, new investigations of this irregularity led to the proposal of a mobile hotspot hypothesis, suggesting that hotspots are prone to movement rather than fixed in place, and that the 47-million-year-old bend was caused by a shift in the hotspot's motion rather than the plate's. According to this 2003 study, this could occur through plume drag taking parts of the plume in the direction of plate movement while the main plume remained stationary. Many other hotspot tracks move almost in parallel, so current thinking is that a combination of these ideas applies. Ancient Hawaiians were the first to recognize the increasing age and weathered state of the volcanoes to the north as they progressed on fishing expeditions along the islands. The volatile state of the Hawaiian volcanoes and their constant battle with the sea was a major element in Hawaiian mythology, embodied in Pele, the deity of volcanoes. After the arrival of Europeans on the islands, James Dwight Dana directed the first formal geological study of the hotspot's volcanics in 1880–1881, confirming the relationship long observed by the natives. The Hawaiian Volcano Observatory was founded in 1912 by volcanologist Thomas Jaggar, initiating continuous scientific observation of the islands. In the 1970s, a mapping project was initiated to gain more information about the complex geology of Hawaii's seafloor. The hotspot has since been imaged tomographically, and olivine- and garnet-based studies have been used to estimate the temperature of its magma chamber. Over its at least 85 million years of activity the hotspot has produced a vast volume of rock. The chain's rate of drift has slowly increased over time, causing the amount of time each individual volcano is active to decrease, from 18 million years for the 76-million-year-old Detroit Seamount to just under 900,000 years for the one-million-year-old Kohala; eruptive volume, on the other hand, has increased over time.
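A back-of-the-envelope sketch (illustrative, not from the article) of how the chain's age progression records plate motion: dividing a volcano's distance from the hotspot by the age of its rock gives an average plate speed. The 5.5-million-year age for Kauai is quoted later in the text; the roughly 500 km distance from Kīlauea is an assumed round figure used only for illustration.

```python
def plate_speed_cm_per_year(distance_km, age_million_years):
    # convert km to cm and million years to years, then divide
    return (distance_km * 1e5) / (age_million_years * 1e6)

# Assumed values: ~500 km Kauai-Kilauea separation, 5.5 Myr old Kauai rock
print(round(plate_speed_cm_per_year(500, 5.5), 1), "cm/yr")   # roughly 9 cm/yr
```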
Overall, this has caused a trend towards more active but quickly silenced, closely spaced volcanoes: whereas volcanoes on the near side of the hotspot overlap each other (forming such superstructures as Hawaii Island and the ancient Maui Nui), the oldest of the Emperor seamounts are spaced far apart. Theories Tectonic plates generally focus deformation and volcanism at plate boundaries. However, the Hawaii hotspot is far from the nearest plate boundary; while studying it in 1963, Canadian geophysicist J. Tuzo Wilson proposed the hotspot theory to explain these zones of volcanism so far from regular conditions, a theory that has since come into wide acceptance. Wilson's stationary hotspot theory Wilson proposed that mantle convection produces small, hot, buoyant upwellings under the Earth's surface; these thermally active mantle plumes supply magma which in turn sustains long-lasting volcanic activity. This "mid-plate" volcanism builds peaks that rise from relatively featureless sea floor, initially as seamounts and later as fully fledged volcanic islands. The local tectonic plate (in the case of the Hawaii hotspot, the Pacific Plate) gradually passes over the hotspot, carrying its volcanoes with it without affecting the plume. Over hundreds of thousands of years, the magma supply for an individual volcano is slowly cut off, eventually causing its extinction. No longer active enough to overpower erosion, the volcano slowly recedes beneath the waves, becoming a seamount once again. As the cycle continues, a new volcanic center pierces the crust, and a volcanic island arises anew. The process continues until the mantle plume itself collapses. This cycle of growth and dormancy strings together volcanoes over millions of years, leaving a trail of volcanic islands and seamounts across the ocean floor. According to Wilson's theory, the Hawaiian volcanoes should be progressively older and increasingly eroded the further they are from the hotspot, and this is easily observable; the oldest rock in the main Hawaiian islands, that of Kauai, is about 5.5 million years old and deeply eroded, while the rock on Hawaii Island is a comparatively young 0.7 million years of age or less, with new lava constantly erupting at Kīlauea, the hotspot's present center. Another consequence of his theory is that the chain's length and orientation serve to record the direction and speed of the Pacific Plate's movement. A major feature of the Hawaiian trail is a "sudden" 60-degree bend at a 40- to 50-million-year-old section of its length, and according to Wilson's theory, this is evidence of a major change in plate direction, one that would have initiated subduction along much of the Pacific Plate's western boundary. This part of the theory has recently been challenged, and the bend might be attributed to the movement of the hotspot itself. Geophysicists believe that hotspots originate at one of two major boundaries deep in the Earth: either a shallow interface in the lower mantle between an upper mantle convecting layer and a lower non-convecting layer, or the deeper D'' ("D double-prime") layer, immediately above the core–mantle boundary. Melt from a mantle plume is generated through partial melting of mantle material, a reduction in melting point through the addition of volatiles by subduction of hydrated slabs, and a decrease in pressure due to erosional processes.
This heated, buoyant, and less-viscous portion of the upper layer would become less dense due to thermal expansion and rise towards the surface as a Rayleigh–Taylor instability. When the mantle plume reaches the base of the lithosphere, the plume heats it and produces melt. This magma then makes its way to the surface, where it is erupted as lava. Arguments for the validity of the hotspot theory generally center on the steady age progression of the Hawaiian islands and nearby features: a similar bend in the trail of the Macdonald hotspot, the Austral–Marshall Islands seamount chain, located just south; other Pacific hotspots following the same age-progressed trend from southeast to northwest in fixed relative positions; and seismologic studies of Hawaii which show increased temperatures at the core–mantle boundary, providing further evidence for a mantle plume. Shallow hotspot hypothesis Another hypothesis is that melting anomalies form as a result of lithospheric extension, which allows pre-existing melt to rise to the surface. These melting anomalies are normally called "hotspots", but under the shallow-source hypothesis the mantle underlying them is not anomalously hot. In the case of the Hawaiian–Emperor seamount chain, the Pacific plate boundary system was very different around 80 Mya, when the Emperor seamount chain began to form. There is evidence that the chain started on a spreading ridge (the Pacific–Kula Ridge) that has now been subducted at the Aleutian trench. The locus of melt extraction may have migrated off the ridge and into the plate interior, leaving a trail of volcanism behind it. This migration may have occurred because this part of the plate was extending in order to accommodate intraplate stress. Thus, a long-lived region of melt escape could have been sustained. Supporters of this hypothesis argue that the wavespeed anomalies seen in seismic tomographic studies cannot be reliably interpreted as hot upwellings originating in the lower mantle. Moving hotspot theory The most heavily challenged element of Wilson's theory is whether hotspots are indeed fixed relative to the overlying tectonic plates. Drill samples, collected by scientists as far back as 1963, suggest that the hotspot may have drifted over time at a relatively rapid pace during the late Cretaceous and early Paleogene (81–47 Mya); for comparison, the Mid-Atlantic Ridge spreads at only a few centimetres per year. In 1987, a study published by Peter Molnar and Joann Stock found that the hotspot does move relative to the Pacific Ocean; however, they interpreted this as the result of the relative motions of the North American and Pacific plates rather than that of the hotspot itself. In 2021, researchers proposed a three-stage Hawaii hotspot model. The first stage involves ridge–plume interaction, in which the Hawaii hotspot fed either the Izanagi–Pacific or the Kula–Pacific ridge. This period involved the creation of young oceanic crust and the formation of the Meiji and Detroit seamounts. The second stage involved the mutual movements of the Pacific plate and the Hawaii hotspot. It is possible, as supported by gravitational modelling, that during this period the Hawaii hotspot drifted about 4–9 degrees to the south, in contrast to the northward Pacific Plate movement. The third stage involves continued movement of the Pacific plate, with stagnation of the Hawaii hotspot.
In 2001 the Ocean Drilling Program (since merged into the Integrated Ocean Drilling Program), an international research effort to study the world's seafloors, funded a two-month expedition aboard the research vessel JOIDES Resolution to collect lava samples from four submerged Emperor seamounts. The project drilled the Detroit, Nintoku, and Koko seamounts, all of which are in the far northwest end of the chain, the oldest section. These lava samples were then tested in 2003, suggesting a mobile Hawaiian hotspot and a shift in its motion as the cause of the bend. Lead scientist John Tarduno told National Geographic: "The Hawaii bend was used as a classic example of how a large plate can change motion quickly. You can find a diagram of the Hawaii–Emperor bend entered into just about every introductory geological textbook out there. It really is something that catches your eye." Despite the large shift, the change in direction was never recorded by magnetic declinations, fracture zone orientations or plate reconstructions; nor could a continental collision have occurred fast enough to produce such a pronounced bend in the chain. To test whether the bend was a result of a change in direction of the Pacific Plate, scientists analyzed the lava samples' geochemistry to determine where and when they formed. Age was determined by the radiometric dating of radioactive isotopes of potassium and argon. Researchers estimated that the volcanoes formed during a period 81 million to 45 million years ago. Tarduno and his team determined where the volcanoes formed by analyzing the rock for the magnetic mineral magnetite. As hot lava from a volcanic eruption cools, tiny grains within the magnetite align with the Earth's magnetic field and lock in place once the rock solidifies. Researchers were able to verify the latitudes at which the volcanoes formed by measuring the grains' orientation within the magnetite. Paleomagnetists concluded that the Hawaiian hotspot had drifted southward sometime in its history, and that, 47 million years ago, the hotspot's southward motion greatly slowed, perhaps even stopping entirely. History of study Ancient Hawaiians The possibility that the Hawaiian Islands became older as one moved to the northwest was suspected by ancient Hawaiians long before Europeans arrived. During their voyages, seafaring Hawaiians noticed differences in erosion, soil formation, and vegetation, allowing them to deduce that the islands to the northwest (Niihau and Kauai) were older than those to the southeast (Maui and Hawaii). The idea was handed down the generations through the legend of Pele, the Hawaiian goddess of volcanoes. Pele was born to the female spirit Haumea, or Hina, who, like all Hawaiian gods and goddesses, descended from the supreme beings, Papa, or Earth Mother, and Wakea, or Sky Father. According to the myth, Pele originally lived on Kauai, until her older sister Nāmaka, the Goddess of the Sea, attacked her for seducing her husband. Pele fled southeast to the island of Oahu. When forced by Nāmaka to flee again, Pele moved southeast to Maui and finally to Hawaii, where she still lives in Halemaʻumaʻu at the summit of Kīlauea. There she was safe, because the slopes of the volcano are so high that even Nāmaka's mighty waves could not reach her. Pele's mythical flight, which alludes to an eternal struggle between volcanic islands and ocean waves, is consistent with geologic evidence that the ages of the islands decrease to the southeast.
Modern studies Three of the earliest recorded observers of the volcanoes were the Scottish scientists Archibald Menzies in 1794, James Macrae in 1825, and David Douglas in 1834. Just reaching the summits proved daunting: Menzies took three attempts to ascend Mauna Loa, and Douglas died on the slopes of Mauna Kea. The United States Exploring Expedition spent several months studying the islands in 1840–1841. American geologist James Dwight Dana was on that expedition, as was Lieutenant Charles Wilkes, who spent most of the time leading a team of hundreds that hauled a Kater's pendulum to the summit of Mauna Loa to measure gravity. Dana stayed with missionary Titus Coan, who would provide decades of first-hand observations. Dana published a short paper in 1852. Dana remained interested in the origin of the Hawaiian Islands, and directed a more in-depth study in 1880 and 1881. He confirmed that the islands' age increased with their distance from the southeasternmost island by observing differences in their degree of erosion. He also suggested that many other island chains in the Pacific showed a similar general increase in age from southeast to northwest. Dana concluded that the Hawaiian chain consisted of two volcanic strands, located along distinct but parallel curving pathways. He coined the terms "Loa" and "Kea" for the two prominent trends. The Kea trend includes the volcanoes of Kīlauea, Mauna Kea, Kohala, Haleakalā, and West Maui. The Loa trend includes Lōihi, Mauna Loa, Hualālai, Kahoolawe, Lānai, and West Molokai. Dana proposed that the alignment of the Hawaiian Islands reflected localized volcanic activity along a major fissure zone. Dana's "great fissure" theory served as the working hypothesis for subsequent studies until the mid-20th century. Dana's work was followed up by the 1884 expedition of geologist C. E. Dutton, who refined and expanded Dana's ideas. Most notably, Dutton established that the island of Hawaii actually harbored five volcanoes, whereas Dana had counted three. This is because Dana had originally regarded Kīlauea as a flank vent of Mauna Loa, and Kohala as part of Mauna Kea. Dutton also refined some of Dana's other observations, and is credited with the naming of 'a'ā and pāhoehoe-type lavas, although Dana had also noted the distinction. Stimulated by Dutton's expedition, Dana returned in 1887, and published many accounts of his expedition in the American Journal of Science. In 1890 he published the most detailed manuscript of its day, which remained the definitive guide to Hawaiian volcanism for decades. In 1909 two major books about Hawaii's volcanoes were published ("The volcanoes of Kilauea and Mauna Loa" by W.T. Brigham and "Hawaii and its volcanoes" by C.H. Hitchcock). In 1912 geologist Thomas Jaggar founded the Hawaiian Volcano Observatory. The facility was taken over in 1919 by the National Oceanic and Atmospheric Administration and in 1924 by the United States Geological Survey (USGS), which marked the start of continuous volcano observation on Hawaii Island. The next century was a period of thorough investigation, marked by contributions from many top scientists. The first complete evolutionary model was formulated in 1946 by USGS geologist and hydrologist Harold T. Stearns. Since that time, advances (e.g. improved rock dating methods and the recognition of submarine volcanic stages) have enabled the study of previously limited areas of observation. In the 1970s, the Hawaiian seafloor was mapped using ship-based sonar.
Computed SYNBAPS (Synthetic Bathymetric Profiling System) data filled gaps between the ship-based sonar bathymetric measurements. From 1994 to 1998 the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) mapped Hawaii in detail and studied its ocean floor, making it one of the world's best-studied marine features. The JAMSTEC project, a collaboration with the USGS and other agencies, employed manned submersibles, remotely operated underwater vehicles, dredge samples, and core samples. The Simrad EM300 multibeam side-scanning sonar system collected bathymetry and backscatter data. Characteristics Position The Hawaii hotspot has been imaged through seismic tomography, which has been used to estimate its width. Tomographic images show a thin low-velocity zone extending down from the hotspot and connecting with a large low-velocity zone that extends to the core–mantle boundary. These low seismic velocity zones often indicate hotter and more buoyant mantle material, consistent with a plume originating in the lower mantle and a pond of plume material in the upper mantle. The low-velocity zone associated with the source of the plume is north of Hawaii, showing that the plume is tilted to a certain degree, deflected toward the south by mantle flow. Uranium decay-series disequilibrium data have constrained the width of the actively flowing region of the melt zone at its base and at the upper mantle upwelling, consistent with tomographic measurements. Temperature Indirect studies found that the magma chamber is located deep underground, at a depth that matches the estimated depth of the Cretaceous Period rock in the oceanic lithosphere; this may indicate that the lithosphere acts as a lid on melting by arresting the magma's ascent. The magma's original temperature was found in two ways, by testing garnet's melting point in lava and by adjusting the lava for olivine deterioration. Both USGS tests give a consistent temperature estimate, which is higher than the estimated temperature for mid-ocean ridge basalt. The surface heat flow anomaly around the Hawaiian Swell is only of the order of 10 mW/m2, far less than the continental United States range of 25–150 mW/m2. This is unexpected for the classic model of a hot, buoyant plume in the mantle. However, it has been shown that other plumes display highly variable surface heat fluxes and that this variability may be due to variable hydrothermal fluid flow in the Earth's crust above the hotspots. This fluid flow advectively removes heat from the crust, and the measured conductive heat flow is therefore lower than the true total surface heat flux. The low heat flow across the Hawaiian Swell indicates that it is not supported by a buoyant crust or upper lithosphere, but is rather propped up by the upwelling hot (and therefore less-dense) mantle plume that causes the surface to rise through a mechanism known as "dynamic topography". Movement Hawaiian volcanoes drift northwest from the hotspot at a rate of several centimetres a year. The hotspot has migrated south relative to the Emperor chain. Paleomagnetic studies support this conclusion based on changes in Earth's magnetic field, as captured in the orientation of magnetically susceptible mineral grains imprinted on igneous rocks during crystallization of the different rock bodies, showing that these seamounts formed at higher latitudes than present-day Hawaii. Prior to the bend the hotspot migrated at a higher rate; the rate of movement dropped at the time of the bend.
The Ocean Drilling Program provided most of the current knowledge about the drift. The 2001 expedition drilled six seamounts and tested the samples to determine their original latitude, and thus the characteristics and speed of the hotspot's drift pattern in total. Each successive volcano spends less time actively attached to the plume. The large difference between the youngest and oldest lavas between Emperor and Hawaiian volcanoes indicates that the hotspot's velocity is increasing. For example, Kohala, the oldest volcano on Hawaii island, is one million years old and last erupted 120,000 years ago, a period of just under 900,000 years; whereas one of the oldest, Detroit Seamount, experienced 18 million or more years of volcanic activity. The oldest volcano in the chain, Meiji Seamount, perched on the edge of the Aleutian Trench, formed 85 million years ago. At its current velocity, the seamount will be destroyed within a few million years, as the Pacific Plate slides under the Eurasian Plate. It is unknown whether the seamount chain has been subducting under the Eurasian Plate, and whether the hotspot is older than Meiji Seamount, as any older seamounts have since been destroyed by the plate margin. It is also possible that a collision near the Aleutian Trench had changed the velocity of the Pacific Plate, explaining the hotspot chain's bend; the relationship between these features is still being investigated. Magma The composition of the volcanoes' magma has changed significantly according to analysis of the strontium–niobium–palladium elemental ratios. The Emperor Seamounts were active for at least 46 million years, with the oldest lava dated to the Cretaceous Period, followed by another 39 million years of activity along the Hawaiian segment of the chain, totaling 85 million years. Data demonstrate vertical variability in the amount of strontium present in both the alkalic (early stages) and tholeiitic (later stages) lavas. The systematic increase slows drastically at the time of the bend. Almost all magma created by the hotspot is igneous basalt; the volcanoes are constructed almost entirely of this or the similar in composition but coarser-grained gabbro and diabase. Other igneous rocks such as nephelinite are present in small quantities; these occur often on the older volcanoes, most prominently Detroit Seamount. Most eruptions are runny because basaltic magma is less viscous than magmas characteristic of more explosive eruptions such as the andesitic magmas that produce spectacular and dangerous eruptions around Pacific Basin margins. Volcanoes fall into several eruptive categories. Hawaiian volcanoes are called "Hawaiian-type". Hawaiian lava spills out of craters and forms long streams of glowing molten rock, flowing down the slope, covering acres of land and replacing ocean with new land. Eruptive frequency and scale There is significant evidence that lava flow rates have been increasing. Over the last six million years they have been far higher than ever before, at over per year. The average for the last million years is even higher, at about . In comparison, the average production rate at a mid-ocean ridge is about for every of ridge. The rate along the Emperor seamount chain averaged about per year. The rate was almost zero for the initial five million or so years in the hotspot's life. The average lava production rate along the Hawaiian chain has been greater, at per year. 
In total, the hotspot has produced an estimated of lava, enough to cover California with a layer about thick. The distance between individual volcanoes has shrunk. Although volcanoes have been drifting north faster and spending less time active, the far greater modern eruptive volume of the hotspot has generated more closely spaced volcanoes, and many of them overlap, forming such superstructures as Hawaii island and the ancient Maui Nui. Meanwhile, many of the volcanoes in the Emperor seamounts are separated by or even as much as . Topography and geoid A detailed topographic analysis of the Hawaiian–Emperor seamount chain reveals the hotspot as the center of a topographic high, and that elevation falls with distance from the hotspot. The most rapid decrease in elevation and the highest ratio between the topography and geoid height are over the southeastern part of the chain, falling with distance from the hotspot, particularly at the intersection of the Molokai and Murray fracture zones. The most likely explanation is that the region between the two zones is more susceptible to reheating than most of the chain. Another possible explanation is that the hotspot strength swells and subsides over time. In 1953, Robert S. Dietz and his colleagues first identified the swell behavior. It was suggested that the cause was mantle upwelling. Later work pointed to tectonic uplift, caused by reheating within the lower lithosphere. However, normal seismic activity beneath the swell, as well as lack of detected heat flow, caused scientists to suggest dynamic topography as the cause, in which the motion of the hot and buoyant mantle plume supports the high surface topography around the islands. Understanding the Hawaiian swell has important implications for hotspot study, island formation, and inner Earth. Seismicity The Hawaii hotspot is a highly active seismic zone with thousands of earthquakes occurring on and near Hawaii island every year. Most are too small to be felt by people but some are large enough to result in minor to moderate devastation. The most destructive recorded earthquake was the 2 April 1868 earthquake which had a magnitude of 7.9 on the Richter scale. It triggered a landslide on Mauna Loa, north of Pahala, killing 31 people. A tsunami claimed 46 more lives. The villages of Punaluu, Nīnole, Kaaawa, Honuapo, and Keauhou Landing were severely damaged. The tsunami reportedly rolled over the tops of the coconut trees up to high and it reached inland a distance of a quarter of a mile (400 m) in some places. The lower magnitude earthquakes are believed to occur through local stresses caused by spreading through the seepage of lava into fractures in the overlying rocks (wedging the rocks apart further) or the buoyancy of the underlying mantle plume upheaving the surrounding rocks. These local stresses would only produce lower energy earthquakes because of the lower tensile strength of basalt comparatively to its higher compressive strength. The higher magnitude earthquakes are derived from the basal (decollement) layer being influenced by deformities caused by the increased weight of the Hawaiian islands. These deformities could cause more compressive stresses, allowing for higher magnitude earthquakes. Such modelling to explain observed eathquake patterns suggests the concept that a soft center hole exists under the island of Hawaii where the lithospheric Pacific plate is broken. 
Volcanoes Over its 85-million-year history, the Hawaii hotspot has created at least 129 volcanoes, more than 123 of which are extinct volcanoes, seamounts, and atolls, four of which are active volcanoes, and two of which are dormant volcanoes. They can be organized into three general categories: the Hawaiian archipelago, which comprises most of the U.S. state of Hawaii and is the location of all modern volcanic activity; the Northwestern Hawaiian Islands, which consist of coral atolls, extinct islands, and atoll islands; and the Emperor Seamounts, all of which have since eroded and subsided to the sea and become seamounts and guyots (flat-topped seamounts). Volcanic characteristics Hawaiian volcanoes are characterized by frequent rift eruptions, their large size (thousands of cubic kilometers in volume), and their rough, decentralized shape. Rift zones are a prominent feature on these volcanoes, and account for their seemingly random volcanic structure. The tallest mountain in the Hawaii chain, Mauna Kea, rises above mean sea level. Measured from its base on the seafloor, it is the world's tallest mountain, at ; Mount Everest rises above sea level. Hawaii is surrounded by a myriad of seamounts; however, they were found to be unconnected to the hotspot and its volcanism. Kīlauea erupted continuously from 1983 to 2018 through Puʻu ʻŌʻō, a minor volcanic cone, which has become an attraction for volcanologists and tourists alike. Landslides The Hawaiian islands are carpeted by a large number of landslides sourced from volcanic collapse. Bathymetric mapping has revealed at least 70 large landslides on the island flanks over in length, and the longest are long and over in volume. These debris flows can be sorted into two broad categories: slumps, mass movement over slopes which slowly flatten their originators, and more catastrophic debris avalanches, associated with flank and sector collapse, which fragment volcanic slopes and scatter volcanic debris past their slopes. These slides have caused massive tsunamis and earthquakes, fractured volcanic massifs, and scattered debris hundreds of miles away from their source. Active slumping is currently taking place on the south flank of the Big Island, where the Hilina Slump comprises a mobile portion of the island's mass south of Kīlauea. Slumps tend to be deeply rooted in their originators, moving rock up to deep inside the volcano. Forced forward by the mass of newly ejected volcanic material, slumps may creep forward slowly, or surge forward in spasms that have caused the largest of Hawaii's historical earthquakes, in 1868 and 1975. Debris avalanches, meanwhile, are thinner and longer, and are defined by volcanic amphitheaters at their head and hummocky terrain at their base. Rapidly moving avalanches carried blocks tens of kilometers away, disturbing the local water column and causing a tsunami. Evidence of these events exists in the form of marine deposits high on the slopes of many Hawaiian volcanoes, and has marred the slopes of several Emperor seamounts, such as Daikakuji Guyot and Detroit Seamount. GPS measurements on the eastern flank of Hawaii Island over a 5-year epoch show the pattern of collapse, with velocities of up to relative to the Pacific Plate. Evolution and construction Hawaiian volcanoes follow a well-established life cycle of growth and erosion. After a new volcano forms, its lava output gradually increases. Height and activity both peak when the volcano is around 500,000 years old and then rapidly decline. 
Eventually it goes dormant, and then extinct. Weathering and erosion gradually reduce the height of the volcano until it again becomes a seamount. This life cycle consists of several stages. The first stage is the submarine preshield stage, currently represented solely by Kama'ehuakanaloa. During this stage, the volcano builds height through increasingly frequent eruptions. The sea's pressure prevents explosive eruptions. The cold water quickly solidifies the lava, producing the pillow lava that is typical of underwater volcanic activity. As the seamount slowly grows, it goes through the shield stages. It forms many mature features, such as a caldera, while submerged. The summit eventually breaches the surface, and the lava and ocean water "battle" for control as the volcano enters the explosive subphase. This stage of development is exemplified by explosive steam vents. This stage produces mostly volcanic ash, a result of the waves dampening the lava. This conflict between lava and sea influences Hawaiian mythology. The volcano enters the subaerial subphase once it is tall enough to escape the water. Now the volcano puts on 95% of its above-water height over roughly 500,000 years. Thereafter eruptions become much less explosive. The lava released in this stage often includes both pāhoehoe and ʻaʻā, and the currently active Hawaiian volcanoes, Mauna Loa and Kīlauea, are in this phase. Hawaiian lava is often runny, blocky, slow, and relatively easy to predict; the USGS tracks where it is most likely to run, and maintains a tourist site for viewing the lava. Mechanical collapse, indicated by large submarine landslides adjacent to landslide scars on the islands, is an ongoing process that shapes the early phases of volcano construction for each of the islands. After the subaerial phase the volcano enters a series of postshield stages involving mechanical collapse creating subsidence and erosion, becoming an atoll and eventually a seamount. Once the Pacific Plate moves it out of the tropics, the reef mostly dies away, and the extinct volcano becomes one of an estimated 10,000 barren seamounts worldwide. Every Emperor seamount is a dead volcano. Coral reef development on Hawaiian Hotspot islands Reef growth and morphology often show the progression from underwater volcano to subaerial shield to seamount. The process of reef building around the margins of a volcanic island, once it is formed, relates to both local island subsidence and global sea level rise. Other local factors such as water temperature and topography are important in reef formation. These fringing reefs gradually accrete vertically and seaward as an inactive volcano subsides, coinciding with a rise in relative sea level. A modern example, Kailua Bay off Oahu, Hawaii, has been studied extensively to understand reef carbonate generation, sediment production and deposition. It is estimated that gross carbonate production is approximately 1.22 kg m⁻² y⁻¹ while sediment production via bioerosion is 0.33 kg m⁻² y⁻¹, resulting in an average vertical accretion of . This rate is considerably lower than worldwide averages for fringing reef accretion. Researchers are investigating the connections between strong wave action, reef biodiversity, rising sea levels and anthropogenic influence. As island subsidence progresses, fringing reefs develop into barrier reefs, and once the volcano becomes a seamount, barrier reefs form atolls. Midway Atoll is a good example of the final stage of the evolution of a hotspot volcanic island.
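The Kailua Bay carbonate budget above amounts to a simple mass balance; a minimal sketch in Python, where the production and bioerosion figures come from the text and the bulk density used to convert mass to vertical accretion is an assumed, illustrative value (the measured accretion rate is not reproduced in this excerpt):

# Net carbonate accumulation = gross production - loss to bioerosion (values from the text).
gross_production = 1.22     # kg per m^2 per year
bioerosion_loss = 0.33      # kg per m^2 per year
net_accumulation = gross_production - bioerosion_loss   # 0.89 kg/m^2/yr

# Converting mass to vertical accretion needs a bulk density for the reef framework;
# 1500 kg/m^3 is a hypothetical value for illustration, not a figure from the source.
assumed_bulk_density = 1500.0   # kg/m^3
vertical_accretion_m_per_yr = net_accumulation / assumed_bulk_density
print(f"{net_accumulation:.2f} kg/m^2/yr ~ {vertical_accretion_m_per_yr * 1000:.2f} mm/yr")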
Physical sciences
Geologic features
Earth science
13311819
https://en.wikipedia.org/wiki/Therapy
Therapy
A therapy or medical treatment is the attempted remediation of a health problem, usually following a medical diagnosis. Both words, treatment and therapy, are often abbreviated tx, Tx, or Tˣ. As a rule, each therapy has indications and contraindications. There are many different types of therapy. Not all therapies are effective. Many therapies can produce unwanted adverse effects. Treatment and therapy are often synonymous, especially in the usage of health professionals. However, in the context of mental health, the term therapy may refer specifically to psychotherapy. Semantic field The words care, therapy, treatment, and intervention overlap in a semantic field, and thus they can be synonymous depending on context. Moving rightward through that order, the connotative level of holism decreases and the level of specificity (to concrete instances) increases. Thus, in health-care contexts (where its senses are always noncount), the word care tends to imply a broad idea of everything done to protect or improve someone's health (for example, as in the terms preventive care and primary care, which connote ongoing action), although it sometimes implies a narrower idea (for example, in the simplest cases of wound care or postanesthesia care, a few particular steps are sufficient, and the patient's interaction with the provider of such care is soon finished). In contrast, the word intervention tends to be specific and concrete, and thus the word is often countable; for example, one instance of cardiac catheterization is one intervention performed, and coronary care (noncount) can require a series of interventions (count). At the extreme, the piling on of such countable interventions amounts to interventionism, a flawed model of care lacking holistic circumspection—merely treating discrete problems (in billable increments) rather than maintaining health. Therapy and treatment, in the middle of the semantic field, can connote either the holism of care or the discreteness of intervention, with context conveying the intent in each use. Accordingly, they can be used in both noncount and count senses (for example, therapy for chronic kidney disease can involve several dialysis treatments per week). The words aceology and are obscure and obsolete synonyms referring to the study of therapies. The English word therapy comes via Latin therapīa from and literally means "curing" or "healing". The term is a somewhat archaic doublet of the word therapy. Types of therapies By chronology, priority, or intensity Levels of care Levels of care classify health care into categories of chronology, priority, or intensity, as follows: Urgent care handles health issues that need attention today but are not necessarily emergencies; the urgent care venue can send a patient to the emergency care level if it turns out to be needed. In the United States (and possibly various other countries), urgent care centers have also come to serve another main purpose: U.S. primary care practices have evolved in recent decades into a configuration whereby urgent care centers provide portions of primary care that cannot wait a month, because getting an appointment with the primary care practitioner is often subject to a waitlist of 2 to 8 weeks. Emergency care handles medical emergencies and is a first point of contact or intake for less serious problems, which can be referred to other levels of care as appropriate. Intensive care, also called critical care, is care for extremely ill or injured patients. 
It thus requires high resource intensity, knowledge, and skill, as well as quick decision making. Ambulatory care is care provided on an outpatient basis. Typically patients can walk into and out of the clinic under their own power (hence "ambulatory"), usually on the same day. Home care is care at home, including care from providers (such as physicians, nurses, and home health aides) making house calls, care from caregivers such as family members, and patient self-care. Primary care is meant to be the main kind of care in general, and ideally a medical home that unifies care across referred providers. Secondary care is care provided by medical specialists and other health professionals who generally do not have first contact with patients, for example, cardiologists, urologists and dermatologists. A patient reaches secondary care as a next step from primary care, typically by provider referral although sometimes by patient self-initiative. According to a systematic review, fields for development of secondary care from the patients’ viewpoint may be classified into four domains that should usefully guide future improvement of this care stage: “barriers to care, communication, coordination, and relationships and personal value”. Tertiary care is specialized consultative care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital. Follow-up care is additional care during or after convalescence. Aftercare is generally synonymous with follow-up care. End-of-life care is care near the end of one's life. It often includes the following: Palliative care is supportive care, most especially (but not necessarily) near the end of life. Hospice care is palliative care very near the end of life when cure is very unlikely. Its main goal is comfort, both physical and mental. Lines of therapy Treatment decisions often follow formal or informal algorithmic guidelines. Treatment options can often be ranked or prioritized into lines of therapy: first-line therapy, second-line therapy, third-line therapy, and so on. First-line therapy (sometimes referred to as induction therapy, primary therapy, or front-line therapy) is the first therapy that will be tried. It is usually either (1) formally recommended on the basis of clinical trial evidence for its best-available combination of efficacy, safety, and tolerability, or (2) chosen based on the clinical experience of the physician. If a first-line therapy either fails to resolve the issue or produces intolerable side effects, additional (second-line) therapies may be substituted or added to the treatment regimen, followed by third-line therapies, and so on. An example of a context in which the formalization of treatment algorithms and the ranking of lines of therapy is very extensive is chemotherapy regimens. Because of the great difficulty in successfully treating some forms of cancer, one line after another may be tried. In oncology the count of therapy lines may reach 10 or even 20. Often multiple therapies may be tried simultaneously (combination therapy or polytherapy). Thus combination chemotherapy is also called polychemotherapy, whereas chemotherapy with one agent at a time is called single-agent therapy or monotherapy. Adjuvant therapy is therapy given in addition to the primary, main, or initial treatment, but simultaneously (as opposed to second-line therapy). 
Neoadjuvant therapy is therapy that is begun before the main therapy. Thus one can consider surgical excision of a tumor as the first-line therapy for a certain type and stage of cancer even though radiotherapy is used before it; the radiotherapy is neoadjuvant (chronologically first but not primary in the sense of the main event). Premedication is conceptually not far from this, but the words are not interchangeable; cytotoxic drugs to put a tumor "on the ropes" before surgery delivers the "knockout punch" are called neoadjuvant chemotherapy, not premedication, whereas things like anesthetics or prophylactic antibiotics before dental surgery are called premedication. Step therapy or stepladder therapy is a specific type of prioritization by lines of therapy. It is controversial in American health care because unlike conventional decision-making about what constitutes first-line, second-line, and third-line therapy, which in the U.S. reflects safety and efficacy first and cost only according to the patient's wishes, step therapy attempts to mix cost containment by someone other than the patient (third-party payers) into the algorithm. Therapy freedom and the negotiation between individual and group rights are involved. By intent By therapy composition Treatments can be classified according to the method of treatment: By matter by drugs: pharmacotherapy, chemotherapy (also, medical therapy often means specifically pharmacotherapy) by medical devices: implantation cardiac resynchronization therapy by specific molecules: molecular therapy (although most drugs are specific molecules, molecular medicine refers in particular to medicine relying on molecular biology) by specific biomolecular targets: targeted therapy molecular chaperone therapy by chelation: chelation therapy by specific chemical elements: by metals: by heavy metals: by gold: chrysotherapy (aurotherapy) by platinum-containing drugs: platin therapy by biometals by lithium: lithium therapy by potassium: potassium supplementation by magnesium: magnesium supplementation by chromium: chromium supplementation; phonemic neurological hypochromium therapy by copper: copper supplementation by nonmetals: by diatomic oxygen: oxygen therapy, hyperbaric oxygen therapy (hyperbaric medicine) transdermal continuous oxygen therapy by triatomic oxygen (ozone): ozone therapy by fluoride: fluoride therapy by other gases: medical gas therapy by water: hydrotherapy aquatic therapy rehydration therapy oral rehydration therapy water cure (therapy) by biological materials (biogenic substances, biomolecules, biotic materials, natural products), including their synthetic equivalents: biotherapy by whole organisms by viruses: virotherapy by bacteriophages: phage therapy by animal interaction: see animal interaction section by constituents or products of organisms by plant parts or extracts (but many drugs are derived from plants, even when the term phytotherapy is not used) scientific type: phytotherapy traditional (prescientific) type: herbalism by animal parts: quackery involving shark fins, tiger parts, and so on, often driving threat or endangerment of species by genes: gene therapy gene therapy for epilepsy gene therapy for osteoarthritis gene therapy for color blindness gene therapy of the human retina gene therapy in Parkinson's disease by epigenetics: epigenetic therapy by proteins: protein therapy (but many drugs are proteins despite not being called protein therapy) by enzymes: enzyme replacement therapy by hormones: hormone therapy hormonal 
therapy (oncology) hormone replacement therapy estrogen replacement therapy androgen replacement therapy hormone replacement therapy (menopause) transgender hormone therapy feminizing hormone therapy masculinizing hormone therapy antihormone therapy androgen deprivation therapy by whole cells: cell therapy (cytotherapy) by stem cells: stem cell therapy by immune cells: see immune system products below by immune system products: immunotherapy, host modulatory therapy by immune cells: T-cell vaccination cell transfer therapy autologous immune enhancement therapy TK cell therapy by humoral immune factors: antibody therapy by whole serum: serotherapy, including antiserum therapy by immunoglobulins: immunoglobulin therapy by monoclonal antibodies: monoclonal antibody therapy by urine: urine therapy (some scientific forms; many prescientific or pseudoscientific forms) by food and dietary choices: medical nutrition therapy grape therapy (quackery) by salts (but many drugs are the salts of organic acids, even when drug therapy is not called by names reflecting that) by salts in the air by natural dry salt air: "taking the cure" in desert locales (especially common in prescientific medicine; for example, one 19th-century way to treat tuberculosis) by artificial dry salt air: low-humidity forms of speleotherapy negative air ionization therapy by moist salt air: by natural moist salt air: seaside cure (especially common in prescientific medicine) by artificial moist salt air: water vapor forms of speleotherapy by salts in the water by mineral water: spa cure ("taking the waters") (especially common in prescientific medicine) by seawater: seaside cure (especially common in prescientific medicine) by aroma: aromatherapy by other materials with mechanism of action unknown by occlusion with duct tape: duct tape occlusion therapy By energy by electric energy as electric current: electrotherapy, electroconvulsive therapy Transcranial magnetic stimulation Vagus nerve stimulation by magnetic energy: magnet therapy pulsed electromagnetic field therapy magnetic resonance therapy by electromagnetic radiation (EMR): by light: light therapy (phototherapy) ultraviolet light therapy PUVA therapy photodynamic therapy photothermal therapy cytoluminescent therapy blood irradiation therapy by darkness: dark therapy by lasers: laser therapy low level laser therapy by gamma rays: radiosurgery Gamma Knife radiosurgery stereotactic radiation therapy cobalt therapy by radiation generally: radiation therapy (radiotherapy) intraoperative radiation therapy by EMR particles: particle therapy proton therapy electron therapy intraoperative electron radiation therapy Auger therapy neutron therapy fast neutron therapy neutron capture therapy of cancer by radioisotopes emitting EMR: by nuclear medicine by brachytherapy quackery type: electromagnetic therapy (alternative medicine) by mechanical: manual therapy as massotherapy and therapy by exercise as in physical therapy inversion therapy by sound: by ultrasound: ultrasonic lithotripsy extracorporeal shockwave therapy sonodynamic therapy by music: music therapy by temperature by heat: heat therapy (thermotherapy) by moderately elevated ambient temperatures: hyperthermia therapy by dry warm surroundings: Waon therapy by dry or humid warm surroundings: sauna, including infrared sauna, for sweat therapy by cold: by extreme cold to specific tissue volumes: cryotherapy by ice and compression: cold compression therapy by ambient cold: hypothermia therapy for neonatal encephalopathy (in 
newborns) targeted temperature management (therapeutic hypothermia, protective hypothermia) by hot and cold alternation: contrast bath therapy By procedure and human interaction Surgery by counseling, such as psychotherapy (see also: list of psychotherapies) systemic therapy by group psychotherapy by cognitive behavioral therapy by cognitive therapy by behaviour therapy by dialectical behavior therapy by cognitive emotional behavioral therapy by cognitive rehabilitation therapy by family therapy by education by psychoeducation by information therapy by speech therapy, physical therapy, occupational therapy, vision therapy, massage therapy, chiropractic or acupuncture by lifestyle modifications, such as avoiding unhealthy food or maintaining a predictable sleep schedule by coaching By animal interaction by pets, assistance animals, or working animals: animal-assisted therapy by horses: equine therapy, hippotherapy by dogs: pet therapy with therapy dogs, including grief therapy dogs by cats: pet therapy with therapy cats by fish: ichthyotherapy (wading with fish), aquarium therapy (watching fish) by maggots: maggot therapy by worms: by internal worms: helminthic therapy by leeches: leech therapy by immersion: animal bath By meditation by mindfulness: mindfulness-based cognitive therapy By reading by bibliotherapy By creativity by expression: expressive therapy by writing: writing therapy journal therapy by play: play therapy by art: art therapy sensory art therapy comic book therapy by gardening: horticultural therapy by dance: dance therapy by drama: drama therapy by recreation: recreational therapy by music: music therapy By sleeping and waking by deep sleep: deep sleep therapy by sleep deprivation: wake therapy
Biology and health sciences
Medical procedures
null
7001745
https://en.wikipedia.org/wiki/Impedance%20of%20free%20space
Impedance of free space
In electromagnetism, the impedance of free space, , is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is, where is the electric field strength, and is the magnetic field strength. Its presently accepted value is , where Ω is the ohm, the SI unit of electrical resistance. The impedance of free space (that is, the wave impedance of a plane wave in free space) is equal to the product of the vacuum permeability and the speed of light in vacuum . Before 2019, the values of both these constants were taken to be exact (they were given in the definitions of the ampere and the metre respectively), and the value of the impedance of free space was therefore likewise taken to be exact. However, with the revision of the SI that came into force on 20 May 2019, the impedance of free space as expressed with an SI unit is subject to experimental measurement because only the speed of light in vacuum retains an exactly defined value. Terminology The analogous quantity for a plane wave travelling through a dielectric medium is called the intrinsic impedance of the medium and designated (eta). Hence is sometimes referred to as the intrinsic impedance of free space, and given the symbol . It has numerous other synonyms, including: wave impedance of free space, the vacuum impedance, intrinsic impedance of vacuum, characteristic impedance of vacuum, wave resistance of free space. Relation to other constants From the above definition, and the plane wave solution to Maxwell's equations, where H/m is the magnetic constant, also known as the permeability of free space, F/m is the electric constant, also known as the permittivity of free space, is the speed of light in free space. The reciprocal of is sometimes referred to as the admittance of free space and represented by the symbol . Historical exact value Between 1948 and 2019, the SI unit the ampere was defined by choosing the numerical value of to be exactly . Similarly, since 1983 the SI metre has been defined relative to the second by choosing the value of to be . Consequently, until the 2019 revision, exactly, or exactly, or This chain of dependencies changed when the ampere was redefined on 20 May 2019. Approximation as 120π ohms It is very common in textbooks and papers written before about 1990 to substitute the approximate value 120π ohms for . This is equivalent to taking the speed of light to be precisely in conjunction with the then-current definition of as . For example, Cheng 1989 states that the radiation resistance of a Hertzian dipole is (result in ohms; not exact). This practice may be recognized from the resulting discrepancy in the units of the given formula. Consideration of the units, or more formally dimensional analysis, may be used to restore the formula to a more exact form, in this case to
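The defining relations referred to above (whose symbols and numerical values are not reproduced in this excerpt) can be sketched compactly as follows; the numerals are reconstructed from the pre-2019 SI definitions rather than quoted from the text:

Z_0 = \frac{|\mathbf{E}|}{|\mathbf{H}|} = \mu_0 c_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}} = \frac{1}{\varepsilon_0 c_0}, \qquad Y_0 = \frac{1}{Z_0}

\text{Pre-2019 (exact): } \mu_0 = 4\pi \times 10^{-7}\ \mathrm{H/m}, \quad c_0 = 299\,792\,458\ \mathrm{m/s}

\Rightarrow\ Z_0 = 119.9169832\pi\ \Omega \approx 376.730\ \Omega \approx 120\pi\ \Omega \approx 377\ \Omega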
Physical sciences
Physical constants
Physics
7005062
https://en.wikipedia.org/wiki/Energy%20conversion%20efficiency
Energy conversion efficiency
Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output, may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1. Overview Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. An energy converter is a device that performs such an energy transformation; a light bulb, for example, falls into this category. Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal- or mission-oriented terms include effectiveness and efficacy. Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, since that would amount to a perpetual motion machine, which is impossible. However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. This measure is not called efficiency, but the coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5. When talking about the efficiency of heat engines and power stations the convention should be stated, i.e., HHV (a.k.a. Gross Heating Value, etc.) or LCV (a.k.a. Net Heating Value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate, but both must be stated. Failure to do so causes endless confusion. Related, more specific terms include Electrical efficiency, useful power output per electrical power consumed; Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work); Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed; 'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed (same as the thermal efficiency); Luminous efficiency, the portion of the emitted electromagnetic radiation that is usable for human vision. Chemical conversion efficiency The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative). The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature. 
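Before the chemical examples that follow, the coefficient-of-performance idea from the overview above can be made concrete with a short calculation; a minimal sketch in Python, with illustrative numbers rather than measured ones:

# Conversion efficiency versus coefficient of performance (COP).
# Efficiency = useful output / input and cannot exceed 1; COP = heat moved / work
# supplied and may exceed 1 because the heat is pumped, not created from the work.
work_input_kwh = 1.0        # electrical work driving a heat pump (illustrative value)
heat_delivered_kwh = 3.0    # heat delivered to the warm side (illustrative value)

cop = heat_delivered_kwh / work_input_kwh
print(f"COP = {cop:.1f}")   # 3.0, comparable to the 2.3-3.5 quoted for air conditioners

# A purely resistive heater converts the same 1.0 kWh of work into exactly
# 1.0 kWh of heat: efficiency 100%, i.e. a COP of 1.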
A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature. An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.24 V. For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent to the enthalpy (heat) of reaction, or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum, so the energy efficiency is 0.83 compared to the ideal cell. A water electrolysis unit operating with a higher voltage than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature, and the energy efficiency would be less than 0.83. The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction. Fuel heating values and efficiency In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water, so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded. Wall-plug efficiency, luminous efficiency, and efficacy In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative energy, in watts (joules per second), per total input electrical energy in watts. The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses. 
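The electrolysis figures quoted above follow from the Gibbs energy and enthalpy of the reaction; a minimal sketch in Python, using only the values given in the text plus the Faraday constant (the small rounding difference from the quoted 1.24 V is noted in a comment):

# Ideal water electrolysis at 25 degC, using the values quoted in the text.
F = 96485.0      # Faraday constant, C per mol of electrons
n = 2            # electrons transferred per molecule of water
dG = 237.129e3   # Gibbs energy of reaction, J per mol of water
dH = 285.830e3   # enthalpy (heat) of reaction, J per mol of water

E_rev = dG / (n * F)   # reversible cell voltage: ~1.23 V (the text quotes 1.24 V)
E_tn = dH / (n * F)    # thermoneutral voltage: ~1.48 V, matching the text
efficiency = dG / dH   # ~0.83, the energy efficiency relative to the ideal cell

print(f"E_rev = {E_rev:.2f} V, E_tn = {E_tn:.2f} V, efficiency = {efficiency:.2f}")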
The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Due to this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e.: a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands-out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/w) of electrical input-energy. Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/w, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/w, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/w, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%. Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines. Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but only have half the luminous efficacy of ~ 100 lm/w, thus the luminous efficiency of fluorescents is lower than sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. 
Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/w. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than xenon; able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen-output from the fixture per the lamp output. With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps. Example of energy conversion efficiency
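The luminous-efficiency and multi-stage points discussed above reduce to two simple calculations; a minimal sketch in Python, where the sodium-lamp figures come from the text and the stage efficiencies are purely illustrative values, not measurements of any particular lamp:

from math import prod

# Luminous efficiency = achieved luminous efficacy / maximum possible efficacy
# at the emitted wavelength(s); 683 lm/W is the maximum, reached at 555 nm.
def luminous_efficiency(efficacy_lm_per_w, max_efficacy_lm_per_w):
    return efficacy_lm_per_w / max_efficacy_lm_per_w

print(luminous_efficiency(200, 525))   # low-pressure sodium at 589 nm: ~0.381 (38.1%)
print(luminous_efficiency(683, 683))   # ideal monochromatic 555 nm source: 1.0 (100%)

# Overall wall-plug efficiency of a multi-stage converter is the product of the
# stage efficiencies (stage values below are illustrative only).
stages = {"ballast": 0.90, "discharge": 0.70, "phosphor and optics": 0.50}
print(round(prod(stages.values()), 3))   # 0.315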
Physical sciences
Thermodynamics
Physics
323392
https://en.wikipedia.org/wiki/Theoretical%20computer%20science
Theoretical computer science
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation. It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description: History While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved. Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below: Topics Algorithms An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Automata theory Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). Automata comes from the Greek word αὐτόματα meaning "self-acting". Automata Theory is the study of self-operating virtual machines to help in the logical understanding of input and output process, without or with intermediate stage(s) of computation (or any function/process). Coding theory Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and more recently also for network coding. Codes are studied by various scientific disciplines – such as information theory, electrical engineering, mathematics, and computer science – for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data. Computational complexity theory Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. 
A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Computational geometry Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), computer vision (3D reconstruction). Computational learning theory Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples. Computational number theory Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization. Cryptography Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. 
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. Data structures A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval and compilers and databases use dynamic hash tables as look up tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory. Distributed computation Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency. Information-based complexity Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. Formal methods Formal methods are a particular kind of mathematics based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. 
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification. Information theory Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis. Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information. Machine learning Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions. Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field." Natural computation Parallel computation Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. 
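The divide-and-solve-in-parallel idea can be sketched with Python's standard multiprocessing module (a toy data-parallel sum; the chunking scheme and worker count are arbitrary choices for the example, not drawn from the source):

# Illustrative sketch of data parallelism: split a large problem into chunks,
# solve the chunks simultaneously on several worker processes, then combine.
from multiprocessing import Pool

def partial_sum(bounds):
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # chunk sums computed in parallel
    print(total == sum(range(n)))                    # True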
As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law. Programming language theory and program semantics Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of theoretical computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved. Syntactically illegal strings, by contrast, have no defined meaning, and evaluating them yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will execute on a certain platform, hence creating a model of computation. Quantum computation A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis. Symbolic computation Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects.
Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name symbolic computation). Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications, which include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, indefinite integration, etc. Very-large-scale integration Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip. Organizations European Association for Theoretical Computer Science SIGACT Simons Institute for the Theory of Computing Journals and newsletters Discrete Mathematics and Theoretical Computer Science Information and Computation Theory of Computing (open access journal) Formal Aspects of Computing Journal of the ACM SIAM Journal on Computing (SICOMP) SIGACT News Theoretical Computer Science Theory of Computing Systems TheoretiCS (open access journal) International Journal of Foundations of Computer Science Chicago Journal of Theoretical Computer Science (open access journal) Foundations and Trends in Theoretical Computer Science Journal of Automata, Languages and Combinatorics Acta Informatica Fundamenta Informaticae ACM Transactions on Computation Theory Computational Complexity Journal of Complexity ACM Transactions on Algorithms Information Processing Letters Open Computer Science (open access journal) Conferences Annual ACM Symposium on Theory of Computing (STOC) Annual IEEE Symposium on Foundations of Computer Science (FOCS) Innovations in Theoretical Computer Science (ITCS) Mathematical Foundations of Computer Science (MFCS) International Computer Science Symposium in Russia (CSR) ACM–SIAM Symposium on Discrete Algorithms (SODA) IEEE Symposium on Logic in Computer Science (LICS) Computational Complexity Conference (CCC) International Colloquium on Automata, Languages and Programming (ICALP) Annual Symposium on Computational Geometry (SoCG) ACM Symposium on Principles of Distributed Computing (PODC) ACM Symposium on Parallelism in Algorithms and Architectures (SPAA) Annual Conference on Learning Theory (COLT) International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM) Symposium on Theoretical Aspects of Computer Science (STACS) European Symposium on Algorithms (ESA) Workshop on Approximation Algorithms for Combinatorial Optimization Problems
(APPROX) Workshop on Randomization and Computation (RANDOM) International Symposium on Algorithms and Computation (ISAAC) International Symposium on Fundamentals of Computation Theory (FCT) International Workshop on Graph-Theoretic Concepts in Computer Science (WG)
Mathematics
Discrete mathematics
null
323559
https://en.wikipedia.org/wiki/Common%20wood%20pigeon
Common wood pigeon
The common wood pigeon (Columba palumbus), also known as simply wood pigeon, is a large species in the dove and pigeon family (Columbidae), native to the western Palearctic. It belongs to the genus Columba, which includes closely related species such as the rock dove (Columba livia). It has a flexible diet, predominantly feeding on vegetable matter, including cereal crops, leading to them being regarded as an agricultural pest. Wood pigeons are extensively hunted over large parts of their range, but this does not seem to have a great impact on their population. Taxonomy The common wood pigeon was formally described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. He placed it with all the other pigeons in the genus Columba and coined the binomial name Columba palumbus. The specific epithet palumbus is an alternate form of the Latin palumbes for a wood pigeon. Five subspecies are recognised, one of which is now extinct: C. p. palumbus Linnaeus, 1758 – Europe to western Siberia and Iraq; Northwest Africa † C. p. maderensis Tschusi, 1904 – Madeira (extinct) C. p. azorica Hartert, 1905 – the eastern and central Azores C. p. iranica (Zarudny, 1910) – southwestern and northern Iran to southwestern Turkmenistan C. p. casiotis (Bonaparte, 1854) – southeastern Iran and Kazakhstan to western China, northwestern India and Nepal † = extinct Fossil records of the species are known from the early Middle Pleistocene of Sicily. Description The three Western European Columba pigeons, common wood pigeon, stock dove and rock dove, though superficially alike, have very distinctive characteristics; the common wood pigeon may be identified at once by its larger size at and weight , and the white on its neck and wing. It is otherwise a basically grey bird, with a pinkish breast. The wingspan can range from and the wing chord measures . The tail measures , the bill is and the tarsus is . Adult birds bear a series of green and white patches on their necks, and a pink patch on their chest. The eye colour is a pale yellow, in contrast to that of rock doves, which is orange-red, and the stock dove, which is black. Juvenile birds do not have the white patches on either side of the neck. When they are about 6 months old (about three months out of the nest) they gain small white patches on both sides of the neck, which gradually enlarge until they are fully formed when the bird is about 6–8 months old. Juvenile birds also have a greyer beak and an overall lighter grey appearance than adult birds. Distribution and habitat In the colder northern and eastern parts of Europe and western Asia the common wood pigeon is a migrant, but in southern and western Europe it is a well distributed and often abundant resident. In Great Britain wood pigeons are commonly seen in parks and gardens and are seen with increasing numbers in towns and cities. Behaviour Its flight is quick, performed by regular beats, with an occasional sharp flick of the wings, characteristic of pigeons in general. It takes off with a loud clattering. It perches well, and in its nuptial display walks along a horizontal branch with swelled neck, lowered wings, and fanned tail. During the display flight the bird climbs, the wings are smartly cracked like a whiplash, and the bird glides down on stiff wings. The common wood pigeon is gregarious, often forming very large flocks outside the breeding season. 
Like many species of pigeon, wood pigeons take advantage of trees and buildings to gain a vantage point over the surrounding area, and their distinctive call means that they are usually heard before they are seen. Wood pigeons are known to fiercely defend their territory, and will fight each other to gain access to nesting and roosting locations. Male wood pigeons will typically attempt to drive competitors off by threat displays and pursuit, but will also directly fight, jumping and striking their rival with both wings. This species can be an agricultural pest, and it is often shot, being a legal quarry species in most European countries. It is wary in rural areas, but often quite tame where it is not persecuted. Breeding It breeds in trees in woods, parks and gardens, laying two white eggs in a simple stick nest; the eggs hatch after 17 to 19 days. Wood pigeons seem to have a preference for trees near roadways and rivers. Males exhibit aggressive behaviour towards each other during the breeding season by jumping and flapping wings at each other. Their plumage becomes much darker, especially the head, during hot summer periods. Breeding can happen year-round if food is abundant; however, the breeding season most commonly occurs between April and October. The nests are vulnerable to attack, particularly by crows. The young usually fly at 33 to 34 days; however, if the nest is disturbed, some young may be able to survive having left the nest as early as 20 days from hatching. In a study carried out using ring-recovery data, the survival rate for juveniles in their first year was 52 per cent, and the adult annual survival rate was 61 per cent. For birds that survive the first year the typical lifespan is thus only three years, but the maximum recorded age is 17 years and 8 months for a bird ringed and recovered on the Orkney Islands. Diet Most of its diet is vegetable: round and fleshy leaves from Caryophyllaceae, Asteraceae, and cruciferous vegetables taken from open fields or gardens and lawns; young shoots and seedlings are favoured, and it will take grain, pine nuts, and certain fruits and berries. In the autumn they also eat figs and acorns, and in winter buds of trees and bushes. They will also eat larvae, ants, and small worms. They need open water to drink and bathe in. Young common wood pigeons swiftly become fat, as a result of the crop milk they are fed by their parents. This is an extremely rich fluid that is produced in the adult birds' crops during the breeding season. Calls The call of the wood pigeon is a loud and sustained characteristic cooing phrase, coo-COO-COO-coo-coo. In Ireland and the UK, the traditional mnemonic for the distinctive call of the bird has been interpreted as "Take two cows, Teddy", or "Take two cows, Taffy". Other interpretations for the birdsong include "I am a pigeon", "My toe bleeds, Betty", and "I don't want to go". Predators Predators of the wood pigeon typically include the Eurasian sparrowhawk, Eurasian goshawk and domestic cat. The eggs and young of wood pigeons are also often preyed upon by magpies and crows. Hunting The wood pigeon is widely hunted over large parts of its range, with millions of birds being shot annually, in part because it has been regarded as an agricultural pest, especially of cereal crops. In 1953, the British Government introduced a subsidy for the cost of cartridges to sport-hunters of wood pigeons, which was abolished in 1969.
In culture The wood pigeon is mentioned several times in the Eclogues written by the ancient Roman poet Virgil. Referring to its distinctive husky call, Virgil writes in Eclogue 1; Here beneath high rocks The gatherers of leaves, with cheerful songs Fill the high winds. Meanwhile thy turtle doves And hoarse wood pigeons from the lofty elms Make endless moan.
Biology and health sciences
Columbimorphae
Animals
323654
https://en.wikipedia.org/wiki/Eublepharis
Eublepharis
Eublepharis is a genus of terrestrial geckos native to eastern and southwestern Asia. The genus was first described by the British zoologist John Edward Gray in 1827. The etymology of their name is 'eu' = good (i.e. true) + 'blephar' = eyelid, and all have fully functional eyelids. Members of this genus are found in eastern and southwestern Asia. These geckos are sturdily built. Their tail is shorter than their snout–vent length, and their body is covered with numerous wart-like bumps. The toes do not have adhesive lamellae or membranes (Eublepharis cannot climb like their other gecko cousins). Like all members of Eublepharidae, they are primarily nocturnal. Included in this group is the popular pet leopard gecko Eublepharis macularius. Species of the genus Eublepharis The members of the Goniurosaurus kuroiwae superspecies were formerly considered members of the genus Eublepharis.
Biology and health sciences
Lizards and other Squamata
Animals
323825
https://en.wikipedia.org/wiki/Skink
Skink
Skinks are lizards belonging to the family Scincidae, a family in the infraorder Scincomorpha. With more than 1,500 described species across 100 different taxonomic genera, the family Scincidae is one of the most diverse families of lizards. Skinks are characterized by their smaller legs in comparison to typical lizards and are found in different habitats except arctic and subarctic regions. Etymology The word skink, which entered the English language around 1580–1590, comes from classical Greek and Latin , names that referred to various specific lizards. Description Skinks look like lizards of the family Lacertidae (sometimes called true lizards), but most species of skinks have no pronounced neck and relatively small legs. Several genera (e.g., Typhlosaurus) have no limbs at all. This is not true for all skinks, however, as some species such as the red-eyed crocodile skink have a head that is very distinguished from the body. These lizards also have legs that are relatively small proportional to their body size. Skinks' skulls are covered by substantial bony scales, usually matching up in shape and size, while overlapping. Other genera, such as Neoseps, have reduced limbs and fewer than five toes (digits) on each foot. In such species, their locomotion resembles that of snakes more than that of lizards with well-developed limbs. As a general rule, the longer the digits, the more arboreal the species is likely to be. A biological ratio can determine the ecological niche of a given skink species. The Scincidae ecological niche index (SENI) is a ratio based on anterior foot length at the junction of the ulna/radius-carpal bones to the longest digit divided by the snout-to-vent length. Most species of skinks have long, tapering tails they can shed if predators grab onto them. Such species generally can regenerate the lost part of a tail, though imperfectly. A lost tail can grow back within around three to four months. Species with stumpy tails have no special regenerative abilities. Some species of skinks are quite small; Scincella lateralis typically ranges from , more than half of which is the tail. Most skinks, though, are medium-sized, with snout-to-vent lengths around , although some grow larger; the Solomon Islands skink (Corucia zebrata) is the largest known extant species and may attain a snout-to-vent length of some . Skinks can often hide easily in their habitat because of their protective colouring (camouflage). Blood color Skinks in the genus Prasinohaema have green blood because of a buildup of the waste product biliverdin. Evolutionary history The oldest known skink is Electroscincus zedi described from the mid-Cretaceous (late Albian to early Cenomanian) Burmese amber from Myanmar, dating to around . Based on the presence of osteoderms, Electroscincus appears to belong to the Scincidae crown group, indicating that some divergence among the extant skink subfamilies must have already occurred by 100 million years ago. Other definitive skink fossils are known from the Miocene. Skink genera known from fossils include the following: Behavior A trait apparent in many species of skink is digging and burrowing. Many spend their time underground where they are mostly safe from predators, sometimes even digging out tunnels for easy navigation. They also use their tongues to sniff the air and track their prey. When they encounter their prey, they chase it down until they corner it or manage to land a bite and then swallow it whole. 
Despite being voracious hunters at times, all species pose no threat to humans and will generally avoid interaction in the wild. Being neither poisonous nor venomous, their bites are also mild and minor. Diet Skinks are generally carnivorous and in particular insectivorous. Typical prey include flies, crickets, grasshoppers, beetles, and caterpillars. Various species also eat earthworms, millipedes, centipedes, snails, slugs, isopods (woodlice etc), moths, small lizards (including geckos), and small rodents. Some species, particularly those favored as home pets, are omnivorous and have more varied diets and can be maintained on a regimen of roughly 60% vegetables/leaves/fruit and 40% meat (insects and rodents). Species of the genus Tristiidon are mainly frugivorous, but occasionally eat moss and insects. Breeding Although most species of skinks are oviparous, laying eggs in clutches, some 45% of skink species are viviparous in one sense or another. Many species are ovoviviparous, the young (skinklets) developing lecithotrophically in eggs that hatch inside the mother's reproductive tract, and emerging as live births. In some genera, however, such as Tiliqua and Corucia, the young developing in the reproductive tract derive their nourishment from a mammal-like placenta attached to the female – unambiguous examples of viviparous matrotrophy. Furthermore, an example recently described in Trachylepis ivensi is the most extreme to date: a purely reptilian placenta directly comparable in structure and function, to a eutherian placenta. Clearly, such vivipary repeatedly has developed independently in the evolutionary history of the Scincidae and the different examples are not ancestral to the others. In particular, placental development of whatever degree in lizards is phylogenetically analogous, rather than homologous, to functionally similar processes in mammals. Nesting Skinks typically seek out environments protected from the elements, such as thick foliage, underneath man-made structures, and ground-level buildings such as garages and first-floor apartments. When two or more skinks are seen in a small area, it is typical to find a nest nearby. Skinks are considered to be territorial and often are seen standing in front of or "guarding" their nest area. If a nest is nearby, one can expect to see 10-30 lizards within the period of a month. In parts of the southern United States, nests are commonly found in houses and apartments, especially along the coast. The nest is where the skink lays its small white eggs, up to 4-8 at a time. Habitat Skinks are very specific in their habitat as some can depend on vegetation while others may depend on land and the soil. As a family, skinks are cosmopolitan; species occur in a variety of habitats worldwide, apart from boreal and polar regions. Various species occur in ecosystems ranging from deserts and mountains to grasslands. Many species are good burrowers. More species are terrestrial or fossorial (burrowing) than arboreal (tree-climbing) or aquatic species. Some are "sand swimmers", especially the desert species, such as the mole skink or sand skink in Florida. Some use a very similar action in moving through grass tussocks. Most skinks are diurnal (day-active) and typically bask on rocks or logs during the day. Predators Raccoons, foxes, possums, snakes, coatis, weasels, crows, cats, dogs, herons, hawks, lizards, and other predators of small land vertebrates also prey on various skinks. 
This can be troublesome, given the long gestation period for some skinks, making them an easy target to predators such as the mongoose, which often threaten the species to at least near extinction, such as the Anguilla Bank skink. Invasive rodents are a major threat to skinks that have been overlooked, especially tropical skinks. Skinks are also hunted for food by indigenous peoples in New Guinea, including by the Kalam people in the highlands of Madang Province, Papua New Guinea. Genetics Genomic architecture Despite making up 15% of reptiles, skinks have a relatively conserved chromosome number, between 11 and 16 pairs. Skink genomes are typically about 1.5 Gb, approximately one-half the size of the human genome. The Christmas Island blue-tailed skink (Cryptoblepharus egeriae) was sequenced in 2022, representing the first skink reference genome. Sex determination systems Skinks were long thought to have both genetic sex determination (GSD) and temperature-dependent sex determination (TSD). Despite having sex chromosomes that are not distinguishable with a microscope, all major skink lineages share an old XY system that is over 80 million years old. These X and Y specific regions are highly divergent and contain multiple chromosomal rearrangements and repetitive sequences. Genera Many genera, Mabuya for example, are still insufficiently studied, and their systematics are at times controversial, see for example the taxonomy of the western skink, Plestiodon skiltonianus. Mabuya in particular, is being split, many species being allocated to new genera such as Trachylepis, Chioninia, and Eutropis. Subfamily Acontinae (limbless skinks; 30 species in 2 genera) Acontias (25 species) Typhlosaurus (5 species) Subfamily Egerniinae (social skinks; 63 species in 9 genera) Bellatorias (3 species) Corucia (1 species) Cyclodomorphus (9 species) Egernia (17 species) Liopholis (12 species) Lissolepis (2 species) Tiliqua (7 species) Tribolonotus (10 species) Subfamily Eugongylinae (eugongylid skinks; 455 species in 50 genera) Ablepharus (18 species) Acritoscincus (3 species) Alpinoscincus (2 species) Anepischetosia (1 species) Austroablepharus (3 species) Caesoris (1 species) Caledoniscincus (14 species) Carinascincus (8 species) Carlia (46 species) Celatiscincus (2 species) Cophoscincopus (4 species) Cryptoblepharus (53 species) Emoia (78 species) Epibator (3 species) Eroticoscincus (1 species) Eugongylus (5 species) Geomyersia (2 species) Geoscincus (1 species) Graciliscincus (1 species) Harrisoniascincus (1 species) Kanakysaurus (2 species) Kuniesaurus (1 species) Lacertaspis (5 species) Lacertoides (1 species) Lampropholis (14 species) Leiolopisma (4 species) Leptosiaphos (18 species) Liburnascincus (4 species) Lioscincus (2 species) Lobulia (8 species) Lygisaurus (14 species) Marmorosphax (5 species) Menetia (5 species) Morethia (8 species) Nannoscincus (12 species) Nubeoscincus (2 species) Oligosoma (53 species) Panaspis (21 species) Phaeoscincus (2 species) Phasmasaurus (2 species) Phoboscincus (2 species) Proablepharus (2 species) Pseudemoia (6 species) Pygmaeascincus (3 species) Saproscincus (12 species) Sigaloseps (6 species) Simiscincus (1 species) Tachygia (1 species) Techmarscincus (1 species) Tropidoscincus (3 species) Subfamily Lygosominae (lygosomid skinks; 56 species in 6 genera) Haackgreerius (1 species) Lamprolepis (3 species) Lygosoma (16 species) Mochlus (18 species) Riopa (9 species) Subdoluseps (8 species) Subfamily Mabuyinae (mabuyid skinks; 226 species in 25 genera) Alinea (2 
species) Aspronema (2 species) Brasiliscincus (3 species) Capitellum (3 species) Chioninia (7 species) Copeoglossum (5 species) Dasia (10 species) Eumecia (2 species) Eutropis (48 species) Exila (1 species) Heremites (3 species) Lubuya (1 species) Mabuya (9 species) Manciola (1 species) Maracaiba (2 species) Marisora (13 species) Notomabuya (1 species) Otosaurus (1 species) Panopa (2 species) Psychosaura (2 species) Spondylurus (17 species) Toenayar (1 species) Trachylepis (87 species) Varzea (2 species) Vietnascincus (1 species) Subfamily Sphenomorphinae (sphenomorphid skinks; 591 species in 41 genera) Anomalopus (4 species) Calorodius (1 species) Calyptotis (4 species) Coeranoscincus (2 species) Coggeria (1 species) Concinnia (7 species) Ctenotus (103 species) Eremiascincus (15 species) Eulamprus (5 species) Fojia (1 species) Glaphyromorphus (11 species) Gnypetoscincus (1 species) Hemiergis (7 species) Insulasaurus (4 species) Isopachys (4 species) Kaestlea (5 species) Lankascincus (10 species) Larutia (9 species) Leptoseps (2 species) Lerista (97 species) Lipinia (28 species) Nangura (1 species) Notoscincus (2 species) Ophioscincus (3 species) Ornithuroscincus (9 species) Orosaura (1 species) Palaia (1 species) Papuascincus (4 species) Parvoscincus (24 species) Pinoyscincus (5 species) Praeteropus (4 species) Prasinohaema (5 species) Protoblepharus (3 species) Ristella (4 species) Saiphos (1 species) Scincella (38 species) Sepsiscus (1 species) Silvascincus (2 species) Sphenomorphus (113 species) Tropidophorus (29 species) Tumbunascincus (1 species) Tytthoscincus (23 species) Subfamily Scincinae (typical skinks; 294 species in 35 genera) Amphiglossus (2 species) Ateuchosaurus (2 species) Barkudia (2 species) Brachymeles (42 species) Brachyseps (8 species) Chalcides (32 species) Chalcidoseps (1 species) Eumeces (6 species) Eurylepis (2 species) Feylinia (6 species) Flexiseps (15 species) Gongylomorphus (3 species) Grandidierina (4 species) Hakaria (1 species) Janetaescincus (2 species) Jarujinia (1 species) Madascincus (12 species) Melanoseps (8 species) Mesoscincus (3 species) Nessia (9 species) Ophiomorus (12 species) Pamelaescincus (1 species) Paracontias (14 species) Plestiodon (50 species) Proscelotes (3 species) Pseudoacontias (4 species) Pygomeles (3 species) Scelotes (22 species) Scincopus (1 species) Scincus (5 species) Scolecoseps (4 species) Sepsina (5 species) Sepsophis (1 species) Typhlacontias (7 species) Voeltzkowia (3 species) Gallery
Biology and health sciences
Reptiles
null
323964
https://en.wikipedia.org/wiki/Subsistence%20agriculture
Subsistence agriculture
Subsistence agriculture occurs when farmers grow crops on smallholdings to meet the needs of themselves and their families. Subsistence agriculturalists target farm output for survival and for mostly local requirements. Planting decisions occur principally with an eye toward what the family will need during the coming year, and only secondarily toward market prices. Tony Waters, a professor of sociology, defines "subsistence peasants" as "people who grow what they eat, build their own houses, and live without regularly making purchases in the marketplace". Despite the self-sufficiency in subsistence farming, most subsistence farmers also participate in trade to some degree. Although their amount of trade as measured in cash is less than that of consumers in countries with modern complex markets, they use these markets mainly to obtain goods, not to generate income for food; these goods are typically not necessary for survival and may include sugar, iron roofing-sheets, bicycles, used clothing, and so forth. Many have important trade contacts and trade items that they can produce because of their special skills or special access to resources valued in the marketplace. Subsistence farming today is most common in developing countries. Subsistence agriculture generally features: small capital/finance requirements, mixed cropping, limited use of agrochemicals (e.g. pesticides and fertilizer), unimproved varieties of crops and animals, little or no surplus yield for sale, use of crude/traditional tools (e.g. hoes, machetes, and cutlasses), mainly the production of crops, small scattered plots of land, reliance on unskilled labor (often family members), and (generally) low yields. History Subsistence agriculture was the dominant mode of production in the world until recently, when market-based capitalism became widespread. Subsistence agriculture largely disappeared in Europe by the beginning of the twentieth century. It began to decrease in North America with the movement of sharecroppers and tenant farmers out of the American South and Midwest during the 1930s and 1940s. In Central and Eastern Europe, semi-subsistence agriculture reappeared within the transition economy after 1990 but declined in significance (or disappeared) in most countries by the accession to the EU in 2004 or 2007. Contemporary practices Subsistence farming continues today in large parts of rural Africa, and parts of Asia and Latin America. In 2015, about 2 billion people (slightly more than 25% of the world's population) in 500 million households living in rural areas of developing nations survive as "smallholder" farmers, working less than 2 hectares (5 acres) of land. Around 98% of China's farmers work on small farms, and China accounts for around half of the total world farms. In India, 80% of the total farmers are smallholder farmers; Ethiopia and Asia have almost 90% being small; while Mexico and Brazil recorded 50% and 20% being small. Areas where subsistence farming is largely practiced today, such as India and other regions in Asia, have seen a recent decline in the practice. This is due to processes such as urbanization, the transformation of land into rural areas, and integration of capitalist forms of farming. In India, the increase in industrialization and decrease in rural agriculture has led to rural unemployment and increased poverty for those in lower caste groups. 
Those who are able to live and work in urbanized areas can increase their income, while those who remain in rural areas see large decreases, which is why there has been no large decline in poverty. This effectively widens the income gap between lower and higher castes and makes it harder for those in rural areas to move up in caste ranking. This era has marked a time of increased farmer suicides and the "vanishing village". Adaptation to global warming Most subsistence agriculture is practiced in developing countries located in tropical climates. Effects on crop production brought about by climate change will be more intense in these regions as extreme temperatures are linked to lower crop yields. Farmers have been forced to respond to increased temperatures through measures such as increased land and labor inputs, which threaten long-term productivity. Coping measures in response to variable climates can include reducing daily food consumption and selling livestock to compensate for the decreased productivity. These responses often threaten the future of household farms in the following seasons as many farmers will sell draft animals used for labor and will also consume seeds saved for planting. The full extent of future climate change impacts is difficult to determine, as smallholder farms are complex systems with many different interactions. Different locations have different adaptation strategies available to them such as crop and livestock substitutions. Rates of production for cereal crops, such as wheat, oats, and maize, have been declining largely due to the effects of heat on crop fertility. This has forced many farmers to switch to more heat-tolerant crops to maintain levels of productivity. Substitution of crops for heat-tolerant alternatives limits the overall diversity of crops grown on smallholder farms. As many farmers farm to meet daily food needs, this can negatively impact nutrition and diet among many families practicing subsistence agriculture. Water availability has a crucial role in determining the productivity of subsistence agriculture, especially in dryland regions. Rain-fed farming, common in many areas, relies only on natural precipitation. Because of this, dryland farming is particularly susceptible to the ill effects of climate change in areas where weather patterns are already very erratic (doi:10.3390/atmos11121287). Types of subsistence farming Shifting agriculture In this type of farming, a patch of forest land is cleared by a combination of felling (chopping down) and burning, and crops are grown. After two to three years the fertility of the soil begins to decline, the land is abandoned and the farmer moves to clear a fresh piece of land elsewhere in the forest as the process continues. While the land is left fallow the forest regrows in the cleared area and soil fertility and biomass is restored. After a decade or more, the farmer may return to the first piece of land. This form of agriculture is sustainable at low population densities, but higher population loads require more frequent clearing, which prevents soil fertility from recovering, opens up more of the forest canopy, and encourages scrub at the expense of large trees, eventually resulting in deforestation and soil erosion. Shifting cultivation is called dredd in India, ladang in Indonesia and jhumming in North East India.
Sedentary farming While shifting agriculture's slash-and-burn technique may describe the method for opening new land, the farmers in question commonly also maintain smaller fields, sometimes merely gardens, near the homestead, where they practice intensive "non-shifting" techniques. These farmers pair this with "slash and burn" techniques to clear additional land and (by the burning) provide fertilizer (ash). Such gardens near the homestead often regularly receive household refuse. The manure of any household chickens or goats is initially thrown into compost piles just to get it out of the way. However, such farmers often recognize the value of such compost and apply it regularly to their smaller fields. They also may irrigate part of such fields if they are near a source of water. In some areas of tropical Africa, at least, such smaller fields may be ones in which crops are grown on raised beds. Thus farmers practicing "slash and burn" agriculture are often much more sophisticated agriculturalists than the term "slash and burn" subsistence farmers suggests. Nomadic herding In this type of farming, people migrate along with their animals from one place to another in search of fodder for their animals. Generally they rear cattle, sheep, goats, camels and/or yaks for milk, skin, meat and wool. This way of life is common in parts of central and western Asia, India, east and southwest Africa and northern Eurasia. Examples are the nomadic Bhotiyas and Gujjars of the Himalayas. They carry their belongings, such as tents, etc., on the backs of donkeys, horses, and camels. In mountainous regions, like Tibet and the Andes, yaks and llamas are reared. Reindeer are the livestock in arctic and sub-arctic areas. Sheep, goats, and camels are common animals, and cattle and horses are also important. Intensive subsistence farming In intensive subsistence agriculture, the farmer cultivates a small plot of land using simple tools and more labour. A climate with a large number of sunny days, combined with fertile soils, permits the growing of more than one crop annually on the same plot. Farmers use their small land holdings to produce enough for their local consumption, while the remaining produce is exchanged for other goods. It results in much more food being produced per acre compared to other subsistence patterns. In the most intensive situation, farmers may even create terraces along steep hillsides to cultivate rice paddies. Such fields are found in densely populated parts of Asia, such as in the Philippines. They may also intensify by using manure, artificial irrigation and animal waste as fertilizer. Intensive subsistence farming is prevalent in the thickly populated areas of the monsoon regions of south, southwest, and southeast Asia. Poverty alleviation Subsistence agriculture can be used as a poverty alleviation strategy, specifically as a safety net for food-price shocks and for food security. Poor countries are limited in fiscal and institutional resources that would allow them to contain rises in domestic prices as well as to manage social assistance programs, which is often because they are using policy tools that are intended for middle- and high-income countries. Low-income countries tend to have populations in which 80% of the poor are in rural areas. More than 90% of rural households have access to land, yet most of these poor have insufficient access to food.
Subsistence agriculture can be used in low-income countries as a part of policy responses to a food crisis in the short and medium term and provide a safety net for the poor in these countries. Agriculture is more successful than non-agricultural jobs in combating poverty in countries with a larger population of people without education or who are unskilled. However, there are levels of poverty to be aware of to target agriculture towards the right audience. Agriculture is better at reducing poverty in those that have an income of $1 per day than those that have an income of $2 per day in Africa. People who make less income are more likely to be poorly educated and have fewer opportunities; therefore, they work more labor-intensive jobs, such as agriculture. People who make $2 have more opportunities to work in less labor-intensive jobs in non-agricultural fields.
Technology
Forms
null
323988
https://en.wikipedia.org/wiki/Eurasian%20collared%20dove
Eurasian collared dove
The Eurasian collared dove (Streptopelia decaocto), often simply just collared dove, is a dove species native to Europe, Asia, and northern Africa. It has also been introduced to Japan, North and Central America, and the islands in the Caribbean. Taxonomy The Hungarian naturalist Imre Frivaldszky first described the Eurasian collared dove with the scientific name Columba risoria varietas C. decaocto in 1838, considering it a wild variety of the domesticated barbary dove. The type locality is Plovdiv in Bulgaria. It is now placed in genus Streptopelia that was described in 1855 by the French ornithologist Charles Lucien Bonaparte. The Burmese collared dove (S. xanthocycla) was formerly considered a subspecies of the Eurasian collared dove, but was split as a distinct species by the IOC in 2021. Two other subspecies were formerly sometimes accepted, S. d. stoliczkae from Turkestan in central Asia and S. d. intercedens from southern India and Sri Lanka; they are now considered junior synonyms of the species. The Eurasian collared dove is also closely related to the Sunda collared dove of southeast Asia and the African collared dove of Sub-Saharan Africa, forming a superspecies with these. Identification from the African collared dove is very difficult with silent birds, with the African species being marginally smaller and paler, but the calls are very distinct, a soft purring "Cou'crrrrroouw" in the African collared dove quite unlike the Eurasian collared dove's three-note cooing. Etymology The generic name is from the Ancient Greek streptos meaning "collar" and peleia meaning "dove". The specific epithet, decaocto, is Greek for "eighteen". The association of the dove with the number eighteen has its roots in a Greek myth. A maid who worked hard for little money was unhappy that she was only paid 18 silver coins a year and begged the gods to let the world know how little she was rewarded by her mistress. Zeus, hearing her pleas, created the collared dove, which has called out "decaocto" ever since to tell the world of the maid's mistreatment. In several Balkan languages, the number 18 is a three-syllable word (e.g. tiz-en-nyolc in Frivaldszky's native Hungarian), so is ultimately onomatopoeic from the bird's call. As most of its European range in the 19th century, including its type locality, was within the Turkish-controlled Ottoman Empire, its name in many European languages translates as Turkish dove, e.g. Danish Tyrkerdue, German Türkentaube, French Tourterelle turque. Description The Eurasian collared dove is a medium-sized dove, distinctly smaller than the wood pigeon, similar in length to a rock dove but slimmer and longer-tailed, and slightly larger than the related European turtle dove, with an average length of from tip of beak to tip of tail, with a wingspan of , and a weight of . It is grey-buff to pinkish-grey overall, a little darker above than below, with a blue-grey underwing patch. The tail feathers are grey-buff above, and dark grey and tipped white below; the outer tail feathers are also tipped whitish above. It has a black half-collar edged with white on its nape from which it gets its name. The short legs are red and the bill is black. The iris is red, but from a distance the eyes appear to be black, as the pupil is relatively large and only a narrow rim of reddish-brown iris can be seen around the black pupil. The eye is surrounded by a small area of bare skin, which is either white or yellow. 
The two sexes are virtually indistinguishable; juveniles differ in having a poorly developed collar, and a brown iris. The subspecies S. d. xanthocycla differs in having yellow rather than white eye-rings, darker grey on the head and the underparts a slightly darker pink. The song is a three-syllable goo-GOO-goo, with stress placed on the second syllable. The Eurasian collared dove also makes a harsh loud screeching call lasting about two seconds, particularly in flight just before landing. A rough way to describe the screeching sound is a hah-hah. Eurasian collared doves cooing in early spring are sometimes mistakenly reported as the calls of early-arriving common cuckoos and, as such, a mistaken sign of spring's return. Distribution and habitat The Eurasian collared dove is not migratory, but is strongly dispersive. Over the last century, it has been one of the great colonisers of the bird world, travelling far beyond its native range to colonise colder countries, becoming a permanent resident in several of them. Its original range at the end of the 19th century was warm temperate and subtropical Asia from Turkey east to southern China and south through India to Sri Lanka. In 1838 it was reported in Bulgaria, but not until the 20th century did it expand across Europe, appearing in parts of the Balkans between 1900 and 1920, and then spreading rapidly northwest, reaching Germany in 1945, Great Britain by 1953 (breeding for the first time in 1956), Ireland in 1959, and the Faroe Islands in the early 1970s. Subsequent spread was 'sideways' from this fast northwestern spread, reaching northeast to north of the Arctic Circle in Norway and east to the Ural Mountains in Russia, and southwest to the Canary Islands and northern Africa from Morocco to Egypt, by the end of the 20th century. In the east of its range, it has also spread northeast to most of central and northern China, and locally (probably introduced) in Japan. It has also reached Iceland as a vagrant (41 records up to 2006), but has not colonised successfully there. Invasive status in North America In 1974, fewer than 50 Eurasian collared doves escaped captivity in Nassau, New Providence, Bahamas. From the Bahamas, the species spread to Florida, and is now found in nearly every state in the U.S., as well as in Mexico. In Arkansas (the United States), the species was recorded first in 1989 and since then has grown in numbers and is now present in 42 of 75 counties in the state. It spread from the southeastern corner of the state in 1997 to the northwestern corner in five years, covering a distance of about at a rate of per year. This is more than double the rate of per year observed in Europe. As of 2012, few negative impacts have been demonstrated in Florida, where the species is most prolific. However, the species is known as an aggressive competitor and there is concern that as populations continue to grow, native birds will be out-competed by the invaders. One study, however, found that Eurasian collared doves are not more aggressive or competitive than native mourning doves, despite similar dietary preferences. Population growth has ceased in areas where the species has long been established, such as Florida, and in these regions, recent observations suggest the population is in decline. The population is still growing exponentially in areas of more recent introduction; up to 2015, the Eurasian collared dove experienced a greater than 1.5% yearly population increase throughout nearly the entirety of its North American range. 
Carrying capacities appear to be highest in areas with higher temperatures and intermediate levels of development, such as suburban areas and some agricultural areas. While the spread of disease to native species has not been recorded in a study, Eurasian collared doves are known carriers of the parasite Trichomonas gallinae and pigeon paramyxovirus type 1. Both Trichomonas gallinae and pigeon paramyxovirus type 1 can spread to native birds via commingling at feeders and by consumption of doves by predators. Pigeon paramyxovirus type 1 is an emergent disease and has the potential to affect domestic poultry, making the Eurasian collared dove a threat to not only native biodiversity, but a possible economic threat, as well. Behaviour and ecology Breeding Eurasian collared doves typically breed close to human habitation wherever food resources are abundant and trees are available for nesting; almost all nests are within of inhabited buildings. The female lays two white eggs in a stick nest, which she incubates during the night and which the male incubates during the day. Incubation lasts between 14 and 18 days, with the young fledging after 15 to 19 days. Breeding occurs throughout the year when abundant food is available, though only rarely in winter in areas with cold winters such as northeastern Europe. Three to four broods per year are common, although up to six broods in a year have been recorded. Eurasian collared doves are a monogamous species, and share parental duties when caring for young. The male's mating display is a ritual flight, which, as with many other pigeons, consists of a rapid, near-vertical climb to height followed by a long glide downward in a circle, with the wings held below the body in an inverted "V" shape. At all other times, flight is typically direct using fast and clipped wing beats and without use of gliding. Food and feeding The Eurasian collared dove is not wary and often feeds very close to human habitation, including visiting bird tables; the largest populations are typically found around farms where spilt grain is frequent around grain stores or where livestock are fed. It is a gregarious species and sizeable winter flocks form around food supplies such as grain (its main food), seeds, shoots, and insects. Flocks most commonly number between 10 and 50, but flocks of up to 10,000 have been recorded.
Biology and health sciences
Columbimorphae
Animals
323990
https://en.wikipedia.org/wiki/Cash%20crop
Cash crop
A cash crop, also called a profit crop, is an agricultural crop which is grown to sell for profit. It is typically purchased by parties separate from a farm. The term is used to differentiate a marketed crop from a staple crop ("subsistence crop") in subsistence agriculture, which is one fed to the producer's own livestock or grown as food for the producer's family. In earlier times, cash crops were usually only a small (but vital) part of a farm's total yield, while today, especially in developed countries and among smallholders, almost all crops are mainly grown for revenue. In the least developed countries, cash crops are usually crops which attract demand in more developed nations, and hence have some export value. Prices for major cash crops are set in international trade markets with global scope, with some local variation (termed "basis") based on freight costs and local supply and demand balance. A consequence of this is that a nation, region, or individual producer relying on such a crop may suffer low prices should a bumper crop elsewhere lead to excess supply on the global markets. This system has been criticized by traditional farmers. Coffee is an example of a product that has been susceptible to significant commodity futures price variations. Globalization Issues involving subsidies and trade barriers on such crops have become controversial in discussions of globalization. Many developing countries take the position that the current international trade system is unfair because it has caused tariffs to be lowered in industrial goods while allowing for low tariffs and agricultural subsidies for agricultural goods. This makes it difficult for a developing nation to export its goods overseas, and forces developing nations to compete with imported goods which are exported from developed nations at artificially low prices. The practice of exporting at artificially low prices is known as dumping, and is illegal in most nations. Controversy over this issue led to the collapse of the Cancún trade talks in 2003, when the Group of 22 refused to consider agenda items proposed by the European Union unless the issue of agricultural subsidies was addressed. Per climate zones Arctic The Arctic climate is generally not conducive to the cultivation of cash crops. However, one potential cash crop for the Arctic is Rhodiola rosea, a hardy plant used as a medicinal herb that grows in the Arctic. There is currently consumer demand for the plant, but the available supply is less than the demand (as of 2011). Temperate Cash crops grown in regions with a temperate climate include many cereals (wheat, rye, corn, barley, oats), oil-yielding crops (e.g. rapeseed, mustard seeds), vegetables (e.g. potatoes), lumber-yielding trees (e.g. spruce, pine, fir), tree fruit or top fruit (e.g. apples, cherries) and soft fruit (e.g. strawberries, raspberries). Subtropical In regions with a subtropical climate, oil-yielding crops (e.g. soybeans), cotton, rice, tobacco, indigo, citrus, pomegranates, and some vegetables and herbs are the predominant cash crops. Tropical In regions with a tropical climate, coffee, cocoa, sugar cane, bananas, oranges, cotton and jute are common cash crops. The oil palm is a tropical palm tree, and the fruit from it is used to make palm oil. The impact of climate change on the ranges of pests and diseases, especially those of coffee, cocoa, and bananas, is commonly underestimated. Limiting temperature rise to is vital to maintaining productivity in the tropics.
By continent and country Africa Around 60 percent of African workers are employed in the agricultural sector, with about three-fifths of African farmers being subsistence farmers. For example, in Burkina Faso, 85% of residents (over two million people) are reliant upon cotton production for income, and over half of the country's population lives in poverty. Larger farms tend to grow cash crops such as coffee, tea, cotton, cocoa, fruit and rubber. These farms, typically operated by large corporations, cover dozens of square kilometres and employ large numbers of laborers. Subsistence farms provide a source of food and a relatively small income for families, but generally fail to produce enough to make re-investment possible. The situation in which African nations export crops while a significant number of people on the continent struggle with hunger has been blamed on developed countries, including the United States, Japan and the European Union. These countries protect their own agricultural sectors through high import tariffs and offer subsidies to their farmers, which some have contended leads to the overproduction of commodities such as cotton, grain and milk. The result of this is that the global price of such products is continually reduced until Africans are unable to compete in world markets, except in cash crops that do not grow easily in temperate climates. Africa has realized significant growth in biofuel plantations, many of which are on lands which were purchased by British companies. Jatropha curcas is a cash crop grown for biofuel production in Africa. Some have criticized the practice of raising non-food plants for export while Africa has problems with hunger and food shortages, and some studies have correlated the proliferation of land acquisitions, often for use in growing non-food cash crops, with increasing hunger rates in Africa. Australia Australia produces significant amounts of lentils. It was estimated in 2010 that Australia would produce approximately 143,000 tons of lentils. Most of Australia's lentil harvest is exported to the Indian subcontinent and the Middle East. Italy Italy's Cassa per il Mezzogiorno, established in 1950, led to the government implementing incentives to grow cash crops such as tomatoes, tobacco and citrus fruits. As a result, an overabundance of these crops was created, oversaturating the global market and causing these crops to depreciate in value. United States Cash cropping in the United States dates back to the colonial period, with crops like tobacco, indigo and cotton farmed on massive scales on southern plantations primarily fueled by black slave labor. Even after the end of slavery this system continued in some form as sharecropping, under which farmers would live and work on large plantations for a share of the crop to sell themselves. Cash cropping of fruits rose to prominence after the end of World War II and the baby boom that followed. It was seen as a way to feed the large population boom and continues to be the main factor in having an affordable food supply in the United States. According to the 1997 U.S. Census of Agriculture, 90% of the farms in the United States are still owned by families, with an additional 6% owned by partnerships. Cash crop farmers have utilized precision agricultural technologies combined with time-tested practices to produce affordable food.
Based upon United States Department of Agriculture (USDA) statistics for 2010, the states with the highest fruit production are California, Florida and Washington. Vietnam Coconut is a cash crop of Vietnam. Global cash crops Coconut palms are cultivated in more than 80 countries of the world, with a total production of 61 million tonnes per year. The oil and milk derived from the fruit are commonly used in cooking and frying; coconut oil is also widely used in soaps and cosmetics. Sustainability of cash crops Approximately 70% of the world's food is produced by 500 million smallholder farmers. For their livelihood they depend on the production of cash crops, basic commodities that are hard to differentiate in the market. The great majority (80%) of the world's farms measure 2 hectares or less. These smallholder farmers are mainly found in developing countries and are often unorganized and illiterate, or have only a basic education. Smallholder farmers have little bargaining power and low incomes, leaving them unable to invest much in scaling up their businesses. In general, farmers lack access to agricultural inputs and finance, and do not have enough knowledge of good agricultural and business practices. These high-level problems are in many cases threatening the future of agricultural sectors, and theories are evolving on how to secure a sustainable future for agriculture. Sustainable market transformations are initiated in which industry leaders work together in a pre-competitive environment to change market conditions. Sustainable intensification focuses on facilitating entrepreneurial farmers. To stimulate farm investment, projects on access to finance for agriculture are also emerging. One example is the SCOPE methodology, an assessment tool that measures the management maturity and professionalism of producer organizations, so as to give financing organizations better insight into the risks involved in financing. Currently, agricultural finance is generally considered risky and is avoided by financial institutions. Black market cash crops Coca, opium poppies and cannabis are significant black market cash crops, the prevalence of which varies. In the United States, cannabis is considered by some to be the most valuable cash crop. A 2006 study by Jon Gettman, a marijuana policy researcher, compared government figures for legal crops such as corn and wheat with the study's projections for U.S. cannabis production at that time, and cited cannabis as "the top cash crop in 12 states and among the top three cash crops in 30". The study also estimated cannabis production at the time to be valued at US$35.8 billion, exceeding the combined value of corn at $23.3 billion and wheat at $7.5 billion.
Technology
Basics_2
null
324375
https://en.wikipedia.org/wiki/Parameter%20%28computer%20programming%29
Parameter (computer programming)
In computer programming, a parameter or a formal argument is a special kind of variable used in a subroutine to refer to one of the pieces of data provided as input to the subroutine. These pieces of data are the values of the arguments (often called actual arguments or actual parameters) with which the subroutine is going to be called/invoked. An ordered list of parameters is usually included in the definition of a subroutine, so that, each time the subroutine is called, its arguments for that call are evaluated, and the resulting values can be assigned to the corresponding parameters. Unlike argument in usual mathematical usage, the argument in computer science is the actual input expression passed/supplied to a function, procedure, or routine in the invocation/call statement, whereas the parameter is the variable inside the implementation of the subroutine. For example, if one defines the add subroutine as def add(x, y): return x + y, then x, y are parameters, while if this is called as add(2, 3), then 2, 3 are the arguments. Variables (and expressions thereof) from the calling context can be arguments: if the subroutine is called as a = 2; b = 3; add(a, b) then the variables a, b are the arguments, not the values 2, 3. See the Parameters and arguments section for more information. The semantics for how parameters can be declared and how the (value of) arguments are passed to the parameters of subroutines are defined by the evaluation strategy of the language, and the details of how this is represented in any particular computer system depend on the calling convention of that system. In the most common case, call by value, a parameter acts within the subroutine as a new local variable initialized to the value of the argument (a local (isolated) copy of the argument if the argument is a variable), but in other cases, e.g. call by reference, the argument variable supplied by the caller can be affected by actions within the called subroutine. Example The following program in the C programming language defines a function that is named "SalesTax" and has one parameter named "price". The type of price is "double" (i.e. a double-precision floating point number). The function's return type is also a double. double SalesTax(double price) { return 0.05 * price; } After the function has been defined, it can be invoked as follows: SalesTax(10.00); In this example, the function has been invoked with the argument 10.00. When this happens, 10.00 will be assigned to price, and the function begins calculating its result. The steps for producing the result are specified below, enclosed in {}. 0.05 * price indicates that the first thing to do is multiply 0.05 by the value of price, which gives 0.50. return means the function will produce the result of 0.05 * price. Therefore, the final result (ignoring possible round-off errors one encounters with representing decimal fractions as binary fractions) is 0.50. Parameters and arguments The terms parameter and argument may have different meanings in different programming languages. Sometimes they are used interchangeably, and the context is used to distinguish the meaning. The term parameter (sometimes called formal parameter) is often used to refer to the variable as found in the function declaration, while argument (sometimes called actual parameter) refers to the actual input supplied at a function call statement. For example, if one defines a function as def f(x): ..., then x is the parameter, and if it is called by a = ...; f(a) then a is the argument. 
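The same distinction can be sketched in C, which the examples above also use. In the following minimal illustration (the function Discount, the variable rate, and the values used are hypothetical names and numbers chosen only for this example), the parameter receives a copy of whatever argument is supplied, so assigning to the parameter inside the function leaves the caller's variable unchanged under call by value:

#include <stdio.h>

/* 'percent' is the parameter: the variable named in the definition. */
double Discount(double percent)
{
    percent = percent / 100.0;   /* modifies only the local copy */
    return 1.0 - percent;
}

int main(void)
{
    double rate = 25.0;                 /* 'rate' and the literal 40.0 below are arguments */
    printf("%f\n", Discount(rate));     /* prints 0.750000 */
    printf("%f\n", Discount(40.0));     /* prints 0.600000 */
    printf("%f\n", rate);               /* still 25.000000: only the parameter's copy changed */
    return 0;
}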
A parameter is an (unbound) variable, while the argument can be a literal or variable or more complex expression involving literals and variables. In case of call by value, what is passed to the function is the value of the argument – for example, f(2) and a = 2; f(a) are equivalent calls – while in call by reference, with a variable as argument, what is passed is a reference to that variable - even though the syntax for the function call could stay the same. The specification for pass-by-reference or pass-by-value would be made in the function declaration and/or definition. Parameters appear in procedure definitions; arguments appear in procedure calls. In the function definition f(x) = x*x the variable x is a parameter; in the function call f(2) the value 2 is the argument of the function. Loosely, a parameter is a type, and an argument is an instance. A parameter is an intrinsic property of the procedure, included in its definition. For example, in many languages, a procedure to add two supplied integers together and calculate the sum would need two parameters, one for each integer. In general, a procedure may be defined with any number of parameters, or no parameters at all. If a procedure has parameters, the part of its definition that specifies the parameters is called its parameter list. By contrast, the arguments are the expressions supplied to the procedure when it is called, usually one expression matching one of the parameters. Unlike the parameters, which form an unchanging part of the procedure's definition, the arguments may vary from call to call. Each time a procedure is called, the part of the procedure call that specifies the arguments is called the argument list. Although parameters are also commonly referred to as arguments, arguments are sometimes thought of as the actual values or references assigned to the parameter variables when the subroutine is called at run-time. When discussing code that is calling into a subroutine, any values or references passed into the subroutine are the arguments, and the place in the code where these values or references are given is the parameter list. When discussing the code inside the subroutine definition, the variables in the subroutine's parameter list are the parameters, while the values of the parameters at runtime are the arguments. For example, in C, when dealing with threads it is common to pass in an argument of type void* and cast it to an expected type: void ThreadFunction(void* pThreadArgument) { // Naming the first parameter 'pThreadArgument' is correct, rather than // 'pThreadParameter'. At run time the value we use is an argument. As // mentioned above, reserve the term parameter for when discussing // subroutine definitions. } To better understand the difference, consider the following function written in C: int Sum(int addend1, int addend2) { return addend1 + addend2; } The function Sum has two parameters, named addend1 and addend2. It adds the values passed into the parameters, and returns the result to the subroutine's caller (using a technique automatically supplied by the C compiler). The code which calls the Sum function might look like this: int value1 = 40; int value2 = 2; int sum_value = Sum(value1, value2); The variables value1 and value2 are initialized with values. value1 and value2 are both arguments to the sum function in this context. At runtime, the values assigned to these variables are passed to the function Sum as arguments. 
In the Sum function, the parameters addend1 and addend2 are evaluated, yielding the arguments 40 and 2, respectively. The values of the arguments are added, and the result is returned to the caller, where it is assigned to the variable sum_value. Because of the difference between parameters and arguments, it is possible to supply inappropriate arguments to a procedure. The call may supply too many or too few arguments; one or more of the arguments may be a wrong type; or arguments may be supplied in the wrong order. Any of these situations causes a mismatch between the parameter and argument lists, and the procedure will often return an unintended answer or generate a runtime error. Alternative convention in Eiffel Within the Eiffel software development method and language, the terms argument and parameter have distinct uses established by convention. The term argument is used exclusively in reference to a routine's inputs, and the term parameter is used exclusively in type parameterization for generic classes. Consider the following routine definition: sum (addend1: INTEGER; addend2: INTEGER): INTEGER do Result := addend1 + addend2 end The routine sum takes two arguments addend1 and addend2, which are called the routine's formal arguments. A call to sum specifies actual arguments, as shown below with value1 and value2. sum_value: INTEGER value1: INTEGER = 40 value2: INTEGER = 2 … sum_value := sum (value1, value2) Parameters are also thought of as either formal or actual. Formal generic parameters are used in the definition of generic classes. In the example below, the class HASH_TABLE is declared as a generic class which has two formal generic parameters, G representing data of interest and K representing the hash key for the data: class HASH_TABLE [G, K -> HASHABLE] … When a class becomes a client to HASH_TABLE, the formal generic parameters are substituted with actual generic parameters in a generic derivation. In the following attribute declaration, my_dictionary is to be used as a character string based dictionary. As such, both data and key formal generic parameters are substituted with actual generic parameters of type STRING. my_dictionary: HASH_TABLE [STRING, STRING] Datatypes In strongly typed programming languages, each parameter's type must be specified in the procedure declaration. Languages using type inference attempt to discover the types automatically from the function's body and usage. Dynamically typed programming languages defer type resolution until run-time. Weakly typed languages perform little to no type resolution, relying instead on the programmer for correctness. Some languages use a special keyword (e.g. void) to indicate that the subroutine has no parameters; in formal type theory, such functions take an empty parameter list (whose type is not void, but rather unit). Argument passing The exact mechanism for assigning arguments to parameters, called argument passing, depends upon the evaluation strategy used for that parameter (typically call by value), which may be specified using keywords. Default arguments Some programming languages such as Ada, C++, Clojure, Common Lisp, Fortran 90, Python, Ruby, Tcl, and Windows PowerShell allow for a default argument to be explicitly or implicitly given in a subroutine's declaration. This allows the caller to omit that argument when calling the subroutine. If the default argument is explicitly given, then that value is used if it is not provided by the caller. 
If the default argument is implicit (sometimes by using a keyword such as Optional) then the language provides a well-known value (such as null, Empty, zero, an empty string, etc.) if a value is not provided by the caller. PowerShell example: function doc($g = 1.21) { "$g gigawatts? $g gigawatts? Great Scott!" } PS > doc 1.21 gigawatts? 1.21 gigawatts? Great Scott! PS > doc 88 88 gigawatts? 88 gigawatts? Great Scott! Default arguments can be seen as a special case of the variable-length argument list. Variable-length parameter lists Some languages allow subroutines to be defined to accept a variable number of arguments. For such languages, the subroutines must iterate through the list of arguments. PowerShell example: function marty { $args | foreach { "back to the year $_" } } PS > marty 1985 back to the year 1985 PS > marty 2015 1985 1955 back to the year 2015 back to the year 1985 back to the year 1955 Named parameters Some programming languages—such as Ada and Windows PowerShell—allow subroutines to have named parameters. This allows the calling code to be more self-documenting. It also provides more flexibility to the caller, often allowing the order of the arguments to be changed, or for arguments to be omitted as needed. PowerShell example: function jennifer($adjectiveYoung, $adjectiveOld) { "Young Jennifer: I'm $adjectiveYoung!" "Old Jennifer: I'm $adjectiveOld!" } PS > jennifer 'fresh' 'experienced' Young Jennifer: I'm fresh! Old Jennifer: I'm experienced! PS > jennifer -adjectiveOld 'experienced' -adjectiveYoung 'fresh' Young Jennifer: I'm fresh! Old Jennifer: I'm experienced! Multiple parameters in functional languages In lambda calculus, each function has exactly one parameter. What is thought of as functions with multiple parameters is usually represented in lambda calculus as a function which takes the first argument, and returns a function which takes the rest of the arguments; this is a transformation known as currying. Some programming languages, like ML and Haskell, follow this scheme. In these languages, every function has exactly one parameter, and what may look like the definition of a function of multiple parameters is actually syntactic sugar for the definition of a function that returns a function, etc. Function application is left-associative in these languages as well as in lambda calculus, so what looks like an application of a function to multiple arguments is correctly evaluated as the function applied to the first argument, then the resulting function applied to the second argument, etc. Output parameters An output parameter, also known as an out parameter or return parameter, is a parameter used for output, rather than the more usual use for input. Using call by reference parameters, or call by value parameters where the value is a reference, as output parameters is an idiom in some languages, notably C and C++, while other languages have built-in support for output parameters. Languages with built-in support for output parameters include Ada (see Ada subprograms), Fortran (since Fortran 90; see Fortran "intent"), various procedural extensions to SQL, such as PL/SQL (see PL/SQL functions) and Transact-SQL, C# and the .NET Framework, Swift, and the scripting language TScript (see TScript function declarations). More precisely, one may distinguish three types of parameters or parameter modes: input parameters, output parameters, and input/output parameters; these are often denoted in, out, and in out or inout. 
An input argument (the argument to an input parameter) must be a value, such as an initialized variable or literal, and must not be redefined or assigned to; an output argument must be an assignable variable, but it need not be initialized, any existing value is not accessible, and must be assigned a value; and an input/output argument must be an initialized, assignable variable, and can optionally be assigned a value. The exact requirements and enforcement vary between languages – for example, in Ada 83 output parameters can only be assigned to, not read, even after assignment (this was removed in Ada 95 to remove the need for an auxiliary accumulator variable). These are analogous to the notion of a value in an expression being an r-value (has a value), an l-value (can be assigned), or an r-value/l-value (has a value and can be assigned), respectively, though these terms have specialized meanings in C. In some cases only input and input/output are distinguished, with output being considered a specific use of input/output, and in other cases only input and output (but not input/output) are supported. The default mode varies between languages: in Fortran 90 input/output is default, while in C# and SQL extensions input is default, and in TScript each parameter is explicitly specified as input or output. Syntactically, parameter mode is generally indicated with a keyword in the function declaration, such as void f(out int x) in C#. Conventionally output parameters are often put at the end of the parameter list to clearly distinguish them, though this is not always followed. TScript uses a different approach, where in the function declaration input parameters are listed, then output parameters, separated by a colon (:) and there is no return type to the function itself, as in this function, which computes the size of a text fragment: TextExtent(WString text, Font font : Integer width, Integer height) Parameter modes are a form of denotational semantics, stating the programmer's intent and allowing compilers to catch errors and apply optimizations – they do not necessarily imply operational semantics (how the parameter passing actually occurs). Notably, while input parameters can be implemented by call by value, and output and input/output parameters by call by reference – and this is a straightforward way to implement these modes in languages without built-in support – this is not always how they are implemented. This distinction is discussed in detail in the Ada '83 Rationale, which emphasizes that the parameter mode is abstracted from which parameter passing mechanism (by reference or by copy) is actually implemented. For instance, while in C# input parameters (default, no keyword) are passed by value, and output and input/output parameters (out and ref) are passed by reference, in PL/SQL input parameters (IN) are passed by reference, and output and input/output parameters (OUT and IN OUT) are by default passed by value and the result copied back, but can be passed by reference by using the NOCOPY compiler hint. A syntactically similar construction to output parameters is to assign the return value to a variable with the same name as the function. This is found in Pascal and Fortran 66 and Fortran 77, as in this Pascal example: function f(x, y: integer): integer; begin f := x + y; end; This is semantically different in that when called, the function is simply evaluated – it is not passed a variable from the calling scope to store the output in. 
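As a rough illustration of how these modes map onto a language without mode keywords, the following C sketch (with hypothetical function and variable names, showing one common convention rather than a prescribed idiom) treats a by-value parameter as an input, a pointer that is only written through as an output, and a pointer that is both read and updated as an input/output:

#include <stdio.h>

/* 'celsius' acts as an input parameter (passed by value), 'fahrenheit' as an
   output parameter (assigned through, never read), and 'count' as an
   input/output parameter (read and then updated). */
void ConvertTemperature(double celsius, double *fahrenheit, int *count)
{
    *fahrenheit = celsius * 9.0 / 5.0 + 32.0;
    *count = *count + 1;
}

int main(void)
{
    double f;              /* purely an output: need not be initialized */
    int conversions = 0;   /* input/output: must hold a meaningful value first */
    ConvertTemperature(100.0, &f, &conversions);
    printf("%f %d\n", f, conversions);   /* prints 212.000000 1 */
    return 0;
}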
Use The primary use of output parameters is to return multiple values from a function, while the use of input/output parameters is to modify state using parameter passing (rather than by shared environment, as in global variables). An important use of returning multiple values is to solve the semipredicate problem of returning both a value and an error status – see Semipredicate problem: Multivalued return. For example, to return two variables from a function in C, one may write: int width; int height; F(x, &width, &height); where x is an input parameter and width and height are output parameters. A common use case in C and related languages is for exception handling, where a function places the return value in an output variable, and returns a Boolean corresponding to whether the function succeeded or not. An archetypal example is the TryParse method in .NET, especially C#, which parses a string into an integer, returning true on success and false on failure. This has the following signature: public static bool TryParse(string s, out int result) and may be used as follows: int result; if (!Int32.TryParse(s, out result)) { // exception handling } Similar considerations apply to returning a value of one of several possible types, where the return value can specify the type and the value is then stored in one of several output variables. Drawbacks Output parameters are often discouraged in modern programming, essentially as being awkward, confusing, and too low-level – commonplace return values are considerably easier to understand and work with. Notably, output parameters involve functions with side effects (modifying the output parameter) and are semantically similar to references, which are more confusing than pure functions and values, and the distinction between output parameters and input/output parameters can be subtle. Further, since in common programming styles most parameters are simply input parameters, output parameters and input/output parameters are unusual and hence susceptible to misunderstanding. Output and input/output parameters prevent function composition, since the output is stored in variables, rather than in the value of an expression. Thus one must initially declare a variable, and then each step of a chain of functions must be a separate statement. For example, in C++ the following function composition: Object obj = G(y, F(x)); when written with output and input/output parameters instead becomes (for F it is an output parameter, for G an input/output parameter): Object obj; F(x, &obj); G(y, &obj); In the special case of a function with a single output or input/output parameter and no return value, function composition is possible if the output or input/output parameter (or in C/C++, its address) is also returned by the function, in which case the above becomes: Object obj; G(y, F(x, &obj)); Alternatives There are various alternatives to the use cases of output parameters. For returning multiple values from a function, an alternative is to return a tuple. Syntactically this is clearer if automatic sequence unpacking and parallel assignment can be used, as in Go or Python, such as: def f(): return 1, 2 a, b = f() For returning a value of one of several types, a tagged union can be used instead; the most common cases are nullable types (option types), where the return value can be null to indicate failure. For exception handling, one can return a nullable type, or raise an exception. 
For example, in Python one might have either: result = parse(s) if result is None: # exception handling or, more idiomatically: try: result = parse(s) except ParseError: # exception handling The micro-optimization of not requiring a local variable and copying the return when using output variables can also be applied to conventional functions and return values by sufficiently sophisticated compilers. The usual alternative to output parameters in C and related languages is to return a single data structure containing all return values. For example, given a structure encapsulating width and height, one can write: WidthHeight width_and_height = F(x); In object-oriented languages, instead of using input/output parameters, one can often use call by sharing, passing a reference to an object and then mutating the object, though not changing which object the variable refers to.
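A fuller version of the struct-return alternative mentioned above might look as follows in C (a minimal sketch; WidthHeight and F are the hypothetical names from the example, and the computation inside F is invented purely for illustration):

#include <stdio.h>

typedef struct {
    int width;
    int height;
} WidthHeight;

/* Returns both values in a single structure instead of writing them
   through output parameters. */
WidthHeight F(int x)
{
    WidthHeight result;
    result.width  = 2 * x;   /* invented computation, purely for illustration */
    result.height = 3 * x;
    return result;
}

int main(void)
{
    WidthHeight width_and_height = F(10);
    printf("%d %d\n", width_and_height.width, width_and_height.height);   /* prints 20 30 */
    return 0;
}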
Technology
Software development: General
null
324498
https://en.wikipedia.org/wiki/Mortar%20%28masonry%29
Mortar (masonry)
Mortar is a workable paste which hardens to bind building blocks such as stones, bricks, and concrete masonry units, to fill and seal the irregular gaps between them, to spread their weight evenly, and sometimes to add decorative colours or patterns to masonry walls. In its broadest sense, mortar includes pitch, asphalt, and soft clay, such as those used between bricks, as well as cement mortar. The word "mortar" comes from the Old French word mortier, "builder's mortar, plaster; bowl for mixing" (13th century). Cement mortar becomes hard when it cures, resulting in a rigid aggregate structure; however, the mortar functions as a weaker component than the building blocks and serves as the sacrificial element in the masonry, because mortar is easier and less expensive to repair than the building blocks. Bricklayers typically make mortars using a mixture of sand, a binder, and water. The most common binder since the early 20th century is Portland cement, but the ancient binder lime (producing lime mortar) is still used in some specialty new construction. Lime, lime mortar, and gypsum in the form of plaster of Paris are used particularly in the repair and repointing of historic buildings and structures, so that the repair materials will be similar in performance and appearance to the original materials. Several types of cement mortars and additives exist. Ancient mortar The first mortars were made of mud and clay, as demonstrated in the 10th millennium BCE buildings of Jericho and the 8th millennium BCE buildings of Ganj Dareh. According to Roman Ghirshman, the first evidence of humans using a form of mortar was at Mehrgarh in Baluchistan in what is today Pakistan, built of sun-dried bricks in 6500 BCE. Gypsum mortar, also called plaster of Paris, was used in the construction of many ancient structures. It is made from gypsum, which requires a lower firing temperature. It is therefore easier to make than lime mortar and sets up much faster, which may be a reason it was used as the typical mortar in ancient brick arch and vault construction. Gypsum mortar is not as durable as other mortars in damp conditions. In the Indian subcontinent, multiple cement types have been observed in the sites of the Indus Valley civilization, with gypsum appearing at sites such as the Mohenjo-daro city-settlement, which dates to earlier than 2600 BCE. Gypsum cement that was "light grey and contained sand, clay, traces of calcium carbonate, and a high percentage of lime" was used in the construction of wells, drains, and on the exteriors of "important looking buildings." Bitumen mortar was also used at a lower frequency, including in the Great Bath at Mohenjo-daro. In early Egyptian pyramids, which were constructed during the Old Kingdom (~2600–2500 BCE), the limestone blocks were bound by a mortar of mud and clay, or clay and sand. In later Egyptian pyramids, the mortar was made of gypsum or lime. Gypsum mortar was essentially a mixture of plaster and sand and was quite soft. Babylonian constructions of the 2nd millennium BCE used lime or pitch for mortar. Historically, building with concrete and mortar next appeared in Greece. The excavation of the underground aqueduct of Megara revealed that a reservoir was coated with a pozzolanic mortar 12 mm thick. This aqueduct dates back to c. 500 BCE. Pozzolanic mortar is a lime-based mortar, but is made with an additive of volcanic ash that allows it to be hardened underwater; thus it is known as hydraulic cement. 
The Greeks obtained the volcanic ash from the Greek islands Thira and Nisiros, or from the then Greek colony of Dicaearchia (Pozzuoli) near Naples, Italy. The Romans later improved the use and methods of making what became known as pozzolanic mortar and cement. Even later, the Romans used a mortar without pozzolana, made with crushed terra cotta instead, which introduced aluminum oxide and silicon dioxide into the mix. This mortar was not as strong as pozzolanic mortar, but, because it was denser, it better resisted penetration by water. Hydraulic mortar was not available in ancient China, possibly due to a lack of volcanic ash. Around 500 CE, sticky rice soup was mixed with slaked lime to make an inorganic-organic composite sticky rice mortar that had more strength and water resistance than lime mortar. It is not understood how the art of making hydraulic mortar and cement, which was perfected and in such widespread use by both the Greeks and Romans, was then lost for almost two millennia. During the Middle Ages, when the Gothic cathedrals were being built, the only active ingredient in the mortar was lime. Since cured lime mortar can be degraded by contact with water, many structures suffered over the centuries from wind-blown rain. Ordinary Portland cement mortar Ordinary Portland cement mortar, commonly known as OPC mortar or just cement mortar, is created by mixing powdered ordinary Portland cement, fine aggregate and water. It was invented in 1794 by Joseph Aspdin and patented on 18 December 1824, largely as a result of efforts to develop stronger mortars. It was made popular during the late nineteenth century, and by 1930 had become more popular than lime mortar as a construction material. The advantages of Portland cement are that it sets hard and quickly, allowing a faster pace of construction. Furthermore, fewer skilled workers are required in building a structure with Portland cement. As a general rule, however, Portland cement should not be used for the repair or repointing of older buildings built in lime mortar, which require the flexibility, softness and breathability of lime if they are to function correctly. In the United States and other countries, five standard types of mortar (available as dry pre-mixed products) are generally used for both new construction and repair. The strength of the mortar depends on the mix ratio for each type of mortar, which is specified under the ASTM standards. These premixed mortar products are designated by one of the five letters, M, S, N, O, and K. Type M mortar is the strongest, and Type K the weakest. The mix ratio is always expressed by volume. These type letters are taken from the alternate letters of the words "MaSoN wOrK". Polymer cement mortar Polymer cement mortars (PCM) are materials made by partially replacing the cement hydrate binders of conventional cement mortar with polymers. The polymeric admixtures include latexes or emulsions, redispersible polymer powders, water-soluble polymers, liquid thermoset resins and monomers. Although they increase the cost of the mortar when used as additives, they enhance its properties. Polymer mortar has low permeability, which can lead to detrimental moisture accumulation when used to repair a traditional brick, block or stone wall. It is mainly designed for repairing concrete structures. The use of recovered plastics in mortars is being researched and is gaining ground. Depolymerizing PET for use as a polymeric binder to enhance mortars is actively being studied. 
Lime mortar The setting speed can be increased by using impure limestone in the kiln, to form a hydraulic lime that will set on contact with water. Such a lime must be stored as a dry powder. Alternatively, a pozzolanic material such as calcined clay or brick dust may be added to the mortar mix. Addition of a pozzolanic material will make the mortar set reasonably quickly by reaction with the water. It would be problematic to use Portland cement mortars to repair older buildings originally constructed using lime mortar. Lime mortar is softer than cement mortar, allowing brickwork a certain degree of flexibility to adapt to shifting ground or other changing conditions. Cement mortar is harder and allows little flexibility. The contrast can cause brickwork to crack where the two mortars are present in a single wall. Lime mortar is considered breathable in that it will allow moisture to freely move through and evaporate from the surface. In old buildings with walls that shift over time, cracks can be found which allow rain water into the structure. The lime mortar allows this moisture to escape through evaporation and keeps the wall dry. Re-pointing or rendering an old wall with cement mortar stops the evaporation and can cause problems associated with moisture behind the cement. Pozzolanic mortar Pozzolana is a fine, sandy volcanic ash. It was originally discovered and dug at Pozzuoli, near Mount Vesuvius in Italy, and was subsequently mined at other sites, too. The Romans learned that pozzolana added to lime mortar allowed the lime to set relatively quickly and even under water. Vitruvius, the Roman architect, spoke of four types of pozzolana. It is found in all the volcanic areas of Italy in various colours: black, white, grey and red. Pozzolana has since become a generic term for any siliceous and/or aluminous additive to slaked lime to create hydraulic cement. Finely ground and mixed with lime, it is a hydraulic cement, like Portland cement, and makes a strong mortar that will also set under water. The fact that the materials involved in the creation of pozzolana are found in abundance within certain territories makes its use more common there, with parts of Central and Southern Europe being examples (largely because of Europe's many notable volcanoes). It has, as such, been commonly associated with a variety of large structures constructed by the Roman Empire. Radiocarbon dating As the mortar hardens, the current atmosphere is encased in the mortar and thus provides a sample for analysis. Various factors affect the sample and raise the margin of error for the analysis. Radiocarbon dating of mortar began as early as the 1960s, soon after the method was established (Delibrias and Labeyrie 1964; Stuiver and Smith 1965; Folk and Valastro 1976). The very first data were provided by van Strydonck et al. (1983), Heinemeier et al. (1997) and Ringbom and Remmer (1995). Methodological aspects were further developed by different groups (an international team headed by Åbo Akademi University, and teams from CIRCE, CIRCe, ETHZ, Poznań, RICH and the Milano-Bicocca laboratory). To evaluate the different anthropogenic carbon extraction methods for radiocarbon dating as well as to compare the different dating methods, i.e. radiocarbon and OSL, the first intercomparison study (MODIS) was set up and published in 2017.
Technology
Building materials
null
324499
https://en.wikipedia.org/wiki/Mortar%20%28weapon%29
Mortar (weapon)
A mortar today is usually a simple, lightweight, man-portable, muzzle-loaded cannon, consisting of a smooth-bore (although some models use a rifled barrel) metal tube fixed to a base plate (to spread out the recoil) with a lightweight bipod mount and a sight. Mortars are typically used as indirect fire weapons for close fire support with a variety of ammunition. Historically mortars were heavy siege artillery. Mortars launch explosive shells (technically called bombs) in high-arching ballistic trajectories. History Mortars have been used for hundreds of years. The earliest reported use of mortars was in Korea in a 1413 naval battle when Korean gunsmiths developed the wan'gu (gourd-shaped mortar) (완구, 碗口). The earliest version of the wan'gu dates back to 1407. Ch'oe Hae-san (1380–1443), the son of Ch'oe Mu-sŏn (1325–1395), is generally credited with inventing the wan'gu. In the Ming dynasty, general Qi Jiguang recorded the use of a mini cannon called the hu dun pao that was similar to the mortar. The first use in siege warfare was at the 1453 siege of Constantinople by Mehmed the Conqueror. An Italian account of the 1456 siege of Belgrade by Giovanni da Tagliacozzo states that the Ottoman Turks used seven mortars that fired "stone shots one Italian mile high". The time of flight of these was apparently long enough that casualties could be avoided by posting observers to give warning of their trajectories. Early mortars, such as the Pumhart von Steyr, were large and heavy and could not be easily transported. Simply made, these weapons were no more than iron bowls reminiscent of the kitchen and apothecary mortars whence they drew their name. An early transportable mortar was invented by Baron Menno van Coehoorn in 1701. This mortar fired an exploding shell, which had a fuse that was lit by the hot gases when fired. The Coehorn mortar gained quick popularity, necessitating a new form of naval ship, the bomb vessel. Mortars played a significant role in the Venetian conquest of Morea, and in the course of this campaign an ammunition depot in the Parthenon was blown up. An early use of these more mobile mortars as field artillery (rather than siege artillery) was by British forces in the suppression of the Jacobite rising of 1719 at the Battle of Glen Shiel. High angle trajectory mortars held a great advantage over standard field guns in the rough terrain of the West Highlands of Scotland. The mortar had fallen out of general use in Europe by the Napoleonic era, although Manby Mortars were widely used on the coast to launch lines to ships in distress, and interest in their use as a weapon was not revived until the beginning of the 20th century. Mortars were heavily used by both sides during the American Civil War. At the Siege of Vicksburg, General Ulysses S. Grant reported making mortars "by taking logs of the toughest wood that could be found, boring them out for shells and binding them with strong iron bands. These answered as Coehorns, and shells were successfully thrown from them into the trenches of the enemy". During the Russo-Japanese War, Lieutenant General Leonid Gobyato of the Imperial Russian Army applied the principles of indirect fire from closed firing positions in the field; and with the collaboration of General Roman Kondratenko, he designed the first mortar that fired navy shells. The German Army studied the Siege of Port Arthur, where heavy artillery had been unable to destroy defensive structures like barbed wire and bunkers. 
Consequently they developed a short-barrelled rifled muzzle-loading mortar called the Minenwerfer. Heavily used during World War I, they were made in three sizes: , , and . Types Stokes mortar It was not until the Stokes mortar was devised by Sir Wilfred Stokes in 1915 during the First World War that the modern mortar transportable by one person was born. In the conditions of trench warfare, there was a great need for a versatile and easily portable weapon that could be manned by troops under cover in the trenches. Stokes' design was initially rejected in June 1915 because it was unable to use existing stocks of British mortar ammunition, and it took the intervention of David Lloyd George (at that time Minister of Munitions) and Lieutenant Colonel J. C. Matheson of the Trench Warfare Supply Department (who reported to Lloyd George) to expedite manufacture of the Stokes mortar. The weapon proved to be extremely useful in the muddy trenches of the Western Front, as a mortar round could be aimed to fall directly into trenches, where artillery shells, because of their low angle of flight, could not possibly go. The Stokes mortar was a simple muzzle-loaded weapon, consisting of a smoothbore metal tube fixed to a base plate (to absorb recoil) with a lightweight bipod mount. When a mortar bomb was dropped into the tube, an impact sensitive primer in the base of the bomb would make contact with a firing pin at the base of the tube and detonate, firing the bomb towards the target. The Stokes mortar could fire as many as 25 bombs per minute and had a maximum range of , firing the original cylindrical unstabilised projectile. A modified version of the mortar, which fired a modern fin-stabilised streamlined projectile and had a booster charge for longer range, was developed after World War I; this was in effect a new weapon. By World War II, it could fire as many as 30 bombs per minute and had a range of over with some shell types. The French developed an improved version of the Stokes mortar as the Brandt Mle 27, further refined as the Brandt Mle 31; this design was widely copied with and without license. These weapons were the prototypes for all subsequent light mortar developments around the world. Mortar carrier Mortar carriers are vehicles which carry a mortar as a primary weapon. Numerous vehicles have been used to mount mortars, from improvised civilian trucks used by insurgents, to modified infantry fighting vehicles, such as variants of the M3 half-track and M113 armored personnel carrier, to vehicles specifically intended to carry a mortar. Simpler vehicles carry a standard infantry mortar while in more complex vehicles the mortar is fully integrated into the vehicle and cannot be dismounted from the vehicle. Mortar carriers cannot be fired while on the move, and some must be dismounted to fire. There are numerous armoured fighting vehicles and even main battle tanks that can be equipped with a mortar, either outside or inside of the cabin. The Israeli Merkava tank uses a mortar as a secondary armament. The Russian army uses the 2S4 Tyulpan self-propelled heavy mortar which is one of the largest mortars in current use. Gun-mortars Gun-mortars are breech-loaded mortars usually equipped with a hydraulic recoil mechanism, and sometimes equipped with an autoloader. They are usually mounted on an armoured vehicle and are capable of both direct fire and indirect fire. 
The archetypes are the Brandt Mle CM60A1 and Brandt 60 mm LR, which combine features of modern infantry mortars together with those of modern cannon. Such weapons are most commonly smoothbore, firing fin-stabilised rounds and using relatively small propellant charges in comparison to projectile weight, although some, such as the 2S31 Vena and 2S9 Nona, have been fitted with rifled barrels. They have short barrels in comparison to guns and are much more lightly built than guns of a similar calibre – all characteristics of infantry mortars. This produces a hybrid weapon capable of engaging area targets with indirect high-angle fire, and also specific targets such as vehicles and bunkers with direct fire. Such hybrids are much heavier and more complicated than infantry mortars, superior to rocket-propelled grenades in the anti-armour and bunker-busting role, but have a reduced range compared to modern gun-howitzers and inferior anti-tank capability compared to modern anti-tank guided weapons. However, they do have a niche in, for example, providing a multi-role anti-personnel, anti-armour capability in light mobile formations. Such systems, like the Soviet 120 mm 2S9 Nona, are mostly self-propelled (although a towed variant exists). The AMOS (Advanced Mortar System) is an example of an even more advanced gun mortar system. It uses a 120 mm automatic twin-barrelled, breech-loaded mortar turret, which can be mounted on a variety of armoured vehicles and attack boats. A modern example of a gun-mortar is the 2B9 Vasilek. Spigot mortar A spigot mortar consists mainly of a solid rod or spigot, onto which a hollow tube in the projectile fits—inverting the normal tube-mortar arrangement. At the top of the tube in the projectile, a cavity contains propellant, such as cordite. There is usually a trigger mechanism built into the base of the spigot, with a long firing pin running up the length of the spigot, activating a primer inside the projectile and firing the propellant charge. The advantage of a spigot mortar is that the firing unit (baseplate and spigot) is smaller and lighter than a conventional tube mortar of equivalent payload and range. It is also somewhat simpler to manufacture. Further, most spigot mortars have no barrel in the conventional sense, which means ammunition of almost any weight and diameter can be fired from the same mortar. The disadvantage is that while most mortar bombs have a streamlined shape towards the back that fits a spigot mortar application well, using that space for the spigot mortar tube takes volume and mass away from the payload of the projectile. If a soldier is carrying only a few projectiles, the projectile weight disadvantage is not significant. However, the weight of a large quantity of the heavier and more complex spigot projectiles offsets the weight saved. A near-silent mortar can operate using the spigot principle. Each round has a close-fitting sliding plug in the tube that fits over the spigot. When the round is fired, the projectile is pushed off the spigot, but before the plug clears the spigot it is caught by a constriction at the base of the tube. This traps the gases from the propelling charge and hence the sound of the firing. After World War II, the Belgian Fly-K silent spigot mortar was accepted into French service as the TN-8111. Spigot mortars generally fell out of favour after World War II and were replaced by smaller conventional mortars. 
Military applications of spigot mortars include: The petard mortar used on the Churchill AVRE by Britain in World War II. The Type 98 mortar used by Japan in World War II to some psychological effect in the battles of Iwo Jima and Okinawa The Blacker Bombard and PIAT anti-tank launchers used by Britain in World War II. The Hedgehog launcher, used from the deck of a ship, used 24 spigot mortars which fired a diamond pattern of anti-submarine projectiles into the sea ahead of the ship. A sinking projectile detonated if it struck a submarine, and the pattern was such that any submarine partly in the landing zone of the projectiles would be struck one or more times. Non-military applications include the use of small-calibre spigot mortars to launch lightweight, low-velocity foam dummy targets used for training retriever dogs for bird hunting. Simple launchers use a separate small primer cap as the sole propellant (similar or identical to the cartridges used in industrial nail guns). Improvised Insurgent groups often use improvised, or "homemade" mortars to attack fortified military installations or terrorise civilians. They are usually constructed from heavy steel piping mounted on a steel frame. These weapons may fire standard mortar rounds, purpose-made shells, repurposed gas cylinders filled with explosives and shrapnel, or any other type of improvised explosive, incendiary or chemical munitions. These were called "barrack busters" by the Provisional Irish Republican Army (PIRA). Syrian civil war Improvised mortars used by insurgents in the Syrian civil war are known as hell cannons. Observers have noted that they are "wildly inaccurate" and responsible for hundreds of civilian deaths. Sri Lankan civil war Improvised mortars used in the Sri Lankan civil war by the rebel Tamil Tigers are known as "Pasilan 2000", also known as a "rocket mortar" or "Arti-mortar" like the cannon, successor to the Baba mortar used by the LTTE for ground operations since the 1980s. As Baba mortar rounds contained tar, they caused a fire when they hit the ground. The Baba, the prototype mortar, was crude. But with time the weapon has improved. The Pasilan 2000, the improved version, has been developed with characteristics similar to a rocket launcher. The Pasilan 2000 was a heavy mortar fired from a mobile launcher mounted on a tractor. The shell does not emit constant muzzle flares like artillery or MBRL. This is ideal for LTTE's camouflage and conceals attacking style. Once a round is fired, forward observers/spies/civilian spotters can correct the fire. The way the tube is installed is similar to the positioning of rocket pods. The length and calibre of the barrel indicate Pasilan 2000 system has common features to the Chinese made Type 82 30-tube MLRS (introduced by the Palestinian Liberation Army (PLA) in the early 1980s) rather than rail-guided Katyusha variants such as the Qassam Rocket. The warhead weight is and it is filled with TNT. It had a range of . The rocket has since then undergone some modifications. The Pasilan 2000 was more lethal than Baba mortar. But it was not heavily used for ground attacks during the Eelam War IV. Modern Design Most modern mortar systems consist of four main components: a barrel, a base plate, a bipod and a sight. Modern mortars normally range in calibre from 60 mm (2.36 in) to 120 mm (4.72 in). However, both larger and smaller mortars have been produced. The modern mortar is a muzzle-loaded weapon and relatively simple to operate. 
It consists of a barrel into which the gunners drop a mortar round. When the round reaches the base of the barrel, it hits a fixed firing pin that fires the round. The barrel is generally set at an angle of between 45 and 85 degrees (800 to 1500 mils), with the higher angle producing a shorter horizontal trajectory. Some mortars have a moving firing pin, operated by a lanyard or trigger mechanism. Ammunition Ammunition for mortars generally comes in two main varieties: fin-stabilised and spin-stabilised. Examples of the former have short fins on their posterior portion, which control the path of the bomb in flight. Spin-stabilised mortar bombs rotate as they travel along and leave the mortar tube, which stabilises them in much the same way as a rifle bullet. Rounds of both types may be illumination (infrared or visible), smoke, high-explosive, or training rounds. Mortar bombs are often referred to, incorrectly, as "mortars". Operators may fire spin-stabilised rounds from either a smoothbore or a rifled barrel. Rifled mortars are more accurate but slower to load. Since mortars are generally muzzle-loaded, mortar bombs for rifled barrels usually have a pre-engraved band, called an obturator, that engages with the rifling of the barrel. Exceptions to this are the U.S. M2 4.2-inch mortar and M30 mortar, whose ammunition has a sub-calibre expandable ring that enlarges when fired. This allows the projectile to slide down the barrel freely but grip the rifling when fired. The system resembles the Minié ball for muzzle-loading rifles. For extra range, propellant rings (augmentation charges) are attached to the bomb's fins. The rings are usually easy to remove, because they have a major influence on the speed and thus the range of the bomb. Some mortar rounds can be fired without any augmentation charges, e.g., the 81 mm L16 mortar. Precision guided The XM395 Precision Guided Mortar Munition (PGMM) is a 120 mm guided mortar round developed by Alliant Techsystems. Based on Orbital ATK's Precision Guidance Kit for 155 mm artillery projectiles, XM395 combines GPS guidance and directional control surfaces into a package that replaces standard fuses, transforming existing 120 mm mortar bodies into precision-guided munitions. The XM395 munition consists of a GPS-guided kit fitted to standard 120 mm smoothbore mortar rounds, comprising a nose and tail subsystem that contains the maneuvering parts. The Strix mortar round is a Swedish endphase-guided projectile fired from a 120 mm mortar currently manufactured by Saab Bofors Dynamics. STRIX is fired like a conventional mortar round. The round contains an infrared imaging sensor that it uses to guide itself onto any tank or armoured fighting vehicle in the vicinity where it lands. The seeker is designed to ignore targets that are already burning. Launched from any 120 mm mortar, STRIX has a normal range of up to . The addition of a special sustainer motor increases the range to . The GMM 120 (Guided Mortar Munition 120; known as Patzmi; also referred to as Morty) is a GPS and/or laser-guided mortar munition, which was developed by Israel Military Industries. Another Israeli guided mortar is Iron Sting, developed by Elbit. The Russian KM-8 Gran is also laser-guided. Compared to long range artillery Modern mortars and their ammunition are generally much smaller and lighter than long range artillery, such as field guns and howitzers, which allows light and medium mortars to be considered light weapons; i.e. 
capable of transport by personnel without vehicle assistance. Mortars are short-range weapons and often more effective than long range artillery for many purposes within their shorter range. In particular, because of its high, parabolic trajectory with a near vertical descent, the mortar can land bombs on nearby targets, including those behind obstacles or in fortifications, such as light vehicles behind hills or structures, or infantry in trenches or spider holes. This also makes it possible to launch attacks from positions lower than the target of the attack. (For example, long-range artillery could not shell a target away and higher, a target easily accessible to a mortar.) In trench warfare, mortars can use plunging fire directly into the enemy trenches, which is very hard or impossible to accomplish with long range artillery because of its much flatter trajectory. Mortars are also highly effective when used from concealed positions, such as the natural escarpments on hillsides or from woods, especially if forward observers are being employed in strategic positions to direct fire, an arrangement where the mortar is in relatively close proximity both to its forward observer and its target, allowing for fire to be quickly and accurately delivered with lethal effect. Mortars suffer from instability when used on snow or soft ground, because the recoil pushes them into the ground or snow unevenly. A Raschen bag addresses this problem. Fin-stabilised mortar bombs do not have to withstand the rotational forces placed upon them by rifling or greater pressures, and can therefore carry a higher payload in a thinner skin than rifled artillery ammunition. Because of the difference in available volume, a smooth-bore mortar of a given diameter will have a greater explosive yield than a similarly sized artillery shell of a gun or howitzer. For example, a 120 mm mortar bomb has approximately the same explosive capability as a 152 mm/155 mm artillery shell. Also, fin-stabilised munitions fired from a smooth-bore, which do not rely on the spin imparted by a rifled bore for greater accuracy, do not have the drawback of veering in the direction of the spin. Largest mortars From the 17th to the mid-20th century, very heavy, relatively immobile siege mortars were used, of up to calibre, often made of cast iron and with an outside barrel diameter many times that of the bore diameter. An early example was Roaring Meg, with a barrel diameter and firing a hollow ball filled with gunpowder and used during the English Civil War in 1646. The largest mortars ever developed were the Belgian "Monster Mortar" () developed by Henri-Joseph Paixhans in 1832, Mallet's Mortar () developed by Robert Mallet in 1857, and the "Little David" (() developed in the United States for use in World War II. Although the latter two had a calibre of , only the "Monster Mortar" was used in combat (at the Battle of Antwerp in 1832). The World War II German Karl-Gerät was a mortar and the largest to see combat in modern warfare.
Technology
Artillery and siege
null