Fields per record: id, url, title, text, topic, section, sublist
332090
https://en.wikipedia.org/wiki/Computably%20enumerable%20set
Computably enumerable set
In computability theory, a set S of natural numbers is called computably enumerable (c.e.), recursively enumerable (r.e.), semidecidable, partially decidable, listable, provable or Turing-recognizable if: There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S. Or, equivalently, There is an algorithm that enumerates the members of S. That means that its output is a list of all the members of S: s1, s2, s3, ... . If S is infinite, this algorithm will run forever, but each element of S will be returned after a finite amount of time. Note that these elements do not have to be listed in a particular way, say from smallest to largest. The first condition suggests why the term semidecidable is sometimes used. More precisely, if a number is in the set, one can decide this by running the algorithm, but if the number is not in the set, the algorithm can run forever, and no information is returned. A set that is "completely decidable" is a computable set. The second condition suggests why computably enumerable is used. The abbreviations c.e. and r.e. are often used, even in print, instead of the full phrase. In computational complexity theory, the complexity class containing all computably enumerable sets is RE. In recursion theory, the lattice of c.e. sets under inclusion is denoted 𝓔. Formal definition A set S of natural numbers is called computably enumerable if there is a partial computable function whose domain is exactly S, meaning that the function is defined if and only if its input is a member of S. Equivalent formulations The following are all equivalent properties of a set S of natural numbers: Semidecidability: The set S is computably enumerable. That is, S is the domain (co-range) of a partial computable function. The set S is Σ⁰₁ (at the first level of the arithmetical hierarchy). There is a partial computable function f such that f(x) = 0 if x is in S, and f(x) is undefined if x is not in S. Enumerability: The set S is the range of a partial computable function. The set S is the range of a total computable function, or empty. If S is infinite, the function can be chosen to be injective. The set S is the range of a primitive recursive function or empty. Even if S is infinite, repetition of values may be necessary in this case. Diophantine: There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over the natural numbers such that x is in S if and only if there exist natural numbers a, b, c, d, e, f, g, h, i with p(x, a, b, c, d, e, f, g, h, i) = 0. (The number of bound variables in this definition is the best known so far; it might be that a lower number can be used to define all Diophantine sets.) There is a polynomial from the integers to the integers such that the set S contains exactly the non-negative numbers in its range. The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing. The Diophantine characterizations of a computably enumerable set, while not as straightforward or intuitive as the first definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert's Tenth Problem. Diophantine sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence was only remarked more than three decades after the introduction of computably enumerable sets). Examples Every computable set is computably enumerable, but it is not true that every computably enumerable set is computable. For computable sets, the algorithm must also say if an input is not in the set – this is not required of computably enumerable sets. 
A recursively enumerable language is a computably enumerable subset of a formal language. The set of all provable sentences in an effectively presented axiomatic system is a computably enumerable set. Matiyasevich's theorem states that every computably enumerable set is a Diophantine set (the converse is trivially true). The simple sets are computably enumerable but not computable. The creative sets are computably enumerable but not computable. Any productive set is not computably enumerable. Given a Gödel numbering φ of the computable functions, the set {⟨i, x⟩ : φ_i(x) is defined} (where ⟨i, x⟩ is the Cantor pairing of i and x, and "defined" means that the computation φ_i(x) halts) is computably enumerable. This set encodes the halting problem as it describes the input parameters for which each Turing machine halts. Given a Gödel numbering φ of the computable functions, the set {⟨i, x, y⟩ : φ_i(x) = y} is computably enumerable. This set encodes the problem of deciding a function value. Given a partial function f from the natural numbers into the natural numbers, f is a partial computable function if and only if the graph of f, that is, the set of all pairs ⟨x, f(x)⟩ such that f(x) is defined, is computably enumerable. Properties If A and B are computably enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers mapped to a single natural number with the Cantor pairing function) are computably enumerable sets. The preimage of a computably enumerable set under a partial computable function is a computably enumerable set. A set is called co-computably-enumerable or co-c.e. if its complement is computably enumerable. Equivalently, a set is co-r.e. if and only if it is at level Π⁰₁ of the arithmetical hierarchy. The complexity class of co-computably-enumerable sets is denoted co-RE. A set A is computable if and only if both A and the complement of A are computably enumerable. Some pairs of computably enumerable sets are effectively separable and some are not. Remarks According to the Church–Turing thesis, any effectively calculable function is calculable by a Turing machine, and thus a set S is computably enumerable if and only if there is some algorithm which yields an enumeration of S. This cannot be taken as a formal definition, however, because the Church–Turing thesis is an informal conjecture rather than a formal axiom. The definition of a computably enumerable set as the domain of a partial function, rather than the range of a total computable function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other texts use the definition in terms of enumerations, which is equivalent for computably enumerable sets.
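The dovetailing technique mentioned above, which turns a semidecision procedure into an enumeration, can be illustrated with a minimal sketch. The helper halts_within below is a hypothetical stand-in for simulating some fixed algorithm for a bounded number of steps; the set of perfect squares is used only as a toy example of a halting set.

```python
# A minimal sketch of dovetailing, assuming a hypothetical bounded simulator:
# halts_within(x, t) answers whether the underlying algorithm halts on input x
# within t computation steps.  The bounded check is always total, even though
# the algorithm itself may run forever on some inputs.

def halts_within(x, t):
    # Toy stand-in: pretend the algorithm halts on x (after x steps)
    # exactly when x is a perfect square.
    return t >= x and int(x ** 0.5) ** 2 == x

def enumerate_ce_set():
    """Yield every member of {x : the algorithm halts on x} by dovetailing
    over all inputs and step bounds, so no divergent run blocks the rest."""
    n = 0
    while True:                 # stage n: run inputs 0..n for n steps each
        for x in range(n + 1):
            if halts_within(x, n) and not halts_within(x, n - 1):
                yield x         # emit x at the first stage where it is seen to halt
        n += 1

gen = enumerate_ce_set()
print([next(gen) for _ in range(5)])   # [0, 1, 4, 9, 16]
```

Each number is emitted at the first stage at which the bounded simulation reports a halt, so every member of the set appears after finitely many stages even though no bound on the running time is known in advance.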
Mathematics
Computability theory
null
332264
https://en.wikipedia.org/wiki/Computable%20set
Computable set
In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not. A set which is not computable is called noncomputable or undecidable. A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set. Formal definition A subset S of the natural numbers is called computable if there exists a total computable function f such that f(x) = 1 if x is in S and f(x) = 0 if x is not in S. In other words, the set S is computable if and only if its indicator function is computable. Examples and non-examples Examples: Every finite or cofinite subset of the natural numbers is computable. This includes these special cases: The empty set is computable. The entire set of natural numbers is computable. Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable. The subset of prime numbers is computable. A recursive language is a computable subset of a formal language. The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems. Non-examples: The set of Turing machines that halt is not computable. The isomorphism class of two finite simplicial complexes is not computable. The set of busy beaver champions is not computable. Hilbert's tenth problem is not computable. Properties If A is a computable set then the complement of A is a computable set. If A and B are computable sets then A ∩ B, A ∪ B and the image of A × B under the Cantor pairing function are computable sets. A is a computable set if and only if A and the complement of A are both computably enumerable (c.e.). The preimage of a computable set under a total computable function is a computable set. The image of a computable set under a total computable bijection is computable. (In general, the image of a computable set under a computable function is c.e., but possibly not computable). A is a computable set if and only if it is at level Δ⁰₁ of the arithmetical hierarchy. A is a computable set if and only if it is either the range of a nondecreasing total computable function, or the empty set. The image of a computable set under a nondecreasing total computable function is computable.
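A minimal sketch of the formal definition above: membership in a computable set is settled by a total indicator function that halts on every input. Primality is chosen here purely as an illustrative computable set, and the function name is not from the article.

```python
# A computable set is given by a total indicator function: it terminates on
# every input and answers 1 (member) or 0 (non-member).  Primality is used
# here only as an illustrative computable set.

def indicator_of_primes(n: int) -> int:
    """Total computable indicator function of the set of prime numbers."""
    if n < 2:
        return 0
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return 0
    return 1

# Membership is decided in finite time for every input, member or not:
print([n for n in range(20) if indicator_of_primes(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

A merely semidecidable set, by contrast, only guarantees termination on members; its procedure may search forever on non-members.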
Mathematics
Computability theory
null
332372
https://en.wikipedia.org/wiki/Rate%20%28mathematics%29
Rate (mathematics)
In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, a rate may be regarded as a change in one value caused by a change in another value. For example, acceleration is a change in velocity with respect to time. Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux. In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates. In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute". Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter). A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as a ratio, or simply as a rate (such as tax rates) or count (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple. Properties and examples Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions. A rate (or ratio) may often be thought of as an output-input ratio or benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity). A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define an index i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios, for example the average velocity found from the set of v_i's mentioned above. Finding averages may involve using weighted averages and possibly using the harmonic mean. A ratio r = a/b has both a numerator "a" and a denominator "b". The values of a and b may be real numbers or integers. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi). Rates are relevant to many aspects of everyday life. For example: How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate. Rate of change Consider the case where the numerator f of a rate is a function f(a), where a happens to be the denominator of the rate Δf/Δa. A rate of change of f with respect to a (where a is incremented by h) can be formally defined in two ways: the average rate of change, (f(a + h) - f(a)) / h, taken over the interval from a to a + h, and the instantaneous rate of change, the limit of this quotient as h approaches 0. 
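A small numerical sketch of these two definitions, using f(a) = a² purely as an example function; the function names below are illustrative and not from the article.

```python
# Average vs. instantaneous rate of change of an example function f(a) = a**2.

def f(a):
    return a ** 2

def average_rate_of_change(f, a, h):
    """(f(a + h) - f(a)) / h, the change in f over the interval [a, a + h]."""
    return (f(a + h) - f(a)) / h

def instantaneous_rate_of_change(f, a, h=1e-6):
    """Approximates the limit of the difference quotient as h approaches 0."""
    return (f(a + h) - f(a)) / h

print(average_rate_of_change(f, a=2, h=1))    # 5.0, the average rate from a=2 to a=3
print(instantaneous_rate_of_change(f, a=2))   # about 4.0, the derivative 2a at a=2
```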
An instantaneous rate of change is equivalent to a derivative. For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer. Temporal rates In chemistry and physics: Speed, the rate of change of position, or the change of position per unit of time Acceleration, the rate of change in speed, or the change in speed per unit of time Power, the rate of doing work, or the amount of energy transferred per unit time Frequency, the number of occurrences of a repeating event per unit of time Angular frequency and rotation speed, the number of turns per unit of time Reaction rate, the speed at which chemical reactions occur Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second Counts-per-time rates Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels In computing: Bit rate, the number of bits that are conveyed or processed by a computer per unit of time Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second Sampling rate, the number of samples (signal measurements) per second Miscellaneous definitions: Rate of reinforcement, number of reinforcements per unit of time, usually per minute Heart rate, usually measured in beats per minute Economics/finance rates/ratios Exchange rate, how much one currency is worth in terms of the other Inflation rate, the ratio of the change in the general price level during a year to the starting price level Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed) Price–earnings ratio, market price per share of stock divided by annual earnings per share Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested Tax rate, the tax amount divided by the taxable income Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time) Other rates Birth rate, and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time Literacy rate, the proportion of the population over age fifteen that can read and write Sex ratio or gender ratio, the ratio of males to females in a population
Mathematics
Basics
null
332638
https://en.wikipedia.org/wiki/Satin
Satin
A satin weave is a type of fabric weave that produces a characteristically glossy, smooth or lustrous material, typically with a glossy top surface and a dull back; it is not durable, as it tends to snag. It is one of three fundamental types of textile weaves alongside plain weave and twill weave. The satin weave is characterised by four or more fill or weft yarns floating over a warp yarn, and four warp yarns floating over a single weft yarn. Floats are missed interfacings, for example where the warp yarn lies on top of the weft in a warp-faced satin. These floats explain the high lustre and even sheen, as unlike in other weaves, light is not scattered as much when hitting the fibres, resulting in a stronger reflection. Satin is usually a warp-faced weaving technique in which warp yarns are "floated" over weft yarns, although there are also weft-faced satins. If a fabric is formed with a satin weave using filament fibres such as silk, polyester or nylon, the corresponding fabric is termed a 'satin', although some definitions insist that a satin fabric is only made from silk. If the yarns used are short-staple yarns such as cotton, the fabric formed is considered a sateen. Many variations can be made of the basic satin weave, including a granite weave and a check weave. Satin is commonly used in clothing, for items such as lingerie, nightgowns, blouses, and evening gowns, but is also used for boxer shorts, shirts and neckties. It is also used in the production of pointe shoes for ballet. Other uses include interior furnishing fabrics, upholstery, and bed sheets. History China Satin was originally made solely of silk, which, for much of history, was produced and found mainly in China. In China, various forms of satin fabrics existed under several different names. Chinese satin, in its original form, was supposed to be a five- or six-end warp satin. The six-end warp satin weave was most likely a derivative of the six-end warp twill weave during the Tang and Northern Song dynasty periods. Europe Silk satin was introduced to Europe during the 12th century. As an imported fabric, it was considerably expensive, and was worn only by the upper classes. Etymology The word "satin" derives its origin from the Chinese port city of Quanzhou, which was known as Zayton in Europe and Arab countries during the Yuan dynasty (13th–14th century). During that period, Quanzhou was visited by Arab merchants and by Europeans. The Arabs referred to silk satin imported from Quanzhou by a name derived from Zayton. During the latter part of the Middle Ages, Quanzhou was a major shipping port of silk, using the Maritime Silk Road to reach Europe. It was mostly used in the Arab world. Types of satin weave Satin-weave fabrics are more flexible, with better draping characteristics than plain weaves. In a satin weave, the fill yarn passes over multiple warp yarns before interlacing under one warp yarn. Common satin weaves are: 4-harness satin weave (4HS), also called crowfoot satin, in which the fill yarn passes over three warp yarns and under one warp yarn. It is more pliable than a plain weave. 5-harness satin weave (5HS); the fill yarn passes over four warp yarns and then under one warp yarn. 8-harness satin weave (8HS), in which the fill yarn passes over seven warp yarns and then under one warp yarn, is the most pliable satin weave. Types of satin Antique satin – is a type of satin-back shantung, woven with slubbed or unevenly spun weft yarns. 
Baronet or baronette – has a cotton back and a rayon or silk front, similar to georgette. Charmeuse – is a lightweight, draping satin-weave fabric with a dull reverse. Cuttanee – fine heavy and stout silk and cotton satin Double face(d) – satin is woven with a glossy surface on both sides. It is possible for both sides to have a different pattern, albeit using the same colours. Duchesse satin – is a particularly luxurious, heavy, stiff satin. Faconne – is jacquard woven satin. Farmer's satin or Venetian cloth – is made from mercerised cotton. Gattar – is satin made with a silk warp and a cotton weft. Messaline – is lightweight and loosely woven. Polysatin or poly-satin – is an abbreviated term for polyester satin. Slipper satin – is stiff and medium- to heavy-weight fabric. Sultan – is a worsted fabric with a satin face. Surf satin – was a 1910s American trademark for a taffeta fabric used for swimsuits. Uses for satin Because of the different ways the weave is employed, satin has a range of functions from interior décor to fashion. Dresses: Satin's drape and shiny texture make it a favorite for evening gowns and bridal gowns. Nowadays, many vendors are using it to make Satin Shirts. Upholstery: Satin was first used for ornamental furniture in Europe at the Palace of Versailles, and it is still used for pillow covers, chairs, and other forms of cushioned furniture today. Bed sheets: Satin is frequently used for bed linens because of its flexible and silky texture. Footwear: Satin is a popular fabric for shoe makers, from ballerina slippers to high heels. Fashion accessories: Satin is commonly used for evening bags and clutches in the fashion industry. Crafting: Satin in the form of ribbons is very common for crafting various products such as rosette leis, corsage, and even decorative flowers.
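To make the float structure described under "Types of satin weave" concrete, here is a toy sketch, not from the article, that prints one repeat of an n-harness satin as a grid, marking the single interlacing point in each weft pick. The step value is an illustrative assumption; satin drafts typically use a step that shares no common factor with the harness count so that the interlacings scatter evenly instead of forming twill lines.

```python
# Toy drawdown for an n-harness satin weave: in each weft pick the fill yarn
# floats over (n - 1) warp ends and goes under exactly one.  The single
# interlacing point shifts by a fixed "step" from pick to pick.

def satin_drawdown(harnesses: int, step: int) -> list[str]:
    """Return one repeat of the weave as rows of 'O' (weft floats over the
    warp end) and 'X' (weft passes under that warp end)."""
    rows = []
    for pick in range(harnesses):
        under = (pick * step) % harnesses        # the lone interlacing point
        rows.append("".join("X" if end == under else "O"
                            for end in range(harnesses)))
    return rows

# 5-harness satin (fill passes over four ends, under one), with a step of 2:
for row in satin_drawdown(5, 2):
    print(row)
```

In the 5-harness example each printed row contains four floats ('O') and one interlacing ('X'), matching the "over four warp yarns and then under one" description above.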
Technology
Weaving
null
332666
https://en.wikipedia.org/wiki/Fa%C3%A7ade
Façade
A façade or facade is generally the front part or exterior of a building. It is a loanword from the French façade, which means "frontage" or "face". In architecture, the façade of a building is often the most important aspect from a design standpoint, as it sets the tone for the rest of the building. From the engineering perspective, the façade is also of great importance due to its impact on energy efficiency. For historical façades, many local zoning regulations or other laws greatly restrict or even forbid their alteration. Etymology The word is a loanword from the French façade, which in turn comes from the Italian facciata, from faccia meaning 'face', ultimately from post-classical Latin facia. The earliest usage recorded by the Oxford English Dictionary is 1656. Façades added to earlier buildings It was quite common in the Georgian period for existing houses in English towns to be given a fashionable new façade. For example, in the city of Bath, The Bunch of Grapes in Westgate Street appears to be a Georgian building, but the appearance is only skin deep and some of the interior rooms still have Jacobean plasterwork ceilings. This new construction has happened also in other places: in Santiago de Compostela the three-metre-deep Casa do Cabido was built to match the architectural order of the square, and the main Churrigueresque façade of the Santiago de Compostela Cathedral, facing the Plaza del Obradoiro, is actually encasing and concealing the older Portico of Glory. High rise façades In modern high-rise buildings, the exterior walls are often suspended from the concrete floor slabs. Examples include curtain walls and precast concrete walls. The façade can at times be required to have a fire-resistance rating, for instance, if two buildings are very close together, to lower the likelihood of fire spreading from one building to another. In general, the façade systems that are suspended or attached to the precast concrete slabs will be made from aluminum (powder coated or anodized) or stainless steel. In recent years more lavish materials such as titanium have sometimes been used, but due to their cost and susceptibility to panel edge staining these have not been popular. Whether rated or not, fire protection is always a design consideration. The melting point of aluminum, approximately 660 °C, is typically reached within minutes of the start of a fire. Fire stops for such building joints can be qualified, too. Putting fire sprinkler systems on each floor has a profoundly positive effect on the fire safety of buildings with curtain walls. The extended use of new materials, like polymers, resulted in an increase of high-rise building façade fires over the past few years, since they are more flammable than traditional materials. Some building codes also limit the percentage of window area in exterior walls. When the exterior wall is not rated, the perimeter slab edge becomes a junction where rated slabs are abutting an unrated wall. For rated walls, one may also choose rated windows and fire doors, to maintain that wall's rating. Film sets and theme parks On a film set and within most themed attractions, many of the buildings are only façades, which are far cheaper than actual buildings, and not subject to building codes (within film sets). In film sets, they are simply held up with supports from behind, and sometimes have boxes for actors to step in and out of from the front if necessary for a scene. Within theme parks, they are usually decoration for the interior ride or attraction, which is based on a simple building design. 
Technology
Architectural elements
null
332769
https://en.wikipedia.org/wiki/Snowmobile
Snowmobile
A snowmobile, also known as a snowmachine (chiefly Alaskan), motor sled (chiefly Canadian), motor sledge, skimobile, or snow scooter, is a motorized vehicle designed for winter travel and recreation on snow. Their engines normally drive a continuous track at the rear, while skis at the front provide directional control. The earliest snowmobiles were powered by readily available industrial four-stroke, air-cooled engines. These were quickly replaced by lighter and more powerful two-stroke gasoline internal combustion engines, and since the mid-2000s four-stroke engines have re-entered the market. The challenges of cross-country transportation in the winter led to the invention of an all-terrain vehicle specifically designed for travel across deep snow where other vehicles foundered. The snowmobile market has been shared between the four large North American makers (Bombardier Recreational Products (BRP), Arctic Cat, Yamaha, and Polaris) and some specialized makers like the Quebec-based AD Boivin, manufacturer of the Snow Hawk, and the European Alpina snowmobile. The second half of the 20th century saw the rise of recreational snowmobiling, whose riders are called snowmobilers, sledders, or slednecks. Recreational riding is known as snowcross/racing, trail riding, freestyle, boondocking, ditchbanging and grass drags. In the summertime snowmobilers can drag race on grass, asphalt strips, or even across water (as in snowmobile skipping). Snowmobiles are sometimes modified to compete in long-distance off-road races. History Early designs A patent (554.482) for the Sled-Propeller design, without a model, was submitted on Sept. 5, 1895 by inventors William J. Culman and William B. Follis of Brule, Wisconsin. The American Motor Sleigh was a short-lived novelty vehicle produced in Boston in 1905. Designed for travel on snow, it consisted of a sleigh body mounted on a framework that held an engine, a drive-shaft system, and runners. Although considered an interesting novelty, sales were low and production ceased in 1906. An Aerosledge, a propeller-driven sledge running on skis, was built in 1909–1910 by Russian inventor Igor Sikorsky of helicopter fame. Aerosanis were used by the Soviet Red Army during the Winter War and World War II. There is some dispute over whether Aerosanis count as snowmobiles because they were not propelled by tracks. Adolphe Kégresse designed an original caterpillar track system, called the Kégresse track, while working for Tsar Nicholas II of Russia between 1906 and 1916. These used a flexible belt rather than interlocking metal segments and could be fitted to a conventional car or truck to turn it into a half-track, suitable for use over soft ground, including snow. Conventional front wheels and steering were used, but the wheels could be fitted with skis. He applied it to several cars in the Royal garage, including Rolls-Royce cars and Packard trucks. Although this was not a snowmobile, it is an ancestor of the modern concept. In 1911 a 24-year-old, Harold J. Kalenze (pronounced Collins), patented the Vehicle Propeller in Brandon, Manitoba, Canada. In 1914, O. M. Erickson and Art Olsen of the P.N. Bushnell company in Aberdeen, South Dakota, built an open two-seater "motor-bob" out of an Indian motorcycle modified with a cowl-cover, side-by-side seating, and a set of sled-runners fore and aft. 
While it did not have the tracks of a true snowmobile, its appearance was otherwise similar to the modern version and it is one of the earliest examples of a personal motorized snow-vehicle. In 1915 Ray H. Muscott of Waters, Michigan, received the Canadian patent for his motor sleigh, or "traineau automobile", and on June 27, 1916, he received the first United States patent for a snow-vehicle using the now recognized format of rear track(s) and front skis. Many individuals later modified Ford Model Ts with the undercarriage replaced by tracks and skis following this design. They were popular for rural mail delivery for a time. The common name for these conversions of cars and small trucks was Snowflyers. Development of modern designs Carl Eliason of Sayner developed the prototype of the modern snowmobile in the 1920s when he mounted a two-cylinder motorcycle engine on a long sled, steered it with skis under the front, and propelled it with a single, endless track. Eliason made 40 snowmobiles, patented in 1927. Upon receiving an order for 200 from Finland, he sold his patent to the FWD Company of Clintonville. They made 300 for military use, then transferred the patent to a Canadian subsidiary. In 1917, Virgil D. White set out to patent his conversion kit that changed the Ford Model T into a "snowmobile". He also copyrighted the term "snowmobile". At the time, the conversion kit was expensive, costing about $395. Virgil White applied for his patent in 1918 and created his own snowmobile. In 1922, his conversion kit was on the market and available only through Ford dealerships. The relatively dry snow conditions of the United States Midwest suited the converted Ford Model Ts and other like vehicles, but they were not suitable for humid snow areas such as southern Quebec and New England. This led Joseph-Armand Bombardier from the small town of Valcourt, Quebec, to invent a different caterpillar track system suitable for all kinds of snow conditions. Bombardier had already made some "metal" tracked vehicles since 1928, but his new revolutionary track traction system (a toothed wheel covered in rubber, and a rubber-and-cotton track that wraps around the back wheels) was his first major invention. He started production of the B-7, an enclosed, seven-passenger snowmobile, in 1937, and introduced the B-12, a twelve-passenger model, in 1942. The B-7 had a V-8 flathead engine from Ford Motor Company. The B-12 had a flathead in-line six-cylinder engine from Chrysler industrial, and 2,817 units were produced until 1951. It was used in many applications, such as ambulances, Canada Post vehicles, winter "school buses", forestry machines, and even army vehicles in World War II. Bombardier had always dreamed of a smaller version, more like the size of a motor scooter. Post-war developments In 1951 Fritz Riemerschmid devised what he called a snow scooter. The machine had a track mounted beneath a snowboard-like base, on top of which were an enclosed engine with a motorcycle-like seat and fuel tank. The vehicle was steered via a steering wheel and cables linked to two small skis on outriggers on either side of the vehicle. In the mid-1950s, a United States firm built a "snowmobile" for the arctic area of Alaska that had the drive train reversed from today's snowmobiles, with two front wheels—the larger one behind the smaller one—with tires driving an endless loop track. Little is known about this "snowmobile", which was meant to haul cargo and trade goods to isolated settlements. 
An odd version of snowmobile is the Swedish Larven, made by the Lenko Company of Östersund, from the 1960s until the end of the 1980s. It was a very small and basic design, with just an engine in the rear and a track. The driver sat on it and steered using skis on his feet. Design Most modern snowmobiles are powered by either a four- or two-stroke internal combustion engine, with the exception of the Taiga TS2. Historically, snowmobiles have always used two-stroke engines because of their reduced complexity, weight and cost, compared to a similarly powered four-stroke. However, four-stroke powered snowmobiles have been gaining popularity steadily in the last fifteen or so years, with manufacturer Yamaha producing four-stroke snowmobiles only. The Whistler Blackcomb ski resort is testing Taiga's electric snowmobiles with lower noise, and similar vehicles exist. Early snowmobiles used simple rubber tracks, but modern snowmobiles' tracks are usually made of a Kevlar composite construction. Older snowmobiles could generally accommodate two people; however, most snowmobiles manufactured since the 1990s have been designed to only accommodate one person. Snowmobiles built with the ability to accommodate two people are referred to as "2-up" snowmobiles or "touring" models and make up an extremely small share of the market. Most snowmobiles do not have any enclosures, except for a windshield. Performance The first snowmobiles made do with as little as engines, but engine sizes and efficiency have improved drastically. In the early 1990s, the biggest engines available (typically 600cc-800cc displacement range) produced around . As of 2022, several snowmobiles are available with engines sizes up to 1,200 cc, producing 150+ hp, as well as several models with up to 1,000 cc engines producing closer to . Recently, some models are turbo-charged, resulting in dramatic increase of engine horsepower. Snowmobiles are capable of moving across steep hillsides without sliding down-slope if the rider transfers their weight towards the uphill side, a process called side-hilling. Higher-powered modern snowmobiles can achieve speeds over . Drag racing snowmobiles can reach speeds over . Mountain sleds permit access in remote areas with deep snow, which was nearly impossible a few decades ago. This is mainly due to alterations, enhancements, and additions of original trail model designs such as weight, weight distribution, track length, paddle depth, and power. Technology and design advances in mountain snowmobiles have improved since 2003 with Ski-Doo's introduction of the "REV" framework platform. Most two-stroke mountain snowmobiles have a top engine size of 800 cc, producing around , although some 1,000 cc factory machines have been produced. These may not be as popular as many 800 cc models outperform them because of weight and an increase of unneeded power. Cornices and other kinds of jumps are sought after for aerial maneuvers. Riders often search for non-tracked, virgin terrain and are known to "trailblaze" or "boondock" deep into remote territory where there is absolutely no visible path to follow. However, this type of trailblazing is dangerous as contact with buried rocks, logs, and frozen ground can cause extensive damage and injuries. Riders look for large open fields of fresh snow where they can carve. 
Some riders use extensively modified snowmobiles, customized with aftermarket accessories like handle-bar risers, handguards, custom/lightweight hoods, windshields, and seats, running board supports, studs, and numerous other modifications that increase power and maneuverability. Many of these customizations can now be purchased straight off the showroom floor on stock models. Trail snowmobiles improved in the past 15 years as well (many of them borrowed from endeavors to produce winning mountain sleds). Heavy "muscle sleds" can produce speeds in excess of due to powerful engines (up to 1,200 cc stock, and custom engines exceeding 1,200 cc), short tracks, and good traction on groomed trails. Sno-cross oriented snowmobiles often have an engine size cap of 440 or 600 cc, but lighter machines with redesigned stances, formats, and weight control have produced extremely fast and quickly accelerating race sleds. Brands According to the research center RISE, approximately 135,000 snowmobiles will be sold worldwide yearly. Snowmobiles are widely used in arctic territories for travel. However, the tiny Arctic population means a correspondingly small market. Most snowmobiles are sold for recreational purposes in places where snow cover is stable during winter. The number of snowmobiles in Europe and other parts of the world is low. Snowmobiles designed to perform various work tasks have been available for many years with dual tracks from such manufacturers as Aktiv (Sweden), who made the Grizzly, Ockelbo (Sweden), who made the 8000, and Bombardier who made the Alpine and later the Alpine II. Currently, there are two manufacturers of dual-track snowmobiles; Russia's Buran and the Italian Alpina snowmobiles (under the name Sherpa and Superclass). Polaris Edgar and Allen Hetteen and David Johnson of Roseau, Minnesota, invented what we now know as the modern snowmobile in 1955–1956, but the early machines were heavy () and slow (). Their company, Hetteen Hoist & Derrick Co., became Polaris Industries which introduced their first commercial model, the Polaris Sno Traveler in 1957. Ski-Doo In 1960, Joseph-Armand Bombardier introduced his own snowmobile using an open-cockpit one- or two-person form, similar to the 1957 Polaris Sno Traveler, and started selling it under the brand name Ski-Doo through his company Bombardier Inc. (now manufactured by Bombardier Recreational Products). Competitors copied and improved his design; in the 1970s there were over a hundred snowmobile manufacturers. From 1970 to 1973, two million machines were sold, peaking at 500,000 sold in 1971. Many of the snowmobile companies were small and the biggest manufacturers were often attempts by motorcycle makers and outboard motor makers to branch off in a new market. Most of these companies went bankrupt or were acquired by larger companies during the 1973 oil crisis and succeeding recessions. Sales rebounded to 260,000 in 1997 but gradually decreased afterward, influenced by warmer winters and the use during all four seasons of small one- or two-person ATVs. Alpina Alpina Snowmobiles are manufactured in Vicenza, Italy, by Alpina s.r.l., a manufacturer of various on-snow implements that had been building dual-track snowmobiles since 1995. Alpina manufactures one basic dual-track snowmobile design. In 2002 the Sherpa was introduced and is the model name for the four-stroke machine. Prior to introducing the Sherpa, Alpina offered a two-stroke series designated the Superclass. 
The four-stroke Sherpa is currently the top machine in production. A new version of the Superclass has been released in 2017, with a lot of innovations and a new four-stroke engine. The Sherpa and Superclass series shared the same basic dual-track platform, twin tracks with dual skis up front. Power for the Sherpa is supplied by a 1.6L in-line four-cylinder gasoline automotive engine. The new Superclass power is provided by a 1.2L 3-cylinder four-stroke gasoline engine. The Sherpa and Superclass are designed as working snowmobiles for carrying supplies, pulling cargo sleds, pulling trail grooming implements, carrying several passengers, and negotiating deep snow. Engine and transmission combination are designed to deliver optimum power to pull or carry large loads while top-end speeds are kept below , depending on the model. The large footprint of the dual tracks and dual skis allows the Sherpa and Superclass to "float" on top of deep snow and not sink in and get stuck. Taiga Electric Taiga Motors in Montreal created the first commercially produced electric snowmobile. The Taiga TS2 can go from zero to in 3 seconds, with of instant torque. The Taiga TS2 weighs . Sport The International 500 is a large racing event held annually in Sault Sainte Marie, Michigan. It is a race on a track, with the current purse being in excess of $40,000. It has been running since February 1969. Drag racing is common with snowmobiles year-round, with summer and fall often with grass or closed-course (asphalt or concrete) drag strips. The largest event is Hay Days in North Branch, Minnesota, on the first weekend following Labor Day. The World Championship Watercross or snowmobile skipping races are held in Grantsburg, Wisconsin, in July. The snowmobiles are raced on a marked course, similar to motocross courses, without the ramps and on water. The Snocross racing series are snowmobile races on a motocross-like course. The races are held during the winter season in Northern United States and Canada. One of the largest in New York is the Northeast SnoX Challenge in early January in Malone, New York, and run by Rock Maple Racing and sponsored by the Malone Chamber of Commerce. Snowmobiles are used for ice racing. The racing is held on an "Ice Oval" track. The World Championship Snowmobile Derby is held each winter in Eagle River, Wisconsin. Alaska's "Iron Dog" is the longest snowmachine race in the world. It is long and runs from Big Lake to Nome to Fairbanks. The name refers to dog mushing, long popular in Alaska. Vintage Snowmobile Racing is the racing of vintage snowmobiles and has grown in popularity as a sporting event on the Canadian prairie and in America. The World Championship Hill Climb competition is held in Jackson, Wyoming, at the Snow King Mountain resort each year in March. 2019 was the 43rd year of the four-day event and drew around 10,000 in attendance. Variants A snow bike takes a typical dirt bike and replaces the rear wheel with a single tread system similar to a snowmobile and the front wheel with a large ski. It is much smaller and nimbler than a snowmobile, and it has a tighter turning radius, which lets the rider go where many snowmobiles cannot. The first prototype of motorcycles with a rear tread date to the 1920s, with subsequent failed attempts to bring them to market. Many motorcycles made after the 1990s can be fitted with kits that transform them into snow bikes. In 2017, Winter X Games XXI introduced the first snow bike event in the form of a SnowBikeCross race. 
The following year they introduced a Best Trick event. Accidents and safety As a result of their inherent maneuverability, acceleration, and high-speed abilities, skill and physical strength are both required to operate a snowmobile. Snowmobile injuries and fatalities are high compared to those caused by on road motor vehicle traffic. Losing control of a snowmobile could easily cause extensive damage, injury, or death. One such cause of snowmobile accidents is loss of control from a loose grip. If the rider falls off, the loss of control can easily result in the snowmobile colliding with a nearby object, such as a rock or tree. Most snowmobiles are fitted with a cord connected to a kill switch, which would stop the snowmobile if the rider falls off; however, not all riders use this device every time they operate a snowmobile. Swerving off of the path may result in rolling the snowmobile or crashing into an obstacle. In unfamiliar areas, riders may crash into suspended barbed wire or haywire fences at high speeds. Each year a number of serious or fatal accidents are caused by these factors. Each year, riders are killed by hitting other snowmobiles, automobiles, pedestrians, rocks, trees, or fences, or falling through thin ice. On average, 10 people a year have died in such crashes in Minnesota alone, with alcohol a contributing factor in many cases. In Saskatchewan, 16 out of 21 deaths in snowmobile collisions between 1996 and 2000 were caused by the effects of alcohol. Wrestler Lindsey Durlacher died in 2011 following surgery for a broken sternum he sustained in a snowmobile accident. Fatal collisions with trains can also occur when a snowmobile operator engages in the illegal practice of "rail riding", riding between railroad track rails over snow-covered sleepers. Inability to hear the sound of an oncoming train over the engine noise of a snowmobile makes this activity extremely dangerous. Collision with large animals such as moose and deer, which may venture onto a snowmobile trail, is another major cause of snowmobile accidents. Most often such encounters occur at night or in low-visibility conditions when the animal could not be seen in time to prevent a collision. Also even when successful, a sudden maneuver to miss hitting the animal could still result in the operator losing control of the snowmobile. The next leading cause of injury and death is avalanches, which can result from the practice of highmarking, or driving a snowmobile as far up a hill as it can go. During the 2018–2019 season, 7 snowmobilers in the United States were killed. Avalanche safety education is critical for those accessing the backcountry. Risks can be reduced through education, proper training, appropriate gear, attention to published avalanche warnings and avoiding drinking alcohol. In some areas of Western U.S., organizations provide avalanche training, some of which is free. It is recommended that snowmobile riders wear a helmet and a snowmobile suit. Legislation Depending on jurisdiction, there may be penalties for driving outside permitted areas, without an approved helmet, without a driver's license, with an unregistered snowmobile, or while under the influence of alcohol or other substances. There may also be regulations regarding noise and wildlife. In some jurisdictions, a driver's license is required to operate a snowmobile. A specific snowmobile driver's license is required in, for example, Norway and Sweden. 
In Finland, a snowmobile driver's license is not required if the driver already has another type of appropriate driver's license (for example car or tractor). Environmental impact The environmental impact of snowmobiles has been the subject of much debate. Governments have been reacting slowly to noise and air pollution, partly because of lobbying from manufacturers and snowmobilers. For instance, in 1999, the Canadian government adopted the Canadian Environmental Protection Act, 1999, but the set of rules governing pollution emissions for off-road vehicles was only released in January 2005. In another example of regulation, only four-stroke snowmobiles are allowed in Yellowstone National Park since a bylaw was recently passed to minimize CO2 emissions and noise. In Yellowstone, snowmobiles account for 80% of total hydrocarbon emissions and 50% of carbon monoxide emissions in the winter. This is just less than 2% and 1% respectively of the overall annual pollution within the park. Snowmobiles are only allowed to be ridden on the unplowed roads used in the summer, and riding off the roads is prohibited. This accounts for less than 1% (0.002%) of the park area. In 2005 the US Forest Service published a Travel Management Rule for off-highway vehicles, strengthening the implementation of Executive Orders issued in the 1970s. However, these rules were not applied to snowmobiles. In 2015, following a decision in a lawsuit brought by Winter Wildlands Alliance against the Forest Service, the rules were extended to snowmobiles, referred to as an over-snow vehicle (OSV). National Forests with sufficient snow for winter recreation are now required to designate where OSVs are allowed to travel and where they are prohibited. In doing so, the Forest Service must minimize 1) damage to soil, watershed, vegetation, and other forest resources; 2) harassment of wildlife and significant disruption of wildlife habitats; and 3) conflicts between motor vehicle use and existing or proposed recreational uses of National Forest System lands or neighboring Federal lands. Air Most snowmobiles are still powered by two-stroke engines, although Alpina and Yamaha have been using four-strokes since 2002 and 2003, respectively. However, in the last decade several manufacturers have been successful in designing less polluting motors, and putting most of them in production. Yamaha and Arctic-Cat were the first to mass-produce four-stroke models, which are significantly less polluting than the early two-stroke machines. Alpina offers only four-stroke EFI engines equipped with a catalytic converter and dual oxygen-probe. Bombardier's E-Tec two-stroke motors emit 85% less pollutants than previous carbureted two-strokes. Polaris has developed a fuel-injection technology called "Cleanfire Injection" on their two-strokes. The industry is also working on a direct-injected "clean two strokes" that is better in terms of NOX emissions. Independent researchers, undergraduates and graduate students participate in contests to lessen the impact of emissions from snowmobiles. The Clean Snow Mobile Challenge is held yearly at Michigan Technological University regrouping the entries from universities from across United States and Canada. Some of the participants in recent years have been the École polytechnique de Montréal with a Quasiturbine engine and students from École de technologie supérieure of the UQAM with a less polluting two-stroke engine using E85 and direct injection. 
Noise Maximum noise restrictions have been enacted by law for both production of snowmobiles and aftermarket components. For instance, in Quebec (Canada) noise levels must be 78 decibels or less at 20 meters from a snowmobile path. As of 2009, snowmobiles produce 90% less noise than in the 1960s but there are still numerous complaints. Efforts to reduce noise focus on suppressing mechanical noise of the suspension components and tracks. Arctic Cat in 2005 introduced "Silent Track technology" on touring models such as the T660 Turbo, Bearcat, and some M-Series sleds. Ski-Doo has since then also used comparative "silent track technology" on some models. The use of aftermarket exhaust systems ("cans" or "silencers") is controversial. These replace the stock muffler with a less restrictive system that is usually claimed to increase power output of the engine. However, these aftermarket exhausts are often much louder than those from the factory, with only some being slightly quieter than a completely open, unbaffled system. Most, if not all, local snowmobile clubs (that maintain and groom trail systems) do not recommend them because of noise. Local and state authorities have set up checkpoints on high-traffic trails, checking for excessively loud systems and issuing citations. Typically these systems are installed on two-stroke powered machines (giving the distinctive "braap" sound); however, in recent years aftermarket companies have released silencers for four-stroke models as well. Economic impact According to the International Snowmobile Manufacturers Association, snowmobilers in Canada and the United States spend over $28 billion on snowmobiling each year. This includes expenditures on equipment, clothing, accessories, snowmobiling vacations (lodging, fuel, and food), maintenance and others. Often this is the only source of income for some smaller towns, such as Bralorne, British Columbia, that rely solely on tourism during the summer and winter months. Once a booming gold mining town, Bralorne is now a very small town with a population of 60, and it is relatively inaccessible by car in the winter. The economy relies on visits from snowmobilers, who contribute to the economy by spending money on gas, food, and hotels. Social impact Since the invention of snowmobiles, isolated communities of northern North America have always had a demand for them. However, the early snowmobiles designs were not economical or functional enough for the harsh environment of northern North America. Joseph-Armand Bombardier started producing the Ski-Doo in 1959 at the request of a priest. The priest had asked Bombardier to make an economical and reliable means of winter travel. The Ski-Doo greatly changed life in northern North America's isolated communities, where Ski-Doo replaced sled dogs by the end of the 1960s. The Ski-Doo also greatly improved communication between isolated communities. In northern North America, historically, isolated communities depended on dog sledding and snowshoeing as their primary method of transportation for hunting during the winter months. The Ski-Doo allowed trappers to travel greater distances faster, allowing them to expand their hunting grounds. Prospectors, mining companies, foresters, backcountry cabin owners, the Royal Canadian Mounted Police and Canadian Army also found snowmobiles very effective because they were the most economical method of transportation of small loads. 
Joseph-Armand Bombardier's tests of the Ski-Dog proved that snowmobiling was fun, and snowmobiling became a new form of outdoor recreation. People who once sat dormant throughout winter were now given the opportunity to take part in more outdoor activities.
Technology
Motorized road transport
null
332902
https://en.wikipedia.org/wiki/Olympiad
Olympiad
An olympiad (Olympiás) is a period of four years, particularly those associated with the ancient and modern Olympic Games. Although the ancient Olympics were established during Greece's Archaic Era, it was not until Hippias that a consistent list was established and not until Ephorus in the Hellenistic period that the first recorded Olympic contest was used as a calendar epoch. Ancient authors agreed that other Olympics had been held before the race won by Coroebus but disagreed on how many; the convention was established to place Coroebus's victory at a time equivalent to the summer of 776 BC in the Proleptic Julian calendar, and to treat it as Year 1 of Olympiad 1. Olympiad 2 began with the next games in the summer of 772 BC. Thus, for N less than 195, Olympiad N is reckoned as having started in the year (780 − 4N) BC and ended four years later. For N greater than or equal to 195, Olympiad N began in AD (4N − 779) and ended four years later. By extrapolation, the first year of the Nth Olympiad begins roughly around 2 August of the corresponding year. In reference to the modern Olympics, their Olympiads are four-year periods beginning on January 1 of the year of the Summer Games. Thus, the modern Olympiad I began 1 January 1896, Olympiad II began 1 January 1900, and so on. Olympiad XXXIII began 1 January 2024. Because the Julian and Gregorian calendars go directly from 1 BC to AD 1, the cycle of modern Olympiads is ahead of the ancient cycle by one year. Ancient Olympics Each olympiad started with the holding of the games, which originally began on the first or second full moon after the summer solstice. After the introduction of the Metonic cycle about 432 BC, the start of the games was determined slightly differently. Within each olympiad, time was reckoned by referring to its 1st, 2nd, 3rd, or 4th year. Ancient writers sometimes describe their Olympiads as lasting five years but do so by counting inclusively; in fact each comprised a four-year period. For example, the first year of Olympiad 140 began in the summer of 220 BC and lasted until the middle of 219 BC. After the 2nd, 3rd, and 4th years of Olympiad 140, the games in the summer of 216 BC would begin the first year of Olympiad 141. Historians The sophist Hippias was the first writer to compile a comprehensive list of the Olympic victors (olympioníkes). Although his Olympic Record (Olympionikō̂n Anagraphḗ) is now entirely lost, it apparently formed the basis of all later Olympic dating. By the time of Eratosthenes, his dating of Coroebus's victory to 776 BC had been generally accepted. The panhellenic nature of the games, their regular schedule, and the improved victor list allowed Greek historians to use the Olympiads as a way of reckoning time that did not depend on the various calendars of the city-states. (See e.g. the Attic calendar of the Athenians.) The first to do so consistently was Timaeus of Tauromenium in the third century BC. Nevertheless, since for events of the early history of the games the reckoning was used in retrospect, some of the dates given by later historians for events before the 5th century BC are very unreliable. In the 2nd century, Phlegon of Tralles summarized the events of each Olympiad in a book called Olympiads; fragments survive in the work of the Byzantine writer Photius. Christian chroniclers continued to use this Greek system of dating as a way of synchronizing biblical events with Greek and Roman history. In the 3rd century, Sextus Julius Africanus compiled a list of Olympic victors up to AD 217, and this list has been preserved in the Chronicle of Eusebius. 
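A small arithmetic sketch of the reckoning above, covering both the ancient and the modern cycle. The function names are illustrative, and BC years are represented as negative integers because the calendar has no year 0.

```python
# Convert Olympiad numbers to the calendar year in which the Olympiad began,
# following the conventions described above.  Negative values stand for BC
# years (there is no year 0), positive values for AD years.

def ancient_olympiad_start(n: int) -> int:
    """Julian year in which ancient Olympiad n began (in the summer).
    Olympiad 1 -> 776 BC, Olympiad 2 -> 772 BC, Olympiad 195 -> AD 1."""
    year = -(780 - 4 * n)                    # 776 BC for n = 1, BC as negative
    return year if year < 0 else year + 1    # skip the nonexistent year 0

def modern_olympiad_start(n: int) -> int:
    """Calendar year in which modern Olympiad n began on 1 January.
    Olympiad I -> 1896, Olympiad XXXIII -> 2024."""
    return 1896 + 4 * (n - 1)

print(ancient_olympiad_start(1))     # -776  (776 BC)
print(ancient_olympiad_start(195))   # 1     (AD 1)
print(modern_olympiad_start(33))     # 2024
```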
Examples of Ancient Olympiad dates Early historians sometimes used the names of Olympic victors as a method of dating events to a specific year. For instance, Thucydides says in his account of the year 428 BC: "It was the Olympiad in which the Rhodian Dorieus gained his second victory." Dionysius of Halicarnassus dates the foundation of Rome to the first year of the seventh Olympiad, 752/751 BC. Since Rome was founded on April 21, which was in the last half of the ancient Olympic year, it would be 751 BC specifically. In Book 1 chapter 75 Dionysius states: "...Romulus, the first ruler of the city, began his reign in the first year of the seventh Olympiad, when Charops at Athens was in the first year of his ten-year term as archon." Diodorus Siculus dates the Persian invasion of Greece to 480 BC: "Calliades was archon in Athens, and the Romans made Spurius Cassius and Proculus Verginius Tricostus consuls, and the Eleians celebrated the Seventy-fifth Olympiad, that in which Astylus of Syracuse won the stadion. It was in this year that king Xerxes made his campaign against Greece." Jerome, in his Latin translation of the Chronicle of Eusebius, dates the birth of Jesus Christ to year 3 of Olympiad 194, the 42nd year of the reign of the emperor Augustus, which equates to the year 2 BC. Anolympiad Though the games were held without interruption, on more than one occasion they were held by others than the Eleians. The Eleians declared such games Anolympiads (non-Olympics), but it is assumed the winners were nevertheless recorded. End of the era During the 3rd century, records of the games are so scanty that historians are not certain whether after AD 261 they were still held every four years. Some winners were recorded, though, until the 293rd and last Olympiad of AD 393. In 394, Roman Emperor Theodosius I outlawed the games at Olympia as pagan. Though it would have been possible to continue the reckoning by just counting four-year periods, by the middle of the 5th century reckoning by Olympiads had ceased. Modern Olympics Start and end The Summer Olympics are more correctly referred to as the Games of the Olympiad. The first poster to announce the games using this term was the one for the 1932 Summer Olympics, in Los Angeles, using the phrase: Call to the games of the Xth Olympiad. The modern Olympiad is a period of four years: the first Olympiad started on 1 January 1896, and an Olympiad starts on 1 January of the years evenly divisible by four. This means that the count of the Olympiads continues, even if Olympic Games are cancelled: For instance, the regular intervals would have meant (summer) Olympic Games should have occurred in 1940 and 1944, but both were cancelled due to World War II. Nonetheless, the count of the Olympiads continued: The 1936 Games were those of the XI Olympiad, while the next Summer Games were those of 1948, which were the Games of the XIV Olympiad. The current Olympiad is the XXXIII of the modern era, which began on 1 January 2024. Note, however, that the official numbering of the Winter Olympics does not count Olympiads; it counts only the Games themselves. For example, the first Winter Games, in 1924, are not designated as Winter Games of the VII Olympiad, but as the I Winter Olympic Games. (The first Winter Games were only termed "Olympic" in a later year.) The 1936 Summer Games were the Games of the XI Olympiad. After the 1940 and 1944 Summer Games were canceled due to World War II, the Games resumed in 1948 as the Games of the XIV Olympiad. 
However, the 1936 Winter Games were the IV Winter Olympic Games, and on the resumption of the Winter Games in 1948, the event was designated the V Winter Olympic Games. The 2020 Summer Games were the Games of the XXXII Olympiad. On 24 March 2020, due to the COVID-19 pandemic, it was postponed to 2021 rather than cancelled, and thus becoming the first postponement in the 124-year history of the Olympics. Some media people have from time to time referred to a particular (e.g., the nth) Winter Olympics as "the Games of the nth Winter Olympiad", perhaps believing it to be the correct formal name for the Winter Games by analogy with that of the Summer Games. Indeed, at least one IOC-published article has applied this nomenclature as well. This analogy is sometimes extended further by media references to "Summer Olympiads". However, the IOC does not seem to make an official distinction between Olympiads for the summer and winter games, and such usage, particularly for the Winter Olympics, is inconsistent with the numbering discussed above. Quadrennium Some Olympic Committees often use the term quadrennium, which it claims refers to the same four-year period. However, it indicates these quadrennia in calendar years, starting with the first year after the Summer Olympics and ending with the year the next Olympics are held. This would suggest a more precise period of four years, but, for example, the 2001–2004 Quadrennium would then not be exactly the same period as the XXVII Olympiad, which was 2000–2003. Cultural Olympiad A Cultural Olympiad is a concept protected by the International Olympic Committee and may be used only within the limits defined by an Organizing Committee for the Olympic Games. From one Games to the next, the scale of the Cultural Olympiad varies considerably, sometimes involving activity over the entire Olympiad and other times emphasizing specific periods within it. Baron Pierre de Coubertin established the principle of Olympic Art Competitions at a special congress in Paris in 1906, and the first official programme was presented during the 1912 Games in Stockholm. These competitions were also named the "Pentathlon of the Muses", as their purpose was to bring artists to present their work and compete for "art" medals across five categories: architecture, music, literature, sculpture and painting. Nowadays, while there are no competitions as such, cultural and artistic practice is displayed via the Cultural Olympiad. The 2010 Winter Olympics in Vancouver presented the Cultural Olympiad Digital Edition. The 2012 Olympics included an extensive Cultural Olympiad with the London 2012 Festival in the host city, and events elsewhere including the World Shakespeare Festival produced by the RSC. The 2016 games' Cultural Olympiad was scaled back due to Brazil's recession; there was no published programme, with director Carla Camurati promising "secret" and "spontaneous" events such as flash mobs. Cultural events in time for the 2020 Summer Olympics in Tokyo were planned before being canceled due to pandemic restrictions in Japan. Instead, an alternative virtual event was held. Other uses The English term is still often used popularly to indicate the games themselves, a usage that is uncommon in ancient Greek (as an Olympiad is most often the time period between and including sets of games). It is also used to indicate international competitions other than physical sports. 
This includes international science olympiads, such as the International Geography Olympiad, International Mathematical Olympiad, International Forensics Olympiad, and the International Linguistics Olympiad and their associated national qualifying tests (e.g., the United States of America Mathematical Olympiad, the USA Forensics Olympiad or the United Kingdom Linguistics Olympiad), and also events in mind-sports, such as the Science Olympiad, Mindsport Olympiad, Chess Olympiad, International History Olympiad and Computer Olympiad. In these cases Olympiad is used to indicate a regular event of international competition for top achieving participants; it does not necessarily indicate a four-year period. In some languages, like Czech and Slovak, Olympiad () is the correct term for the games. The Olympiad (L'Olimpiade) is also the name of some 60 operas set in Ancient Greece.
Physical sciences
Time
Basics and measurement
333119
https://en.wikipedia.org/wiki/Epidermis
Epidermis
The epidermis is the outermost of the three layers that comprise the skin, the inner layers being the dermis and hypodermis. The epidermal layer provides a barrier to infection from environmental pathogens and regulates the amount of water released from the body into the atmosphere through transepidermal water loss. The epidermis is composed of multiple layers of flattened cells that overlie a base layer (stratum basale) composed of columnar cells arranged perpendicularly. The layers of cells develop from stem cells in the basal layer. The thickness of the epidermis varies from 31.2μm for the penis to 596.6μm for the sole of the foot with most being roughly 90μm. Thickness does not vary between the sexes but becomes thinner with age. The human epidermis is an example of epithelium, particularly a stratified squamous epithelium. The word epidermis is derived through Latin , itself and . Something related to or part of the epidermis is termed epidermal. Structure Cellular components The epidermis primarily consists of keratinocytes (proliferating basal and differentiated suprabasal), which comprise 90% of its cells, but also contains melanocytes, Langerhans cells, Merkel cells, and inflammatory cells. Epidermal thickenings called Rete ridges (or rete pegs) extend downward between dermal papillae. Blood capillaries are found beneath the epidermis, and are linked to an arteriole and a venule. The epidermis itself has no blood supply and is nourished almost exclusively by diffused oxygen from the surrounding air. Cellular mechanisms for regulating water and sodium levels (ENaCs) are found in all layers of the epidermis. Cell boundaries Epidermal cells are tightly interconnected to serve as a tight barrier against the exterior environment. The junctions between the epidermal cells are of the adherens junction type, formed by transmembrane proteins called cadherins. Inside the cell, the cadherins are linked to actin filaments. In immunofluorescence microscopy, the actin filament network appears as a thick border surrounding the cells, although the actin filaments are actually located inside the cell and run parallel to the cell membrane. Because of the proximity of the neighboring cells and tightness of the junctions, the actin immunofluorescence appears as a border between cells. Layers The epidermis is composed of four or five layers, depending on the skin region. Those layers from outermost to innermost are: cornified layer (stratum corneum) Composed of 10 to 30 layers of polyhedral, anucleated corneocytes (final step of keratinocyte differentiation), with the palms and soles having the most layers. Corneocytes contain a protein envelope (cornified envelope proteins) underneath the plasma membrane, are filled with water-retaining keratin proteins, attached together through corneodesmosomes and surrounded in the extracellular space by stacked layers of lipids. Most of the barrier functions of the epidermis localize to this layer. clear/translucent layer (stratum lucidum, only in palms and soles) This narrow layer is found only on the palms and soles. The epidermis of these two areas is known as "thick skin" because with this extra layer, the skin has 5 epidermal layers instead of 4. granular layer (stratum granulosum) Keratinocytes lose their nuclei and their cytoplasm appears granular. 
Lipids, contained into those keratinocytes within lamellar bodies, are released into the extracellular space through exocytosis to form a lipid barrier that prevents water loss from the body as well as entry of foreign substances. Those polar lipids are then converted into non-polar lipids and arranged parallel to the cell surface. For example glycosphingolipids become ceramides and phospholipids become free fatty acids. spinous layer (stratum spinosum) Keratinocytes become connected through desmosomes and produce lamellar bodies, from within the Golgi, enriched in polar lipids, glycosphingolipids, free sterols, phospholipids and catabolic enzymes. Langerhans cells, immunologically active cells, are located in the middle of this layer. basal/germinal layer (stratum basale/germinativum) Composed mainly of proliferating and non-proliferating keratinocytes, attached to the basement membrane by hemidesmosomes. Melanocytes are present, connected to numerous keratinocytes in this and other strata through dendrites. Merkel cells are also found in the stratum basale with large numbers in touch-sensitive sites such as the fingertips and lips. They are closely associated with cutaneous nerves and seem to be involved in light touch sensation. Malpighian layer (stratum malpighii) This is usually defined as both the stratum basale and stratum spinosum. The epidermis is separated from the dermis, its underlying tissue, by a basement membrane. Cellular kinetics Cell division As a stratified squamous epithelium, the epidermis is maintained by cell division within the stratum basale. Differentiating cells delaminate from the basement membrane and are displaced outward through the epidermal layers, undergoing multiple stages of differentiation until, in the stratum corneum, losing their nucleus and fusing to squamous sheets, which are eventually shed from the surface (desquamation). Differentiated keratinocytes secrete keratin proteins, which contribute to the formation of an extracellular matrix that is an integral part of the skin barrier function. In normal skin, the rate of keratinocyte production equals the rate of loss, taking about two weeks for a cell to journey from the stratum basale to the top of the stratum granulosum, and an additional four weeks to cross the stratum corneum. The entire epidermis is replaced by new cell growth over a period of about 48 days. Calcium concentration Keratinocyte differentiation throughout the epidermis is in part mediated by a calcium gradient, increasing from the stratum basale until the outer stratum granulosum, where it reaches its maximum, and decreasing in the stratum corneum. Calcium concentration in the stratum corneum is very low in part because those relatively dry cells are not able to dissolve the ions. This calcium gradient parallels keratinocyte differentiation and as such is considered a key regulator in the formation of the epidermal layers. Elevation of extracellular calcium concentrations induces an increase in intracellular free calcium concentrations. Part of that intracellular increase comes from calcium released from intracellular stores and another part comes from transmembrane calcium influx, through both calcium-sensitive chloride channels and voltage-independent cation channels permeable to calcium. Moreover, it has been suggested that an extracellular calcium-sensing receptor (CaSR) also contributes to the rise in intracellular calcium concentration. 
Development Epidermal organogenesis, the formation of the epidermis, begins in the cells covering the embryo after neurulation, the formation of the central nervous system. In most vertebrates, this original one-layered structure quickly transforms into a two-layered tissue; a temporary outer layer, the embryonic periderm, which is disposed once the inner basal layer or stratum germinativum has formed. This inner layer is a germinal epithelium that gives rise to all epidermal cells. It divides to form the outer spinous layer (stratum spinosum). The cells of these two layers, together called the Malpighian layer(s) after Marcello Malpighi, divide to form the superficial granular layer (Stratum granulosum) of the epidermis. The cells in the stratum granulosum do not divide, but instead form skin cells called keratinocytes from the granules of keratin. These skin cells finally become the cornified layer (stratum corneum), the outermost epidermal layer, where the cells become flattened sacks with their nuclei located at one end of the cell. After birth these outermost cells are replaced by new cells from the stratum granulosum and throughout life they are shed at a rate of 30 - 90 milligrams of skin flakes every hour, or 0.720 - 2.16 grams per day. Epidermal development is a product of several growth factors, two of which are: Transforming growth factor Alpha (TGFα) is an autocrine growth factor by which basal cells stimulate their own division. Keratinocyte growth factor (KGF or FGF7) is a paracrine growth factor produced by the underlying dermal fibroblasts in which the proliferation of basal cells is regulated. Function Barrier The epidermis serves as a barrier to protect the body against microbial pathogens, oxidant stress (UV light), and chemical compounds, and provides mechanical resistance to minor injury. Most of this barrier role is played by the stratum corneum. Characteristics Physical barrier: Epidermal keratinocytes are tightly linked by cell–cell junctions associated to cytoskeletal proteins, giving the epidermis its mechanical strength. Chemical barrier: Highly organized lipids, acids, hydrolytic enzymes, and antimicrobial peptides inhibit passage of external chemicals and pathogens into the body. Immunologically active barrier: The humoral and cellular constituents of the immune system found in the epidermis actively combat infection. Water content of the stratum corneum drops towards the surface, creating hostile conditions for pathogenic microorganism growth. An acidic pH (around 5.0) and low amounts of water make the epidermis hostile to many microorganic pathogens. Non-pathogenic microorganisms on the surface of the epidermis help defend against pathogens by competing for food, limiting its availability, and producing chemical secretions that inhibit the growth of pathogenic microbiota. Permeability Psychological stress, through an increase in glucocorticoids, compromises the stratum corneum and thus the barrier function. Sudden and large shifts in humidity alter stratum corneum hydration in a way that could allow entry of pathogenic microorganisms. Skin hydration The ability of the skin to hold water is primarily due to the stratum corneum and is critical for maintaining healthy skin. Skin hydration is quantified using corneometry. Lipids arranged through a gradient and in an organized manner between the cells of the stratum corneum form a barrier to transepidermal water loss. 
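The shedding rate quoted in the Development section above, 30 to 90 milligrams of skin flakes per hour, converts directly into the daily figures given there. A minimal arithmetic check (illustrative code only):

```python
MG_PER_GRAM = 1000
HOURS_PER_DAY = 24

def daily_shedding_grams(mg_per_hour):
    """Convert a shedding rate in milligrams per hour to grams per day."""
    return mg_per_hour * HOURS_PER_DAY / MG_PER_GRAM

low, high = daily_shedding_grams(30), daily_shedding_grams(90)
print(f"{low:.3f} - {high:.2f} g/day")                      # 0.720 - 2.16 g/day, matching the text
print(f"roughly {low * 365:.0f} - {high * 365:.0f} g/year")  # about 263 - 788 g/year
```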
Skin color The amount and distribution of melanin pigment in the epidermis is the main reason for variation in skin color in Homo sapiens. Melanin is found in the small melanosomes, particles formed in melanocytes from which they are transferred to the surrounding keratinocytes. The size, number, and arrangement of the melanosomes vary between racial groups, but while the number of melanocytes can vary between different body regions, their numbers remain the same in individual body regions in all human beings. In white and Asian skin the melanosomes are packed in "aggregates", but in black skin they are larger and distributed more evenly. The number of melanosomes in the keratinocytes increases with UV radiation exposure, while their distribution remains largely unaffected. Touch The skin contains specialized epidermal touch receptor cells called Merkel cells. Historically, the role of Merkel cells in sensing touch has been thought to be indirect, due to their close association with nerve endings. However, recent work in mice and other model organisms demonstrates that Merkel cells intrinsically transform touch into electrical signals that are transmitted to the nervous system. Clinical significance Laboratory culture of keratinocytes to form a 3D structure (artificial skin) recapitulating most of the properties of the epidermis is routinely used as a tool for drug development and testing. Hyperplasia Epidermal hyperplasia (thickening resulting from cell proliferation) has various forms: Acanthosis is diffuse epidermal hyperplasia (thickening of the skin, and not to be confused with acanthocytes). It implies increased thickness of the Malpighian layer (stratum basale and stratum spinosum). Acanthosis nigricans is a black, poorly defined, velvety hyperpigmented acanthosis, usually observed on the back of the neck, axilla, and other folded regions of the skin. Focal epithelial hyperplasia (Heck's disease) is an asymptomatic, benign neoplastic condition characterized by multiple white to pinkish papules that occur diffusely in the oral cavity. Pseudoepitheliomatous hyperplasia (PEH) is a benign condition characterized by hyperplasia of the epidermis and epithelium of skin appendages, with irregular squamous strands extending down into the dermis, and closely simulating squamous cell carcinoma (SCC). In contrast, hyperkeratosis is a thickening of the stratum corneum, and is not necessarily due to hyperplasia.
Biology and health sciences
Integumentary system
Biology
333163
https://en.wikipedia.org/wiki/Halfbeak
Halfbeak
Hemiramphidae is a family of fishes that are commonly called halfbeaks, spipe fish or spipefish. They are a geographically widespread and numerically abundant family of epipelagic fish inhabiting warm waters around the world. The halfbeaks are named for their distinctive jaws, in which the lower jaws are significantly longer than the upper jaws. The similar viviparous halfbeaks (family Zenarchopteridae) have often been included in this family. Though not commercially important themselves, these forage fish support artisanal fisheries and local markets worldwide. They are also fed upon by other commercially important predatory fishes, such as billfishes, mackerels, and sharks. Taxonomy In 1758, Carl Linnaeus was the first to scientifically describe a halfbeak, Esox brasiliensis (now Hemiramphus brasiliensis). In 1775 Peter Forsskål described two more species as Esox, Esox far and Esox marginatus. It was not until 1816 that Georges Cuvier created the genus Hemiramphus; from then on, all three were classified as Hemiramphus. In 1859, Gill erected Hemiramphidae, deriving its name from Hemiramphus, the family's type genus. The name comes from the Greek hemi, meaning half, and rhamphos, meaning beak or bill. There are currently eight genera (including 60 species) within the family Hemirampphidae: Arrhamphus Günther, 1866 Chriodorus Goode & Bean, 1882 Euleptorhamphus Gill, 1859 Hemiramphus Cuvier, 1816 Hyporhamphus Gill, 1859 Melapedalion Fowler, 1934 Oxyporhamphus Gill, 1864 Rhynchorhamphus Fowler, 1928 This family is primarily marine and found in the Atlantic, Pacific, and Indian Oceans, though some inhabit estuaries and rivers. Evolution The halfbeaks' fossil record extends into the Lower Tertiary. The earliest known halfbeak is "Hemiramphus" edwardsi from the Eocene at Monte Bolca, Italy. Apart from differences in the length of the upper and lower jaws, recent and fossil halfbeaks are distinguished by the fusion of the third pair of upper pharyngeal bones into a plate. Phylogeny The phylogeny of the halfbeaks is in a state of flux. On the one hand, there is little question that they are most closely related to three other families of streamlined, surface water fishes: the flyingfishes, needlefishes, and sauries. Traditionally, these four families have been taken to together comprise the order Beloniformes. The halfbeaks and flyingfishes are considered to form one group, the superfamily Exocoetoidea, and the needlefishes and sauries another, the superfamily Scomberesocoidea. On the other hand, recent studies have demonstrated that rather than forming a single monophyletic group (a clade), the halfbeak family actually includes a number of lineages ancestral to the flyingfishes and the needlefishes. In other words, as traditionally defined, the halfbeak family is paraphyletic. Within the subfamily Hemiramphinae, the "flying halfbeak" genus Oxyporhamphus has proved to be particularly problematic; while morphologically closer to the flyingfishes, molecular evidence places it with Hemiramphus and Euleptorhamphus. Together, these three genera form the sister group to the flyingfish family. The other two hemiramphine genera Hyporhamphus and Arrhamphus form another clade of less clear placement. Rather than being closely related to the flyingfishes, the subfamily Zenarchopterinae appears to be the sister group of the needlefishes and sauries. This is based on the pharyngeal jaw apparatus, sperm ultrastructure, and molecular evidence. 
However, this hypothesis has awkward implications for how the morphological evolution of the group is understood, because the fused pharyngeal plate has been considered reliably diagnostic of the halfbeak family. Furthermore, the existing theory that halfbeaks are paedomorphic needlefish, based on the observation that juvenile needlefish pass through a developmental stage in which the lower jaw is longer than the upper jaw (the so-called "halfbeak stage"), is untenable. In fact the unequal lengths of the upper and lower jaws of halfbeaks appear to be the basal condition, with needlefish being relatively derived in comparison. Morphology The halfbeaks are elongate, streamlined fish adapted to living in open water. Halfbeaks can grow to over SL in the case of Euleptorhamphus viridis. The scales are relatively large, cycloid (smooth), and easily detached. There are no spines in the fins. A distinguishing characteristic is that the third pair of upper pharyngeal bones is ankylosed (fused) into a plate. Halfbeaks are one of several fish families that lack a stomach, all of which possess a pharyngeal jaw apparatus (pharyngeal mill). Most species have an extended lower jaw, at least as juveniles, though this feature may be lost as the fish mature, as with Chriodorus, for example. As is typical for surface-dwelling, open-water fish, most species are silvery, darker above and lighter below, an example of countershading. The tip of the lower jaw is bright red or orange in most species. Halfbeaks carry several adaptations to feeding at the water surface. The eyes and nostrils are at the top of the head and the upper jaw is mobile, but not the lower jaw. Combined with their streamlined shape and the concentration of fins towards the back (similar to that of a pike), these adaptations allow halfbeaks to locate, catch, and swallow food items very effectively. Range and habitat Halfbeaks inhabit warm seas, predominantly at the surface, in the Atlantic, Indian, and Pacific oceans. A small number are found in estuaries. Most species of marine halfbeaks are known from continental coastlines, but some extend into the western and central Pacific, and one species (Hyporhamphus ihi) is endemic to New Zealand. Hemiramphus is a worldwide marine genus. Ecology and behavior Feeding Marine halfbeaks are omnivores feeding on algae; marine plants such as seagrasses; plankton; invertebrates such as pteropods and crustaceans; and smaller fishes. For some subtropical species at least, juveniles are more predatory than adults. Some tropical species feed on animals during the day and plants at night, while other species alternate between carnivory in the summer and herbivory in the winter. They are in turn eaten by many ecologically and commercially important fish, such as billfish, mackerel, and sharks, and so are a key link between trophic levels. Behavior Marine halfbeaks are typically pelagic schooling forage fish. The southern sea garfish Hyporhamphus melanochir, for example, is found in sheltered bays, coastal seas, and estuaries around southern Australia in waters down to a depth of . These fish school near the surface at night but swim closer to the sea floor during the day, particularly among beds of seagrasses. Genetic analysis of the different sub-populations of the southern sea garfish Hyporhamphus melanochir in South Australian coastal waters reveals that there is a small but consistent migration of individuals among them, sufficient to keep them genetically homogeneous.
Some marine halfbeaks, including Euleptorhamphus velox and Euleptorhamphus viridis, are known for their ability to jump out of the water and glide over the surface for considerable distances, and have consequently sometimes been called flying halfbeaks. Reproduction Hemiramphidae species are all external fertilizers. They are usually egg-layers and often produce relatively small numbers of fairly large eggs for fish of their size, typically in shallow coastal waters, such as the seagrass meadows of Florida Bay. The eggs of Hemiramphus brasiliensis and H. balao are typically in diameter and have attaching filaments. They hatch when they grow to about in diameter. Hyporhamphus melanochir eggs are slightly larger, around in diameter, and are unusually large when they hatch, being up to in size. Relatively little is known about the ecology of juvenile marine halfbeaks, though estuarine habitats seem to be favored by at least some species. The southern sea garfish Hyporhamphus melanochir grows rapidly at first, attaining a length of up to in the first three years, after which point growth slows. This species lives for a maximum age of about 9 years, at which point the fish reach up to and weigh about . Relationship to humans Halfbeak fisheries Halfbeaks are not a major target for commercial fisheries, though small fisheries for them exist in some places, for example in South Australia where fisheries target the southern sea garfish (Hyporhamphus melanochir). and the eastern sea garfish (Hyporhamphus australis). Halfbeaks are caught by a variety of methods including seines and pelagic trawls, dip-netting under lights at night, and with haul nets. They are utilized fresh, dried, smoked, or salted, and they are considered good eating. However, even where halfbeaks are targeted by fisheries, they tend to be of secondary importance compared with other edible fish species. In some localities significant bait fisheries exist to supply sport fishermen. One study of a bait fishery in Florida that targets Hemiramphus brasiliensis and Hemiramphus balao suggests that despite increases in the size of the fishery the population is stable and the annual catch is valued at around $500,000.
Biology and health sciences
Acanthomorpha
Animals
333170
https://en.wikipedia.org/wiki/Fluctuation%20theorem
Fluctuation theorem
The fluctuation theorem (FT), which originated from statistical mechanics, deals with the relative probability that the entropy of a system which is currently away from thermodynamic equilibrium (i.e., maximum entropy) will increase or decrease over a given amount of time. While the second law of thermodynamics predicts that the entropy of an isolated system should tend to increase until it reaches equilibrium, it became apparent after the discovery of statistical mechanics that the second law is only a statistical one, suggesting that there should always be some nonzero probability that the entropy of an isolated system might spontaneously decrease; the fluctuation theorem precisely quantifies this probability. Statement Roughly, the fluctuation theorem relates to the probability distribution of the time-averaged irreversible entropy production over the interval (0, t), denoted here Σ_t. The theorem states that, in systems away from equilibrium over a finite time t, the ratio between the probability that Σ_t takes on a value A and the probability that it takes the opposite value, −A, will be exponential in At. In other words, for a finite non-equilibrium system in a finite time, the FT gives a precise mathematical expression for the probability that entropy will flow in a direction opposite to that dictated by the second law of thermodynamics. Mathematically, the FT is expressed as: P(Σ_t = A) / P(Σ_t = −A) = exp(A t). This means that as the time or system size increases (since the total entropy production Σ_t t is extensive), the probability of observing an entropy production opposite to that dictated by the second law of thermodynamics decreases exponentially. The FT is one of the few expressions in non-equilibrium statistical mechanics that is valid far from equilibrium. Note that the FT does not state that the second law of thermodynamics is wrong or invalid. The second law of thermodynamics is a statement about macroscopic systems. The FT is more general. It can be applied to both microscopic and macroscopic systems. When applied to macroscopic systems, the FT is equivalent to the second law of thermodynamics. History The FT was first proposed and tested using computer simulations by Denis Evans, E.G.D. Cohen and Gary Morriss in 1993. The first derivation was given by Evans and Debra Searles in 1994. Since then, much mathematical and computational work has been done to show that the FT applies to a variety of statistical ensembles. The first laboratory experiment that verified the validity of the FT was carried out in 2002. In this experiment, a plastic bead was pulled through a solution by a laser. Fluctuations in the velocity were recorded that were opposite to what the second law of thermodynamics would dictate for macroscopic systems. In 2020, observations at high spatial and spectral resolution of the solar photosphere showed that solar turbulent convection satisfies the symmetries predicted by the fluctuation relation at a local level. Second law inequality A simple consequence of the fluctuation theorem given above is that if we carry out an arbitrarily large ensemble of experiments from some initial time t = 0, and perform an ensemble average of time averages of the entropy production, then an exact consequence of the FT is that the ensemble average cannot be negative for any value of the averaging time t: ⟨Σ_t⟩ ≥ 0 for all t. This inequality is called the second law inequality. This inequality can be proved for systems with time-dependent fields of arbitrary magnitude and arbitrary time dependence. It is important to understand what the second law inequality does not imply.
It does not imply that the ensemble averaged entropy production is non-negative at all times. This is untrue, as consideration of the entropy production in a viscoelastic fluid subject to a sinusoidal time dependent shear rate shows (e.g., rogue waves). In this example the ensemble average of the time integral of the entropy production over one cycle is however nonnegative – as expected from the second law inequality. Nonequilibrium partition identity Another remarkably simple and elegant consequence of the fluctuation theorem is the so-called "nonequilibrium partition identity" (NPI): Thus in spite of the second law inequality, which might lead you to expect that the average would decay exponentially with time, the exponential probability ratio given by the FT exactly cancels the negative exponential in the average above leading to an average which is unity for all time. Implications There are many important implications from the fluctuation theorem. One is that small machines (such as nanomachines or even mitochondria in a cell) will spend part of their time actually running in "reverse". What is meant by "reverse" is that it is possible to observe that these small molecular machines are able to generate work by taking heat from the environment. This is possible because there exists a symmetry relation in the work fluctuations associated with the forward and reverse changes a system undergoes as it is driven away from thermal equilibrium by the action of an external perturbation, which is a result predicted by the Crooks fluctuation theorem. The environment itself continuously drives these molecular machines away from equilibrium and the fluctuations it generates over the system are very relevant because the probability of observing an apparent violation of the second law of thermodynamics becomes significant at this scale. This is counterintuitive because, from a macroscopic point of view, it would describe complex processes running in reverse. For example, a jet engine running in reverse, taking in ambient heat and exhaust fumes to generate kerosene and oxygen. Nevertheless, the size of such a system makes this observation almost impossible to occur. Such a process is possible to be observed microscopically because, as it has been stated above, the probability of observing a "reverse" trajectory depends on system size and is significant for molecular machines if an appropriate measurement instrument is available. This is the case with the development of new biophysical instruments such as the optical tweezers or the atomic force microscope. Crooks fluctuation theorem has been verified through RNA folding experiments. Dissipation function Strictly speaking the fluctuation theorem refers to a quantity known as the dissipation function. In thermostatted nonequilibrium states that are close to equilibrium, the long time average of the dissipation function is equal to the average entropy production. However the FT refers to fluctuations rather than averages. The dissipation function is defined as where k is the Boltzmann constant, is the initial (t = 0) distribution of molecular states , and is the molecular state arrived at after time t, under the exact time reversible equations of motion. is the INITIAL distribution of those time evolved states. Note: in order for the FT to be valid we require that . This condition is known as the condition of ergodic consistency. It is widely satisfied in common statistical ensembles - e.g. the canonical ensemble. 
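None of the relations above depend on the microscopic details of any particular system, so they can be illustrated with a toy model before turning to concrete physical realizations. In the sketch below (an illustration only, not the Evans–Searles derivation), the accumulated entropy production over a time t, in units of Boltzmann's constant, is drawn from a Gaussian distribution with mean μt and variance 2μt; that mean-to-variance relation makes the Gaussian the simplest distribution satisfying the fluctuation theorem exactly. The code checks the FT symmetry, the second law inequality, and the nonequilibrium partition identity numerically. The variable names and parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: accumulated entropy production over a time t (the time average
# multiplied by t), in units of k_B, drawn from a Gaussian with mean mu*t and
# variance 2*mu*t.  For that choice the fluctuation theorem holds exactly:
#   ln[ P(accumulated = A) / P(accumulated = -A) ] = A.
mu, t = 2.0, 1.0
samples = rng.normal(loc=mu * t, scale=np.sqrt(2 * mu * t), size=2_000_000)

# Second law inequality: the ensemble-averaged entropy production is non-negative.
print("mean entropy production:", samples.mean())      # ~ 2.0 >= 0

# Nonequilibrium partition identity: <exp(-Sigma)> = 1 at all times.
print("<exp(-Sigma)>:", np.exp(-samples).mean())       # ~ 1.0

# Fluctuation theorem: compare histogram estimates of P(A) and P(-A).
hist, edges = np.histogram(samples, bins=np.arange(-4.0, 8.0, 0.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for a in (0.75, 1.25, 1.75):
    p_pos = hist[np.argmin(np.abs(centers - a))]
    p_neg = hist[np.argmin(np.abs(centers + a))]
    print(f"A={a}: ln[P(A)/P(-A)] = {np.log(p_pos / p_neg):.2f}  (FT predicts {a})")
```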
The system may be in contact with a large heat reservoir in order to thermostat the system of interest. If this is the case is the heat lost to the reservoir over the time (0,t) and T is the absolute equilibrium temperature of the reservoir. With this definition of the dissipation function the precise statement of the FT simply replaces entropy production with the dissipation function in each of the FT equations above. Example: If one considers electrical conduction across an electrical resistor in contact with a large heat reservoir at temperature T, then the dissipation function is the total electric current density J multiplied by the voltage drop across the circuit, , and the system volume V, divided by the absolute temperature T, of the heat reservoir times the Boltzmann constant. Thus the dissipation function is easily recognised as the Ohmic work done on the system divided by the temperature of the reservoir. Close to equilibrium the long time average of this quantity is (to leading order in the voltage drop), equal to the average spontaneous entropy production per unit time. However, the fluctuation theorem applies to systems arbitrarily far from equilibrium where the definition of the spontaneous entropy production is problematic. Relation to Loschmidt's paradox The second law of thermodynamics, which predicts that the entropy of an isolated system out of equilibrium should tend to increase rather than decrease or stay constant, stands in apparent contradiction with the time-reversible equations of motion for classical and quantum systems. The time reversal symmetry of the equations of motion show that if one films a given time dependent physical process, then playing the movie of that process backwards does not violate the laws of mechanics. It is often argued that for every forward trajectory in which entropy increases, there exists a time reversed anti trajectory where entropy decreases, thus if one picks an initial state randomly from the system's phase space and evolves it forward according to the laws governing the system, decreasing entropy should be just as likely as increasing entropy. It might seem that this is incompatible with the second law of thermodynamics which predicts that entropy tends to increase. The problem of deriving irreversible thermodynamics from time-symmetric fundamental laws is referred to as Loschmidt's paradox. The mathematical derivation of the fluctuation theorem and in particular the second law inequality shows that, for a nonequilibrium process, the ensemble averaged value for the dissipation function will be greater than zero. This result requires causality, i.e. that cause (the initial conditions) precede effect (the value taken on by the dissipation function). This is clearly demonstrated in section 6 of that paper, where it is shown how one could use the same laws of mechanics to extrapolate backwards from a later state to an earlier state, and in this case the fluctuation theorem would lead us to predict the ensemble average dissipation function to be negative, an anti-second law. This second prediction, which is inconsistent with the real world, is obtained using an anti-causal assumption. That is to say that effect (the value taken on by the dissipation function) precedes the cause (here the later state has been incorrectly used for the initial conditions). The fluctuation theorem shows how the second law is a consequence of the assumption of causality. 
When we solve a problem we set the initial conditions and then let the laws of mechanics evolve the system forward in time, we don't solve problems by setting the final conditions and letting the laws of mechanics run backwards in time. Summary The fluctuation theorem is of fundamental importance to non-equilibrium statistical mechanics. The FT (together with the universal causation proposition) gives a generalisation of the second law of thermodynamics which includes as a special case, the conventional second law. It is then easy to prove the Second Law Inequality and the NonEquilibrium Partition Identity. When combined with the central limit theorem, the FT also implies the Green-Kubo relations for linear transport coefficients, close to equilibrium. The FT is however, more general than the Green-Kubo Relations because unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, scientists have not yet been able to derive the equations for nonlinear response theory from the FT. The FT does not imply or require that the distribution of time averaged dissipation be Gaussian. There are many examples known where the distribution of time averaged dissipation is non-Gaussian and yet the FT (of course) still correctly describes the probability ratios. Lastly the theoretical constructs used to prove the FT can be applied to nonequilibrium transitions between two different equilibrium states. When this is done the so-called Jarzynski equality or nonequilibrium work relation, can be derived. This equality shows how equilibrium free energy differences can be computed or measured (in the laboratory), from nonequilibrium path integrals. Previously quasi-static (equilibrium) paths were required. The reason why the fluctuation theorem is so fundamental is that its proof requires so little. It requires: knowledge of the mathematical form of the initial distribution of molecular states, that all time evolved final states at time t, must be present with nonzero probability in the distribution of initial states (t = 0) – the so-called condition of ergodic consistency and an assumption of time reversal symmetry. In regard to the latter "assumption", while the equations of motion of quantum dynamics may be time-reversible, quantum processes are nondeterministic by nature. What state a wave function collapses into cannot be predicted mathematically, and further the unpredictability of a quantum system comes not from the myopia of an observer's perception, but on the intrinsically nondeterministic nature of the system itself. In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. (T-symmetry). In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry. Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process.
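The scale-dependence that runs through this article, with negative fluctuations routinely observable in nanoscale systems but hopeless to observe macroscopically, can be made concrete with the resistor example given earlier. The sketch below simply reads the dissipation as the Ohmic power delivered to the system divided by k_B T (a back-of-the-envelope reading of that example; the numbers are illustrative values chosen here, not taken from the literature) and accumulates it over an observation time; by the fluctuation theorem, the relative probability of seeing the time-reversed, "anti-second-law" fluctuation over that interval is roughly the exponential of minus that number.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def dissipation_in_kb_units(current_amps, voltage_drop_volts, temperature_k, seconds):
    """Accumulated Ohmic dissipation I * V * t divided by k_B * T (dimensionless).

    exp(-result) is, roughly, how much less likely the reversed fluctuation is
    than the normal one over the same interval.
    """
    power_watts = current_amps * voltage_drop_volts
    return power_watts * seconds / (K_B * temperature_k)

# Macroscopic resistor: 1 A across 1 V at room temperature, watched for 1 s.
print(dissipation_in_kb_units(1.0, 1.0, 300.0, 1.0))      # ~ 2.4e20 -> reversal unobservable

# Nanoscale conductor: 1 pA across 1 mV, watched for 1 microsecond.
print(dissipation_in_kb_units(1e-12, 1e-3, 300.0, 1e-6))  # ~ 0.24 -> reversals common
```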
Physical sciences
Thermodynamics
Physics
333219
https://en.wikipedia.org/wiki/Eulerian%20path
Eulerian path
In graph theory, an Eulerian trail (or Eulerian path) is a trail in a finite graph that visits every edge exactly once (allowing for revisiting vertices). Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail that starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. The problem can be stated mathematically like this: Given the graph in the image, is it possible to construct a path (or a cycle; i.e., a path starting and ending on the same vertex) that visits each edge exactly once? Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. The first complete proof of this latter claim was published posthumously in 1873 by Carl Hierholzer. This is known as Euler's Theorem: A connected graph has an Euler cycle if and only if every vertex has an even number of incident edges. The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs. For the existence of Eulerian trails it is necessary that zero or two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails start at one of them and end at the other. A graph that has an Eulerian trail but not an Eulerian circuit is called semi-Eulerian. Definition An Eulerian trail, or Euler walk, in an undirected graph is a walk that uses each edge exactly once. If such a walk exists, the graph is called traversable or semi-eulerian. An Eulerian cycle, also called an Eulerian circuit or Euler tour, in an undirected graph is a cycle that uses each edge exactly once. If such a cycle exists, the graph is called Eulerian or unicursal. The term "Eulerian graph" is also sometimes used in a weaker sense to denote a graph where every vertex has even degree. For finite connected graphs the two definitions are equivalent, while a possibly unconnected graph is Eulerian in the weaker sense if and only if each connected component has an Eulerian cycle. For directed graphs, "path" has to be replaced with directed path and "cycle" with directed cycle. The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as well. An Eulerian orientation of an undirected graph G is an assignment of a direction to each edge of G such that, at each vertex v, the indegree of v equals the outdegree of v. Such an orientation exists for any undirected graph in which every vertex has even degree, and may be found by constructing an Euler tour in each connected component of G and then orienting the edges according to the tour. Every Eulerian orientation of a connected graph is a strong orientation, an orientation that makes the resulting directed graph strongly connected. Properties An undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component. An undirected graph can be decomposed into edge-disjoint cycles if and only if all of its vertices have even degree. 
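These characterizations give an immediate linear-time feasibility test: count the odd-degree vertices and check that all edges lie in a single connected component. The sketch below is a minimal illustration written for this article (the function name and input format, an edge list for an undirected multigraph, are assumptions, not a standard library interface).

```python
from collections import defaultdict

def eulerian_status(edges):
    """Classify an undirected multigraph given as a list of (u, v) edges.

    Returns "circuit" if an Eulerian circuit exists, "trail" if only an open
    Eulerian trail exists, and "none" otherwise.
    """
    adjacency = defaultdict(list)
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)

    # All edges must lie in a single connected component.
    start = next(iter(adjacency), None)
    if start is None:
        return "circuit"            # empty graph: trivially Eulerian
    seen, stack = {start}, [start]
    while stack:
        for w in adjacency[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if seen != set(adjacency):
        return "none"

    odd = sum(1 for v in adjacency if len(adjacency[v]) % 2 == 1)
    if odd == 0:
        return "circuit"
    if odd == 2:
        return "trail"
    return "none"

# The Koenigsberg graph: four land masses, seven bridges, four odd-degree vertices.
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
assert eulerian_status(konigsberg) == "none"
```

Running the check on the Königsberg graph reproduces Euler's negative answer, since all four vertices have odd degree.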
So, a graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint cycles and its nonzero-degree vertices belong to a single connected component. An undirected graph has an Eulerian trail if and only if exactly zero or two vertices have odd degree, and all of its vertices with nonzero degree belong to a single connected component. A directed graph has an Eulerian cycle if and only if every vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single strongly connected component. Equivalently, a directed graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint directed cycles and all of its vertices with nonzero degree belong to a single strongly connected component. A directed graph has an Eulerian trail if and only if at most one vertex has out-degree exceeding its in-degree by one, at most one vertex has in-degree exceeding its out-degree by one, every other vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph. Constructing Eulerian trails and circuits Fleury's algorithm Fleury's algorithm is an elegant but inefficient algorithm that dates to 1883. Consider a graph known to have all edges in the same component and at most two vertices of odd degree. The algorithm starts at a vertex of odd degree, or, if the graph has none, it starts with an arbitrarily chosen vertex. At each step it chooses the next edge in the path to be one whose deletion would not disconnect the graph, unless there is no such edge, in which case it picks the remaining edge left at the current vertex. It then moves to the other endpoint of that edge and deletes the edge. At the end of the algorithm there are no edges left, and the sequence from which the edges were chosen forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian trail if there are exactly two vertices of odd degree. While the graph traversal in Fleury's algorithm is linear in the number of edges, i.e. O(|E|), we also need to factor in the complexity of detecting bridges. If we are to re-run Tarjan's linear-time bridge-finding algorithm after the removal of every edge, Fleury's algorithm will have a time complexity of O(|E|²). A dynamic bridge-finding algorithm allows this to be improved, but the result is still significantly slower than alternative algorithms. Hierholzer's algorithm Hierholzer's 1873 paper provides a different method for finding Euler cycles that is more efficient than Fleury's algorithm: Choose any starting vertex v, and follow a trail of edges from that vertex until returning to v. It is not possible to get stuck at any vertex other than v, because the even degree of all vertices ensures that, when the trail enters another vertex w, there must be an unused edge leaving w. The tour formed in this way is a closed tour, but may not cover all the vertices and edges of the initial graph. As long as there exists a vertex u that belongs to the current tour but that has adjacent edges not part of the tour, start another trail from u, following unused edges until returning to u, and join the tour formed in this way to the previous tour. Since we assume the original graph is connected, repeating the previous step will exhaust all edges of the graph.
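A compact way to realize the procedure just described is to run the edge-following step iteratively with an explicit stack, splicing sub-tours in as they are discovered. The sketch below is a standard stack-based rendering of Hierholzer's method for undirected multigraphs, written for this article under the assumption that the input already satisfies the Eulerian-circuit conditions above; it runs in time linear in the number of edges.

```python
from collections import defaultdict

def hierholzer_circuit(edges):
    """Return an Eulerian circuit, as a list of vertices, of an undirected multigraph.

    Assumes the conditions above hold: every vertex has even degree, all edges
    lie in one connected component, and the edge list is non-empty.
    Runs in O(|V| + |E|) time.
    """
    # Adjacency lists store (neighbour, edge_id) so parallel edges stay distinct.
    adjacency = defaultdict(list)
    for edge_id, (u, v) in enumerate(edges):
        adjacency[u].append((v, edge_id))
        adjacency[v].append((u, edge_id))

    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # Discard edges already consumed from the end of v's adjacency list.
        while adjacency[v] and used[adjacency[v][-1][1]]:
            adjacency[v].pop()
        if adjacency[v]:
            w, edge_id = adjacency[v].pop()
            used[edge_id] = True
            stack.append(w)              # extend the current trail
        else:
            circuit.append(stack.pop())  # dead end: back up, emitting the tour
    return circuit[::-1]

# A square with one diagonal doubled: every vertex has even degree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
tour = hierholzer_circuit(edges)
assert tour[0] == tour[-1] and len(tour) == len(edges) + 1
```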
By using a data structure such as a doubly linked list to maintain the set of unused edges incident to each vertex, to maintain the list of vertices on the current tour that have unused edges, and to maintain the tour itself, the individual operations of the algorithm (finding unused edges exiting each vertex, finding a new starting vertex for a tour, and connecting two tours that share a vertex) may be performed in constant time each, so the overall algorithm takes linear time, . This algorithm may also be implemented with a deque. Because it is only possible to get stuck when the deque represents a closed tour, one should rotate the deque by removing edges from the tail and adding them to the head until unstuck, and then continue until all edges are accounted for. This also takes linear time, as the number of rotations performed is never larger than (intuitively, any "bad" edges are moved to the head, while fresh edges are added to the tail) Counting Eulerian circuits Complexity issues The number of Eulerian circuits in digraphs can be calculated using the so-called BEST theorem, named after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. The formula states that the number of Eulerian circuits in a digraph is the product of certain degree factorials and the number of rooted arborescences. The latter can be computed as a determinant, by the matrix tree theorem, giving a polynomial time algorithm. BEST theorem is first stated in this form in a "note added in proof" to the Aardenne-Ehrenfest and de Bruijn paper (1951). The original proof was bijective and generalized the de Bruijn sequences. It is a variation on an earlier result by Smith and Tutte (1941). Counting the number of Eulerian circuits on undirected graphs is much more difficult. This problem is known to be #P-complete. In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968) is believed to give a sharp approximation for the number of Eulerian circuits in a graph, though as yet there is no proof of this fact (even for graphs of bounded degree). Special cases An asymptotic formula for the number of Eulerian circuits in the complete graphs was determined by McKay and Robinson (1995): A similar formula was later obtained by M.I. Isaev (2009) for complete bipartite graphs: Applications Eulerian trails are used in bioinformatics to reconstruct the DNA sequence from its fragments. They are also used in CMOS circuit design to find an optimal logic gate ordering. There are some algorithms for processing trees that rely on an Euler tour of the tree (where each edge is treated as a pair of arcs). The de Bruijn sequences can be constructed as Eulerian trails of de Bruijn graphs. In infinite graphs In an infinite graph, the corresponding concept to an Eulerian trail or Eulerian cycle is an Eulerian line, a doubly-infinite trail that covers all of the edges of the graph. It is not sufficient for the existence of such a trail that the graph be connected and that all vertex degrees be even; for instance, the infinite Cayley graph shown, with all vertex degrees equal to four, has no Eulerian line. The infinite graphs that contain Eulerian lines were characterized by . For an infinite graph or multigraph to have an Eulerian line, it is necessary and sufficient that all of the following conditions be met: is connected. has countable sets of vertices and edges. has no vertices of (finite) odd degree. 
Removing any finite subgraph from leaves at most two infinite connected components in the remaining graph, and if has even degree at each of its vertices then removing leaves exactly one infinite connected component. Undirected Eulerian graphs Euler stated a necessary condition for a finite graph to be Eulerian as all vertices must have even degree. Hierholzer proved this is a sufficient condition in a paper published in 1873. This leads to the following necessary and sufficient statement for what a finite graph must have to be Eulerian: An undirected connected finite graph is Eulerian if and only if every vertex of G has even degree. The following result was proved by Veblen in 1912: An undirected connected graph is Eulerian if and only if it is the disjoint union of some cycles.Hierholzer developed a linear time algorithm for constructing an Eulerian tour in an undirected graph. Directed Eulerian graphs It is possible to have a directed graph that has all even out-degrees but is not Eulerian. Since an Eulerian circuit leaves a vertex the same number of times as it enters that vertex, a necessary condition for an Eulerian circuit to exist is that the in-degree and out-degree are equal at each vertex. Obviously, connectivity is also necessary. König proved that these conditions are also sufficient. That is, a directed graph is Eulerian if and only if it is connected and the in-degree and out-degree are equal at each vertex. In this theorem it doesn't matter whether "connected" means "weakly connected" or "strongly connected" since they are equivalent for Eulerian graphs. Hierholzer's linear time algorithm for constructing an Eulerian tour is also applicable to directed graphs. Mixed Eulerian graphs All mixed graphs that are both even and symmetric are guaranteed to be Eulerian. However, this is not a necessary condition, as it is possible to construct a non-symmetric, even graph that is Eulerian. Ford and Fulkerson proved in 1962 in their book Flows in Networks a necessary and sufficient condition for a graph to be Eulerian, viz., that every vertex must be even and satisfy the balance condition, i.e. for every subset of vertices S, the difference between the number of arcs leaving S and entering S must be less than or equal to the number of edges incident with S. The process of checking if a mixed graph is Eulerian is harder than checking if an undirected or directed graph is Eulerian because the balanced set condition concerns every possible subset of vertices.
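Returning to the counting of Eulerian circuits in digraphs discussed earlier, the BEST theorem can be exercised on small examples: the number of arborescences rooted at an arbitrary vertex is obtained as a minor of the out-degree Laplacian (the matrix tree theorem), and is then multiplied by the product of (deg(v) − 1)! over all vertices. The sketch below is illustrative code written for this article; the two test graphs and their expected counts are toy examples that can be verified by hand.

```python
from fractions import Fraction
from math import factorial

def determinant(matrix):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n, result = len(m), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            result = -result
        result *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return result

def count_eulerian_circuits(vertices, arcs):
    """Count Eulerian circuits of a connected Eulerian digraph via the BEST theorem.

    Circuits are counted as cyclic sequences of arcs (equivalently, circuits
    starting with one fixed arc).  arcs is a list of (tail, head) pairs.
    """
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    laplacian = [[0] * n for _ in range(n)]   # L = D_out - A
    out_degree = {v: 0 for v in vertices}
    for tail, head in arcs:
        out_degree[tail] += 1
        laplacian[index[tail]][index[tail]] += 1
        laplacian[index[tail]][index[head]] -= 1
    # Arborescences rooted at vertices[0]: delete its row and column (matrix tree
    # theorem).  For an Eulerian digraph the count is the same for every root.
    minor = [row[1:] for row in laplacian[1:]]
    count = determinant(minor)
    for v in vertices:
        count *= factorial(out_degree[v] - 1)
    return int(count)

# Directed triangle 1 -> 2 -> 3 -> 1: exactly one Eulerian circuit.
assert count_eulerian_circuits([1, 2, 3], [(1, 2), (2, 3), (3, 1)]) == 1
# Two vertices with two arcs in each direction: two distinct circuits.
assert count_eulerian_circuits(["a", "b"],
                               [("a", "b"), ("a", "b"), ("b", "a"), ("b", "a")]) == 2
```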
Mathematics
Graph theory
null
333299
https://en.wikipedia.org/wiki/Regular%20polyhedron
Regular polyhedron
A regular polyhedron is a polyhedron whose symmetry group acts transitively on its flags. A regular polyhedron is highly symmetrical, being all of edge-transitive, vertex-transitive and face-transitive. In classical contexts, many different equivalent definitions are used; a common one is that the faces are congruent regular polygons which are assembled in the same way around each vertex. A regular polyhedron is identified by its Schläfli symbol of the form {n, m}, where n is the number of sides of each face and m the number of faces meeting at each vertex. There are 5 finite convex regular polyhedra (the Platonic solids), and four regular star polyhedra (the Kepler–Poinsot polyhedra), making nine regular polyhedra in all. In addition, there are five regular compounds of the regular polyhedra. The regular polyhedra There are five convex regular polyhedra, known as the Platonic solids; four regular star polyhedra, the Kepler–Poinsot polyhedra; and five regular compounds of regular polyhedra: Platonic solids Kepler–Poinsot polyhedra Regular compounds Characteristics Equivalent properties The property of having a similar arrangement of faces around each vertex can be replaced by any of the following equivalent conditions in the definition: The vertices of a convex regular polyhedron all lie on a sphere. All the dihedral angles of the polyhedron are equal All the vertex figures of the polyhedron are regular polygons. All the solid angles of the polyhedron are congruent. Concentric spheres A convex regular polyhedron has all of three related spheres (other polyhedra lack at least one kind) which share its centre: An insphere, tangent to all faces. An intersphere or midsphere, tangent to all edges. A circumsphere, tangent to all vertices. Symmetry The regular polyhedra are the most symmetrical of all the polyhedra. They lie in just three symmetry groups, which are named after the Platonic solids: Tetrahedral Octahedral (or cubic) Icosahedral (or dodecahedral) Any shapes with icosahedral or octahedral symmetry will also contain tetrahedral symmetry. Euler characteristic The five Platonic solids have an Euler characteristic of 2. This simply reflects that the surface is a topological 2-sphere, and so is also true, for example, of any polyhedron which is star-shaped with respect to some interior point. Interior points The sum of the distances from any point in the interior of a regular polyhedron to the sides is independent of the location of the point (this is an extension of Viviani's theorem.) However, the converse does not hold, not even for tetrahedra. Duality of the regular polyhedra In a dual pair of polyhedra, the vertices of one polyhedron correspond to the faces of the other, and vice versa. The regular polyhedra show this duality as follows: The tetrahedron is self-dual, i.e. it pairs with itself. The cube and octahedron are dual to each other. The icosahedron and dodecahedron are dual to each other. The small stellated dodecahedron and great dodecahedron are dual to each other. The great stellated dodecahedron and great icosahedron are dual to each other. The Schläfli symbol of the dual is just the original written backwards, for example the dual of {5, 3} is {3, 5}. History Prehistory Stones carved in shapes resembling clusters of spheres or knobs have been found in Scotland and may be as much as 4,000 years old. 
Some of these stones show not only the symmetries of the five Platonic solids, but also some of the relations of duality amongst them (that is, that the centres of the faces of the cube gives the vertices of an octahedron). Examples of these stones are on display in the John Evans room of the Ashmolean Museum at Oxford University. Why these objects were made, or how their creators gained the inspiration for them, is a mystery. There is doubt regarding the mathematical interpretation of these objects, as many have non-platonic forms, and perhaps only one has been found to be a true icosahedron, as opposed to a reinterpretation of the icosahedron dual, the dodecahedron. It is also possible that the Etruscans preceded the Greeks in their awareness of at least some of the regular polyhedra, as evidenced by the discovery near Padua (in Northern Italy) in the late 19th century of a dodecahedron made of soapstone, and dating back more than 2,500 years (Lindemann, 1987). Greeks The earliest known written records of the regular convex solids originated from Classical Greece. When these solids were all discovered and by whom is not known, but Theaetetus (an Athenian) was the first to give a mathematical description of all five (Van der Waerden, 1954), (Euclid, book XIII). H.S.M. Coxeter (Coxeter, 1948, Section 1.9) credits Plato (400 BC) with having made models of them, and mentions that one of the earlier Pythagoreans, Timaeus of Locri, used all five in a correspondence between the polyhedra and the nature of the universe as it was then perceived – this correspondence is recorded in Plato's dialogue Timaeus. Euclid's reference to Plato led to their common description as the Platonic solids. One might characterise the Greek definition as follows: A regular polygon is a (convex) planar figure with all edges equal and all corners equal. A regular polyhedron is a solid (convex) figure with all faces being congruent regular polygons, the same number arranged all alike around each vertex. This definition rules out, for example, the square pyramid (since although all the faces are regular, the square base is not congruent to the triangular sides), or the shape formed by joining two tetrahedra together (since although all faces of that triangular bipyramid would be equilateral triangles, that is, congruent and regular, some vertices have 3 triangles and others have 4). This concept of a regular polyhedron would remain unchallenged for almost 2000 years. Regular star polyhedra Regular star polygons such as the pentagram (star pentagon) were also known to the ancient Greeks – the pentagram was used by the Pythagoreans as their secret sign, but they did not use them to construct polyhedra. It was not until the early 17th century that Johannes Kepler realised that pentagrams could be used as the faces of regular star polyhedra. Some of these star polyhedra may have been discovered by others before Kepler's time, but Kepler was the first to recognise that they could be considered "regular" if one removed the restriction that regular polyhedra be convex. Two hundred years later Louis Poinsot also allowed star vertex figures (circuits around each corner), enabling him to discover two new regular star polyhedra along with rediscovering Kepler's. These four are the only regular star polyhedra, and have come to be known as the Kepler–Poinsot polyhedra. 
It was not until the mid-19th century, several decades after Poinsot published, that Cayley gave them their modern English names: (Kepler's) small stellated dodecahedron and great stellated dodecahedron, and (Poinsot's) great icosahedron and great dodecahedron. The Kepler–Poinsot polyhedra may be constructed from the Platonic solids by a process called stellation. The reciprocal process to stellation is called facetting (or faceting). Every stellation of one polyhedron is dual, or reciprocal, to some facetting of the dual polyhedron. The regular star polyhedra can also be obtained by facetting the Platonic solids. This was first done by Bertrand around the same time that Cayley named them. By the end of the 19th century there were therefore nine regular polyhedra – five convex and four star. Regular polyhedra in nature Each of the Platonic solids occurs naturally in one form or another. The tetrahedron, cube, and octahedron all occur as crystals. These by no means exhaust the numbers of possible forms of crystals (Smith, 1982, p212), of which there are 48. Neither the regular icosahedron nor the regular dodecahedron are amongst them, but crystals can have the shape of a pyritohedron, which is visually almost indistinguishable from a regular dodecahedron. Truly icosahedral crystals may be formed by quasicrystalline materials which are very rare in nature but can be produced in a laboratory. A more recent discovery is of a series of new types of carbon molecule, known as the fullerenes (see Curl, 1991). Although C60, the most easily produced fullerene, looks more or less spherical, some of the larger varieties (such as C240, C480 and C960) are hypothesised to take on the form of slightly rounded icosahedra, a few nanometres across. Regular polyhedra appear in biology as well. The coccolithophore Braarudosphaera bigelowii has a regular dodecahedral structure, about 10 micrometres across. In the early 20th century, Ernst Haeckel described a number of species of radiolarians, some of whose shells are shaped like various regular polyhedra. Examples include Circoporus octahedrus, Circogonia icosahedra, Lithocubus geometricus and Circorrhegma dodecahedra; the shapes of these creatures are indicated by their names. The outer protein shells of many viruses form regular polyhedra. For example, HIV is enclosed in a regular icosahedron, as is the head of a typical myovirus. In ancient times the Pythagoreans believed that there was a harmony between the regular polyhedra and the orbits of the planets. In the 17th century, Johannes Kepler studied data on planetary motion compiled by Tycho Brahe and for a decade tried to establish the Pythagorean ideal by finding a match between the sizes of the polyhedra and the sizes of the planets' orbits. His search failed in its original objective, but out of this research came Kepler's discoveries of the Kepler solids as regular polytopes, the realisation that the orbits of planets are not circles, and the laws of planetary motion for which he is now famous. In Kepler's time only five planets (excluding the earth) were known, nicely matching the number of Platonic solids. Kepler's work, and the discovery since that time of Uranus and Neptune, have invalidated the Pythagorean idea. Around the same time as the Pythagoreans, Plato described a theory of matter in which the five elements (earth, air, fire, water and spirit) each comprised tiny copies of one of the five regular solids. 
Matter was built up from a mixture of these polyhedra, with each substance having different proportions in the mix. Two thousand years later Dalton's atomic theory would show this idea to be along the right lines, though not related directly to the regular solids. Further generalisations The 20th century saw a succession of generalisations of the idea of a regular polyhedron, leading to several new classes. Regular skew apeirohedra In the first decades, Coxeter and Petrie allowed "saddle" vertices with alternating ridges and valleys, enabling them to construct three infinite folded surfaces which they called regular skew polyhedra. Coxeter offered a modified Schläfli symbol {l,m|n} for these figures, with {l,m} implying the vertex figure, with m regular l-gons around a vertex. The n defines n-gonal holes. Their vertex figures are regular skew polygons, vertices zig-zagging between two planes. Regular skew polyhedra Finite regular skew polyhedra exist in 4-space. These finite regular skew polyhedra in 4-space can be seen as a subset of the faces of uniform 4-polytopes. They have planar regular polygon faces, but regular skew polygon vertex figures. Two dual solutions are related to the 5-cell, two dual solutions are related to the 24-cell, and an infinite set of self-dual duoprisms generate regular skew polyhedra as {4, 4 | n}. In the infinite limit these approach a duocylinder and look like a torus in their stereographic projections into 3-space. Regular polyhedra in non-Euclidean and other spaces Studies of non-Euclidean (hyperbolic and elliptic) and other spaces such as complex spaces, discovered over the preceding century, led to the discovery of more new polyhedra such as complex polyhedra which could only take regular geometric form in those spaces. Regular polyhedra in hyperbolic space In H3 hyperbolic space, paracompact regular honeycombs have Euclidean tiling facets and vertex figures that act like finite polyhedra. Such tilings have an angle defect that can be closed by bending one way or the other. If the tiling is properly scaled, it will close as an asymptotic limit at a single ideal point. These Euclidean tilings are inscribed in a horosphere just as polyhedra are inscribed in a sphere (which contains zero ideal points). The sequence extends when hyperbolic tilings are themselves used as facets of noncompact hyperbolic tessellations, as in the heptagonal tiling honeycomb {7,3,3}; they are inscribed in an equidistant surface (a 2-hypercycle), which has two ideal points. Regular tilings of the real projective plane Another group of regular polyhedra comprise tilings of the real projective plane. These include the hemi-cube, hemi-octahedron, hemi-dodecahedron, and hemi-icosahedron. They are (globally) projective polyhedra, and are the projective counterparts of the Platonic solids. The tetrahedron does not have a projective counterpart as it does not have pairs of parallel faces which can be identified, as the other four Platonic solids do. These occur as dual pairs in the same way as the original Platonic solids do. Their Euler characteristics are all 1. Abstract regular polyhedra By now, polyhedra were firmly understood as three-dimensional examples of more general polytopes in any number of dimensions. The second half of the century saw the development of abstract algebraic ideas such as Polyhedral combinatorics, culminating in the idea of an abstract polytope as a partially ordered set (poset) of elements. 
The elements of an abstract polyhedron are its body (the maximal element), its faces, edges, vertices and the null polytope or empty set. These abstract elements can be mapped into ordinary space or realised as geometrical figures. Some abstract polyhedra have well-formed or faithful realisations, others do not. A flag is a connected set of elements of each dimension – for a polyhedron that is the body, a face, an edge of the face, a vertex of the edge, and the null polytope. An abstract polytope is said to be regular if its combinatorial symmetries are transitive on its flags – that is to say, that any flag can be mapped onto any other under a symmetry of the polyhedron. Abstract regular polytopes remain an active area of research. Five such regular abstract polyhedra, which cannot be realised faithfully, were identified by H. S. M. Coxeter in his book Regular Polytopes (1977) and again by J. M. Wills in his paper "The combinatorially regular polyhedra of index 2" (1987). All five have C2×S5 symmetry but can only be realised with half the symmetry, that is C2×A5 or icosahedral symmetry. They are all topologically equivalent to toroids. Their construction, by arranging n faces around each vertex, can be repeated indefinitely as tilings of the hyperbolic plane, and each polyhedron corresponds to a regular hyperbolic tiling. The five polyhedra and their data are: the medial rhombic triacontahedron (dual of {5,4}6; 24 vertices, 60 edges, 30 faces; vertex figure {5}, {5/2}; faces 30 rhombi; tiling {4,5}; χ = −6); the dodecadodecahedron ({5,4}6; 30 vertices, 60 edges, 24 faces; vertex figure (5.5/2)2; faces 12 pentagons and 12 pentagrams; tiling {5,4}; χ = −6); the medial triambic icosahedron (dual of {5,6}4; 24 vertices, 60 edges, 20 faces; vertex figure {5}, {5/2}; faces 20 hexagons; tiling {6,5}; χ = −16); the ditrigonal dodecadodecahedron ({5,6}4; 20 vertices, 60 edges, 24 faces; vertex figure (5.5/3)3; faces 12 pentagons and 12 pentagrams; tiling {5,6}; χ = −16); and the excavated dodecahedron ({6,6}6; 20 vertices, 60 edges, 20 faces; faces 20 hexagrams; tiling {6,6}; χ = −20). Petrie dual The Petrie dual of a regular polyhedron is a regular map whose vertices and edges correspond to the vertices and edges of the original polyhedron, and whose faces are the set of skew Petrie polygons. Spherical polyhedra The usual five regular polyhedra can also be represented as spherical tilings (tilings of the sphere). Regular polyhedra that can only exist as spherical polyhedra For a regular polyhedron whose Schläfli symbol is {m, n}, the number of polygonal faces may be found by N2 = 4n / (4 − (m − 2)(n − 2)). The Platonic solids known to antiquity are the only integer solutions for m ≥ 3 and n ≥ 3. The restriction m ≥ 3 enforces that the polygonal faces must have at least three sides. When considering polyhedra as a spherical tiling, this restriction may be relaxed, since digons (2-gons) can be represented as spherical lunes, having non-zero area. Allowing m = 2 admits a new infinite class of regular polyhedra, which are the hosohedra. On a spherical surface, the regular polyhedron {2, n} is represented as n abutting lunes, with interior angles of 2π/n. All these lunes share two common vertices. A regular dihedron, {n, 2} (2-hedron) in three-dimensional Euclidean space can be considered a degenerate prism consisting of two (planar) n-sided polygons connected "back-to-back", so that the resulting object has no depth, analogously to how a digon can be constructed with two line segments. 
However, as a spherical tiling, a dihedron can exist in nondegenerate form, with two n-sided faces covering the sphere, each face being a hemisphere and the vertices lying on a great circle. It is regular if the vertices are equally spaced. The hosohedron {2,n} is dual to the dihedron {n,2}. Note that when n = 2, we obtain the polyhedron {2,2}, which is both a hosohedron and a dihedron. All of these have Euler characteristic 2.
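As a quick consistency check on the Euler characteristic statement and on the face-count formula quoted above, both can be verified directly. This is a routine calculation; V, E, F and χ denote the vertex, edge and face counts and the Euler characteristic (notation introduced here), while m and n are as in the Schläfli symbol {m, n}:

\[
\chi = V - E + F:\qquad \{2,n\}:\ 2 - n + n = 2,\qquad \{n,2\}:\ n - n + 2 = 2,
\]
\[
N_2 = \frac{4n}{4-(m-2)(n-2)}:\qquad \{3,3\}\mapsto 4,\quad \{4,3\}\mapsto 6,\quad \{3,4\}\mapsto 8,\quad \{5,3\}\mapsto 12,\quad \{3,5\}\mapsto 20,
\]

recovering the five Platonic solids and confirming that the degenerate spherical forms still satisfy χ = 2.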
Mathematics
Three-dimensional space
null
333306
https://en.wikipedia.org/wiki/Regular%20polygon
Regular polygon
In Euclidean geometry, a regular polygon is a polygon that is direct equiangular (all angles are equal in measure) and equilateral (all sides have the same length). Regular polygons may be either convex or star. In the limit, a sequence of regular polygons with an increasing number of sides approximates a circle, if the perimeter or area is fixed, or a regular apeirogon (effectively a straight line), if the edge length is fixed. General properties These properties apply to all regular polygons, whether convex or star: A regular n-sided polygon has rotational symmetry of order n. All vertices of a regular polygon lie on a common circle (the circumscribed circle); i.e., they are concyclic points. That is, a regular polygon is a cyclic polygon. Together with the property of equal-length sides, this implies that every regular polygon also has an inscribed circle or incircle that is tangent to every side at the midpoint. Thus a regular polygon is a tangential polygon. A regular n-sided polygon can be constructed with compass and straightedge if and only if the odd prime factors of n are distinct Fermat primes. A regular n-sided polygon can be constructed with origami if and only if for some , where each distinct is a Pierpont prime. Symmetry The symmetry group of an n-sided regular polygon is the dihedral group Dn (of order 2n): D2, D3, D4, ... It consists of the rotations in Cn, together with reflection symmetry in n axes that pass through the center. If n is even then half of these axes pass through two opposite vertices, and the other half through the midpoint of opposite sides. If n is odd then all axes pass through a vertex and the midpoint of the opposite side. Regular convex polygons All regular simple polygons (a simple polygon is one that does not intersect itself anywhere) are convex. Those having the same number of sides are also similar. An n-sided convex regular polygon is denoted by its Schläfli symbol . For , we have two degenerate cases: Monogon {1} Degenerate in ordinary space. (Most authorities do not regard the monogon as a true polygon, partly because of this, and also because the formulae below do not work, and its structure is not that of any abstract polygon.) Digon {2}; a "double line segment" Degenerate in ordinary space. (Some authorities do not regard the digon as a true polygon because of this.) In certain contexts all the polygons considered will be regular. In such circumstances it is customary to drop the prefix regular. For instance, all the faces of uniform polyhedra must be regular and the faces will be described simply as triangle, square, pentagon, etc. Angles For a regular convex n-gon, each interior angle has a measure of: degrees; radians; or full turns, and each exterior angle (i.e., supplementary to the interior angle) has a measure of degrees, with the sum of the exterior angles equal to 360 degrees or 2π radians or one full turn. As n approaches infinity, the internal angle approaches 180 degrees. For a regular polygon with 10,000 sides (a myriagon) the internal angle is 179.964°. As the number of sides increases, the internal angle can come very close to 180°, and the shape of the polygon approaches that of a circle. However the polygon can never become a circle. The value of the internal angle can never become exactly equal to 180°, as the circumference would effectively become a straight line (see apeirogon). For this reason, a circle is not a polygon with an infinite number of sides. 
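For reference, the interior and exterior angle values referred to above take the following standard form for a regular convex n-gon; the myriagon figure quoted earlier follows directly:

\[
\text{interior angle} = \frac{(n-2)\,180^\circ}{n} = \left(1-\frac{2}{n}\right)180^\circ = \frac{(n-2)\pi}{n}\ \text{radians},
\qquad
\text{exterior angle} = \frac{360^\circ}{n},
\]
\[
n = 10000:\qquad \frac{9998 \times 180^\circ}{10000} = 179.964^\circ .
\]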
Diagonals For , the number of diagonals is ; i.e., 0, 2, 5, 9, ..., for a triangle, square, pentagon, hexagon, ... . The diagonals divide the polygon into 1, 4, 11, 24, ... pieces. For a regular n-gon inscribed in a circle of radius , the product of the distances from a given vertex to all other vertices (including adjacent vertices and vertices connected by a diagonal) equals n. Points in the plane For a regular simple n-gon with circumradius R and distances di from an arbitrary point in the plane to the vertices, we have For higher powers of distances from an arbitrary point in the plane to the vertices of a regular -gon, if , then , and , where is a positive integer less than . If is the distance from an arbitrary point in the plane to the centroid of a regular -gon with circumradius , then , where = 1, 2, …, . Interior points For a regular n-gon, the sum of the perpendicular distances from any interior point to the n sides is n times the apothem (the apothem being the distance from the center to any side). This is a generalization of Viviani's theorem for the n = 3 case. Circumradius The circumradius R from the center of a regular polygon to one of the vertices is related to the side length s or to the apothem a by For constructible polygons, algebraic expressions for these relationships exist . The sum of the perpendiculars from a regular n-gon's vertices to any line tangent to the circumcircle equals n times the circumradius. The sum of the squared distances from the vertices of a regular n-gon to any point on its circumcircle equals 2nR2 where R is the circumradius. The sum of the squared distances from the midpoints of the sides of a regular n-gon to any point on the circumcircle is 2nR2 − ns2, where s is the side length and R is the circumradius. If are the distances from the vertices of a regular -gon to any point on its circumcircle, then . Dissections Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into or parallelograms. These tilings are contained as subsets of vertices, edges and faces in orthogonal projections m-cubes. In particular, this is true for any regular polygon with an even number of sides, in which case the parallelograms are all rhombi. The list gives the number of solutions for smaller polygons. Area The area A of a convex regular n-sided polygon having side s, circumradius R, apothem a, and perimeter p is given by For regular polygons with side s = 1, circumradius R = 1, or apothem a = 1, this produces the following table: (Since as , the area when tends to as grows large.) Of all n-gons with a given perimeter, the one with the largest area is regular. Constructible polygon Some regular polygons are easy to construct with compass and straightedge; other regular polygons are not constructible at all. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular n-gons with compass and straightedge? If not, which n-gons are constructible and which are not? Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. 
This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons: A regular n-gon can be constructed with compass and straightedge if n is the product of a power of 2 and any number of distinct Fermat primes (including none). (A Fermat prime is a prime number of the form ) Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem. Equivalently, a regular n-gon is constructible if and only if the cosine of its common angle is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Regular skew polygons A regular skew polygon in 3-space can be seen as nonplanar paths zig-zagging between two parallel planes, defined as the side-edges of a uniform antiprism. All edges and internal angles are equal. More generally regular skew polygons can be defined in n-space. Examples include the Petrie polygons, polygonal paths of edges that divide a regular polytope into two halves, and seen as a regular polygon in orthogonal projection. In the infinite limit regular skew polygons become skew apeirogons. Regular star polygons A non-convex regular polygon is a regular star polygon. The most common example is the pentagram, which has the same vertices as a pentagon, but connects alternating vertices. For an n-sided star polygon, the Schläfli symbol is modified to indicate the density or "starriness" m of the polygon, as {n/m}. If m is 2, for example, then every second point is joined. If m is 3, then every third point is joined. The boundary of the polygon winds around the center m times. The (non-degenerate) regular stars of up to 12 sides are: Pentagram – {5/2} Heptagram – {7/2} and {7/3} Octagram – {8/3} Enneagram – {9/2} and {9/4} Decagram – {10/3} Hendecagram – {11/2}, {11/3}, {11/4} and {11/5} Dodecagram – {12/5} m and n must be coprime, or the figure will degenerate. The degenerate regular stars of up to 12 sides are: Tetragon – {4/2} Hexagons – {6/2}, {6/3} Octagons – {8/2}, {8/4} Enneagon – {9/3} Decagons – {10/2}, {10/4}, and {10/5} Dodecagons – {12/2}, {12/3}, {12/4}, and {12/6} Depending on the precise derivation of the Schläfli symbol, opinions differ as to the nature of the degenerate figure. For example, {6/2} may be treated in either of two ways: For much of the 20th century (see for example ), we have commonly taken the /2 to indicate joining each vertex of a convex {6} to its near neighbors two steps away, to obtain the regular compound of two triangles, or hexagram. Coxeter clarifies this regular compound with a notation {kp}[k{p}]{kp} for the compound {p/k}, so the hexagram is represented as {6}[2{3}]{6}. More compactly Coxeter also writes 2{n/2}, like 2{3} for a hexagram as compound as alternations of regular even-sided polygons, with italics on the leading factor to differentiate it from the coinciding interpretation. Many modern geometers, such as Grünbaum (2003), regard this as incorrect. They take the /2 to indicate moving two places around the {6} at each step, obtaining a "double-wound" triangle that has two vertices superimposed at each corner point and two edges along each line segment. 
Not only does this fit in better with modern theories of abstract polytopes, but it also more closely copies the way in which Poinsot (1809) created his star polygons – by taking a single length of wire and bending it at successive points through the same angle until the figure closed. Duality of regular polygons All regular polygons are self-dual to congruency, and for odd n they are self-dual to identity. In addition, the regular star figures (compounds), being composed of regular polygons, are also self-dual. Regular polygons as faces of polyhedra A uniform polyhedron has regular polygons as faces, such that for every two vertices there is an isometry mapping one into the other (just as there is for a regular polygon). A quasiregular polyhedron is a uniform polyhedron which has just two kinds of face alternating around each vertex. A regular polyhedron is a uniform polyhedron which has just one kind of face. The remaining (non-uniform) convex polyhedra with regular faces are known as the Johnson solids. A polyhedron having regular triangles as faces is called a deltahedron.
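Collected here are the standard metric expressions discussed in the sections above for a regular convex n-gon with side length s, circumradius R, apothem a, area A and number of diagonals D, together with the Gauss–Wantzel constructibility condition (the symbols D and k, and the use of the 17-gon as a check, are introduced here for illustration):

\[
R = \frac{s}{2\sin(\pi/n)}, \qquad
a = \frac{s}{2\tan(\pi/n)} = R\cos\frac{\pi}{n}, \qquad
A = \tfrac{1}{2}\,n\,s\,a = \tfrac{1}{4}\,n\,s^{2}\cot\frac{\pi}{n}, \qquad
D = \frac{n(n-3)}{2},
\]
\[
n \ \text{constructible} \iff n = 2^{k}\,p_1 p_2 \cdots p_r \ (r \ge 0),\ \ p_i \ \text{distinct Fermat primes } 2^{2^{j}}+1;
\qquad \text{e.g. } 17 = 2^{2^{2}}+1 .
\]

The diagonal count reproduces the sequence 0, 2, 5, 9, ... quoted earlier for the triangle, square, pentagon and hexagon.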
Mathematics
Two-dimensional space
null
333420
https://en.wikipedia.org/wiki/Archimedes%27%20principle
Archimedes' principle
Archimedes' principle (also spelled Archimedes's principle) states that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially, is equal to the weight of the fluid that the body displaces. Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse. Explanation In On Floating Bodies, Archimedes suggested that (c. 246 BC): Archimedes' principle allows the buoyancy of any floating object partially or fully immersed in a fluid to be calculated. The downward force on the object is simply its weight. The upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the magnitudes of the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant—that is, it remains in place without either rising or sinking. In simple words, Archimedes' principle states that, when a body is partially or completely immersed in a fluid, it experiences an apparent loss in weight that is equal to the weight of the fluid displaced by the immersed part of the body(s). Formula Consider a cuboid immersed in a fluid, its top and bottom faces orthogonal to the direction of gravity (assumed constant across the cube's stretch). The fluid will exert a normal force on each face, but only the normal forces on top and bottom will contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height (difference in depth of submersion). Multiplying the pressure difference by the area of a face gives a net force on the cuboid—the buoyancy—equaling in size the weight of the fluid displaced by the cuboid. By summing up sufficiently many arbitrarily small cuboids this reasoning may be extended to irregular shapes, and so, whatever the shape of the submerged body, the buoyant force is equal to the weight of the displaced fluid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). The weight of the object in the fluid is reduced, because of the force acting on it, which is called upthrust. In simple terms, the principle states that the buoyant force (Fb) on an object is equal to the weight of the fluid displaced by the object, or the density (ρ) of the fluid multiplied by the submerged volume (V) times the gravity (g) We can express this relation in the equation: where denotes the buoyant force applied onto the submerged object, denotes the density of the fluid, represents the volume of the displaced fluid and is the acceleration due to gravity. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into the water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea-floor. It is generally easier to lift an object through the water than it is to pull it out of the water. 
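The relation described in words above can be written compactly, and the rock example follows immediately from it (ρ, V and g are the fluid density, submerged volume and gravitational acceleration already defined above):

\[
F_b = \rho\,V\,g,
\]
\[
\text{apparent weight of the rock} = 10\ \text{N} - 3\ \text{N} = 7\ \text{N}.
\]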
For a fully submerged object, Archimedes' principle can be reformulated as follows: then inserted into the quotient of weights, which has been expanded by the mutual volume yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volume is (This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.) Example: If you drop wood into water, buoyancy will keep it afloat. Example: A helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the opposite direction to the car's acceleration. However, due to buoyancy, the balloon is pushed "out of the way" by the air and will drift in the same direction as the car's acceleration. When an object is immersed in a liquid, the liquid exerts an upward force, which is known as the buoyant force, that is proportional to the weight of the displaced liquid. The sum force acting on the object, then, is equal to the difference between the weight of the object ('down' force) and the weight of displaced liquid ('up' force). Equilibrium, or neutral buoyancy, is achieved when these two weights (and thus forces) are equal. Forces and equilibrium The equation to calculate the pressure inside a fluid in equilibrium is: where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor: Here δij is the Kronecker delta. Using this the above equation becomes: Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function: Then: Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration, ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force. The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid: The surface integral can be transformed into a volume integral with the help of the Gauss theorem: where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid doesn't exert force on the part of the body which is outside of it. The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. 
This force is applied in a direction opposite to gravitational force, that is of magnitude: where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question. If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor. In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore; and therefore showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location. (Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location. For this reason, a ship may display a Plimsoll line.) It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined. If the object would otherwise float, the tension to restrain it fully submerged is: When a sinking object settles on the solid floor, it experiences a normal force of: Another possible formula for calculating buoyancy of an object is by finding the apparent weight of that particular object in the air (calculated in Newtons), and apparent weight of that object in the water (in Newtons). To find the force of buoyancy acting on the object when in air, using this particular information, this formula applies: Buoyancy force = weight of object in empty space − weight of object immersed in fluid The final result would be measured in Newtons. Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam). 
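The quantities used in the preceding derivation can be summarised as follows, under the stated assumptions of a uniform fluid of density ρf and constant gravitational acceleration g, with z the depth below the surface and Vdisp the submerged volume; W and Wapp (the weight and the apparent immersed weight of the body) are notation introduced here:

\[
p(z) = \rho_f\, g\, z, \qquad F_b = \rho_f\, g\, V_{\text{disp}},
\]
\[
\frac{\rho_{\text{object}}}{\rho_f} = \frac{W}{\,W - W_{\text{app}}\,},
\]
\[
T = \rho_f\, g\, V_{\text{disp}} - W \ \ (\text{tension holding down a floater}), \qquad
N = W - \rho_f\, g\, V_{\text{disp}} \ \ (\text{normal force on a sunken object}).
\]

The density ratio is the relation used in hydrostatic weighing and by a dasymeter, as mentioned earlier.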
Simplified model A simplified explanation for the integration of the pressure over the contact area may be stated as follows: Consider a cube immersed in a fluid with the upper surface horizontal. The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side. There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero. The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface. Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface. As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence. This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces. This analogy is valid for variations in the size of the cube. If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes. An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence. Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way. Refinements Archimedes' principle does not consider the surface tension (capillarity) acting on the body. Moreover, Archimedes' principle has been found to break down in complex fluids. There is an exception to Archimedes' principle known as the bottom (or side) case. This occurs when a side of the object is touching the bottom (or side) of the vessel it is submerged in, and no liquid seeps in along that side. In this case, the net force has been found to be different from Archimedes' principle, as, since no fluid seeps in on that side, the symmetry of pressure is broken. Principle of flotation Archimedes' principle shows the buoyant force and displacement of fluid. However, the concept of Archimedes' principle can be applied when considering why objects float. 
Proposition 5 of Archimedes' treatise On Floating Bodies states that In other words, for an object floating on a liquid surface (like a boat) or floating submerged in a fluid (like a submarine in water or dirigible in air) the weight of the displaced liquid equals the weight of the object. Thus, only in the special case of floating does the buoyant force acting on an object equal the objects weight. Consider a 1-ton block of solid iron. As iron is nearly eight times as dense as water, it displaces only 1/8 ton of water when submerged, which is not enough to keep it afloat. Suppose the same iron block is reshaped into a bowl. It still weighs 1 ton, but when it is put in water, it displaces a greater volume of water than when it was a block. The deeper the iron bowl is immersed, the more water it displaces, and the greater the buoyant force acting on it. When the buoyant force equals 1 ton, it will sink no farther. When any boat displaces a weight of water equal to its own weight, it floats. This is often called the "principle of flotation": A floating object displaces a weight of fluid equal to its own weight. Every ship, submarine, and dirigible must be designed to displace a weight of fluid at least equal to its own weight. A 10,000-ton ship's hull must be built wide enough, long enough and deep enough to displace 10,000 tons of water and still have some hull above the water to prevent it from sinking. It needs extra hull to fight waves that would otherwise fill it and, by increasing its mass, cause it to submerge. The same is true for vessels in air: a dirigible that weighs 100 tons needs to displace 100 tons of air. If it displaces more, it rises; if it displaces less, it falls. If the dirigible displaces exactly its weight, it hovers at a constant altitude. While they are related to it, the principle of flotation and the concept that a submerged object displaces a volume of fluid equal to its own volume are not Archimedes' principle. Archimedes' principle, as stated above, equates the buoyant force to the weight of the fluid displaced. One common point of confusion regarding Archimedes' principle is the meaning of displaced volume. Common demonstrations involve measuring the rise in water level when an object floats on the surface in order to calculate the displaced water. This measurement approach fails with a buoyant submerged object because the rise in the water level is directly related to the volume of the object and not the mass (except if the effective density of the object equals exactly the fluid density). Eureka Archimedes reportedly exclaimed "Eureka" after he realized how to detect whether a crown is made of impure gold. While he did not use Archimedes' principle in the widespread tale and used displaced water only for measuring the volume of the crown, there is an alternative approach using the principle: Balance the crown and pure gold on a scale in the air and then put the scale into water. According to Archimedes' principle, if the density of the crown differs from the density of pure gold, the scale will get out of balance under water.
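The alternative crown test described above can be made quantitative as a small worked example (the symbols W, Vc, Vg and ρw, for the common weight in air, the two volumes and the water density, are introduced here for illustration). If the crown and the reference gold piece balance in air, each side of the submerged scale carries an apparent weight

\[
W_c^{\text{app}} = W - \rho_w\, g\, V_c, \qquad W_g^{\text{app}} = W - \rho_w\, g\, V_g ,
\]

so the scale stays level under water only if Vc = Vg, that is, only if the crown has the density of pure gold; a crown alloyed with a less dense metal occupies a larger volume, receives a larger buoyant force, and its pan rises.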
Physical sciences
Fluid mechanics
Physics
333625
https://en.wikipedia.org/wiki/Bus%20rapid%20transit
Bus rapid transit
Bus rapid transit (BRT), also referred to as a busway or transitway, is a bus-based public transport system (which may be operated with conventional, electric or trolleybus vehicles) designed to have much more capacity, reliability, and other quality features than a conventional bus system. Typically, a BRT system includes roadways that are dedicated to buses and gives priority to buses at intersections where buses may interact with other traffic, alongside design features to reduce delays caused by passengers boarding or leaving buses, or paying fares. BRT aims to combine the capacity and speed of a light rail transit (LRT) or mass rapid transit (MRT) system with the flexibility, lower cost and simplicity of a bus system. The world's first BRT system was the Runcorn Busway in Runcorn New Town, England, which entered service in 1971. A total of 166 cities in six continents have implemented BRT systems, accounting for of BRT lanes and about 32.2 million passengers every day. The majority of these are in Latin America, where about 19.6 million passengers ride daily, and which has the most cities with BRT systems, with 54, led by Brazil with 21 cities. The Latin American countries with the most daily ridership are Brazil (10.7 million), Colombia (3.0 million), and Mexico (2.5 million). In the other regions, China (4.3 million) and Iran (2.1 million) stand out. Currently, TransJakarta is the largest BRT network in the world, with about of corridors connecting the Indonesian capital city. Terminology Bus rapid transit is a mode of mass rapid transit (MRT) and describes a high-capacity urban public-transit system with its own right of way, vehicles at short headways, platform-level boarding, and preticketing. The expression "BRT" is mainly used in the Americas and China; in India, it is called "BRTS" (BRT System); in Europe it is often called a "busway" or a "BHLS" (Bus with a High Level of Service). The term transitway originated in 1981 with the opening of the OC Transpo transitway in Ottawa, Ontario, Canada. Critics have charged that the term "bus rapid transit" has sometimes been misapplied to systems that lack most or all of the essential features which differentiate it from conventional bus services. The term "bus rapid transit creep" has been used to describe severely degraded levels of bus service which fall far short of the BRT Standard promoted by the Institute for Transportation and Development Policy (ITDP) and other organizations. Reasons for use Compared to other common transit modes such as light rail transit (LRT), bus rapid transit (BRT) service is attractive to transit authorities because it does not cost as much to establish and operate: no track needs to be laid, bus drivers typically require less training and less pay than rail operators, and bus maintenance is less complex than rail maintenance. Moreover, buses are more flexible than rail vehicles, because a bus route can be altered, either temporarily or permanently, to meet changing demand or contend with adverse road conditions with comparatively little investment of resources. History The first use of a protected busway was the East Side Trolley Tunnel in Providence, Rhode Island. It was converted from trolley to bus use in 1948. However, the first BRT system in the world was the Runcorn Busway in Runcorn, England. First conceived in the Runcorn New Town Masterplan in 1966, it opened for service in October 1971 and all sections were operational by 1980. 
The central station is at Runcorn Shopping City, where buses arrive on dedicated raised busways at two enclosed stations. Arthur Ling, Runcorn Development Corporation's Master Planner, said that he had invented the concept while sketching on the back of an envelope. The town was designed around the transport system, with most residents no more than five minutes walking distance, or , from the Busway. The second BRT system in the world was the Rede Integrada de Transporte (RIT, integrated transportation network), implemented in Curitiba, Brazil, in 1974. The Rede Integrada de Transporte was inspired by the previous transport system of the National Urban Transport Company of Peru (in Spanish: ENATRU), which only offered quick access to downtown Lima and would not itself be considered BRT. Many of the elements that have become associated with BRT were innovations first suggested by Carlos Ceneviva, within the team of Curitiba Mayor Jaime Lerner. Initially just dedicated bus lanes in the center of major arterial roads, in 1980 the Curitiba system added a feeder bus network and inter-zone connections, and in 1992 introduced off-board fare collection, enclosed stations, and platform-level boarding. Other systems made further innovations, including platooning (three buses entering and leaving bus stops and traffic signals at once) in Porto Alegre, and passing lanes and express service in São Paulo. In the United States, BRT began in 1977, with Pittsburgh's South Busway, operating on of exclusive lanes. Its success led to the Martin Luther King Jr. East Busway in 1983, a fuller BRT deployment including a dedicated busway of , traffic signal preemption, and peak service headway as low as two minutes. After the opening of the West Busway, in length, in 2000, Pittsburgh's Busway system is today over 18.5 miles long. The OC Transpo BRT system in Ottawa, Canada, was introduced in 1983. The first element of its BRT system was dedicated bus lanes through the city centre, with platformed stops. The introduction of exclusive separate busways (termed the 'Transitway') occurred in 1983. By 1996, all of the originally envisioned 31 km Transitway system was in operation; further expansions were opened in 2009, 2011, and 2014. As of 2019, the central part of the Transitway has been converted to light rail transit, because the downtown section was being operated beyond its designed capacity. In 1995, Quito, Ecuador, opened MetrobusQ, its first BRT line, operated with articulated trolleybuses. TransMilenio in Bogotá, Colombia, which opened in 2000, was the first BRT system to combine the best elements of Curitiba's BRT with other BRT advances, and became the highest-capacity and highest-speed BRT system in the world. In January 2004 the first BRT in Southeast Asia, TransJakarta, opened in Jakarta, Indonesia. At , it is the longest BRT system in the world. Africa's first BRT system opened in Lagos, Nigeria, in March 2008, although many consider it a light BRT system. Rea Vaya in Johannesburg, South Africa, which opened in August 2009 carrying 16,000 daily passengers, was the first true BRT in Africa. Rea Vaya and MIO (the BRT in Cali, Colombia, opened 2009) were the first two systems to combine full BRT with some services that also operated in mixed traffic and then joined the BRT trunk infrastructure. In 2017 Marrakesh, Morocco, opened its first BRT line (BHNS de Marrakesh), a corridor of 8 km (5.0 mi), of which 3 km (1.9 mi) is equipped with overhead wiring for operation as a trolleybus. 
Main features BRT systems normally include most of the following features: Dedicated lanes and alignment Bus-only lanes make for faster travel and ensure that buses are not delayed by mixed traffic congestion. A median alignment bus-only keeps buses away from busy curb-side side conflicts, where cars and trucks are parking, standing and turning. Separate rights of way may be used such as the completely elevated Xiamen BRT. Transit malls or 'bus streets' may also be created in city centers. Off-board fare collection Fare prepayment at the station, instead of on board the bus, eliminates the delay caused by passengers paying on board. Fare machines at stations also allow riders to purchase multi-ride stored-value cards and have multiple payment options. Prepayment also allows riders to board at all doors, further speeding up stops. Bus priority, turning and standing restrictions Prohibiting turns for traffic across the bus lane significantly reduces delays to the buses. Bus priority will often be provided at signalized intersections to reduce delays by extending the green phase or reducing the red phase in the required direction compared to the normal sequence. Prohibiting turns may be the most important measure for moving buses through intersections. Platform-level boarding The station platforms for BRT systems should be level with the bus floor for quick and easy boarding, making it fully accessible for wheelchairs, disabled passengers and baby strollers, with minimal delays. High-level platforms for high-floored buses makes it difficult to have stops outside dedicated platforms, or to have conventional buses stop at high-level platforms, so these BRT stops are distinct from street-level bus stops. Similar to rail vehicles, there is a risk of a dangerous gap between bus and platform, and is even greater due to the nature of bus operations. Kassel curbs or other methods may be used to ease quick and safe alignment of the BRT vehicle with a platform. A popular compromise is low-floor buses with a low step at the door, which can allow easy boarding at low-platform stops compatible with other buses. This intermediate design may be used with some low- or medium-capacity BRT systems. The MIO system in Santiago de Cali, Colombia, pioneered in 2009 the use of dual buses, with doors on the left side of the bus that are located at the height of high-level platforms, and doors on the right side that are located at curb height. These buses can use the main line with its exclusive lanes and high level platforms, located on the center of the street and thus, boarding and leaving passengers on the left side. These buses can exit the main line and use normal lanes that share with other vehicles and stop at regular stations located on sidewalks on the right side of the street. Additional features Groups of criteria form the BRT Standard 2016, which is updated by the Technical Committee of the BRT Standard. High capacity vehicles High-capacity vehicles such as articulated or even bi-articulated buses may be used, typically with multiple doors for fast entry and exit. Double-decker buses or guided buses may also be used. Advanced powertrain control may be used for a smoother ride. Quality stations Bottleneck BRT stations typically provide loading areas for simultaneous boarding and alighting of buses through multiple doors coordinated via displays and loudspeakers. 
An example of high-quality stations include those used on TransMilenio in Bogotá since December 2000, the MIO in Cali since November 2008, Metrolinea in Bucaramanga since December 2009, Megabús in Pereira since May 2009. This design is also used in Johannesburg's Rea Vaya. The term "station" is more flexibly applied in North America and ranges from enclosed waiting areas (Ottawa and Cleveland) to large open-sided shelters (Los Angeles and San Bernardino). Prominent brand or identity A unique and distinctive identity can contribute to BRT's attractiveness as an alternative to driving cars, (such as Viva, Max, TransMilenio, Metropolitano, Metronit, Select) marking stops and stations as well as the buses. Large cities usually have big bus networks. A map showing all bus lines might be incomprehensible, and cause people to wait for low-frequency buses that may not even be running at the time they are needed. By identifying the main bus lines having high-frequency service, with a special brand and separate maps, it is easier to understand the entire network. Public transit apps are more convenient than a static map, featuring services like trip planning, live arrival and departure times, up-to-date line schedules, local station maps, service alerts, and advisories that may affect one's current trip. Transit and Moovit are examples of apps that are available in many cities around the world. Some operators of bus rapid transit systems have developed their own apps, like Transmilenio. These apps even include all the schedules and live arrival times and stations for buses that feed the BRT, like the SITP (Sistema Integrado de Transporte Público or Public Transit Integrated System) in Bogotá. In tunnels or subterranean structures A special issue arises in the use of buses in metro transit structures. Since the areas where the demand for an exclusive bus right-of-way are apt to be in dense downtown areas where an above-ground structure may be unacceptable on historic, logistic, or environmental grounds, use of BRT in tunnels may not be avoidable. Since buses are usually powered by internal combustion engines, bus metros raise ventilation issues similar to those of motor vehicle tunnels. Powerful fans typically exchange air through ventilation shafts to the surface; these are usually as remote as possible from occupied areas, to minimize the effects of noise and concentrated pollution. A straightforward way to reduce air quality problems is to use internal combustion engines with lower emissions. The 2008 Euro V European emission standards set a limit on carbon monoxide from heavy-duty diesel engines of 1.5 g/kWh, one third of the 1992 Euro I standard. As a result, less forced ventilation will be required in tunnels to achieve the same air quality. Another alternative is to use electric propulsion, which Seattle's Metro Bus Tunnel and Boston's Silver Line Phase II implemented. In Seattle, dual-mode (electric/diesel electric) buses manufactured by Breda were used until 2004, with the center axle driven by electric motors obtaining power from trolley wires through trolley poles in the subway, and with the rear axle driven by a conventional diesel powertrain on freeways and streets. Boston is using a similar approach, after initially using trolleybuses pending delivery of the dual-mode vehicles that was completed in 2005. 
In 2004, Seattle replaced its "Transit Tunnel" fleet with diesel-electric hybrid buses, which operate similarly to hybrid cars outside the tunnel and in a low-noise, low-emissions "hush mode" (in which the diesel engine operates but does not exceed idle speed) when underground. The need to provide electric power in underground environments brings the capital and maintenance costs of such routes closer to those of light rail, and raises the question of building or eventually converting to light rail. In Seattle, the downtown transit tunnel was retrofitted for conversion to a shared hybrid-bus and light-rail facility in preparation for Seattle's Central Link Light Rail line, which opened in July 2009. In March 2019, expansion of the light rail in the tunnel moved busses back to surface streets. Bi-articulated battery electric buses cause no problems in tunnels anymore but provide BRT capacity. Performance A BRT system can be measured by a number of factors. The BRT Standard was developed by the Institute for Transportation and Development Policy (ITDP) to score BRT corridors, producing a list of rated BRT corridors meeting the minimum definition of BRT. The highest rated systems received a "gold" ranking. The latest edition of the standard was published in 2016. Other metrics used to evaluate BRT performance include: The vehicle headway is the average time interval between vehicles on the same line. Buses can operate at headways of 10 seconds or less, but average headways on TransMilenio at busy intersections are 13 seconds, 14 seconds for the busiest section of the Metrobus (Istanbul), 7 seconds in Belo Horizonte, 6 seconds in Rio de Janeiro. Vehicle capacity, which can range from 50 passengers for a conventional bus up to some 300 for a bi-articulated vehicle or 500. The effectiveness of the stations to handle passenger demand. High volumes of passengers on vehicles require large bus stations and more boarding areas at busy interchange points. This is the standard bottleneck of BRT (and heavy rail). The effectiveness of the feeder system: can these deliver people to stations at the required speed? Local passenger demand. Without enough local demand for travel, the capacity will not be used. Based on this data, the minimum headway and maximum current vehicle capacities, the theoretical maximum throughput measured in passengers per hour per direction (PPHPD) for a single traffic lane is some 150,000 passengers per hour (250 passengers per vehicle, one vehicle every 6 seconds). In real world conditions BRT Rio (de Janeiro, BRS Presidente Vargas) with 65.000 PPHPD holds the record, TransMilenio Bogotá and Metrobus Istanbul perform 49,000 – 45,000 PPHPD, most other busy systems operating in the 15,000 to 25,000 range. Research of the Institute for Transportation and Development Policy (ITDP) shows a capacity ranking of MRT modes, based on reported performance of 14 light rail systems, 14 heavy rail systems (just 1-track + 3 2-track-systems "highest capacity") and 56 BRT systems. The study concludes, that BRT-"capacity on TransMilenio exceeds all but the highest capacity heavy rail systems, and it far exceeds the highest light rail system." 
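The headline throughput figures above follow from a simple relation between vehicle capacity and headway; writing C for passengers per vehicle and h for the headway in seconds (symbols introduced here), the theoretical single-lane maximum quoted earlier works out as:

\[
\text{PPHPD} = C \times \frac{3600}{h}, \qquad 250 \times \frac{3600}{6} = 150{,}000 \ \text{passengers per hour per direction}.
\]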
Performance data from 84 systems show 37,700 passengers per peak hour per direction (PPHPD) in the best BRT system, 36,000 in the best one-track heavy rail system, and 13,400 in the best light rail system. More recent BRT figures include 45,000 PPHPD in a single-lane system using articulated buses (Istanbul, 2020); 320 buses per hour per direction in the Nossa Senhora de Copacabana corridor in Rio de Janeiro in 2014, meaning a bus every 11 seconds; and 65,400 PPHPD carried by 600 buses per hour in the Presidente Vargas corridor in Rio de Janeiro in 2012 and 2014, which means 10 buses per minute, or a bus every 6 seconds. Comparison with light rail After the first BRT system opened in 1971, cities were slow to adopt BRT because they believed that the capacity of BRT was limited to about 12,000 passengers per hour traveling in a given direction during peak demand. While this is a capacity rarely needed in the US (12,000 is more typical as a total daily ridership), in the developing world this capacity constraint (or rumor of a capacity constraint) was a significant argument in favor of heavy rail metro investments in some venues. When TransMilenio opened in 2000, it changed the paradigm by giving buses a passing lane at each station stop and introducing express services within the BRT infrastructure. These innovations increased the maximum achieved capacity of a BRT system to 35,000 passengers per hour. The single-lane roads of Istanbul Metrobus had been frequently blocked by Phileas buses breaking down, causing delays for all the buses in a single direction. After the fleet shifted to Mercedes-Benz buses, capacity increased to 45,000 pph. Light rail, by comparison, has reported passenger capacities between 3,500 pph (mainly street running) and 19,000 pph (fully grade-separated). There are conditions that favor light rail over BRT, but they are fairly narrow. These conditions are a corridor with only one available lane in each direction, more than 16,000 passengers per direction per hour but less than 20,000, and a long block length, because the train cannot block intersections. These conditions are rare, but in that specific instance, light rail might have a minimal operational advantage. The United States Government Accountability Office (U.S. GAO) summarized in the report "Mass Transit – Bus Rapid Transit Shows Promise" that the U.S. Federal Transit Administration (FTA) provided funding for the construction of heavy rail and of light rail at that time, but not of BRT. The FTA funding of BRT "rather focuses on obtaining and sharing information on projects being pursued by local transit agencies". In spite of this difference in funding, the capital costs of BRT systems were lower in many US communities than those of light rail systems, and performance was often similar. The GAO stated that BRT systems were generally more flexible than light rail, and faster. "While transit officials noted a public bias toward Light Rail, research has found that riders have no preference for rail over bus when service characteristics are equal." Comparison with heavy rail Fjellstrom/Wright distributed a map of the mid-term goal to expand Bogota's BRT system, TransMilenio, so that 85% of the city's 7 million inhabitants would live within 500 m of a TransMilenio line. Such an expansion program would be unrealistic for a rail-based MRT system, according to Bogota's mayor. 
An additional use of BRT is the replacement of heavy rail services, due to infrastructure damage, reduced ridership, or a combination of both, where lower maintenance costs are desired while taking advantage of an existing dedicated right of way. One such system in Japan consists of portions of the JR East Kesennuma and Ōfunato Lines, which were catastrophically damaged during the 2011 Tōhoku earthquake and tsunami, and later repaired as a bus lane over the same right of way, providing improved service with much lower restoration and maintenance costs. Another system set to open in August 2023 is a portion of the JR Kyushu Hitahikosan Line, which was damaged by torrential rain in 2017. In both cases, ridership had dropped considerably since the lines opened, and the higher capacity of a rail line was no longer needed or cost-effective compared to buses on the same alignments. Comparison with conventional bus services Conventional scheduled bus services use general traffic lanes, which can be slow due to traffic congestion, and the speed of bus services is further reduced by long dwell times. In 2013, the New York City authorities noted that buses on 34th Street, which carried 33,000 bus riders a day on local and express routes, traveled only slightly faster than walking pace. Even despite the implementation of Select Bus Service (New York City's version of a bus rapid transit system), dedicated bus lanes, and traffic cameras on the 34th Street corridor, buses on the corridor were still found to travel at an average of 4.5 mph. In the 1960s, Reuben Smeed predicted that the average speed of traffic in central London would settle at a minimum without other disincentives such as road pricing, based on the theory that this was the minimum speed that people would tolerate. When the London congestion charge was introduced in 2003, the average traffic speed was nonetheless the highest since the 1970s. By way of contrast, typical operating speeds of BRT systems are considerably higher. Cost The capital cost of implementing BRT is lower than for light rail: a study by the U.S. Government Accountability Office (GAO) from 2000 found that the average capital cost per mile for busways was $13.5 million, while the light rail average cost was $34.8 million. The total investment varies considerably due to factors such as the cost of the roadway, amount of grade separation, station structures and traffic signal systems. In 2003, a study edited by the German GTZ compared various MRT systems all over the world and concluded "Bus Rapid Transit (BRT) can provide high-quality, metro-like transit service at a fraction of the cost of other options". In 2013, the analysis of a database of nineteen LRT projects, twenty-six HRT projects, and forty-two BRT projects specified that "In higher income countries ... an HRT alternative is likely to cost up to 40 times as much as a BRT alternative", and a surface LRT alternative about 4 times as much as a BRT alternative. The operational cost of running a BRT system is generally lower than light rail, though the exact comparison varies, and labor costs depend heavily on wages, which vary between countries. For the same level of ridership and demand, higher labor costs in the developed world relative to developing countries will tend to encourage developed-world transit operators to prefer to operate services with larger but less frequent vehicles. This allows the service to achieve the same capacity while minimizing the number of drivers.
This may come as a hidden cost to passengers on lower-demand routes, who experience significantly lower frequencies and longer waiting times, which limits ridership gains. In the study done by the U.S. GAO, BRT systems usually had lower costs as well, based on "operating cost per vehicle hour", "operating cost per revenue mile", and "operating cost per passenger trip", mainly because of lower vehicle cost and lower infrastructure cost. An ambitious light rail system runs partly grade separated (e.g. underground), which gives a free right-of-way and much faster operation compared to passing the traffic signals needed in a surface-level system. Underground BRT was suggested as early as 1954. As long as most buses still run on diesel, air quality can become a significant concern in tunnels, but the Downtown Seattle Transit Tunnel is an example of using hybrid buses, which switch to overhead electric propulsion while they are underground, eliminating diesel emissions and reducing fuel usage. Alternatives are elevated busways or, at greater expense, elevated railways. Criticism BRT systems have been widely promoted by non-governmental organizations such as the Shell-funded EMBARQ program, the Rockefeller Foundation and the Institute for Transportation and Development Policy (ITDP), whose consultant pool includes the former mayor of Bogotá (Colombia), Enrique Peñalosa (former president of ITDP). Supported by contributions of bus-producing companies such as Volvo, the ITDP not only established a proposed "standard" for BRT system implementation, but developed intensive lobbying activities around the world to convince local governments to select BRT systems over rail-based transportation models (subways, light trains, etc.). "Fake" BRT systems (BRT creep) Bus rapid transit creep is a phenomenon commonly defined as a bus rapid transit (BRT) system that fails to meet the requirements to be considered "true BRT". These systems are often marketed as a fully realized bus rapid transit system, but end up being described as more of an improvement to regular bus service by proponents of the "BRT creep" term. Notably, the Institute for Transportation and Development Policy (ITDP) published several guidelines, known as the BRT Standard, that attempt to define what constitutes "true BRT" and to avert this phenomenon. The most extreme versions of BRT creep lead to systems that cannot even truly be recognized as "Bus Rapid Transit". For example, a rating from the ITDP determined that the Boston Silver Line was best classified as "Not BRT" after local decision makers gradually decided to do away with most BRT-specific features. The study also evaluates New York City's Select Bus Service (which is supposed to be BRT-standard) as "Not BRT". Environmental issues Unlike electric-powered trains commonly used in rapid transit and light rail systems, bus rapid transit often uses diesel- or gasoline-fueled engines. The typical bus diesel engine causes noticeable levels of air pollution, noise and vibration. It is noted, however, that BRT can still provide significant environmental benefits over private cars. In addition, BRT systems can replace an inefficient conventional bus network with more efficient, faster and less polluting BRT buses.
For example, Bogotá previously used 2,700 conventional buses providing transportation to 1.6 million passengers daily, while in 2013 TransMilenio transported 1.9 million passengers using only 630 BRT buses, a fleet less than a quarter the size of the old one that circulates at twice the speed, with a huge reduction in air pollution. To reduce direct emissions, some systems use alternative forms of traction such as electric or hybrid engines. BRT systems such as those in Beijing and Quito use trolleybuses to lower air pollution and noise emissions. The price penalty of installing overhead lines could be offset by the environmental benefits and potential for savings from centrally generated electricity, especially in cities where electricity is less expensive than other fuel sources. Trolleybus electrical systems can potentially be reused for a future light rail conversion. TransJakarta buses use cleaner compressed natural gas-fueled engines, while Bogotá started to use hybrid buses in 2012; these hybrid systems use regenerative braking to charge batteries when the bus stops and then use electric motors to propel the bus up to 40 km/h before automatically switching to the diesel engine for higher speeds, which allows for considerable savings in fuel consumption and pollutant dispersion. Overcrowding and poor quality service Many BRT systems suffer from overcrowding in buses and stations as well as long wait times for buses. In Santiago de Chile, the system averages six passengers per square meter inside vehicles. Users have reported days when the buses take too long to arrive and are too overcrowded to accept new passengers. As of June 2017, the system has an approval rating of 15% among commuters, and it has lost 27% of its passengers, who have turned mostly to cars. In Bogotá the overcrowding was even worse; TransMilenio averaged eight passengers per square meter. Only 29% felt satisfied with the system. The data also showed that 23% of citizens agreed with building more TransMilenio lines, in contrast to the 42% who considered that a rapid transit system should be built. Several cases of sexual assault have been reported by female users of TransMilenio. According to a 2012 survey by Bogotá's Secretariat of Women, 64% of women said they had been victims of sexual assault in the system. The system had even been ranked as the most dangerous transport for women. The poor quality of the system contributed to an increase in the number of cars and motorcycles in the city, as citizens preferred these modes over TransMilenio. According to official data, the number of cars increased from approximately 666,000 in 2005 to 1,586,700 in 2016. The number of motorcycles was also growing, with 660,000 sold in Bogotá in 2013, two times the number of cars sold. At the end of 2018 TransMilenio ordered 1,383 new buses to replace older ones in service. 52% were compressed natural gas (CNG) buses made by Scania with a Euro 6 emission rating, and 48% were diesel buses made by Volvo with a Euro 5 emission rating. Further
orders have produced an impressive result: "To improve public and environmental health, the City of Bogotá has assembled a fleet of 1,485 electric buses for its public transportation system - placing the city among the three largest e-bus fleets outside of China." In 2022 Bogotá won the Sustainable Transport Award, an award given out by the Institute for Transportation and Development Policy, which is partially funded by bus manufacturers. Reasons stated include the TransMilenio system and its urban cycling strategy. The system in Jakarta had been experiencing issues, with complaints of overcrowding in buses and stations and low service frequency on its routes. There were extensive safety concerns as well; rampant sexual harassment has been reported, and the fire safety of the buses has been under scrutiny after one of the buses, a Zhongtong imported from China, spontaneously caught fire. The quality of the service was so poor that the then-governor of Jakarta, Basuki Tjahaja Purnama, publicly apologized in March 2015 for the poor performance of the system. Failures and reversals The temporary unpopularity of Delhi's BRT (2016) and the riots and spontaneous user demonstrations in Bogotá (2016) raised doubts about the ability of BRTs to keep pace with increased ridership. On the other hand, the speed of BRT ridership growth confirmed the research finding of no general preference for rail over bus (see the end of the section "Comparison with light rail"). Bogotá has regained trust and safety according to the Sustainable Transport Award 2022. A lack of permanence of BRT has been criticized, with some arguing that BRT systems can be used as an excuse to build roads that others later try to convert for use by non-BRT vehicles. Examples of this can be found in Delhi, where a BRT system was scrapped, and in Aspen, Colorado, where drivers are lobbying the government to allow mixed-use traffic in former BRT lanes as of 2017, although in other US cities, such as Albuquerque, New Mexico, just the opposite is true. Such an excuse might be a side effect of the advantages connected with the flexibility of BRT. Experts have attributed some BRT failures to land-use structure: some cities that are sprawled and have no mixed use have insufficient ridership to make BRT economically viable. In Africa, the African Urban Institute criticized the viability of ongoing BRTs across the continent. Impact A 2018 study found that the introduction of a BRT network in Mexico City reduced air pollution, as measured by emissions of CO, NOx, and PM10.
Technology
Motorized road transport
null
333835
https://en.wikipedia.org/wiki/Free%20object
Free object
In mathematics, the idea of a free object is one of the basic concepts of abstract algebra. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure. Examples include free groups, tensor algebras, or free lattices. The concept is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms. Definition Free objects are the direct generalization to categories of the notion of basis in a vector space. A linear function between vector spaces is entirely determined by its values on a basis of the vector space. The following definition translates this to any category. A concrete category is a category that is equipped with a faithful functor to Set, the category of sets. Let C be a concrete category with a faithful functor U : C → Set. Let X be a set (that is, an object in Set), which will be the basis of the free object to be defined. A free object on X is a pair (A, i) consisting of an object A in C and an injection i : X → U(A) (called the canonical injection), that satisfies the following universal property: For any object B in C and any map between sets f : X → U(B), there exists a unique morphism g : A → B in C such that f = U(g) ∘ i. That is, the following diagram commutes: If free objects exist in C, the universal property implies every map between two sets induces a unique morphism between the free objects built on them, and this defines a functor F : Set → C. It follows that, if free objects exist in C, the functor F, called the free functor, is a left adjoint to the faithful functor U; that is, there is a bijection Hom_C(F(X), B) ≅ Hom_Set(X, U(B)). Examples The creation of free objects proceeds in two steps. For algebras that conform to the associative law, the first step is to consider the collection of all possible words formed from an alphabet. Then one imposes a set of equivalence relations upon the words, where the relations are the defining relations of the algebraic object at hand. The free object then consists of the set of equivalence classes. Consider, for example, the construction of the free group in two generators. One starts with an alphabet consisting of the five letters e, a, b, a⁻¹, b⁻¹. In the first step, there is not yet any assigned meaning to the "letters" a⁻¹ or b⁻¹; these will be given later, in the second step. Thus, one could equally well start with the alphabet in five letters that is {e, a, b, c, d}. In this example, the set of all words or strings will include strings such as aebecede and abdc, and so on, of arbitrary finite length, with the letters arranged in every possible order. In the next step, one imposes a set of equivalence relations. The equivalence relations for a group are that of multiplication by the identity, ge = eg = g, and the multiplication of inverses: gg⁻¹ = g⁻¹g = e. Applying these relations to the strings above, one obtains aebecede = abcd, where it was understood that c is a stand-in for a⁻¹ and d is a stand-in for b⁻¹, while e is the identity element. Similarly, one has abdc = abb⁻¹a⁻¹ = e. Denoting the equivalence relation or congruence by ~, the free object is then the collection of equivalence classes of words. Thus, in this example, the free group in two generators is the quotient of the set of all words by this congruence. This is often written as W/E, where W is the set of all words and E is the equivalence class of the identity, after the relations defining a group are imposed. A simpler example are the free monoids.
The free monoid on a set X is the monoid of all finite strings using X as the alphabet, with concatenation of strings as the operation. The identity is the empty string. In essence, the free monoid is simply the set of all words, with no equivalence relations imposed. This example is developed further in the article on the Kleene star. General case In the general case, the algebraic relations need not be associative, in which case the starting point is not the set of all words, but rather, strings punctuated with parentheses, which are used to indicate the non-associative groupings of letters. Such a string may equivalently be represented by a binary tree or a free magma; the leaves of the tree are the letters from the alphabet. The algebraic relations may then be general arities or finitary relations on the leaves of the tree. Rather than starting with the collection of all possible parenthesized strings, it can be more convenient to start with the Herbrand universe. Properly describing or enumerating the contents of a free object can be easy or difficult, depending on the particular algebraic object in question. For example, the free group in two generators is easily described. By contrast, little or nothing is known about the structure of free Heyting algebras in more than one generator. The problem of determining if two different strings belong to the same equivalence class is known as the word problem. As the examples suggest, free objects look like constructions from syntax; one may reverse that to some extent by saying that major uses of syntax can be explained and characterised as free objects, in a way that makes apparently heavy 'punctuation' explicable (and more memorable). Free universal algebras Let X be a set and let A be an algebraic structure of type ρ generated by X. The underlying set of this algebraic structure A, often called its universe, is denoted by A. Let ψ : X → A be a function. We say that (A, ψ) (or informally just A) is a free algebra of type ρ on the set X of free generators if the following universal property holds: For every algebra B of type ρ and every function τ : X → B, where B is the universe of B, there exists a unique homomorphism σ : A → B such that the following diagram commutes: This means that σ ∘ ψ = τ. Free functor The most general setting for a free object is in category theory, where one defines a functor, the free functor, that is the left adjoint to the forgetful functor. Consider a category C of algebraic structures; the objects can be thought of as sets plus operations, obeying some laws. This category has a functor, U : C → Set, the forgetful functor, which maps objects and morphisms in C to Set, the category of sets. The forgetful functor is very simple: it just ignores all of the operations. The free functor F, when it exists, is the left adjoint to U. That is, F takes sets X in Set to their corresponding free objects F(X) in the category C. The set X can be thought of as the set of "generators" of the free object F(X). For the free functor to be a left adjoint, one must also have a Set-morphism η : X → U(F(X)). More explicitly, F is, up to isomorphisms in C, characterized by the following universal property: Whenever B is an algebra in C, and f : X → U(B) is a function (a morphism in the category of sets), then there is a unique C-morphism g : F(X) → B such that U(g) ∘ η = f. Concretely, this sends a set into the free object on that set; it is the "inclusion of a basis". Abusing notation, one writes η : X → F(X) (this abuses notation because X is a set, while F(X) is an algebra; correctly, it is η : X → U(F(X))).
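To make the universal property concrete for the free monoid example mentioned above, here is a small illustrative sketch (the function names are my own, not from the article): any map defined on single letters extends uniquely to a monoid homomorphism from the free monoid (strings under concatenation) to any target monoid.

```python
from functools import reduce

# The free monoid on an alphabet X is modelled as Python strings over X:
# the operation is concatenation and the identity is the empty string "".

def extend(f, op, identity):
    """Extend f : X -> M (defined on single letters) to the unique monoid
    homomorphism from the free monoid on X (strings) to the monoid (M, op, identity)."""
    def hom(word):
        return reduce(op, (f(letter) for letter in word), identity)
    return hom

# Target monoid: natural numbers under addition with identity 0.
# Sending every letter to 1 forces the extension to be the length homomorphism.
h = extend(lambda letter: 1, lambda a, b: a + b, 0)
print(h("abba"))  # 4  (agrees with len("abba"))
print(h(""))      # 0  (the empty word maps to the identity)
# Homomorphism property: h(u + v) == h(u) + h(v) for all strings u and v.
```

In this sketch the embedding of each letter as a one-character string plays the role of the canonical injection η.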
The natural transformation η : id_Set → U ∘ F is called the unit; together with the counit ε : F ∘ U → id_C, one may construct a T-algebra, and so a monad. The cofree functor is the right adjoint to the forgetful functor. Existence There are general existence theorems that apply; the most basic of them guarantees that: Whenever C is a variety, then for every set X there is a free object F(X) in C. Here, a variety is a synonym for a finitary algebraic category, thus implying that the set of relations is finitary, and algebraic because it is monadic over Set. General case Other types of forgetfulness also give rise to objects quite like free objects, in that they are left adjoint to a forgetful functor, not necessarily to sets. For example, the tensor algebra construction on a vector space is the left adjoint to the functor on associative algebras that ignores the algebra structure. It is therefore often also called a free algebra. Likewise the symmetric algebra and exterior algebra are free symmetric and anti-symmetric algebras on a vector space. List of free objects Specific kinds of free objects include: free algebra; free associative algebra; free commutative algebra; free category; free strict monoidal category; free group; free abelian group; free partially commutative group; free Kleene algebra; free lattice; free Boolean algebra; free distributive lattice; free Heyting algebra; free modular lattice; free Lie algebra; free magma; free module (and in particular, vector space); free monoid; free commutative monoid; free partially commutative monoid; free ring; free semigroup; free semiring; free commutative semiring; free theory; term algebra; discrete space.
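The free-forgetful adjunction developed above can be summarized in one display; the following is a standard statement in notation chosen here (F for the free functor, U for the forgetful functor), not taken verbatim from the article:

```latex
% Free-forgetful adjunction: the free functor F is left adjoint to the forgetful functor U.
\[
  \operatorname{Hom}_{\mathbf{C}}\bigl(F(X),\,B\bigr)
  \;\cong\;
  \operatorname{Hom}_{\mathbf{Set}}\bigl(X,\,U(B)\bigr)
\]
% The bijection is natural in the set X and the object B of C.
% Its unit eta_X : X -> U(F(X)) is the "inclusion of a basis";
% its counit epsilon_B : F(U(B)) -> B evaluates formal expressions in B.
```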
Mathematics
Category theory
null
334037
https://en.wikipedia.org/wiki/Occupational%20therapy
Occupational therapy
Occupational therapy (OT) is a healthcare profession that involves the use of assessment, intervention, consultation, and coaching to develop, recover, or maintain meaningful occupations of individuals, groups, or communities. The field of OT consists of health care practitioners trained and educated to support mental health and physical performance. Occupational therapists specialize in teaching, educating, and supporting participation in activities that occupy an individual's time. It is an independent health profession sometimes categorized as an allied health profession and consists of occupational therapists (OTs) and occupational therapy assistants (OTAs). OTs and OTAs have different roles, with OTs licensed to complete comprehensive occupational therapy evaluations. Both professionals work with people who want to improve their ability to participate in meaningful occupations. The American Occupational Therapy Association defines an occupational therapist as someone who "helps people across their lifespan participate in the things they want and/or need to do through the therapeutic use of everyday activities (occupations)". Definitions by other professional occupational therapy organizations are similar. Common interventions include: Helping disabled children to participate in meaningful activities at home, school, and within the community (independent mobility is often a central concern) Training in assistive device technology, meaningful and purposeful activities, and life skills. Physical injury rehabilitation Mental dysfunction rehabilitation Support of individuals across the age spectrum experiencing physical and cognitive changes Assessing ergonomics and assistive seating options to maximize independent function, while alleviating the risk of pressure injury Education in the disease and rehabilitation process Advocating for patient health Exploring vocational activities with clients Occupational therapists are university-educated professionals and must pass a licensing exam to practice. Currently, entry level occupational therapists must have a master's degree while certified occupational therapy assistants require a two-year associate degree to practice in the United States. Individuals must pass a national board certification and apply for a state license in most states. Occupational therapists often work closely with professionals in physical therapy, speech–language pathology, audiology, nursing, nutrition, social work, psychology, medicine, and assistive technology. History Early history The earliest evidence of using occupations as a method of therapy can be found in ancient times. In c. 100 BCE, Greek physician Asclepiades treated patients with a mental illness humanely using therapeutic baths, massage, exercise, and music. Later, the Roman Celsus prescribed music, travel, conversation and exercise to his patients. However, by medieval times the use of these interventions with people with mental illness was rare, if not nonexistent. Moral treatment and graded activity In late 18th-century Europe, doctors such as Philippe Pinel and Johann Christian Reil reformed the mental asylum system. Their institutions used rigorous work and leisure activities. This became part of what was known as moral treatment. Although it was thriving in Europe, interest in the reform movement fluctuated in the United States throughout the 19th century. 
In the late 19th and early 20th centuries, the establishment of public health measures to control infectious diseases included the building of fever hospitals. Patients with tuberculosis were prescribed a regime of prolonged bed rest followed by a gradual increase in exercise. This was a time in which the rising incidence of disability related to industrial accidents, tuberculosis, and mental illness brought about an increasing social awareness of the issues involved. The Arts and Crafts movement that took place between 1860 and 1910 also impacted occupational therapy. The movement emerged against the monotony and lost autonomy of factory work in the developed world. Arts and crafts were used to promote learning through doing, provided a creative outlet, and served as a way to avoid boredom during long hospital stays. From the late 1870s, Scottish tuberculosis doctor Robert William Philip prescribed graded activity from complete rest through to gentle exercise and eventually to activities such as digging, sawing, carpentry and window cleaning. During this period a farm colony near Edinburgh and a village settlement near Papworth in England were established, both of which aimed to employ people in appropriate long-term work prior to their return to open employment. Development into a health profession In the United States, the health profession of occupational therapy was conceived in the early 1910s as a reflection of the Progressive Era. Early professionals merged highly valued ideals, such as having a strong work ethic and the importance of crafting with one's own hands, with scientific and medical principles. American social worker Eleanor Clarke Slagle (1870-1942) is considered to be the "mother" of occupational therapy. Slagle proposed habit training as a primary occupational therapy model of treatment. Based on the philosophy that engagement in meaningful routines shapes a person's wellbeing, habit training focused on creating structure and balance between work, rest and leisure. Although habit training was initially developed to treat individuals with mental health conditions, its basic tenets are apparent in modern treatment models that are utilized across a wide scope of client populations. In 1912, she became director of a department of occupational therapy at The Henry Phipps Psychiatric Clinic in Baltimore. World War I In 1915, Slagle opened the first occupational therapy training program, the Henry B. Favill School of Occupations, at Hull House in Chicago. British-Canadian teacher and architect Thomas B. Kidner was appointed vocational secretary of the Canadian Military Hospitals Commission in January 1916. He was given the duty of preparing soldiers returning from World War I to resume their former vocational duties, or of retraining soldiers no longer able to perform their previous duties. He developed a program that engaged soldiers recovering from wartime injuries or tuberculosis in occupations even while they were still bedridden. Once the soldiers were sufficiently recovered they would work in a curative workshop and eventually progress to an industrial workshop before being placed in an appropriate work setting. He used occupations (daily activities) as a medium for manual training and helping injured individuals to return to productive duties such as work. The entry of the United States into World War I in April 1917 was a crucial event in the history of the profession in that country.
Up until this time, the profession had been concerned primarily with the treatment of people with mental illness. U.S. involvement in the war led to an escalating number of injured and disabled soldiers, which presented a daunting challenge to those in command. The US National Society for the Promotion of Occupational Therapy (NSPOT) was founded in October 1917 by Slagle, Kidner and others, including American doctor William Rush Dunton. The military enlisted the assistance of NSPOT to recruit and train over 1,200 "reconstruction aides" to help with the rehabilitation of those wounded in the war. Dunton's 1918 article "The Principles of Occupational Therapy" appeared in the journal Public Health, and laid the foundation for the textbook he published in 1919 entitled Reconstruction Therapy. Dunton struggled with "the cumbersomeness of the term occupational therapy", as he thought it lacked the "exactness of meaning which is possessed by scientific terms". Other titles such as "work-cure", "ergo therapy" (ergo being the Greek root for "work"), and "creative occupations" were discussed as substitutes, but ultimately, none possessed the broad meaning that the practice of occupational therapy demanded in order to capture the many forms of treatment that existed from the beginning. NSPOT formally adopted the name "occupational therapy" for the field in 1921. Inter-war period There was a struggle to keep people in the profession during the post-war years. Emphasis shifted from the altruistic war-time mentality to the financial, professional, and personal satisfaction that comes with being a therapist. To make the profession more appealing, practice was standardized, as was the curriculum. Entry and exit criteria were established, and the American Occupational Therapy Association advocated for steady employment, decent wages, and fair working conditions. Via these methods, occupational therapy sought and obtained medical legitimacy in the 1920s. The emergence of occupational therapy challenged the views of mainstream scientific medicine. Instead of focusing purely on the medical model, occupational therapists argued that a complex combination of social, economic, and biological factors causes dysfunction. Principles and techniques were borrowed from many disciplines, including but not limited to physical therapy, nursing, psychiatry, rehabilitation, self-help, orthopedics, and social work, to enrich the profession's scope. The 1920s and 1930s were a time of establishing standards of education and laying the foundation of the profession and its organization. Eleanor Clarke Slagle proposed a 12-month course of training in 1922, and these standards were adopted in 1923. In 1928, William Dunton published another textbook, Prescribing Occupational Therapy. Educational standards were expanded to a total training time of 18 months in 1930 to place the requirements for professional entry on par with those of other professions. By the early 1930s, AOTA had established educational guidelines and accreditation procedures. Margaret Barr Fulton became the first US-qualified occupational therapist to work in the United Kingdom in 1925. She qualified at the Philadelphia School in the United States and was appointed to the Aberdeen Royal Hospital for mental patients, where she worked until her retirement in 1963. US-style OT was introduced into England by Dr Elizabeth Casson, who had visited similar establishments in America. (Casson had also earlier worked under the transformative English social reformer Octavia Hill.)
In 1929 she established her own residential clinic in Bristol, Dorset House, for "women with mental disorders", and worked as its medical director. It was here in 1930 that she founded the first school of occupational therapy in the UK. The Scottish Association of Occupational Therapists was founded in 1932. The profession was served in the rest of the UK by the Association of Occupational Therapists from 1936. (The two later merged to form what is today the Royal College of Occupational Therapists in 1974.) World War II With the US entry into World War II and the ensuing skyrocketing demand for occupational therapists to treat those injured in the war, the field of occupational therapy underwent dramatic growth and change. Occupational therapists needed to be skilled not only in the use of constructive activities such as crafts, but also increasingly in the use of activities of daily living. The body that is now Occupational Therapy Australia began in 1944. Post-World War II Another textbook was published in the United States for occupational therapy in 1947, edited by Helen S. Willard and Clare S. Spackman. The profession continued to grow and redefine itself in the 1950s. In 1954, AOTA created the Eleanor Clarke Slagle Lectureship Award in its namesake's honor. Each year, this award recognizes a member of AOTA "who has creatively contributed to the development of the body of knowledge of the profession through research, education, or clinical practice." The profession also began to assess the potential for the use of trained assistants in the attempt to address the ongoing shortage of qualified therapists, and educational standards for occupational therapy assistants were implemented in 1960. The 1960s and 1970s were a time of ongoing change and growth for the profession as it struggled to incorporate new knowledge and cope with the recent and rapid growth of the profession in the previous decades. New developments in the areas of neurobehavioral research led to new conceptualizations and new treatment approaches, possibly the most groundbreaking being the sensory integrative approach developed by A. Jean Ayres. The profession has continued to grow and expand its scope and settings of practice. Occupational science, the study of occupation, was founded in 1989 by Elizabeth Yerxa at the University of Southern California as an academic discipline to provide foundational research on occupation to support and advance the practice of occupation-based occupational therapy, as well as offer a basic science to study topics surrounding "occupation". In addition, occupational therapy practitioner's roles have expanded to include political advocacy (from a grassroots base to higher legislation); for example, in 2010 PL 111-148 titled the Patient Protection and Affordable Care Act had a habilitation clause that was passed in large part due to AOTA's political efforts. Furthermore, occupational therapy practitioners have been striving personally and professionally toward concepts of occupational justice and other human rights issues that have both local and global impacts. The World Federation of Occupational Therapist's Resource Centre has many position statements on occupational therapy's roles regarding their participation in human rights issues. In 2021, U.S. News & World Report ranked occupational therapy as #19 of their list of '100 Best Jobs'. 
Practice frameworks An occupational therapist works systematically with a client through a sequence of actions called an "occupational therapy process." There are several versions of this process. All practice frameworks include the components of evaluation (or assessment), intervention, and outcomes. This process provides a framework through which occupational therapists assist and contribute to promoting health, and ensures structure and consistency among therapists. Occupational Therapy Practice Framework (OTPF, United States) The Occupational Therapy Practice Framework (OTPF) is the core competency of occupational therapy in the United States. The OTPF is divided into two sections: domain and process. The domain includes the environment and client factors, such as the individual's motivation, health status, and ability to perform occupational tasks. The domain looks at the contextual picture to help the occupational therapist understand how to diagnose and treat the patient. The process comprises the actions taken by the therapist to implement a plan and strategy to treat the patient. Canadian Practice Process Framework The Canadian Model of Client Centered Enablement (CMCE) embraces occupational enablement as the core competency of occupational therapy, and the Canadian Practice Process Framework (CPPF) as the core process of occupational enablement in Canada. The Canadian Practice Process Framework (CPPF) has eight action points and three contextual elements; the action points include: set the stage, evaluate, agree on objectives and plan, implement the plan, monitor/modify, and evaluate the outcome. A central element of this process model is the focus on identifying both client and therapist strengths and resources prior to developing the outcomes and action plan. International Classification of Functioning, Disability and Health (ICF) The International Classification of Functioning, Disability and Health (ICF) is the World Health Organisation's framework to measure health and ability by illustrating how these components impact one's function. This relates very closely to the Occupational Therapy Practice Framework, as it is stated that "the profession's core beliefs are in the positive relationship between occupation and health and its view of people as occupational beings". The ICF is built into the 2nd edition of the practice framework. Activities and participation examples from the ICF overlap the Areas of Occupation, Performance Skills, and Performance Patterns in the framework. The ICF also includes contextual factors (environmental and personal factors) that relate to the framework's context. In addition, body functions and structures classified within the ICF help describe the client factors described in the Occupational Therapy Practice Framework. Further exploration of the relationship between occupational therapy and the components of the ICIDH-2 (revision of the original International Classification of Impairments, Disabilities, and Handicaps (ICIDH), which later became the ICF) was conducted by McLaughlin Gray. It is noted in the literature that occupational therapists should use specific occupational therapy vocabulary along with the ICF in order to ensure correct communication about specific concepts. The ICF might lack certain categories to describe what occupational therapists need to communicate to clients and colleagues. It also may not be possible to exactly match the connotations of the ICF categories to occupational therapy terms.
The ICF is not an assessment, and specialized occupational therapy terminology should not be replaced with ICF terminology. The ICF is an overarching framework for current therapy practices. Occupations According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework: Domain and Process, 4th Edition (OTPF-4), occupations are defined as "everyday activities that people do as individuals, and families, and with communities to occupy time and bring meaning and purpose to life. Occupations include things people need to, want to and are expected to do". Occupations are central to a client's (person's, group's, or population's) health, identity, and sense of competence and have particular meaning and value to that client. Occupations include activities of daily living (ADLs), instrumental activities of daily living (IADLs), education, work, play, leisure, social participation, rest and sleep. Practice settings According to the 2019 Salary and Workforce Survey by the American Occupational Therapy Association, occupational therapists work in a wide variety of practice settings including: hospitals (28.6%), schools (18.8%), long-term care facilities/skilled nursing facilities (14.5%), free-standing outpatient (13.3%), home health (7.3%), academia (6.9%), early intervention (4.4%), mental health (2.2%), community (2.4%), and other (1.6%). According to the AOTA, the most common primary work setting for occupational therapists is in hospitals. Also according to the survey, 46% of occupational therapists work in urban areas, 39% work in suburban areas and the remaining 15% work in rural areas. The Canadian Institute for Health Information (CIHI) found that as of 2020 nearly half (46.1%) of occupational therapists worked in hospitals, 43.2% worked in community health, 3.6% worked in long-term care (LTC) and 7.1% worked in "other" settings, including government, industry, manufacturing, and commercial settings. The CIHI also found that 68% of occupational therapists in Canada work in urban settings and only 3.7% work in rural settings. Areas of practice in the United States Children and youth Occupational therapists work with infants, toddlers, children, youth, and their families in a variety of settings, including schools, clinics, homes, hospitals, and the community. Evaluation assesses the child's ability to engage in daily, meaningful occupations, the underlying skills (or performance components), which may be physical, cognitive, or emotional in nature, and the fit between the client's skills and the environments and contexts in which the client functions. OT intervention involves evaluating a young person's occupational performance in areas such as feeding, playing, socializing in ways that align with their neurodiversity, daily living skills, or attending school. In planning treatment, occupational therapists work in collaboration with the children and teens themselves, parents, caregivers, and teachers in order to develop functional goals within a variety of occupations meaningful to the young client. Early intervention addresses the daily functioning of a child from birth to three years old. OTs who practice in early intervention support a family's ability to care for their child with special needs and promote his or her function and participation in the most natural environment. Each child is required to have an Individualized Family Service Plan (IFSP) that focuses on the family's goals for the child.
It's possible for an OT to serve as the family's service coordinator and facilitate the team process for creating an IFSP for each eligible child. Objectives that an occupational therapist addresses with children and youth may take a variety of forms. Examples are as follows: Providing rehabilitation activities to children with neuromuscular disabilities such as cerebral palsy Supporting self-regulation within neurodivergent children whose neurobiology does not align with the sensory environment or the contexts in which they function Facilitating coping skills to a child with generalized anxiety disorder. Consulting with teachers, psychologists, social workers, parents/caregivers, and other professionals who work with children regarding modifications, accommodations and supports in a variety of areas, such as sensory processing, motor planning, visual processing, and executive function skills. Providing individualized treatment for sensory processing differences. Providing splinting and caregiver education in a hospital burn unit. Instructing caregivers in regard to mealtime intervention for autistic children who have feeding challenges. Facilitating handwriting development through providing intervention to develop fine motor and writing readiness skills in school-aged children. In the United States, pediatric occupational therapists work in the school setting as a "related service" for children with an Individual Education Plan (IEP). Every student who receives special education and related services in the public school system is required by law to have an IEP, which is a very individualized plan designed for each specific student (U.S. Department of Education, 2007). Related services are "developmental, corrective, and other supportive services as are required to assist a child with a disability to benefit from special education," and include a variety of professions such as speech–language pathology and audiology services, interpreting services, psychological services, and physical and occupational therapy. As a related service, occupational therapists work with children with varying disabilities to address those skills needed to access the special education program and support academic achievement and social participation throughout the school day (AOTA, n.d.-b). In doing so, occupational therapists help children fulfill their role as students and prepare them to transition to post-secondary education, career and community integration (AOTA, n.d.-b). Occupational therapists have specific knowledge to increase participation in school routines throughout the day, including: Modification of the school environment to allow physical access for children with disabilities Provide assistive technology to support student success Helping to plan instructional activities for implementation in the classroom Support the needs of students with significant challenges such as helping to determine methods for alternate assessment of learning Helping students develop the skills necessary to transition to post-high school employment, independent living or further education (AOTA). Other settings, such as homes, hospitals, and the community are important environments where occupational therapists work with children and teens to promote their independence in meaningful, daily activities. Outpatient clinics offer a growing OT intervention referred to as "Sensory Integration Treatment". This therapy, provided by experienced and knowledgeable pediatric occupational therapists, was originally developed by A. 
Jean Ayres, an occupational therapist. Sensory integration therapy is an evidence-based practice which enables children to better process and integrate sensory input from the child's body and from the environment, thus improving their emotional regulation, ability to learn, behavior, and functional participation in meaningful daily activities. Recognition of occupational therapy programs and services for children and youth is increasing worldwide. Occupational therapy for both children and adults is now recognized by the United Nations as a human right which is linked to the social determinants of health. There are over 500,000 occupational therapists working worldwide (many of whom work with children) and 778 academic institutions providing occupational therapy instruction. Health and wellness According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework, 3rd Edition, the domain of occupational therapy is described as "Achieving health, well-being, and participation in life through engagement in occupation". Occupational therapy practitioners have a distinct value in their ability to utilize daily occupations to achieve optimal health and well-being. By examining an individual's roles, routines, environment, and occupations, occupational therapists can identify barriers to achieving overall health, well-being and participation. Occupational therapy practitioners can intervene at primary, secondary and tertiary levels of intervention to promote health and wellness. Health and wellness can be addressed in all practice settings to prevent disease and injuries and to support healthy lifestyle practices for those with chronic diseases. Two of the occupational therapy programs that have emerged targeting health and wellness are the Lifestyle Redesign Program and the REAL Diabetes Program. Occupational therapy interventions for health and wellness vary in each setting: School Occupational therapy practitioners target school-wide advocacy for health and wellness, including: bullying prevention, backpack awareness, recess promotion, school lunches, and PE inclusion. They also work extensively with students with learning disabilities, such as those on the autism spectrum. A study conducted in Switzerland showed that a large majority of occupational therapists collaborate with schools, half of them providing direct services within mainstream school settings. The results also show that services were mainly provided to children with medical diagnoses, focusing on the school environment rather than the child's disability. Outpatient Occupational therapy practitioners conduct 1:1 treatment sessions and group interventions to address: leisure, health literacy and education, modified physical activity, stress/anger management, healthy meal preparation, and medication management. Acute care Occupational therapy practitioners in acute care assess whether a patient has the cognitive, emotional and physical ability, as well as the social supports, needed to live independently and care for themselves after discharge from the hospital. Occupational therapists are uniquely positioned to support patients in acute care as they focus on both clinical and social determinants of health. Services delivered by occupational therapists in acute care include: Direct rehabilitation interventions, individually or in group settings, to address physical, emotional and cognitive skills that are required for the patient to perform self-care and other important activities.
Caregiver training to assist patients after discharge. Recommendations for adaptive equipment for increased safety and independence with activities of daily living (e.g. aids for getting dressed, shower chairs for bathing, and medication organizers for self-administering medications). They also perform home safety assessments to suggest modifications for improved safety and function after discharge. Occupational therapists use a variety of models, including the Model of Human Occupation, the Person-Environment-Occupation model, and the Canadian Occupational Performance Model, to adopt a client-centered approach to discharge planning. Hospital spending on occupational therapy services in acute care was found to be the single most significant spending category in reducing the risk of readmission to the hospital for heart failure, pneumonia, and acute myocardial infarction. Community-based Occupational therapy practitioners develop and implement community-wide programs to assist in the prevention of diseases and encourage healthy lifestyles by: conducting education classes for prevention, facilitating gardening, offering ergonomic assessments, and offering pleasurable leisure and physical activity programs. Mental health Occupational therapy's foundation in mental health is deeply rooted in the moral treatment movement, which sought to replace the harsh treatment of mental disorders with the establishment of healthy routines and engagement in meaningful activities. This movement significantly influenced the development of occupational therapy, particularly through the contributions of early 20th-century practitioners and theorists like Adolf Meyer, who emphasized a holistic approach to mental health care (Christiansen & Haertl, 2014). According to the American Occupational Therapy Association (AOTA), occupational therapy is based on the principle that "active engagement in occupation promotes, facilitates, supports, and maintains health and participation" (AOTA, 2017). Occupations refer to the activities individuals use to structure their time and provide meaning. The primary goals of occupational therapy include promoting physical and mental health and well-being and establishing, restoring, maintaining, and improving function and quality of life for individuals at risk of or affected by physical or mental health disorders (AOTA, 2017). Education and Professional Qualifications Occupational therapists require a master's degree or clinical doctorate, while occupational therapy assistants need at least an associate's degree. Their education encompasses extensive mental health-related topics, including biological, physical, social, and behavioral sciences, and supervised clinical experiences culminating in full-time internships. Both must pass national examinations and meet state licensure requirements. Occupational therapists apply mental and physical health knowledge, focusing on participation and occupation, using performance-based assessments to understand the relationship between occupational participation and well-being. Their education covers various aspects of mental health, including neurophysiological changes, human development, historical and contemporary perspectives on mental health, and current diagnostic criteria. This comprehensive training prepares occupational therapy practitioners to address the complex interplay of client variables, activity demands, and environmental factors in promoting health and managing health challenges (Bazyk & Downing, 2017).
Occupational therapy role in mental health practice Occupational therapy practitioners play a critical role in mental health by using therapeutic activities to promote mental health and support full participation in life for individuals at risk of or experiencing psychiatric, behavioral, and substance use disorders. They work across the lifespan and in various settings, including homes, schools, workplaces, community environments, hospitals, outpatient clinics, and residential facilities (AOTA,2017). Occupational therapists and occupational therapy assistants assume diverse roles, such as case managers, care coordinators, group facilitators, community mental health providers, consultants, program developers, and advocates. Their interventions aim to facilitate engagement in meaningful occupations, enhance role performance, and improve overall well-being. This involves analyzing, adapting, and modifying tasks and environments to support clients' goals and optimal engagement in daily activities (AOTA, 2017). Occupational therapy practitioners utilize clinical reasoning, informed by various theoretical perspectives and evidence-based approaches, to guide evaluation and intervention. They are skilled in analyzing the complex interplay among client variables, activity demands, and the environments where participation occurs. For individuals experiencing any mental health issues, his or her ability to participate in occupations actively may be hindered. For example, an individual diagnosed with depression or anxiety may experience interruptions in sleep, difficulty completing self-care tasks, decreased motivation to participate in leisure activities, decreased concentration for school or job-related work, and avoidance of social interactions. Occupational therapy utilizes the public health approach to mental health (WHO, 2001) which emphasizes the promotion of mental health as well as the prevention of, and intervention for, mental illness. This model highlights the distinct value of occupational therapists in mental health promotion, prevention, and intensive interventions across the lifespan (Miles et al., 2010). Below are the three major levels of service: Tier 3: intensive interventions Intensive interventions are provided for individuals with identified mental, emotional, or behavioral disorders that limit daily functioning, interpersonal relationships, feelings of emotional well-being, and the ability to cope with challenges in daily life. Occupational therapy practitioners are committed to the recovery model which focuses on enabling persons with mental health challenges through a client-centered process to live a meaningful life in the community and reach their potential (Champagne & Gray, 2011). The focus of intensive interventions (direct–individual or group, consultation) is engagement in occupation to foster recovery or "reclaiming mental health" resulting in optimal levels of community participation, daily functioning, and quality of life; functional assessment and intervention (skills training, accommodations, compensatory strategies) (Brown, 2012); identification and implementation of healthy habits, rituals, and routines to support wellness. 
Tier 2: targeted services Targeted services are designed to prevent mental health problems in persons who are at risk of developing mental health challenges, such as those who have emotional experiences (e.g., trauma, abuse), situational stressors (e.g., physical disability, bullying, social isolation, obesity) or genetic factors (e.g., family history of mental illness). Occupational therapy practitioners are committed to early identification of and intervention for mental health challenges in all settings. The focus of targeted services (small groups, consultation, accommodations, education) is engagement in occupations to promote mental health and diminish early symptoms; small, therapeutic groups (Olson, 2011); environmental modifications to enhance participation (e.g., create Sensory friendly classrooms, home, or work environments) Tier 1: universal services Universal services are provided to all individuals with or without mental health or behavioral problems, including those with disabilities and illnesses (Barry & Jenkins, 2007). Occupational therapy services focus on mental health promotion and prevention for all: encouraging participation in health-promoting occupations (e.g., enjoyable activities, healthy eating, exercise, adequate sleep); fostering self-regulation and coping strategies (e.g., mindfulness, yoga); promoting mental health literacy (e.g., knowing how to take care of one's mental health and what to do when experiencing symptoms associated with ill mental health). Occupational therapy practitioners develop universal programs and embed strategies to promote mental health and well-being in a variety of settings, from schools to the workplace. The focus of universal services (individual, group, school-wide, employee/organizational level) is universal programs to help all individuals successfully participate in occupations that promote positive mental health (Bazyk, 2011); educational and coaching strategies with a wide range of relevant stakeholders focusing on mental health promotion and prevention; the development of coping strategies and resilience; environmental modifications and supports to foster participation in health-promoting occupations. Productive aging Occupational therapists work with older adults to maintain independence, participate in meaningful activities, and live fulfilling lives. Some examples of areas that occupational therapists address with older adults are driving, aging in place, low vision, and dementia or Alzheimer's disease (AD). When addressing driving, driver evaluations are administered to determine if drivers are safe behind the wheel. To enable independence of older adults at home, occupational therapists perform falls risk assessments, assess clients functioning in their homes, and recommend specific home modifications. When addressing low vision, occupational therapists modify tasks and the environment. While working with individuals with AD, occupational therapists focus on maintaining quality of life, ensuring safety, and promoting independence. Geriatrics/productive aging Occupational therapists address all aspects of aging from health promotion to treatment of various disease processes. The goal of occupational therapy for older adults is to ensure that older adults can maintain independence and reduce health care costs associated with hospitalization and institutionalization. In the community, occupational therapists can assess an older adults ability to drive and if they are safe to do so. 
If it is found that an individual is not safe to drive, the occupational therapist can assist with finding alternate transit options. Occupational therapists also work with older adults in their homes as part of home care. In the home, an occupational therapist can work on such things as fall prevention, maximizing independence with activities of daily living, ensuring safety, and enabling the person to stay in the home for as long as they want. An occupational therapist can also recommend home modifications to ensure safety in the home. Many older adults have chronic conditions such as diabetes, arthritis, and cardiopulmonary conditions. Occupational therapists can help manage these conditions by offering education on energy conservation strategies or coping strategies. Not only do occupational therapists work with older adults in their homes, they also work with older adults in hospitals, nursing homes and post-acute rehabilitation. In nursing homes, the role of the occupational therapist is to work with clients and caregivers on education for safe care, modifying the environment, positioning needs and enhancing IADL skills, to name a few. In post-acute rehabilitation, occupational therapists work with clients to get them back home and to their prior level of function after a hospitalization for an illness or accident. Occupational therapists also play a unique role for those with dementia. The therapist may assist with modifying the environment to ensure safety as the disease progresses along with caregiver education to prevent burnout. Occupational therapists also play a role in palliative and hospice care. The goal at this stage of life is to ensure that the roles and occupations that the individual finds meaningful continue to be meaningful. If the person is no longer able to perform these activities, the occupational therapist can offer new ways to complete these tasks while taking into consideration the environment along with psychosocial and physical needs. Not only do occupational therapists work with older adults in traditional settings, they also work in senior centers and assisted living facilities (ALFs). Visual impairment Visual impairment is one of the top 10 disabilities among American adults. Occupational therapists work with other professions, such as optometrists, ophthalmologists, and certified low vision therapists, to maximize the independence of persons with a visual impairment by using their remaining vision as efficiently as possible. AOTA's promotional goal of "Living Life to Its Fullest" speaks to who people are and learning about what they want to do, particularly when promoting participation in meaningful activities, regardless of a visual impairment. Populations that may benefit from occupational therapy include older adults, persons with traumatic brain injury, adults with potential to return to driving, and children with visual impairments. Visual impairments addressed by occupational therapists may be characterized as one of two types: low vision or a neurological visual impairment. An example of a neurological impairment is a cortical visual impairment (CVI), which is defined as "...abnormal or inefficient vision resulting from a problem or disorder affecting the parts of the brain that provide sight". The following section will discuss the role of occupational therapy when working with the visually impaired. Occupational therapy for older adults with low vision includes task analysis, environmental evaluation, and modification of tasks or the environment as needed.
Many occupational therapy practitioners work closely with optometrists and ophthalmologists to address visual deficits in acuity, visual field, and eye movement in people with traumatic brain injury, including providing education on compensatory strategies to complete daily tasks safely and efficiently. Adults with a stable visual impairment may benefit from occupational therapy for the provision of a driving assessment and an evaluation of the potential to return to driving. Lastly, occupational therapy practitioners enable children with visual impairments to complete self-care tasks and participate in classroom activities using compensatory strategies. Adult rehabilitation Occupational therapists address the need for rehabilitation following an injury or impairment. When planning treatment, occupational therapists address the physical, cognitive, psychosocial, and environmental needs involved in adult populations across a variety of settings. Occupational therapy in adult rehabilitation may take a variety of forms: Working with adults with autism at day rehabilitation programs to promote successful relationships and community participation through instruction on social skills Increasing the quality of life for an individual with cancer by engaging them in occupations that are meaningful, providing anxiety and stress reduction methods, and suggesting fatigue management strategies Coaching individuals with hand amputations in how to put on and take off a myoelectrically controlled limb, as well as training for functional use of the limb Pressure sore prevention for those with sensation loss such as in spinal cord injuries. Using and implementing new technology such as speech-to-text software and Nintendo Wii video games Communicating via telehealth methods as a service delivery model for clients who live in rural areas Working with adults who have had a stroke to regain their activities of daily living Assistive technology Occupational therapy practitioners, or occupational therapists (OTs), are uniquely poised to educate, recommend, and promote the use of assistive technology to improve the quality of life for their clients. OTs are able to understand the unique needs of the individual with regard to occupational performance and have a strong background in activity analysis to focus on helping clients achieve goals. Thus, the use of varied and diverse assistive technology is strongly supported within occupational therapy practice models. Travel occupational therapy Because of the rising need for occupational therapy practitioners in the U.S., many facilities are opting for travel occupational therapy practitioners—who are willing to travel, often out of state, to work temporarily in a facility. Assignments can range from 8 weeks to 9 months, but typically last 13–26 weeks. Travel therapists work in many different settings, but the highest need for therapists is in home health and skilled nursing facility settings. There are no further educational requirements needed to be a travel occupational therapy practitioner; however, there may be different state licensure guidelines and practice acts that must be followed. According to Zip Recruiter, the national average salary for a full-time travel therapist is $86,475, with a range between $62,500 and $100,000 across the United States. Most commonly (43%), travel occupational therapists enter the industry between the ages of 21–30.
Occupational justice The practice area of occupational justice relates to the "benefits, privileges and harms associated with participation in occupations" and the effects related to access or denial of opportunities to participate in occupations. This theory brings attention to the relationship between occupations, health, well-being, and quality of life. Occupational justice can be approached individually and collectively. The individual path includes disease, disability, and functional restrictions. The collective way consists of public health, gender and sexual identity, social inclusion, migration, and environment. The skills of occupational therapy practitioners enable them to serve as advocates for systemic change, impacting institutions, policy, individuals, communities, and entire populations. Examples of populations that experience occupational injustice include refugees, prisoners, homeless persons, survivors of natural disasters, individuals at the end of their life, people with disabilities, elderly living in residential homes, individuals experiencing poverty, children, immigrants, and LGBTQI+ individuals. For example, the role of an occupational therapist working to promote occupational justice may include: Analyzing tasks and modifying activities and environments to minimize barriers to participation in meaningful activities of daily living. Addressing physical and mental aspects that may hinder a person's functional ability. Providing intervention that is relevant to the client, family, and social context. Contributing to global health by advocating for individuals with disabilities to participate in meaningful activities on a global level. Occupational therapists are involved with the World Health Organization (WHO), non-governmental organizations, community groups, and policymaking to influence the health and well-being of individuals with disabilities worldwide. Occupational therapy practitioners' role in occupational justice is not only to align with perceptions of procedural and social justice but to advocate for the inherent need for meaningful occupation and how it promotes a just society, well-being, and quality of life among people relevant to their context. Clinicians are encouraged to consider occupational justice in their everyday practice in order to help people participate in the tasks that they want and need to do. Occupational injustice In contrast, occupational injustice relates to conditions wherein people are deprived of, excluded from, or denied opportunities that are meaningful to them. Types of occupational injustices and examples within the OT practice include: Occupational deprivation: The exclusion from meaningful occupations due to external factors that are beyond the person's control. For example, a person with difficulties with functional mobility may find it challenging to reintegrate into the community due to transportation barriers. OTs can help in raising awareness and bringing communities together to reduce occupational deprivation. OTs can recommend the removal of environmental barriers to facilitate occupation, whilst designing programs that enable engagement. Advocacy by providing information to policymakers can prevent possible unintended occupational deprivation and increase social cohesion and inclusion. Occupational apartheid: The exclusion of a person from chosen occupations due to personal characteristics such as age, gender, race, nationality, or socioeconomic status.
An example can be seen in children with developmental disabilities from low socioeconomic backgrounds whose families would opt out of therapy due to financial constraints. OTs providing interventions within a segregated population must focus on increasing occupational engagement through large-scale environmental modification and occupational exploration. OTs can address occupational engagement through group and individual skill-building opportunities, as well as community-based experiences that explore free and local resources. Occupational marginalization: Relates to how implicit norms of behavior or societal expectations prevent a person from engaging in a chosen occupation. As an example, a child with physical impairments may only be offered table-top leisure activities instead of sports as an extracurricular activity due to the functional limitations caused by their physical impairments. OTs can design, develop, and/or provide programs that mitigate the negative impacts of occupational marginalization and enhance optimal levels of performance and wellbeing that enable participation. Occupational imbalance: The limited participation in a meaningful occupation brought about by another role in a different occupation. This can be seen in the situation of a caregiver of a person with a disability who also has to fulfill other roles such as being a parent to other children, a student, or a worker. OTs can advocate for fostering supportive environments for participation in occupations that promote individuals' well-being and for building healthy public policy. Occupational alienation: The imposition of an occupation that does not hold meaning for that person. In the OT profession, this manifests in the provision of rote activities that do not relate to the client's goals or interests. OTs can develop individualized activities tailored to the interests of the individual to maximize their potential. OTs can design, develop and promote programs that can be inclusive and provide a variety of choices that the individual can engage in. Within occupational therapy practice, injustice may ensue in situations wherein professional dominance, standardized treatments, laws and political conditions create a negative impact on the occupational engagement of clients. Awareness of these injustices will enable therapists to reflect on their own practice and consider ways of approaching their clients' problems while promoting occupational justice. Community-based therapy As occupational therapy (OT) has grown and developed, community-based practice has blossomed from an emerging area of practice to a fundamental part of occupational therapy practice (Scaffa & Reitz, 2013). Community-based practice allows for OTs to work with clients and other stakeholders such as families, schools, employers, agencies, service providers, stores, day treatment and day care and others who may influence the degree of success the client will have in participating. It also allows the therapist to see what is actually happening in the context and design interventions relevant to what might support the client in participating and what is impeding them from participating. Community-based practice crosses all of the categories within which OTs practice, from physical to cognitive, mental health to spiritual; all types of clients may be seen in community-based settings.
The role of the OT also may vary, from advocate to consultant, direct care provider to program designer, adjunctive services to therapeutic leader. Nature-based therapy Nature-based interventions and outdoor activities may be incorporated into occupational therapy practice as they can provide therapeutic benefits in various ways. Examples include therapeutic gardening, animal-assisted therapy (AAT), and adventure therapy. For instance, parents reported improvement in the emotional regulation and social engagement of their children with autism spectrum disorder (ASD) in a study of parental perceptions regarding the outcomes of AAT conducted with trained dogs. They also observed reductions in problematic behaviors. A source cited in the study found similar results with AAT employing horses and llamas. Gardening in a group setting may serve as a complementary intervention in stroke rehabilitation; in addition to being mentally restful and conducive to social connection, it helps patients master skills and can remind them of experiences from their past. Royal Rehab's Productive Garden Project in Australia, managed by a horticultural therapist, allows patients and practitioners to participate in meaningful activity outside the usual healthcare settings. Thus, tending a garden helps facilitate experiential activities, perhaps attaining a better balance between clinical and real-life pursuits during rehabilitation, in lieu of mainly relying on clinical interventions. For adults with acquired brain injury, nature-based therapy has been found to improve motor abilities, cognitive function, and general quality of life. Contributing to a theoretical understanding of such successes in nature-based approaches are: nature's positive impact on problem solving and the refocusing of attention; an innate human connection with, and positive response to, the natural world; an increased sense of well-being when in contact with nature; and the emotional, nonverbal, and cognitive aspects of human-environment interaction. Education Worldwide, there is a range of qualifications required to practice as an occupational therapist or occupational therapy assistant. Depending on the country and expected level of practice, degree options include associate degree, Bachelor's degree, entry-level master's degree, post-professional master's degree, entry-level Doctorate (OTD), post-professional Doctorate (DrOT or OTD), Doctor of Clinical Science in OT (CScD), Doctor of Philosophy in Occupational Therapy (PhD), and combined OTD/PhD degrees. Both occupational therapist and occupational therapy assistant roles exist internationally. Currently in the United States, dual points of entry exist for both OT and OTA programs. For OT, that is entry-level Master's or entry-level Doctorate. For OTA, that is associate degree or bachelor's degree. The World Federation of Occupational Therapists (WFOT) has minimum standards for the education of OTs, which was revised in 2016. All of the educational programs around the world need to meet these minimum standards. These standards are subsumed by and can be supplemented with academic standards set by a country's national accreditation organization. As part of the minimum standards, all programs must have a curriculum that includes practice placements (fieldwork). Examples of fieldwork settings include: acute care, inpatient hospital, outpatient hospital, skilled nursing facilities, schools, group homes, early intervention, home health, and community settings. 
The profession of occupational therapy is based on a wide theoretical and evidence based background. The OT curriculum focuses on the theoretical basis of occupation through multiple facets of science, including occupational science, anatomy, physiology, biomechanics, and neurology. In addition, this scientific foundation is integrated with knowledge from psychology, sociology and more. In the United States, Canada, and other countries around the world, there is a licensure requirement. In order to obtain an OT or OTA license, one must graduate from an accredited program, complete fieldwork requirements, and pass a national certification examination. Philosophical underpinnings The philosophy of occupational therapy has evolved over the history of the profession. The philosophy articulated by the founders owed much to the ideals of romanticism, pragmatism and humanism, which are collectively considered the fundamental ideologies of the past century. One of the most widely cited early papers about the philosophy of occupational therapy was presented by Adolf Meyer, a psychiatrist who had emigrated to the United States from Switzerland in the late 19th century and who was invited to present his views to a gathering of the new Occupational Therapy Society in 1922. At the time, Dr. Meyer was one of the leading psychiatrists in the United States and head of the new psychiatry department and Phipps Clinic at Johns Hopkins University in Baltimore, Maryland. William Rush Dunton, a supporter of the National Society for the Promotion of Occupational Therapy, now the American Occupational Therapy Association, sought to promote the ideas that occupation is a basic human need, and that occupation is therapeutic. From his statements came some of the basic assumptions of occupational therapy, which include: Occupation has a positive effect on health and well-being. Occupation creates structure and organizes time. Occupation brings meaning to life, culturally and personally. Occupations are individual. People value different occupations. These assumptions have been developed over time and are the basis of the values that underpin the Codes of Ethics issued by the national associations. The relevance of occupation to health and well-being remains the central theme. In the 1950s, criticism from medicine and the multitude of disabled World War II veterans resulted in the emergence of a more reductionistic philosophy. While this approach led to developments in technical knowledge about occupational performance, clinicians became increasingly disillusioned and re-considered these beliefs. As a result, client centeredness and occupation have re-emerged as dominant themes in the profession. Over the past century, the underlying philosophy of occupational therapy has evolved from being a diversion from illness, to treatment, to enablement through meaningful occupation. Three commonly mentioned philosophical precepts of occupational therapy are that occupation is necessary for health, that its theories are based on holism and that its central components are people, their occupations (activities), and the environments in which those activities take place. However, there have been some dissenting voices. Mocellin, in particular, advocated abandoning the notion of health through occupation as he proclaimed it obsolete in the modern world. As well, he questioned the appropriateness of advocating holism when practice rarely supports it. 
Some values formulated by the American Occupational Therapy Association have been critiqued as being therapist-centric and as not reflecting the modern reality of multicultural practice. In recent times occupational therapy practitioners have challenged themselves to think more broadly about the potential scope of the profession, and expanded it to include working with groups experiencing occupational injustice stemming from sources other than disability. Examples of new and emerging practice areas would include therapists working with refugees, children experiencing obesity, and people experiencing homelessness. Theoretical frameworks A distinguishing facet of occupational therapy is that therapists often espouse the use of theoretical frameworks to frame their practice. Many have argued that the use of theory complicates everyday clinical care and is not necessary to provide patient-driven care. Note that terminology differs between scholars. An incomplete list of theoretical bases for framing a human and their occupations includes the following: Generic models Generic models are the overarching title given to a collation of compatible knowledge, research and theories that form conceptual practice. More generally they are defined as "those aspects which influence our perceptions, decisions and practice". The Person Environment Occupation Performance model (PEOP) was originally published in 1991 (Charles Christiansen & M. Carolyn Baum) and describes an individual's performance based on four elements: environment, person, performance and occupation. The model focuses on the interplay of these components and how this interaction works to inhibit or promote successful engagement in occupation. Occupation-focused practice models Occupational Therapy Intervention Process Model (OTIPM) (Anne Fisher and others) Occupational Performance Process Model (OPPM) Model of Human Occupation (MOHO) (Gary Kielhofner and others) MOHO was first published in 1980. It explains how people select, organise and undertake occupations within their environment. The model is supported with evidence generated over thirty years and has been successfully applied throughout the world. Canadian Model of Occupational Performance and Engagement (CMOP-E) This framework originated in 1997 with the Canadian Association of Occupational Therapists (CAOT) as the Canadian Model of Occupational Performance (CMOP). It was expanded in 2007 by Polatajko, Townsend and Craik to add engagement. This framework upholds the view that three components (the person, environment and occupation) are related. Engagement was added to encompass occupational performance. A visual model is depicted with the person located at the center of the model as a triangle. The triangle's three points represent cognitive, affective, and physical components with a spiritual center. The person triangle is surrounded by an outer ring symbolizing the context of environment with an inner ring symbolizing the context of occupation. Occupational Performances Model – Australia (OPM-A) (Chris Chapparo & Judy Ranka) The OPM(A) was conceptualized in 1986 with its current form launched in 2006. The OPM(A) illustrates the complexity of occupational performance, the scope of occupational therapy practice, and provides a framework for occupational therapy education. Kawa (River) Model (Michael Iwama) Biopsychosocial models Engel's biopsychosocial model takes into account how disease and illness can be impacted by social, environmental, and psychological factors as well as body functions.
The biopsychosocial model is unique in that it takes the client's subjective experience and the client-provider relationship as factors in wellness. This model also factors in cultural diversity, as many countries have different societal norms and beliefs. It is a multifactorial and multi-dimensional model for understanding the cause of disease, and it also embodies a person-centered approach in which the provider takes a more participatory and reflective role. Other models which incorporate biology (body and brain), psychology (mind), and social (relational, attachment) elements influencing human health include interpersonal neurobiology (IPNB), polyvagal theory (PVT), and the dynamic-maturational model of attachment and adaptation (DMM). The latter two in particular provide detail about the source, mechanism and function of somatic symptoms. Kasia Kozlowska describes how she uses these models to better connect with clients, to understand complex human illness, and how she includes occupational therapists as part of a team to address functional somatic symptoms. Her research indicates that children with functional neurological disorders (FND) use higher, or more challenging, DMM self-protective attachment strategies to cope with their family environments, and examines how those strategies affect functional somatic symptoms. Pamela Meredith and colleagues have been exploring the relationship between the attachment system and psychological and neurobiological systems, with implications for how occupational therapists can improve their approach and techniques. They have found correlations between attachment and adult sensory processing, distress, and pain perception. In a literature review, Meredith identified a number of ways that occupational therapists can effectively apply an attachment perspective, sometimes uniquely. Frames of reference Frames of reference are an additional knowledge base for the occupational therapist to develop their treatment or assessment of a patient or client group. Though there are conceptual models (listed above) that allow the therapist to conceptualise the occupational roles of the patient, it is often important to use further frames of reference to embed clinical reasoning. Therefore, many occupational therapists will use additional frames of reference to both assess and then develop therapy goals for their patients or service users. Biomechanical frame of reference The biomechanical frame of reference is primarily concerned with motion during occupation. It is used with individuals who experience limitations in movement, inadequate muscle strength or loss of endurance in occupations. The frame of reference was not originally compiled by occupational therapists, and therapists should translate it to the occupational therapy perspective, to avoid the risk of movement or exercise becoming the main focus. Rehabilitative (compensatory) Neurofunctional (Gordon Muir Giles and Clark-Wilson) Dynamic systems theory Client-centered frame of reference This frame of reference is developed from the work of Carl Rogers. It views the client as the center of all therapeutic activity, and the client's needs and goals direct the delivery of the occupational therapy process. Cognitive-behavioural frame of reference Ecology of human performance model The recovery model Sensory integration The sensory integration framework is commonly implemented in clinical, community, and school-based occupational therapy practice.
It is most frequently used with children with developmental delays and developmental disabilities such as autism spectrum disorder, sensory processing disorder and dyspraxia. Core features of sensory integration in treatment include providing opportunities for the client to experience and integrate feedback using multiple sensory systems, providing therapeutic challenges to the client's skills, integrating the client's interests into therapy, organizing the environment to support the client's engagement, facilitating a physically safe and emotionally supportive environment, modifying activities to support the client's strengths and weaknesses, and creating sensory opportunities within the context of play to develop intrinsic motivation. While sensory integration is traditionally implemented in pediatric practice, there is emerging evidence for the benefits of sensory integration strategies for adults. Global occupational therapy The World Federation of Occupational Therapists is an international voice of the profession and is a membership network of occupational therapists worldwide. WFOT supports the international practice of occupational therapy through collaboration across countries. WFOT currently includes over 100 member country organizations, 550,000 occupational therapy practitioners, and 900 approved educational programs. The profession celebrates World Occupational Therapy Day on the 27th of October annually to increase visibility and awareness of the profession, promoting the profession's development work at local, national and international levels. WFOT has been in close collaboration with the World Health Organization (WHO) since 1959, working together in programmes that aim to improve world health. WFOT supports the vision for healthy people, in alignment with the United Nations' 17 Sustainable Development Goals, which focus on "ending poverty, fighting inequality and injustice, tackling climate change and promoting health". Occupational therapy is a major player in enabling individuals and communities to engage in "chosen and necessary occupations" and in "the creation of more meaningful lives". Occupational therapy is practiced around the world and can be translated in practice to many different cultures and environments. The construct of occupation is shared throughout the profession regardless of country, culture and context. Occupation and active participation in occupation are now seen as a human right and are asserted as a strong influence in health and well-being. As the profession grows, more and more people are travelling across countries to work as occupational therapists in search of better work or opportunities. In this context, every occupational therapist is required to adapt to a new culture foreign to their own. Understanding cultures and their communities is crucial to the occupational therapy ethos. Effective occupational therapy practice includes acknowledging the values and social perspectives of each client and their families. Harnessing culture and understanding what is important to the client is often a faster way towards independence.
Biology and health sciences
Treatments
Health
334071
https://en.wikipedia.org/wiki/Paddle%20steamer
Paddle steamer
A paddle steamer is a steamship or steamboat powered by a steam engine driving paddle wheels to propel the craft through the water. In antiquity, paddle wheelers followed the development of poles, oars and sails, with the first paddle wheelers being driven by animals or humans. In the early 19th century, paddle wheels were the predominant means of propulsion for steam-powered boats. In the late 19th century, paddle propulsion was largely superseded by the screw propeller and other marine propulsion systems that have a higher efficiency, especially in rough or open water. Paddle wheels continue to be used by small, pedal-powered paddle boats and by some ships that operate tourist voyages. The latter are often powered by diesel engines. Paddle wheels The paddle wheel is a large steel framework wheel. The outer edge of the wheel is fitted with numerous, regularly spaced paddle blades (called floats or buckets). The bottom quarter or so of the wheel travels under water. An engine rotates the paddle wheel in the water to produce thrust, forward or backward as required. More advanced paddle-wheel designs feature "feathering" methods that keep each paddle blade closer to vertical while in the water to increase efficiency. The upper part of a paddle wheel is normally enclosed in a paddlebox to minimise splashing. Types of paddle steamers The three types of paddle wheel steamer are sidewheeler, with one paddlewheel on each side; sternwheeler, with a single paddlewheel at the stern; and (rarely) inboard, with the paddlewheel mounted in a recess amidships. Sidewheeler The earliest paddle steamers were sidewheelers, and the type was by far the dominant mode of marine steam propulsion, both for steamships and steamboats, until the increasing adoption of screw propulsion from the 1850s. Though the side wheels and enclosing sponsons make them wider than sternwheelers, they may be more maneuverable, since they can sometimes move the paddles at different speeds, and even in opposite directions. This extra maneuverability makes side-wheelers popular on the narrower, winding rivers of the Murray–Darling system in Australia, where a number still operate. European sidewheelers connect the wheels with solid drive shafts that limit maneuverability and give the craft a wide turning radius. Some were built with paddle clutches that disengage one or both paddles so they can turn independently. However, wisdom gained from early experience with sidewheelers held that they should be operated with clutches out, that is, as solid-shaft vessels. Crews noticed that as ships approached the dock, passengers moved to the side of the ship ready to disembark. The shift in weight, added to independent movements of the paddles, could lead to imbalance and potential capsizing. Paddle tugs were frequently operated with clutches in, as the lack of passengers aboard meant that independent paddle movement could be used safely and the added maneuverability exploited to the full. Sternwheeler Although the first sternwheelers were invented in Europe, they saw the most service in North America, especially on the Mississippi River. An early stern-wheeler was built at Brownsville, Pennsylvania, in 1814 as an improvement over the less efficient side-wheelers. The second stern-wheeler built, Washington of 1816, had two decks and served as the prototype for all subsequent steamboats of the Mississippi, including those made famous in Mark Twain's book Life on the Mississippi.
Inboard paddlewheeler Recessed or inboard paddlewheel boats were designed to ply narrow and snag-infested backwaters. By recessing the wheel within the hull it was protected somewhat from damage. It was enclosed and could be spun at a high speed to provide acute maneuverability. Most were built with inclined steam cylinders mounted on both sides of the paddleshaft and timed 90 degrees apart like a locomotive, making them instantly reversing. Feathering paddle wheel In a simple paddle wheel, where the paddles are fixed around the periphery, power is lost due to churning of the water as the paddles enter and leave the water surface. Ideally, the paddles should remain vertical while under water. This ideal can be approximated by use of levers and linkages connected to a fixed eccentric. The eccentric is fixed slightly forward of the main wheel centre. It is coupled to each paddle by a rod and lever. The geometry is designed such that the paddles are kept almost vertical for the short duration that they are in the water. History Western world The use of a paddle wheel in navigation appears for the first time in the mechanical treatise of the Roman engineer Vitruvius (De architectura, X 9.5–7), where he describes multigeared paddle wheels working as a ship odometer. The first mention of paddle wheels as a means of propulsion comes from the fourth- or fifth-century military treatise De rebus bellicis (chapter XVII), where the anonymous Roman author describes an ox-driven paddle-wheel warship. Italian physician Guido da Vigevano (circa 1280–1349), planning for a new crusade, made illustrations for a paddle boat that was propelled by manually turned compound cranks. One of the drawings of the Anonymous Author of the Hussite Wars shows a boat with a pair of paddlewheels at each end turned by men operating compound cranks. The concept was improved by the Italian Roberto Valturio in 1463, who devised a boat with five sets, where the parallel cranks are all joined to a single power source by one connecting rod, an idea adopted by his compatriot Francesco di Giorgio. In 1539, Spanish engineer Blasco de Garay received the support of Charles V to build ships equipped with manually-powered side paddle wheels. From 1539 to 1543, Garay built and launched five ships, the most famous being the modified Portuguese carrack La Trinidad, which surpassed a nearby galley in speed and maneuverability on June 17, 1543, in the harbor of Barcelona. The project, however, was discontinued. 19th-century writer Tomás González claimed to have found proof that at least some of these vessels were steam-powered, but this theory was discredited by the Spanish authorities. It has been proposed that González mistook a steam-powered desalinator created by Garay for a steam boiler. In 1705, Denis Papin constructed a ship powered by hand-cranked paddles. An apocryphal story originating in 1851 with Louis Figuier held that this ship was steam-powered rather than hand-powered and that it was therefore the first steam-powered vehicle of any kind. The myth was refuted as early as 1880, though it still finds credulous expression in some contemporary scholarly work. In 1787, Patrick Miller of Dalswinton invented a double-hulled boat that was propelled on the Firth of Forth by men working a capstan that drove paddles on each side. One of the first functioning steamships, Palmipède, which was also the first paddle steamer, was built in France in 1774 by Marquis Claude de Jouffroy and his colleagues.
The steamer with rotating paddles sailed on the Doubs River in June and July 1776. In 1783, a new paddle steamer by de Jouffroy, Pyroscaphe, successfully steamed up the river Saône for 15 minutes before the engine failed. Bureaucracy and the French Revolution thwarted further progress by de Jouffroy. The next successful attempt at a paddle-driven steam ship was by Scottish engineer William Symington, who suggested steam power to Patrick Miller of Dalswinton. Experimental boats built in 1788 and 1789 worked successfully on Lochmaben Loch. In 1802, Symington built a barge-hauler, Charlotte Dundas, for the Forth and Clyde Canal Company. It successfully hauled two 70-ton barges in 6 hours against a strong headwind on test in 1802. Enthusiasm was high, but some directors of the company were concerned about the banks of the canal being damaged by the wash from a powered vessel, and no more were ordered. While Charlotte Dundas was the first commercial paddle steamer and steamboat, the first commercial success was possibly Robert Fulton's Clermont in New York, which went into commercial service in 1807 between New York City and Albany. Many other paddle-equipped river boats followed all around the world; the first in Europe, designed by Henry Bell, started a scheduled passenger service on the River Clyde in 1812. In 1812, the first U.S. Mississippi River paddle steamer began operating out of New Orleans. By 1814, Captain Henry Shreve had developed a "steamboat" suitable for local conditions. Landings in New Orleans went from 21 in 1814 to 191 in 1819, and over 1,200 in 1833. The first stern-wheeler was designed by Gerhard Moritz Roentgen from Rotterdam, and used between Antwerp and Ghent in 1827. Team boats, paddle boats driven by horses, were used for ferries in the United States from the 1820s to the 1850s, as they were economical and did not incur licensing costs imposed by the steam navigation monopoly. In the 1850s, they were replaced by steamboats. After the American Civil War, as the expanding railroads took many passengers, the traffic became primarily bulk cargoes. The largest, and one of the last, paddle steamers on the Mississippi was the sternwheeler Sprague. Built in 1901, she pushed coal and petroleum until 1948. In Europe from the 1820s, paddle steamers were used to take tourists from the rapidly expanding industrial cities on river cruises, or to the newly established seaside resorts, where pleasure piers were built to allow passengers to disembark regardless of the state of the tide. Later, these paddle steamers were fitted with luxurious saloons in an effort to compete with the facilities available on the railways. Notable examples are the Thames steamers which took passengers from London to Southend-on-Sea and Margate, Clyde steamers that connected Glasgow with the resort of Rothesay and the Köln-Düsseldorfer cruise steamers on the River Rhine. Paddle steamer services continued into the mid-20th century, when ownership of motor cars finally made them obsolete except for a few heritage examples. China The first mention of a paddle-wheel ship from China is in the History of the Southern Dynasties, compiled in the 7th century but describing the naval ships of the Liu Song dynasty (420–479) used by admiral Wang Zhen'e in his campaign against the Qiang in 418 AD. The ancient Chinese mathematician and astronomer Zu Chongzhi (429–500) had a paddle-wheel ship built on the Xinting River (south of Nanjing) known as the "thousand league boat".
When campaigning against Hou Jing in 552, the Liang dynasty (502–557) admiral Xu Shipu employed paddle-wheel boats called "water-wheel boats". At the siege of Liyang in 573, the admiral Huang Faqiu employed foot-treadle powered paddle-wheel boats. A successful paddle-wheel warship design was made in China by Prince Li Gao in 784 AD, during an imperial examination of the provinces by the Tang dynasty (618–907) emperor. The Chinese Song dynasty (960–1279) issued the construction of many paddle-wheel ships for its standing navy, and according to the British biochemist, historian, and sinologist Joseph Needham: "...between 1132 and 1183 (AD) a great number of treadmill-operated paddle-wheel craft, large and small, were built, including sternwheelers and ships with as many as 11 paddle-wheels a side." The standard Chinese term "wheel ship" was used by the Song period, whereas a litany of colorful terms were used to describe it beforehand. In the 12th century, the Song government used paddle-wheel ships en masse to defeat opposing armies of pirates armed with their own paddle-wheel ships. At the Battle of Caishi in 1161, paddle-wheelers were also used with great success against the Jin dynasty (1115–1234) navy. The Chinese used the paddle-wheel ship even during the First Opium War (1839–1842) and for transport around the Pearl River during the early 20th century. Seagoing paddle steamers The first seagoing trip of a paddle steamer was by the Albany in 1808. It steamed from the Hudson River along the coast to the Delaware River. This was purely for the purpose of moving a river-boat to a new market, but paddle-steamers began regular short coastal trips soon after. In 1816 Pierre Andriel, a French businessman, bought the paddle steamer Margery (later renamed Elise) in London and made an eventful London-Le Havre-Paris crossing, encountering heavy weather on the way. He later operated his ship as a river packet on the Seine, between Paris and Le Havre. In 1822 Charles Napier's Aaron Manby, the world's first iron ship, made the first direct steam crossing from London to Paris and the first seagoing voyage by an iron ship. The first paddle-steamer to make a long ocean voyage crossing the Atlantic Ocean was Savannah, built in 1819 expressly for this service. Savannah set out from the port of Savannah, Georgia for Liverpool on May 24, 1819, sighting Ireland after 23 days at sea. This was the first powered crossing of the Atlantic, although Savannah was built as a sailing ship with a steam auxiliary; she also carried a full rig of sail for when winds were favorable, being unable to complete the voyage under power alone. In 1838, Sirius, a fairly small steam packet built for the Cork to London route, became the first vessel to cross the Atlantic under sustained steam power, beating Isambard Kingdom Brunel's much larger Great Western by a day. Great Western, however, was actually built for the transatlantic trade, and so had sufficient coal for the passage; Sirius had to burn furniture and other items after running out of coal. Great Western's more successful crossing began the regular sailing of powered vessels across the Atlantic. The Beaver was the first coastal steamship to operate in the Pacific Northwest of North America. Paddle steamers helped open Japan to the Western World in the mid-19th century. The largest paddle-steamer ever built was Brunel's Great Eastern, but it also had screw propulsion and sail rigging; it weighed 32,000 tons.
In oceangoing service, paddle steamers became much less useful after the invention of the screw propeller, but they remained in use in coastal service and as river tugboats, thanks to their shallow draught and good maneuverability. The last crossing of the Atlantic by paddle steamer began on September 18, 1969, the first leg of a journey to conclude six months and nine days later. The steam paddle tug that made the voyage was never intended for oceangoing service, but was nevertheless steamed from Newcastle to San Francisco. As the voyage was intended to be completed under power, the tug was rigged as steam propelled with a sail auxiliary. The transatlantic stage of the voyage was completed exactly 150 years after the voyage of Savannah. As of 2022, the PS Waverley is the last seagoing passenger-carrying paddle steamer in the world. Paddle-driven steam warships Paddle frigates Beginning in the 1820s, the British Royal Navy began building paddle-driven steam frigates and steam sloops. By 1850 these had become obsolete due to the development of the propeller – which was more efficient and less vulnerable to cannon fire. One of the first screw-driven warships, HMS Rattler, demonstrated her superiority over paddle steamers during numerous trials, including one in 1845 where she pulled a paddle-driven sister ship backwards in a tug of war. However, paddle warships were used extensively by the Russian Navy during the Crimean War of 1853–1856, and by the United States Navy during the Mexican War of 1846–1848 and the American Civil War of 1861–1865. With the arrival of ironclad battleships from the late 1850s, the last remaining paddle frigates were decommissioned and sold into merchant-navy service by the 1870s. One of these became one of the first Boston steamers in 1867. Paddle minesweepers At the start of the First World War, the Royal Navy requisitioned more than fifty pleasure paddle steamers for use as auxiliary minesweepers. The large spaces on their decks intended for promenading passengers proved to be ideal for handling the minesweeping booms and cables, and the paddles allowed them to operate in coastal shallows and estuaries. These were so successful that a new class of paddle ships, the Racecourse-class minesweepers, were ordered and 32 of them were built before the end of the war. In the Second World War, some thirty pleasure paddle steamers were again requisitioned; an added advantage was that their wooden hulls did not activate the new magnetic mines. The paddle ships formed six minesweeping flotillas, based at ports around the British coast. Other paddle steamers were converted to anti-aircraft ships. More than twenty paddle steamers were used as emergency troop transports during the Dunkirk Evacuation in 1940, where they were able to get close inshore to embark troops directly from the beach. One example, which saved an estimated 7,000 men over the nine days of the evacuation, claimed to have shot down three German aircraft. Another paddle minesweeper was deliberately beached twice to allow soldiers to cross to other vessels using her as a jetty. The paddle steamers between them were estimated to have rescued 26,000 Allied troops during the operation, for the loss of six of them.
Technology
Naval transport
null
334351
https://en.wikipedia.org/wiki/Spirula
Spirula
Spirula spirula is a species of deep-water squid-like cephalopod mollusk. It is the only extant member of the genus Spirula, the family Spirulidae, and the order Spirulida. Because of the shape of its internal shell, it is commonly known as the ram's horn squid or the little post horn squid. Because the live animal has a light-emitting organ, it is also sometimes known as the tail-light squid. Live specimens of this cephalopod are very rarely seen because it is a deep-ocean dweller. The small internal shell of the species is, however, quite a familiar object to many beachcombers. The shell of Spirula is extremely light in weight, very buoyant, and surprisingly durable; it very commonly floats ashore onto tropical beaches (and sometimes even temperate beaches) all over the world. This seashell is known to shell collectors as the ram's horn shell or simply as Spirula. Taxonomy Swedish naturalist Carl Linnaeus described this species as Nautilus spirula in 1758 in his book Systema Naturae. In 1799, French naturalist Jean-Baptiste Lamarck described the genus Spirula and transferred this species to it, and Spirula spirula is the name still used today for the ram's horn squid. S. spirula is the only species in the monotypic genus Spirula. A morphometric study published in 2010 showed that shell characteristics of S. spirula vary with geography, but no subspecies or additional species were proposed. Description S. spirula has a squid-like body between 35 mm and 45 mm long. It is a decapod, with eight arms and two longer tentacles, all with suckers. The arms and tentacles can all be withdrawn completely into the mantle. The species lacks a radula (or, at most, has a vestigial radula). Shell The most distinctive feature of this species is its buoyancy organ, an internal, chambered, endogastrically coiled shell in the shape of an open planispiral (a flat spiral wherein the coils do not touch each other), consisting of two prismatic layers. The shell functions to osmotically control buoyancy. Unlike the nautilus, which exchanges air and liquid only in the three most adoral chambers (the remaining chambers always being gas-filled), Spirula can bring cameral fluid into all of its chambers. Another trait is that the shell is mineralized, a feature seen only in cuttlefish and the nautilus amongst extant species of cephalopods. The siphuncle is marginal, on the inner surface of the spiral. Behaviour S. spirula is capable of emitting a green light from a photophore located at the tip of its mantle, between the ear-shaped fins. This appears to be a counter-illumination strategy, as in situ observations have captured footage of animals in a vertical stance, with the photophore pointing downward and the head up. Habitat and distribution By day, Spirula lives in the deep oceans, reaching depths of 1,000 m. At night, it rises to 100–300 m. Its preferred temperature is around 10 °C, and it tends to live around oceanic islands, near the continental shelf. Most sources cite this species as tropical, and it is observed to be plentiful in the subtropical seas around the Canary Islands. Shells are regularly found along the western coasts of South Africa. In 2022, records of the species were also confirmed in the Arabian Sea. However, significant quantities of shells from dead Spirula are washed ashore even in temperate regions, such as the coasts of New Zealand. Because of the great buoyancy of the shells, these may have been carried long distances by ocean currents.
Much of the organism's life history has not been observed; for instance, they are thought to spawn in winter in deeper water, yet no spawnlings have been directly seen. They must occasionally venture into the upper 10 m of the sea, for they are sometimes found in albatross guts. The species was observed for the first time in its natural habitat in 2020, when an ROV of the Schmidt Ocean Institute recorded it in the depths near the northern Great Barrier Reef. Evolutionary relationships The order Spirulida also contains two extinct suborders: Groenlandibelina (including extinct families Groenlandibelidae and Adygeyidae), and Belopterina (including extinct families Belemnoseidae and Belopteridae). Spirula is likely the closest living relative of the extinct belemnites and aulacocerids. These three groups as a unit are closely related to the cuttlefish, as well as to the true squids.
Biology and health sciences
Cephalopods
Animals
334392
https://en.wikipedia.org/wiki/Horned%20lizard
Horned lizard
Phrynosoma, whose members are known as the horned lizards, horny toads, or horntoads, is a genus of North American lizards and the type genus of the family Phrynosomatidae. Their common names refer directly to their horns or to their flattened, rounded bodies and blunt snouts. The generic name Phrynosoma means "toad-bodied". In common with true toads (amphibians of the family Bufonidae), horned lizards tend to move sluggishly, often remain motionless, and rely on their remarkable camouflage to avoid detection by predators. They are adapted to arid or semiarid areas. The spines on the lizard's back and sides are modified reptile scales, which prevent water loss through the skin, whereas the horns on the head are true horns (i.e., they have a bony core). A urinary bladder is absent. Of the 21 species of horned lizards, 15 are native to the USA. The largest-bodied and most widely distributed of the American species is the Texas horned lizard. Defenses Horned lizards use a variety of means to avoid predation. Their coloration generally serves as camouflage. When threatened, their first defense is to remain motionless to avoid detection. If approached too closely, they generally run in short bursts and stop abruptly to confuse the predator's visual acuity. If this fails, they puff up their bodies to cause them to appear more horned and larger so that they are more difficult to swallow. At least eight species (P. asio, P. cornutum, P. coronatum, P. ditmarsi, P. hernandesi, P. orbiculare, P. solare, and P. taurus) are also able to squirt an aimed stream of blood from the corners of the eyes over a considerable distance. They do this by restricting the blood flow leaving the head, thereby increasing blood pressure and rupturing tiny vessels around the eyelids. The blood not only confuses predators but also tastes foul to canine and feline predators. It appears to have no effect against predatory birds. Only three closely related species (P. mcallii, P. modestum, and P. platyrhinos) are certainly known either to be unable to squirt blood or to do so only extremely rarely. While previous thought held that compounds were added to the blood from glands in the ocular sinus cavity, current research has shown that the chemical compounds that make up the defense are already in the circulating blood. It is possible that their diet of large quantities of venomous harvester ants could be a factor; however, the origin and structure of the chemicals responsible are still unknown. The blood-squirting mechanism increases survival after contact with canine predators; the trait may provide an evolutionary advantage. Ocular autohemorrhaging has also been documented in other lizards, which suggests blood-squirting could have evolved from a less extreme defense in the ancestral branch of the genus. Recent phylogenetic research supports this claim, so the species incapable of squirting blood apparently have lost the adaptation for reasons yet unstudied. To avoid being picked up by the head or neck, a horned lizard ducks or elevates its head and orients its cranial horns straight up, or back. If a predator tries to take it by the body, the lizard drives that side of its body down into the ground so the predator cannot easily get its lower jaw underneath. Population decline A University of Texas publication notes that horned lizard populations continue to disappear throughout their distribution despite protective legislation.
Population declines are attributed to a number of factors, including the fragmentation and loss of habitat from real estate development and road construction, the planting of non-native grasses (both suburban and rural), conversion of native land to pastureland and agricultural uses, and pesticides. Additionally, predation by domestic dogs and cats places continued pressure upon horned lizards. Fire ants (Solenopsis invicta), introduced from South America via the nursery industry's potted plants, pose a significant threat to all wildlife, including horned lizards. Phrynosoma species do not eat fire ants. Fire ants kill many species of wildlife, and are fierce competitors against the native ants, which horned lizards require for food (with their specialized nutritional content). Fire ants have given all ants a bad reputation, and human attempts to eradicate ants, including invasive species and the native species on which the lizards prey, contribute to the continued displacement of native ant species and the decline of horned lizards. The Texas horned lizard (Phrynosoma cornutum) has disappeared from almost half of its geographic range. Their popularity in the early to mid-20th-century pet trade, in which collectors took thousands from wild populations to sell to pet distributors without provision for their highly specialized nutritional needs (primarily formic acid from harvester ants), resulted in certain death for almost all of the collected lizards. In 1967, the state of Texas passed protective legislation preventing the collection, exportation, and sale of Phrynosoma, and by the early 1970s, most states enacted similar laws to protect and conserve horned lizards in the USA. As recently as the early 2000s, though, the state of Nevada still allowed commercial sale of Phrynosoma species. Despite limited federal protection in Mexico, horned lizards are still offered in Mexican "pet" markets throughout the country. In 2014, the Center for Biological Diversity in Tucson petitioned the Oklahoma Department of Wildlife Conservation to have the Texas horned lizard put on the endangered species list due to the massive decline of its population in Oklahoma, where it was once plentiful. The center said it may later seek protection for the animal on a federal level; it also said that reptiles in general are dying off at up to 10,000 times their historic extinction rate, largely due to human influences. Species and subspecies The following 21 species (listed alphabetically by scientific name) are recognized as being valid by the Reptile Database, three species of which have recognized subspecies. Note: A binomial authority or trinomial authority in parentheses indicates that the species or subspecies was originally described in a genus other than Phrynosoma. Symbol Texas designated the Texas horned lizard (Phrynosoma cornutum) as the official state reptile in 1993. Wyoming's state reptile is the "Horn Toad", the greater short-horned lizard (Phrynosoma hernandesi). The "TCU Horned Frog" is the mascot of Texas Christian University in Fort Worth, Texas. The "Horned Toad" is also the mascot for Coalinga High School in Coalinga, California. This school is located in Western Central California and its arid region is home to the San Diego Horned Lizard, which is protected. The City of Coalinga hosts an annual "Horned Toad Derby" on Memorial Day weekend, which features horned toad races, a carnival, and a parade.
Biology and health sciences
Reptiles
null
334411
https://en.wikipedia.org/wiki/Monitor%20lizard
Monitor lizard
Monitor lizards are lizards in the genus Varanus, the only extant genus in the family Varanidae. They are native to Africa, Asia, and Oceania, and one species is also found in the Americas as an invasive species. About 80 species are recognized. Monitor lizards have long necks, powerful tails and claws, and well-developed limbs. The adult length of extant species ranges from in some species such as Varanus sparnus, to over in the case of the Komodo dragon, though the extinct megalania (Varanus priscus) may have reached lengths of more than . Most monitor species are terrestrial, but many are also arboreal or semiaquatic. While most monitor lizards are carnivorous, eating smaller reptiles, fish, birds, insects, small mammals, and eggs, a few species also eat fruit and vegetation. Etymology The generic name Varanus is derived from the Arabic word waral [Standard Arabic] / warar [colloquially] / waran [colloquially], from a common Semitic root ouran, waran, warar or waral, meaning "lizard beast". In English, they are known as "monitors" or "monitor lizards". The earlier term "monitory lizard" became rare by about 1920. The name may have been suggested by the occasional habit of varanids to stand on their two hind legs and to appear to "monitor", or perhaps from their supposed habit of "warning people of the approach of venomous animals". But all of these explanations for the name "monitor" postdate Linnaeus giving the scientific name Lacerta monitor to the Nile monitor in 1758, which may have been based on a mistaken idea by Linnaeus that the German word Waran (borrowed from Arabic) was connected to warnen (to warn), leading him to incorrectly Latinize it as monitor ('warner', 'adviser'). Austronesian languages spoken across Southeast Asia, where varanids are common, have a large number of slightly related local names for them. They are usually known as biawak (Malay, including Indonesian standard variety), bayawak (Filipino), binjawak or minjawak or nyambik (Javanese), or variations thereof. Other names include hokai (Solomon Islands); bwo, puo, or soa (Maluku); halo (Cebu); galuf or kaluf (Micronesia and the Caroline Islands); batua or butaan (Luzon); alu (Bali); hora or ghora (Komodo group of islands); phut (Burmese); and guibang (Manobo). In South Asia, they are known as in Meitei, mwpou in Boro, घोरपड in Marathi, உடும்பு in Tamil and udumbu ഉടുമ്പ് in Malayalam, in Bhojpuri, gohi (गोहि) in Maithili, in Sinhala as තලගොයා / කබරගොයා (), in Telugu as uḍumu (ఉడుము), in Kannada as (ಉಡ), in Punjabi and Magahi as गोह (goh), in Assamese as gui xaap, in Odia as ଗୋଧି (godhi), and in Bengali as গোসাপ () or গুইসাপ (), and गोह (goh) in Hindi and गोधा (godhā) in Sanskrit. The West African Nile monitor is known by several names in Yoruba, including , , and . In Wolof it is known as mbossé or bar, and is the traditional totem of the city of Kaolack. Due to confusion with the large New World lizards of the family Iguanidae, the lizards became known as "goannas" in Australia. Similarly, in South African English, they are referred to as leguaans, or likkewaans, from the Dutch term for the Iguanidae, leguanen. Distribution The various species cover a vast area, occurring through Africa, the Indian subcontinent, to China, the Ryukyu Islands in southern Japan, south to Southeast Asia to Thailand, Malaysia, Brunei, Indonesia, the Philippines, New Guinea, Australia, and islands of the Indian Ocean and the South China Sea. 
They have also been introduced outside their natural range; for instance, the West African Nile monitor is now found in South Florida. Monitor lizards also occurred widely in Europe in the Neogene, with the last known remains in the region dating to the Middle Pleistocene. Habits and diet Most monitor lizards are almost entirely carnivorous, consuming prey as varied as insects, crustaceans, arachnids, myriapods, molluscs, fish, amphibians, reptiles, birds, and mammals. Most species feed on invertebrates as juveniles and shift to feeding on vertebrates as adults. Deer make up about 50% of the diet of adult Komodo dragons, the largest monitor species. In contrast, three arboreal species from the Philippines, Varanus bitatawa, V. mabitang, and V. olivaceus, are primarily fruit eaters. Biology Monitor lizards are considered unique among animals in that they are relatively morphologically conservative, yet show a very large size range. However, finer morphological features such as the shape of the skull and limbs do vary, and are strongly related to the ecology of each species. Monitor lizards maintain large territories and employ active-pursuit hunting techniques that are reminiscent of similar-sized mammals. The active nature of monitor lizards has led to numerous studies of their metabolic capacities. The general consensus is that monitor lizards have the highest standard metabolic rates of all extant reptiles. Like snakes, monitor lizards have highly forked tongues that act as part of the "smell" sense, where the tips of the tongue carry molecules from the environment to sensory organs in the skull. The forked apparatus allows these lizards to sense boundaries in the molecules they collect, almost smelling in "stereo". Monitor lizards have a high aerobic scope that is afforded, in part, by their heart anatomy. Whereas most reptiles are considered to have three-chambered hearts, the hearts of monitor lizards – as with those of boas and pythons – have a well-developed ventricular septum that completely separates the pulmonary and systemic sides of the circulatory system during systole. This allows monitor lizards to create mammalian-equivalent pressure differentials between the pulmonary and systemic circuits, which in turn ensure that oxygenated blood is quickly distributed to the body without also flooding the lungs with high-pressure blood. Monitor lizards are oviparous, laying from seven to 38 eggs, which they often cover with soil or protect in a hollow tree stump. Some species, including the Komodo dragon, are capable of parthenogenesis. Venom Anatomical and molecular studies indicate that most if not all varanids are venomous. Unlike in snakes, the venom glands of monitor lizards are situated in the lower jaw. The venom of monitor lizards is diverse and complex, as a result of the diverse ecological niches monitor lizards occupy. For example, many species have anticoagulant venom, disrupting clotting through a combination of fibrinogenolysis and the blocking of platelet aggregation. Amongst them, arboreal species, such as the tree monitors and the banded monitor, have by far the strongest fibrinogenolytic venom. As a result, wounds from monitor lizard bites often bleed more than they would if they were simply lacerations. Venom may also cause hypotension. In some species, such as the Komodo dragon and the desert monitor, the venom also induces a powerful neurotoxic effect.
In the latter species, for instance, envenomation causes immediate paralysis in rodents (but not birds) and lesser effects of the same nature in humans. Intelligence At least some species of monitors are known to be able to count; studies feeding rock monitors varying numbers of snails showed that they can distinguish numbers up to six. Nile monitors have been observed to cooperate when foraging; one animal lures a female crocodile away from her nest, while the other opens the nest to feed on the eggs. The decoy then returns to also feed on the eggs. Komodo dragons at the National Zoo in Washington, DC, recognize their keepers and seem to have distinct personalities. Blue and green tree monitors in British zoos have been observed shredding leaves, apparently as a form of play. Human uses As pets Monitor lizards have become a staple in the reptile pet trade. The most commonly kept monitors are the savannah monitor and Ackie dwarf monitor, due to their relatively small size, low cost, and calm dispositions with regular handling. Among others, black-throated, Timor, Asian water, Nile, mangrove, emerald tree, black tree, roughneck, Dumeril's, peach-throated, crocodile, and Argus monitors have been kept in captivity. Traditional medicines Monitor lizards are poached in some South and Southeast Asian countries, as their organs and fat are used in some traditional medicines, although there is no scientific evidence as to their effectiveness. Monitor lizard meat, particularly the tongue and liver, is eaten in parts of India and Malaysia and is reputed to be an aphrodisiac. In parts of Pakistan and southern India, as well as in Northeastern India, particularly Assam, different parts of monitor lizards are traditionally used for treating rheumatic pain, skin infections, and hemorrhoids, and the oil is used as an aphrodisiac lubricant (sande ka tel). Consuming raw blood and flesh of monitor lizards has been reported to cause eosinophilic meningoencephalitis, as some monitors are hosts for the parasite Angiostrongylus cantonensis. Leather "Large-scale exploitation" of monitor lizards is undertaken for their skins, which are described as being "of considerable utility" in the leather industry. In Papua New Guinea, monitor lizard leather is used for membranes in traditional drums (called kundu), and these lizards are referred to as kundu palai or "drum lizard" in Tok Pisin, the main Papuan trade language. Monitor lizard skins are prized in making the resonant part of serjas (Bodo folk sarangis) and dotaras (native strummed string instruments of Assam, Bengal and other eastern states). The leather is also used in making a Carnatic music percussion instrument called the kanjira. Food The meat of monitor lizards is eaten by some tribes in India, Nepal, the Philippines, Australia, South Africa, and West Africa as a supplemental meat source. Both meat and eggs are also eaten in Southeast Asian countries such as Vietnam and Thailand as a delicacy. Conservation According to the IUCN Red List of Threatened Species, most monitor lizard species fall in the category of least concern, but their populations are decreasing globally.
All but five species of monitor lizards are classified by the Convention on International Trade in Endangered Species of Wild Fauna and Flora under Appendix II, which is loosely defined as species that are not necessarily threatened with extinction but may become so unless trade in such species is subject to strict regulation to avoid use incompatible with the survival of the species in the wild. The remaining five species – the Bengal, yellow, desert, and clouded monitors and the Komodo dragon – are classified under CITES Appendix I, which outlaws international commercial trade in the species. The yellow monitor is protected in all countries in its range except Bhutan, Nepal, India, Pakistan, and Bangladesh. In Kerala, Andhra Pradesh, Karnataka, Telangana, and all other parts of South India, catching or killing of monitor lizards is banned under the Protected Species Act. Evolution Varanus is the only living genus of the family Varanidae. Varanids last shared a common ancestor with their closest living relatives, earless "monitors", during the Late Cretaceous. The oldest known varanids are from the Late Cretaceous of Mongolia. During the Eocene, the varanid Saniwa occurred in North America. The closest known relative of Varanus is Archaeovaranus from the Eocene of China, suggesting that the genus Varanus is of Asian origin. The oldest fossils of Varanus date to the early Miocene. Many of the species within the various subgenera also form species complexes with each other:
Euprepiosaurus
V. indicus species complex (V. indicus, V. cerambonensis, V. caerulivirens, V. colei, V. obor, V. lirungensis, V. rainerguentheri, V. zugorum)
V. doreanus species complex (V. doreanus, V. finschi, V. semotus, V. yuwonoi)
V. jobiensis species complex (V. jobiensis)
Odatria
V. acanthurus species complex (V. acanthurus, V. baritji, V. primordius, V. storri)
V. timorensis species complex (V. timorensis, V. auffenbergi, V. scalaris, V. similis, V. tristis)
Varanus
V. gouldii species complex (V. gouldii, V. rosenbergi, V. panoptes)
Polydaedalus
V. exanthematicus species complex (V. exanthematicus, V. albigularis, V. yemenensis)
V. niloticus species complex (V. niloticus, V. stellatus)
Empagusia
V. bengalensis species complex (V. bengalensis, V. nebulosus)
Soterosaurus
V. salvator species complex (V. salvator, V. cumingi, V. nuchalis, V. togianus, V. marmoratus)
The tree monitors of the V. prasinus species complex (V. prasinus, V. beccarii, V. boehmei, V. bogerti, V. keithhornei, V. kordensis, V. macraei, V. reisingeri, V. telenesetes) were once in the subgenus Euprepiosaurus, but as of 2016, form their own subgenus Hapturosaurus. V. jobiensis was once considered to be a member of the V. indicus species complex, but is now considered to represent its own species complex. Taxonomy Genus Varanus. Species marked with are extinct.
V. bolkayi
V. darevskii
V. emeritus (=V. salvadorii?)
V. hooijeri
V. hofmanni
V. lungui
V. marathonensis
V. mokrensis
V. pronini
V. rusingensis
V. semjonovi
V. sivalensis
V. tyrasiensis (=V. hofmanni?)
Subgenus Empagusia:
V. bengalensis, Bengal monitor
V. dumerilii, Dumeril's monitor, brown roughneck monitor
V. flavescens, golden monitor, yellow monitor, short-toed monitor
V. nebulosus, clouded monitor
Subgenus Euprepiosaurus:
V. bennetti, Bennett's long-tailed monitor
V. caerulivirens, turquoise monitor
V. cerambonensis, Ceram monitor
V. colei, Kei Islands monitor
V. doreanus, blue-tailed monitor
V. douarrha, New Ireland monitor
V. finschi, Finsch's monitor
V. indicus, mangrove monitor
V. jobiensis, peach-throated monitor
V. juxtindicus, Rennell Island monitor
V. lirungensis, Talaud mangrove monitor
V. louisiadensis, Louisiade monitor
V. melinus, quince monitor
V. obor, sago monitor
V. rainerguentheri, Rainer Günther's monitor
V. semotus, Mussau Island blue-tailed monitor
V. tanimbar, Tanimbar monitor
V. tsukamotoi, Mariana monitor
V. yuwonoi, black-backed mangrove monitor, tricolor monitor
V. zugorum, silver monitor, Zug's monitor
Subgenus Hapturosaurus:
V. beccarii, black tree monitor
V. boehmei, golden-spotted tree monitor
V. bogerti, Bogert's monitor
V. keithhornei, canopy goanna, blue-nosed tree monitor, Nesbit River monitor
V. kordensis, Biak tree monitor
V. macraei, blue-spotted tree monitor
V. prasinus, emerald tree monitor
V. reisingeri, yellow tree monitor
V. telenesetes, mysterious tree monitor, Rossell tree monitor
Subgenus Odatria:
V. acanthurus, spiny-tailed monitor, ridge-tailed monitor, Ackie's dwarf monitor
V. a. acanthurus, spiny-tailed monitor
V. a. brachyurus, common spiny-tailed monitor
V. auffenbergi, Auffenberg's monitor, peacock monitor
V. brevicauda, short-tailed monitor
V. bushi, Pilbara stripe-tailed monitor, Bush's monitor
V. caudolineatus, stripe-tailed monitor
V. citrinus, Gulf ridge-tailed monitor
V. eremius, rusty desert monitor, pygmy desert monitor
V. gilleni, pygmy mulga monitor
V. glauerti, Kimberley rock monitor
V. glebopalma, twilight monitor, black-palmed rock monitor
V. hamersleyensis, Hamersley Range rock monitor
V. insulanicus, Groote Eylandt monitor
V. i. baritji, black-spotted ridge-tailed monitor
V. kingorum, Kings' rock monitor
V. mitchelli, Mitchell's water monitor
V. ocreatus, Storr's monitor
V. pilbarensis, Pilbara rock monitor
V. primordius, northern blunt-spined monitor
V. scalaris, banded tree monitor
V. semiremex, rusty monitor
V. similis, Similis monitor, spotted tree monitor
V. sparnus, Dampier Peninsula monitor
V. storri, eastern Storr's monitor
V. timorensis, Timor monitor
V. tristis
V. t. tristis, black-headed monitor
V. t. orientalis, freckled monitor
Subgenus Papusaurus:
V. salvadorii, crocodile monitor
Subgenus Philippinosaurus:
V. bitatawa, northern Sierra Madre forest monitor, butikaw, bitatawa
V. mabitang, Panay monitor, mabitang
V. olivaceus, Gray's monitor, butaan
Subgenus Polydaedalus:
V. albigularis, rock monitor, white-throated monitor
V. a. albigularis, white-throated monitor
V. a. angolensis, Angolan monitor
V. a. microstictus, black-throated monitor
V. exanthematicus, savannah monitor, Bosc's monitor
V. niloticus, Nile monitor
V. stellatus, West African Nile monitor
V. ornatus, ornate monitor
V. yemenensis, Yemen monitor
Subgenus Psammosaurus:
V. griseus, desert monitor
V. g. griseus, desert monitor, grey monitor
V. g. caspius, Caspian monitor
V. g. koniecznyi, Indian desert monitor, Thar desert monitor
V. nesterovi, Nesterov's desert monitor
Subgenus Solomonsaurus:
V. spinulosus, spiny-necked mangrove monitor, Solomon Islands spiny monitor
Subgenus Soterosaurus:
V. bangonorum, Bangon monitor
V. cumingi, Cuming's water monitor, yellow-headed water monitor
V. dalubhasa, Enteng's monitor
V. marmoratus, marbled water monitor, Philippine water monitor
V. nuchalis, large-scaled water monitor
V. palawanensis, Palawan water monitor
V. rasmusseni, Rasmussen's water monitor
V. rudicollis, black roughneck monitor
V. salvator, Asian water monitor
V. s. salvator, Sri Lankan water monitor
V. s. andamanensis, Andaman water monitor
V. s. bivittatus, two-striped water monitor, Javan water monitor
V. s. macromaculatus, Southeast Asian water monitor
V. s. ziegleri, Ziegler's water monitor
V. samarensis, Samar water monitor
V. togianus, Togian water monitor
Subgenus Varanus:
V. giganteus, perentie
V. gouldii, Gould's monitor, sand monitor, sand goanna
V. komodoensis, Komodo dragon
V. mertensi, Mertens' monitor
V. panoptes
V. p. panoptes, Argus monitor
V. p. horni, Horn's monitor
V. p. rubidus, yellow-spotted monitor
V. priscus, megalania
V. rosenbergi, Rosenberg's monitor, heath monitor
V. spenceri, Spencer's monitor
V. varius, lace monitor
Biology and health sciences
Lizards and other Squamata
Animals
334415
https://en.wikipedia.org/wiki/Green%20pheasant
Green pheasant
The green pheasant (Phasianus versicolor), also known as the Japanese green pheasant, is an omnivorous bird native to the Japanese archipelago, to which it is endemic. Some taxonomic authorities consider it a subspecies of the common pheasant, Phasianus colchicus. It is the national bird of Japan. Taxonomy and systematics Some sources treat the green pheasant as a subspecies of the common pheasant, while others consider them separate, though closely related, species. The green pheasant has three subspecies. The nominate subspecies, P. v. versicolor, is called the southern green pheasant or kiji. The Pacific green pheasant, P. v. tamensis, and northern green pheasant, P. v. robustipes, are the other two subspecies. There are some cases of hybrids between the green pheasant and the copper pheasant or common pheasant. Description The male (cock) southern green pheasant, P. v. versicolor, has dark green plumage on the breast, neck, mantle, and flanks. The male also has a bluish-purplish hood with clear ear tufts, red wattles, and a long, pale grey-banded tail. The female (hen) is smaller than the male, with a shorter tail, and has brownish-black plumage, with dark brown feathers fringed pale brown. The males of this subspecies have the darkest plumage, which is mainly green. The male Pacific green pheasant, P. v. tamensis, has lighter plumage than the nominate subspecies. Its feathers are more purple and blue. The male northern green pheasant, P. v. robustipes, has the lightest plumage, and its crown and mantle are more bronze than those of the other subspecies. The females of all three subspecies look much more alike, though, as with the males, the females of P. v. versicolor normally have the darkest plumage and the females of P. v. robustipes have the lightest. Behavior Diet In the wild, green pheasants eat small animals such as worms and insects, as well as grains and plants. In captivity, they are sometimes fed pellets, seeds, plants, and live food. Breeding The green pheasant's breeding season starts during March or April and ends in June. Green pheasants can first breed when they are about one year old. One clutch has between six and fifteen eggs. The eggs are incubated for 23 to 25 days. In culture In Japan, many people claim that green pheasants are scared by earthquakes and 'scream'. They are the national bird of Japan because the way the female walks together with her chicks is seen as a symbol of harmony. The species was featured on the Japanese 10,000 yen note. Habitat and distribution It is found throughout Honshu, Shikoku, and Kyushu as well as some smaller islands; it has also been introduced in Hawaii and (unsuccessfully) in North America as a gamebird. It inhabits woodlands and forest edges, brush, grassland, and parkland. This species is common and widespread throughout its native range. It often frequents farmlands and human settlements. The introduced populations in Hawaii are stable. Populations in Western Europe have perhaps interbred with the common pheasant for a number of years, and no pure green pheasants exist there any longer. This species has been crossed with the common pheasant on some game farms in North America and released. In its native range, the green pheasant outcompetes introduced populations of the common pheasant; despite the two species' close relation, they have differing ecological requirements, and the common pheasant is less adapted to the ecology of the green pheasant's range.
Conservation Though the green pheasant population is decreasing, it is not severely fragmented. At local and national levels, green pheasants are used for food, sport hunting, specimen collecting, and as pets or display animals. None of these practices occurs at an international level. The green pheasant is one of the 29 designated 'game species' in Japan. These are the only species that can legally be hunted. A hunting license is required.
Biology and health sciences
Galliformes
Animals
334785
https://en.wikipedia.org/wiki/Box%20jellyfish
Box jellyfish
Box jellyfish (class Cubozoa) are cnidarian invertebrates distinguished by their box-like (i.e., cube-shaped) body. Some species of box jellyfish produce potent venom delivered by contact with their tentacles. Stings from some species, including Chironex fleckeri, Carukia barnesi, Malo kingi, and a few others, are extremely painful and often fatal to humans. Taxonomy and systematics Historically, cubozoans were classified as an order of Scyphozoa until 1973, when they were put in their own class due to their unique biological cycle (lack of strobilation) and morphology. At least 51 species of box jellyfish were known as of 2018. These are grouped into two orders and eight families. A few new species have since been described, and it is likely that additional undescribed species remain. Cubozoa represents the smallest cnidarian class with approximately 50 species. Class Cubozoa Order Carybdeida Family Alatinidae Family Carukiidae Family Carybdeidae Family Tamoyidae Family Tripedaliidae Order Chirodropida Family Chirodropidae Family Chiropsalmidae Family Chiropsellidae Description The medusa form of a box jellyfish has a squarish, box-like bell, from which its name is derived. From each of the four lower corners of this hangs a short pedalium or stalk which bears one or more long, slender, hollow tentacles. The rim of the bell is folded inwards to form a shelf known as a velarium which restricts the bell's aperture and creates a powerful jet when the bell pulsates. As a result, box jellyfish can move more rapidly than other jellyfish; speeds of up to per minute have been recorded. In the center of the underside of the bell is a mobile appendage called the manubrium which somewhat resembles an elephant's trunk. At its tip is the mouth. The interior of the bell is known as the gastrovascular cavity. It is divided by four equidistant septa into a central stomach and four gastric pockets. The eight gonads are located in pairs on either side of the four septa. The margins of the septa bear bundles of small gastric filaments which house nematocysts and digestive glands and help to subdue prey. Each septum is extended into a septal funnel that opens onto the oral surface and facilitates the flow of fluid into and out of the animal. The box jellyfish's nervous system is more developed than that of many other jellyfish. They possess a ring nerve at the base of the bell that coordinates their pulsing movements, a feature found elsewhere only in the crown jellyfish. Whereas some other jellyfish have simple pigment-cup ocelli, box jellyfish are unique in the possession of true eyes, complete with retinas, corneas and lenses. Their eyes are set in clusters at the ends of sensory structures called rhopalia which are connected to their ring nerve. Each rhopalium contains two image-forming lens eyes. The upper lens eye looks straight up out of the water with a field of view that matches Snell's window. In species such as Tripedalia cystophora, the upper lens eye is used to navigate to their preferred habitats at the edges of mangrove lagoons by observing the direction of the tree canopy. The lower lens eye is primarily used for object avoidance. Research has shown that the minimum visual angle for obstacles avoided by their lower lens eyes matches the half-widths of their receptive fields. Each rhopalium also has two pit eyes on either side of the upper lens eye which likely act as mere light meters, and two slit eyes on either side of the lower lens eye which are likely used to detect vertical movement. 
Box jellyfish have six eyes on each of their four rhopalia, for a total of 24 eyes. The rhopalia also feature a heavy crystal-like structure called a statolith, which, due to the flexibility of the rhopalia, keeps the eyes oriented vertically regardless of the orientation of the bell. Box jellyfish also display complex, probably visually guided behaviors such as obstacle avoidance and fast directional swimming. Research indicates that, owing to the number of rhopalial nerve cells and their overall arrangement, visual processing and integration at least partly happen within the rhopalia of box jellyfish. The complex nervous system supports a relatively advanced sensory system compared to other jellyfish, and box jellyfish have been described as having an active, fish-like behavior. Depending on species, a fully grown box jellyfish can measure up to along each box side ( in diameter), and the tentacles can grow up to in length. Its weight can reach . However, the thumbnail-sized Irukandji is a box jellyfish, and is lethal despite its small size. There are about 15 tentacles on each corner. Each tentacle has about 500,000 cnidocytes, containing nematocysts, harpoon-shaped microscopic mechanisms that inject venom into the victim. Many different kinds of nematocysts are found in cubozoans. Distribution Although the notoriously dangerous species of box jellyfish are largely restricted to the tropical Indo-Pacific region, various species of box jellyfish can be found widely in tropical and subtropical oceans (between 42° N and 42° S), including the Atlantic Ocean and the east Pacific Ocean, with species as far north as California (Carybdea confusa), the Mediterranean Sea (Carybdea marsupialis) and Japan (such as Chironex yamaguchii), and as far south as South Africa (for example, Carybdea branchi) and New Zealand (such as Copula sivickisi). Though box jellies are known to inhabit the Indo-Pacific region, there are few collected data or published studies documenting this. It was only in 2014 that the first-ever box jelly sightings (Tripedalia cystophora) from Australia, Thailand, and the Indian Ocean were officially published. There are three known species in Hawaiian waters, all from the genus Carybdea: C. alata, C. rastoni, and C. sivickisi. Within these tropical and subtropical environments, box jellyfish tend to reside closer to shore. They have been spotted in near-shore habitats such as mangroves, coral reefs, kelp forests, and sandy beaches. In 2023, a new genus and species of box jellyfish was discovered in the Indo-Pacific region, specifically the Gulf of Thailand. Named after scientist Lisa-ann Gershwin, this new species, Gershwinia thailandensis, is a member of the family Carukiidae. Gershwinia thailandensis was described as a new species because it has sensory structures with specialized horns and lacks a digestive feature common among box jellyfish, the gastric phacellae of the stomach. Based on these and other structural and biological observations, Gershwinia thailandensis was accepted as a new species of box jellyfish. Detection Cubozoans are widely distributed throughout tropical and subtropical regions, yet the detection of these organisms can be quite difficult and costly due to high variation in their occurrence and abundance, their translucent bodies, their two different life stages (medusa and polyp), and the large size variability among the different species in the class Cubozoa.
Understanding the ecological distribution of cubozoans can be difficult, and costly methods such as visual observations, a variety of different nets, light attraction techniques, and, most recently, the use of drones have had some success in locating and tracking different cubozoan species, but are limited by both anthropogenic and environmental factors. A new form of detection, environmental DNA (eDNA), has been developed and employed to aid in the analysis of box jellyfish populations, and can be implemented to mitigate the effects that box jellyfish have on coastal anthropogenic activities. This relatively easy and cost-effective method utilizes extra-organismal genetic material that can be found in the water column via shedding throughout the lifespan of an organism. The process for identifying box jellyfish using the eDNA technique involves collecting a water sample and filtering it through a cellulose nitrate membrane filter to extract any genetic material. Once the DNA is extracted, it is analyzed for species-specific matches to see if the sampled eDNA sequences correlate with existing DNA sequences for box jellyfish. Based on the results, the presence or absence of box jellyfish can be indicated through the matching of genetic material; if a match is found, box jellyfish were present in the area. The use of eDNA can provide a cost-effective and efficient way to monitor populations of box jellyfish in both the medusa and polyp life stages, and the resulting data can help researchers understand more about their ecology and limit the effects on coastal anthropogenic activities. Ecology Age and growth It has been found that the statoliths, which are composed of calcium sulfate hemihydrate, exhibit clear sequential incremental layers, thought to be laid down on a daily basis. This has enabled researchers to estimate growth rates, ages, and age to maturity. Chironex fleckeri, for example, increases its inter-pedalia distance (IPD) by per day, reaching an IPD of when 45 to 50 days old. The maximum age of any individual examined was 88 days, by which time it had grown to an IPD of . In the wild, the box jellyfish will live up to 3 months, but can survive up to seven or eight months in a laboratory tank. Behavior Box jellyfish actively hunt their prey (small fish), rather than drifting as true jellyfish do. They are strong swimmers, capable of achieving speeds of 1.5 to 2 metres per second and of turning rapidly, up to 180° in a few bell contractions. Some species are capable of avoiding obstacles. Most box jellyfish feed by extending their tentacles and accelerating upwards for a short time, then turning upside-down and ceasing to pulsate. The jellyfish then slowly sinks until prey becomes entangled in its tentacles. At this point the pedalia fold and bring the prey to the oral opening. The venom of cubozoans is distinct from that of scyphozoans, and is used to catch prey (small fish and invertebrates, including prawns and bait fish) and for defence from predators, which include the butterfish, batfish, rabbitfish, crabs (blue swimmer crab), and various species of turtle, including the hawksbill sea turtle and flatback sea turtle. Sea turtles appear to be unaffected by the stings and seem to relish box jellyfish. Reproduction Cubozoans usually have an annual life cycle. Box jellyfish reach sexual maturity when their bell diameter reaches 5 millimeters.
Chirodropida reproduces by external fertilization. Carybdeida instead reproduces by internal fertilization and is ovoviviparous; sperm is transferred by spermatozeugmata, a type of spermatophore. Hours after fertilization, the female releases an embryo strand that contains its own nematocysts: both euryteles and isorhizas. Cubozoa is the only cnidarian class containing species, such as Carybdea sivickisi, that perform a "wedding dance" to transfer the spermatophores from the male to the female. It was previously believed that medusae reproduce only once in their life before dying a few weeks later, a semelparous lifestyle. However, in July 2023, the box jelly species Chiropsalmus quadrumanus was found to potentially have iteroparous reproduction, meaning it may reproduce multiple times in its life. Oogenesis appears to happen numerous times, as oocytes are found in four stages: pre-vitellogenic, early vitellogenic, mid-vitellogenic, and late vitellogenic. Further research is needed to determine whether box jellyfish are semelparous or iteroparous, or whether this is species-dependent. Genetics Box jellyfish have a mitochondrial genome that is arranged into eight linear chromosomes. As of 2022, only two cubozoan species had been fully sequenced, Alatina alata and Morbakka virulenta. A. alata has 66,156 genes, the largest gene count for any medusozoan. The mitochondrial genome of box jellyfish is uniquely structured into multiple linear fragments. Each of the eight linear chromosomes has between one and four genes, including two extra genes. These two extra genes (mt-polB and orf314) encode proteins. Only a few studies have been completed on mitochondrial gene expression in box jellyfish. Danger to humans Box jellyfish have long been known for their powerful sting. The lethality of cubozoan venom to humans is the primary reason it is researched. Although unspecified species of box jellyfish have been called in newspapers "the world's most venomous creature" and the deadliest creature in the sea, only a few species in the class have been confirmed to be involved in human deaths; some species are not harmful to humans, possibly delivering a sting that is no more than painful. When the venom of the box jellyfish was sequenced, more than 170 toxin proteins were identified. The high quantity of toxin proteins that box jellyfish possess is the reason they are known to be so dangerous. Stings from the box jellyfish can lead to skin irritation and cardiotoxicity, and can even be fatal. Australia Hugo Flecker, who worked on various venomous animal species and poisonous plants, was concerned about the unexplained deaths of swimmers. He identified the cause as the species of box jellyfish later named Chironex fleckeri. In 1945, he described another jellyfish envenoming, which he named the "Irukandji Syndrome", later identified as caused by the box jellyfish species Carukia barnesi. In Australia, fatalities are most often caused by the largest species of this class of jellyfish, Chironex fleckeri, one of the world's most venomous creatures. After severe Chironex fleckeri stings, cardiac arrest can occur quickly, within just two minutes. C. fleckeri has caused at least 79 deaths since the first report in 1883, but even in this species most encounters appear to result only in mild envenoming.
Most recent deaths in Australia have been in children, which is linked to their smaller body mass; they include a 14-year-old who died in February 2022 and, in February 2021, a 17-year-old boy who died about 10 days after being stung while swimming at a beach on Queensland's western Cape York. The previous fatality was in 2007. At least two deaths in Australia have been attributed to the thumbnail-sized Irukandji box jellyfish. People stung by these may suffer severe physical and psychological symptoms, known as Irukandji syndrome. Nevertheless, most victims do survive, and out of 62 people treated for Irukandji envenomation in Australia in 1996, almost half could be discharged home with few or no symptoms after 6 hours, and only two remained hospitalized approximately a day after they were stung. Preventative measures in Australia include nets deployed on beaches to keep jellyfish out, and jugs of vinegar placed along swimming beaches to be used for rapid first aid. Hawaii: research and dangers Researchers at the University of Hawaii's Department of Tropical Medicine found the venom causes cells to become porous enough to allow potassium leakage, causing hyperkalemia, which can lead to cardiovascular collapse and death as quickly as within 2 to 5 minutes. In Hawaii, box jellyfish numbers peak approximately seven to ten days after a full moon, when they come near the shore to spawn. Sometimes, the influx is so severe that lifeguards have closed infested beaches, such as Hanauma Bay, until the numbers subside. Malaysia, Philippines, Japan, Thailand, and Texas In parts of the Malay Archipelago, the number of lethal cases is far higher than in Australia. In the Philippines, an estimated 20–40 people die annually from chirodropid stings, probably owing to limited access to medical facilities and antivenom. The recently discovered and very similar Chironex yamaguchii may be equally dangerous, as it has been implicated in several deaths in Japan. It is unclear which of these species is the one usually involved in fatalities in the Malay Archipelago. Warning signs and first aid stations have been erected in Thailand following the death of a 5-year-old French boy in August 2014. A woman died in July 2015 after being stung off Ko Pha Ngan, and another at Lamai Beach on Ko Samui on 6 October 2015. In 1990, a 4-year-old child died after being stung by Chiropsalmus quadrumanus at Galveston Island, Texas, on the Gulf of Mexico. Either this species or Chiropsoides buitendijki is considered the likely perpetrator of two deaths in West Malaysia. Protection and treatment Protective clothing Wearing pantyhose, full-body lycra suits, dive skins, or wetsuits is effective protection against box jellyfish stings. The pantyhose were formerly thought to work because of the length of the box jellyfish's stingers (nematocysts), but it is now known to be related to the way the stinger cells work. The stinging cells on a box jellyfish's tentacles are not triggered by touch, but by chemicals found on skin, which are not present on the hose's outer surface, so the jellyfish's nematocysts do not fire.
A 2014 study reported that vinegar also increased the amount of venom released from already-discharged nematocysts; however, this study has been criticized on methodological grounds. Vinegar is made available on Australian beaches and in other places with venomous jellyfish. Removal of additional tentacles is usually done with a towel or gloved hand, to prevent secondary stinging. Tentacles can still sting if separated from the bell, or after the creature is dead. Removal of tentacles may cause unfired nematocysts to come into contact with the skin and fire, resulting in a greater degree of envenomation. Although commonly recommended in folklore and even some papers on sting treatment, there is no scientific evidence that urine, ammonia, meat tenderizer, sodium bicarbonate, boric acid, lemon juice, fresh water, steroid cream, alcohol, cold packs, papaya, or hydrogen peroxide will disable further stinging, and these substances may even hasten the release of venom. Heat packs have been shown to provide moderate pain relief. Pressure immobilization bandages, methylated spirits, and vodka are generally not recommended for use on jelly stings. Possible antidotes in humans In 2011, researchers at the University of Hawaii announced that they had developed an effective treatment against the stings of Hawaiian box jellyfish by "deconstructing" the venom contained in their tentacles. Its effectiveness was demonstrated in the PBS Nova episode "Venom: Nature's Killer", originally shown on North American television in February 2012. Their research found that injected zinc gluconate prevented the disruption of red blood cells and reduced the toxic effects on the cardiac activity of research mice. It was later found that copper gluconate was even more effective. A cream containing copper gluconate has been produced, to be applied to inhibit the injected venom; although it is used by U.S. military divers, evidence that it is effective in humans is only anecdotal. In April 2019, a team of researchers at the University of Sydney announced that they had found a possible antidote to Chironex fleckeri venom that would stop pain and skin necrosis if administered within 15 minutes of being stung. The research was the result of work done with CRISPR whole-genome editing in which the researchers selectively deactivated skin-cell genes until they were able to identify ATP2B1, a calcium-transporting ATPase, as a host factor supporting cytotoxicity. The research showed the therapeutic use of existing drugs targeting cholesterol in mice, although the efficacy of the approach had not been demonstrated in humans.
Biology and health sciences
Cnidarians
Animals
334816
https://en.wikipedia.org/wiki/Route%20of%20administration
Route of administration
In pharmacology and toxicology, a route of administration is the way by which a drug, fluid, poison, or other substance is taken into the body. Routes of administration are generally classified by the location at which the substance is applied. Common examples include oral and intravenous administration. Routes can also be classified based on where the target of action is. Action may be topical (local), enteral (system-wide effect, but delivered through the gastrointestinal tract), or parenteral (systemic action, but is delivered by routes other than the GI tract). Route of administration and dosage form are aspects of drug delivery. Classification Routes of administration are usually classified by application location (or exposition). The route or course the active substance takes from application location to the location where it has its target effect is usually rather a matter of pharmacokinetics (concerning the processes of uptake, distribution, and elimination of drugs). Exceptions include the transdermal or transmucosal routes, which are still commonly referred to as routes of administration. The location of the target effect of active substances is usually rather a matter of pharmacodynamics (concerning, for example, the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof is local. Topical administration is sometimes defined as both a local application location and local pharmacodynamic effect, and sometimes merely as a local application location regardless of location of the effects. By application location Enteral/gastrointestinal route Through the gastrointestinal tract is sometimes termed enteral or enteric administration (literally meaning 'through the intestines'). Enteral/enteric administration usually includes oral (through the mouth) and rectal (into the rectum) administration, in the sense that these are taken up by the intestines. However, uptake of drugs administered orally may also occur already in the stomach, and as such gastrointestinal (along the gastrointestinal tract) may be a more fitting term for this route of administration. Furthermore, some application locations often classified as enteral, such as sublingual (under the tongue) and sublabial or buccal (between the cheek and gums/gingiva), are taken up in the proximal part of the gastrointestinal tract without reaching the intestines. Strictly enteral administration (directly into the intestines) can be used for systemic administration, as well as local (sometimes termed topical), such as in a contrast enema, whereby contrast media are infused into the intestines for imaging. However, for the purposes of classification based on location of effects, the term enteral is reserved for substances with systemic effects. Many drugs as tablets, capsules, or drops are taken orally. Administration methods directly into the stomach include those by gastric feeding tube or gastrostomy. Substances may also be placed into the small intestines, as with a duodenal feeding tube and enteral nutrition. Enteric coated tablets are designed to dissolve in the intestine, not the stomach, because the drug present in the tablet causes irritation in the stomach. The rectal route is an effective route of administration for many medications, especially those used at the end of life. The walls of the rectum absorb many medications quickly and effectively. 
Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bioavailability of many medications than that of the oral route. Rectal mucosa is highly vascularized tissue that allows for rapid and effective absorption of medications. A suppository is a solid dosage form intended for rectal administration. In hospice care, a specialized rectal catheter, designed to provide comfortable and discreet administration of ongoing medications, provides a practical way to deliver and retain liquid formulations in the distal rectum, giving health practitioners a way to leverage the established benefits of rectal administration. The Murphy drip is an example of rectal infusion. Parenteral route The parenteral route is any route that is not enteral (par- + enteral). Parenteral administration can be performed by injection, that is, using a needle (usually a hypodermic needle) and a syringe, or by the insertion of an indwelling catheter. Locations of application of parenteral administration include: Central nervous system: Epidural (synonym: peridural) (injection or infusion into the epidural space), e.g. epidural anesthesia. Intracerebral (into the cerebrum) administration by direct injection into the brain. Used in experimental research of chemicals and as a treatment for malignancies of the brain. The intracerebral route can also prevent the blood–brain barrier from holding up against subsequent routes. Intracerebroventricular (into the cerebral ventricles) administration into the ventricular system of the brain. One use is as a last line of opioid treatment for terminal cancer patients with intractable cancer pain. Epicutaneous (application onto the skin). It can be used both for local effect, as in allergy testing and topical local anesthesia, and for systemic effects when the active substance diffuses through the skin in a transdermal route. Sublingual and buccal medication administration is a way of giving someone medicine orally (by mouth). Sublingual administration is when medication is placed under the tongue to be absorbed by the body. The word "sublingual" means "under the tongue." Buccal administration involves placement of the drug between the gums and the cheek. These medications can come in the form of tablets, films, or sprays. Many drugs are designed for sublingual administration, including cardiovascular drugs, steroids, barbiturates, opioid analgesics with poor gastrointestinal bioavailability, enzymes and, increasingly, vitamins and minerals. Extra-amniotic administration, between the endometrium and fetal membranes. Nasal administration (through the nose) can be used for topically acting substances, as well as for insufflation of e.g. decongestant nasal sprays to be taken up along the respiratory tract. Such substances are also called inhalational, e.g. inhalational anesthetics. Intra-arterial (into an artery), e.g. vasodilator drugs in the treatment of vasospasm and thrombolytic drugs for treatment of embolism. Intra-articular, into a joint space. It is generally performed by joint injection. It is mainly used for symptomatic relief in osteoarthritis. Intracardiac (into the heart), e.g. adrenaline during cardiopulmonary resuscitation (no longer commonly performed). Intracavernous injection, an injection into the base of the penis. Intradermal (into the skin itself) is used for skin testing of some allergens, and also for the Mantoux test for tuberculosis.
Intralesional (into a skin lesion) is used for local skin lesions, e.g. acne medication. Intramuscular (into a muscle), e.g. many vaccines, antibiotics, and long-term psychoactive agents. Recreationally, the colloquial term 'muscling' is used. Intraocular, into the eye, e.g., some medications for glaucoma or eye neoplasms. Intraosseous infusion (into the bone marrow) is, in effect, an indirect intravenous access because the bone marrow drains directly into the venous system. This route is occasionally used for drugs and fluids in emergency medicine and pediatrics when intravenous access is difficult. Intraperitoneal (infusion or injection into the peritoneum), e.g. peritoneal dialysis. Intrathecal (into the spinal canal) is most commonly used for spinal anesthesia and chemotherapy. Intrauterine. Intravaginal administration, in the vagina. Intravenous (into a vein), e.g. many drugs, total parenteral nutrition. Intravesical infusion is into the urinary bladder. Intravitreal, into the vitreous body of the eye. Subcutaneous (under the skin). This generally takes the form of subcutaneous injection, e.g. with insulin. Skin popping is a slang term that includes subcutaneous injection, and is usually used in association with recreational drugs. In addition to injection, it is also possible to slowly infuse fluids subcutaneously in the form of hypodermoclysis. Transdermal (diffusion through the intact skin for systemic rather than topical distribution), e.g. transdermal patches such as fentanyl in pain therapy, nicotine patches for treatment of addiction, and nitroglycerine for treatment of angina pectoris. Perivascular administration (perivascular medical devices and perivascular drug delivery systems are conceived for local application around a blood vessel during open vascular surgery). Transmucosal (diffusion through a mucous membrane), e.g. insufflation (snorting) of cocaine, sublingual, i.e. under the tongue, sublabial, i.e. between the lips and gingiva, and oral spray or vaginal suppository for nitroglycerine. Topical route The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local. In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One such medication is the antibiotic vancomycin, which cannot be absorbed in the gastrointestinal tract and is used orally only as a treatment for Clostridioides difficile colitis. Choice of routes The choice of route of drug administration is governed by various factors: Physical and chemical properties of the drug. The physical properties are solid, liquid, and gas. The chemical properties are solubility, stability, pH, irritancy, etc. Site of desired action: the action may be localised and approachable or generalised and not approachable. Rate and extent of absorption of the drug from different routes. Effect of digestive juices and the first-pass metabolism of drugs. Condition of the patient.
In acute situations, in emergency medicine and intensive care medicine, drugs are most often given intravenously. This is the most reliable route, as in acutely ill patients the absorption of substances from the tissues and from the digestive tract can often be unpredictable due to altered blood flow or bowel motility. Convenience Enteral routes are generally the most convenient for the patient, as no punctures or sterile procedures are necessary. Enteral medications are therefore often preferred in the treatment of chronic disease. However, some drugs can not be used enterally because their absorption in the digestive tract is low or unpredictable. Transdermal administration is a comfortable alternative; there are, however, only a few drug preparations that are suitable for transdermal administration. Desired target effect Identical drugs can produce different results depending on the route of administration. For example, some drugs are not significantly absorbed into the bloodstream from the gastrointestinal tract and their action after enteral administration is therefore different from that after parenteral administration. This can be illustrated by the action of naloxone (Narcan), an antagonist of opiates such as morphine. Naloxone counteracts opiate action in the central nervous system when given intravenously and is therefore used in the treatment of opiate overdose. The same drug, when swallowed, acts exclusively on the bowels; it is here used to treat constipation under opiate pain therapy and does not affect the pain-reducing effect of the opiate. Oral The oral route is generally the most convenient and costs the least. However, some drugs can cause gastrointestinal tract irritation. For drugs that come in delayed release or time-release formulations, breaking the tablets or capsules can lead to more rapid delivery of the drug than intended. The oral route is limited to formulations containing small molecules only while biopharmaceuticals (usually proteins) would be digested in the stomach and thereby become ineffective. Biopharmaceuticals have to be given by injection or infusion. However, recent research found various ways to improve oral bioavailability of these drugs. In particular permeation enhancers, ionic liquids, lipid-based nanocarriers, enzyme inhibitors and microneedles have shown potential. Oral administration is often denoted "PO" from "per os", the Latin for "by mouth". The bioavailability of oral administration is affected by the amount of drug that is absorbed across the intestinal epithelium and first-pass metabolism. Oral mucosal The oral mucosa is the mucous membrane lining the inside of the mouth. Buccal Buccally administered medication is achieved by placing the drug between gums and the inner lining of the cheek. In comparison with sublingual tissue, buccal tissue is less permeable resulting in slower absorption. Sublabial Sublingual Sublingual administration is fulfilled by placing the drug between the tongue and the lower surface of the mouth. The sublingual mucosa is highly permeable and thereby provides access to the underlying expansive network composed of capillaries, leading to rapid drug absorption. Intranasal Drug administration via the nasal cavity yields rapid drug absorption and therapeutic effects. 
This is because drug absorbed through the nasal passages does not pass through the gut before entering the capillaries and then the systemic circulation, and this absorption route allows transport of drugs into the central nervous system via the olfactory and trigeminal nerve pathways. Intranasal absorption can be limited by low lipophilicity, enzymatic degradation within the nasal cavity, large molecular size, and rapid mucociliary clearance from the nasal passages, which explains the low systemic exposure of some drugs administered intranasally. Local By delivering drugs almost directly to the site of action, the risk of systemic side effects is reduced. Skin absorption (dermal absorption), for example, directly delivers the drug to the skin and, hopefully, to the systemic circulation. However, skin irritation may result, and for some forms such as creams or lotions, the dosage is difficult to control. Upon contact with the skin, the drug penetrates into the dead stratum corneum and can afterwards reach the viable epidermis, the dermis, and the blood vessels. Parenteral The term parenteral is from para- 'beside' + Greek enteron 'intestine' + -al. This name is due to the fact that it encompasses a route of administration that is not intestinal. However, in common English the term has mostly been used to describe the four most well-known routes of injection. The term injection encompasses intravenous (IV), intramuscular (IM), subcutaneous (SC), and intradermal (ID) administration. Parenteral administration generally acts more rapidly than topical or enteral administration, with onset of action often occurring in 15–30 seconds for IV, 10–20 minutes for IM, and 15–30 minutes for SC. These routes also provide essentially 100% bioavailability and can be used for drugs that are poorly absorbed or ineffective when they are given orally. Some medications, such as certain antipsychotics, can be administered as long-acting intramuscular injections. Ongoing IV infusions can be used to deliver continuous medication or fluids. Disadvantages of injections include potential pain or discomfort for the patient and the requirement of trained staff using aseptic techniques for administration. However, in some cases, patients are taught to self-inject, such as SC injection of insulin in patients with insulin-dependent diabetes mellitus. As the drug is delivered to the site of action extremely rapidly with IV injection, there is a risk of overdose if the dose has been calculated incorrectly, and there is an increased risk of side effects if the drug is administered too rapidly. Respiratory tract Mouth inhalation Inhaled medications can be absorbed quickly and act both locally and systemically. Proper technique with inhaler devices is necessary to achieve the correct dose. Some medications can have an unpleasant taste or irritate the mouth. In general, only 20–50% of the pulmonary-delivered dose rendered in powdery particles will be deposited in the lung upon mouth inhalation. The remaining 50–70% of undeposited aerosolized particles are cleared out of the lung upon exhalation. An inhaled powdery particle that is >8 μm in diameter is structurally predisposed to depositing in the central and conducting airways (conducting zone) by inertial impaction. An inhaled powdery particle that is between 3 and 8 μm in diameter tends to largely deposit in the transitional zones of the lung by sedimentation.
An inhaled powdery particle that is <3 μm in diameter is structurally predisposed to depositing primarily in the respiratory regions of the peripheral lung via diffusion. Particles that deposit in the upper and central airways are generally still absorbed systemically to a great extent, because they are only partially removed by mucociliary clearance; the swallowed mucus leads to orally mediated absorption, in which first-pass metabolism or incomplete absorption (loss by the fecal route) can sometimes reduce bioavailability. This should in no way suggest to clinicians or researchers that inhaled particles pose no greater threat than swallowed particles; it merely signifies that a combination of both routes may occur for some particles, regardless of their size or the lipophilicity or hydrophilicity of the particle surfaces. Nasal inhalation Inhalation of a substance through the nose is almost identical to oral inhalation, except that some of the drug is absorbed intranasally instead of in the oral cavity before entering the airways. Both methods can result in varying levels of the substance being deposited in their respective initial cavities, and the level of mucus in either of these cavities will reflect the amount of substance swallowed. The rate of inhalation will usually determine the amount of the substance that enters the lungs; faster inhalation results in more rapid absorption because more of the substance reaches the lungs. Substances in a form that resists absorption in the lung will likely resist absorption in the nasal passages and the oral cavity, and are often even more resistant to absorption after they fail to be absorbed in those cavities and are swallowed. Research Neural drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Drug delivery systems allow the rate of growth factor release to be regulated over time, which is critical for creating an environment more closely representative of the in vivo developmental environment.
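The three particle-size bands quoted above (greater than 8 μm, 3–8 μm, and below 3 μm) amount to a simple classification rule. The following minimal Python sketch only restates those thresholds for illustration; the function name and structure are illustrative rather than a standard model, and real deposition also depends on breathing pattern, airway geometry, and particle density.

```python
def deposition_region(diameter_um: float) -> str:
    """Map an aerodynamic particle diameter (micrometres) to the lung region
    where, per the thresholds quoted above, it tends to deposit."""
    if diameter_um > 8:
        return "central and conducting airways (inertial impaction)"
    elif diameter_um >= 3:
        return "transitional zones of the lung (sedimentation)"
    else:
        return "peripheral respiratory regions (diffusion)"

# Example: a 5 um particle falls in the 3-8 um band
print(deposition_region(5.0))  # transitional zones of the lung (sedimentation)
```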
Biology and health sciences
General concepts_2
Health
334845
https://en.wikipedia.org/wiki/Passiflora%20edulis
Passiflora edulis
Passiflora edulis, commonly known as passion fruit, is a vine species of passion flower native to the region of southern Brazil through Paraguay to northern Argentina. It is cultivated commercially in tropical and subtropical areas for its sweet, seedy fruit. The fruit is a pepo, a type of botanical berry, round to oval, either yellow or dark purple at maturity, with a soft to firm, juicy interior filled with numerous seeds. The fruit is both eaten and juiced, with the juice often added to other fruit juices to enhance aroma. Etymology The passion fruit is so called because it is one of the many species of passion flower, the English translation of the Latin genus name, Passiflora. Around 1700, the name was given by missionaries in Brazil as an educational aid while trying to convert the indigenous inhabitants to Christianity; its name was flor das cinco chagas or "flower of the five wounds" to illustrate the crucifixion of Christ, with other plant components also named after an emblem in the Passion of Jesus. Description Passiflora edulis is a perennial vine; tendrils are borne in leaf axils, and have a red or purple hue when young. There are two main varieties: a purple-fruited type, P. edulis f. edulis, and the yellow-fruited P. edulis f. flavicarpa. Usually the vine produces a single flower 5–7.5 cm wide at each node. The flower has 5 oblong, green sepals and 5 white petals. The sepals and petals are 4–6mm in length and form a fringe. The base of the flower is a rich purple with 5 stamens, an ovary, and a branched style. The styles bend backward and the stigmas, which are located on top of the styles, have a very distinct head. The fruit produced is a pepo and entirely fleshy (making it botanically a berry) and is spherical to ovoid. The outside color of the pepo ranges from dark purple with fine white specks to light yellow. The fruit is 4–7.5 cm in diameter; purple fruits are smaller, weighing around 35 grams, while yellow fruits are closer to 80 grams. The smooth, leathery rind is 9–13 mm thick, including a thick layer of pith. Within the pepo, there are typically 250 brown seeds, each 2.4 mm in length. Each seed is surrounded by a membranous sac filled with pulpy juice. The flavor of the juice is slightly acidic and musky. The passion fruit's flavor can be compared to the guava fruit. Varieties Several distinct varieties of passion fruit with clearly differing exterior appearances exist. The bright yellow flavicarpa variety, also known as yellow or golden passion fruit, can grow up to the size of a grapefruit, has a smooth, glossy, light, and airy rind, and has been used as a rootstock for purple passion fruit in Australia. The dark purple edulis variety is smaller than a lemon, though it is less acidic than yellow passion fruit, and has a richer aroma and flavour. Uses Passionfruit has a variety of uses related to its favored taste as whole fruit and juice. In Australia and New Zealand, it is available commercially both fresh and tinned. It is added to fruit salads, and fresh fruit pulp or passion fruit sauce is commonly used in desserts, including as a topping for pavlova (a regional meringue cake) and ice cream, a flavouring for cheesecake, and in the icing of vanilla slices. A passion-fruit–flavored soft drink called Passiona has also been manufactured in Australia since the 1920s. It can be used in some alcoholic cocktails. In Brazil, the term applies to passion fruit (, or "sour") and granadillo (, or "sweet"). 
Passion fruit mousse is a common dessert, and passion fruit pulp is used to decorate the tops of cakes. Passion fruit juice, ice pops, and soft drinks are also consumed. When making a caipirinha, passion fruit may be used instead of lime. In Cambodia, the red and yellow passion fruit grown in Mondulkiri Province are used to produce wine and liquor. In Colombia and Costa Rica, it is used for juices and desserts. In the Dominican Republic, where it is locally called chinola, it is used to make juice and fruit preserves. Passion fruit-flavored syrup is used on shaved ice, and the fruit is also eaten raw, sprinkled with sugar. In East Africa, passion fruit is used to make juice and is commonly eaten as a whole fruit. In Hawaii, where it is known as lilikoi, fresh passion fruit pulp is eaten. Lilikoi-flavored syrup is used as a topping for shave ice, in soft drinks, as a glaze, and to marinate meat and vegetables. It is used as a flavoring for malasadas, cheesecakes, cookies, dessert bars, ice cream and mochi. Passion fruit is also used in jam or jelly, as well as in a fruit curd known as "lilikoi butter". In India, the government of Andhra Pradesh started growing passion fruit vines in the Chintapalli (Vizag) forests to make the fruit available within the region. The fruit is eaten raw, sprinkled with sugar, and is used to make juice. In Indonesia, where it is known as markisa, both edulis and flavicarpa varieties are cultivated and consumed differently. The former is normally eaten as is, while the latter is more commonly strained to obtain its juice, which is cooked with sugar to make passion fruit syrup used in drinks and desserts. In Mexico, passion fruit is used to make juice or is eaten raw with chili powder and lime. In Paraguay, passion fruit is used principally for its juice, to prepare desserts such as passion fruit mousse, cheesecake, and ice cream, and to flavor yogurts and cocktails. In Peru, passion fruit has long been a staple in homemade ice pops called "marciano" or "chupetes". Passion fruit is also used in several desserts, especially mousses and cheesecakes. Passion fruit juice is also drunk on its own and is used in ceviche variations and in cocktails, including the Maracuyá sour, a variation of the Pisco sour. , or "sweet" can be eaten raw. In the Philippines, passion fruit is commonly sold in public markets and in public schools. Some vendors sell the fruit with a straw to enable sucking out the seeds and juices inside. In Portugal, especially the Azores and Madeira, passion fruit is used as a base for a variety of liqueurs and mousses. In Puerto Rico, where the fruit is known as "parcha", it is used in juices, ice cream or pastries. In South Africa, passion fruit, known locally as Granadilla (the yellow variety as Guavadilla), is used to flavor yogurt and soft drinks, such as Schweppes' "Sparkling Granadilla", and numerous cordial drinks (in cordial flavors it is referred to as passion fruit). It is often eaten raw or used as a topping for cakes and tarts. Granadilla juice is commonly available in restaurants. The yellow variety is used for juice processing, while the purple variety is sold in fresh fruit markets. In Sri Lanka, passion fruit juice, along with faluda, is a common refreshment. Passion fruit cordial is manufactured both at home and industrially by mixing the pulp with sugar. In Suriname, where it is known as , there are three varieties. The red and orange varieties are sold in markets and eaten as a fruit because of their naturally sweet flavor. 
The sour yellow variety, widely grown in the coastal region, is used to make jam and juices with added sugar, either uncooked for immediate use or cooked into a thick syrup for storage in the fridge. The juice is also used to flavor cocktails. Nutrition Raw passion fruit is 73% water, 22% carbohydrates, 2% protein and 0.7% fat. In a reference amount of , raw passion fruit supplies 97 calories and is a rich source (20% or more of the Daily Value, DV) of vitamin C (33% DV) and a moderate source (10–19% DV) of riboflavin and potassium. No other micronutrients are present in significant amounts. Phytochemicals Several varieties of passion fruit are rich in polyphenols. Yellow varieties of the fruit were found to contain prunasin and other cyanogenic glycosides in the peel and juice. Cultivation Passion fruit is widely grown in tropical and semitropical regions of the world. In the United States, it is cultivated in Florida, Hawaii, and California. The plants generally have to be protected from frost, although certain cultivars have survived light frosts after heavy pruning of affected areas. Pollination The flower of the yellow-fruited form of the passion fruit plant is self-sterile, while that of the purple-fruited form is self-compatible. In California, it is reported that pollination of the flowers is most effective when done by the carpenter bee. There are three types of yellow passion fruit flowers, classified by curvature of style. To help assure the presence of carpenter bees, some gardeners place decaying logs near the vines, which provide shelter for the bees. Diseases Viruses Passion fruit woodiness virus is one of the best-known viruses affecting the passion fruit. It belongs to the Potyvirus group and can attack plants at any age, from the nursery stage to maturity. Symptoms include yellowed leaves with distortion of leaf length and shape. As well as affecting the leaves, this virus influences fruit shape and size. Affected fruits become stone-like and much smaller than normal, with many fruits becoming scabbed and cracked. The virus is spread by sap-sucking insects such as aphids and mites. Woodiness can also spread through vegetative propagation, such as via infected scions, or through contaminated tools. There is no chemical control for this virus once the plant is infected, but the use of clean planting material can reduce its dissemination. One of the most serious plant viruses is the cucumber mosaic virus. In the passion fruit, this virus appears as yellow mottling on the leaves, starting at random points on the vine and diminishing in intensity towards the tip. Expanding leaves typically become twisted, curl downward, and develop a "shoestring" appearance as a result of a restriction of the leaf surface. It is mobile and can spread easily through interactions with other plants, such as leaves brushing against each other. This virus is naturally transmitted by aphids and can also be transmitted mechanically through seedlings. Varietal resistance is the primary management tool, and eliminating weeds and infected perennial ornamentals that may harbor the virus is critical. Once the plant has been infected, there is no possible management or control for the virus. Phytoplasma Overshooting is the term used when Phytoplasma, a specialized bacterium, attacks the phloem of a plant. Phytoplasma infection is characterized by chlorotic small leaves, shortening of internodes, excessive lateral shoots and abnormal flowers. 
Although there have been reports of this disease within the passion fruit plant, many infected plants show no visible signs of disease. Although Phytoplasma can be spread through grafting, it can be inhibited by periodic inspection of plant nurseries and of areas that have had past infections. Overshooting responds to treatment with tetracycline, a common broad-spectrum antibiotic. Bacteria Bacterial leaf spot, which causes vein clearing, forms bright yellow colonies causing infection and leaf wilt and, eventually, deterioration of the fruit pulp, especially of young fruits. Under conditions favorable to the bacteria, infection occurs through natural openings or through wounds from other pathogens that affect the intercellular spaces of the leaf. Fertilizers or a copper chloride and mancozeb mixture can control the intensity of the disease, but are not a cure. The bacterial grease-spot of the passion fruit is caused by Pseudomonas syringae. It appears as olive-green to brown, greasy-looking spots or brown, sunken circular lesions. At a later stage, a hard crust can cover the lesions, which show a chlorotic halo. Affecting mainly the stomata, grease-spot thrives in high temperatures and high relative humidity. To avoid infection, measures that may be adopted include planting seeds from healthy plants and using existing healthy areas. Fungicide controls can aid in preventing further infection. Fungal diseases Collar rot disease is caused by the fungus Fusarium solani. It is characterized by necrotic lesions at the collar region, browning of the stem at soil level, and dark discoloration of the stem. The rotting stem interferes with food and water transport within the plant, leading to withering of the plant until death. Infection occurs mostly through contaminated soil and infected plants; affected plants typically survive for only a few weeks. There are no chemical controls. Management includes planting seedlings in unaffected areas and using clean tools. Fusarium wilt, caused by the fungus Fusarium oxysporum, commonly occurs in adult plants. The pathogen has the ability to survive for long periods, penetrating the roots, invading the xylem, and preventing the transport of water and nutrients to other organs of the plant. Once a plant is infected, the disease causes yellowing of the leaves and browning of the vascular system until the plant wilts and dies. It occurs in any type of soil, infecting all plants. Management of crops includes planting clean seedlings, uprooting and burning infected plants, and using sterilized tools. Anthracnose, a canker caused by Colletotrichum gloeosporioides, is a disease of the passion fruit that creates dark, sunken lesions on the trunk. On mature passion fruit plants, these lesions cause intense defoliation and fruit rot. Many leaves die due to the foliar lesions, and the skin of the fruit becomes papery. Under warm and humid conditions, the disease can worsen, producing red and orange spores and eventually killing the plant. Infection spreads through passion flower plant residues and through infected seeds, seedlings, and cuttings. Managing this disease involves a combination of using pathogen-free seedlings, eliminating infected areas, and improving ventilation and light conditions. Copper-based fungicides on injured areas can prevent the spread of the disease. In culture The passion fruit flower is the national flower of Paraguay. 
In 2006, singer-songwriter Paula Fuga released a popular song named after the Hawaiian-language word for passion fruit; the song is featured on an album also named after the fruit. Hip-hop artist Drake released the hit song "Passionfruit" in 2017.
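As a rough illustration of how the "rich source" and "moderate source" labels in the Nutrition section map to percent Daily Value, the sketch below applies the stated cut-offs (20% or more DV for a rich source, 10–19% DV for a moderate source). The vitamin C amount and Daily Value used in the example are assumed round figures for illustration only and are not taken from the article.

```python
def dv_percent(amount_mg: float, daily_value_mg: float) -> float:
    """Percent of the Daily Value contributed by a serving."""
    return 100 * amount_mg / daily_value_mg

def source_label(pct: float) -> str:
    """Cut-offs as stated in the Nutrition section."""
    if pct >= 20:
        return "rich source"
    if pct >= 10:
        return "moderate source"
    return "not significant"

# Assumed example figures: ~30 mg vitamin C against a 90 mg Daily Value.
pct = dv_percent(30, 90)
print(f"{pct:.0f}% DV -> {source_label(pct)}")  # 33% DV -> rich source
```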
Biology and health sciences
Tropical and tropical-like fruit
Plants
334884
https://en.wikipedia.org/wiki/Kinkajou
Kinkajou
The kinkajou (/ˈkɪŋkədʒuː/ KING-kə-joo; Potos flavus) is a tropical rainforest mammal of the family Procyonidae related to olingos, coatis, raccoons, and the ringtail and cacomistle. It is the only member of the genus Potos and is also known as the "honey bear" (a name that it shares with the unrelated sun bear). Though kinkajous are arboreal, they are not closely related to any other tree-dwelling mammal group (primates, some mustelids, etc.). Native to Mexico, Central and South America, this mostly frugivorous mammal is seldom seen by people because of its strict nocturnal habits. However, it is hunted for the pet trade, for its skin (to make wallets and horse saddles), and for its meat. The species has been included in Appendix III of CITES by Honduras, which means that exports from Honduras require an export permit, and exports from other countries require a certificate of origin or of re-export. They may live up to 40 years in captivity. Etymology The common name "kinkajou" derives from , based on the Algonquian name for the wolverine. It is similar to the Ojibwe word kwi·nkwaʔa·ke. Its other names in English include honey bear, night ape, and night walker. Throughout its range, several regional names are used; for instance, the Dutch names nachtaap, rolbeer, and rolstaartbeer are used in Suriname. Many names come from Portuguese, Spanish, and local dialects, such as jupará, huasa, cuchi cuchi, leoncillo, marta, perro de monte, and yapará. Taxonomy A. M. Husson, of the Rijksmuseum van Natuurlijke Historie (Leiden), discussed the rather complicated nomenclature of the kinkajou in The Mammals of Suriname (1978). In his 1774 work Die Säugethiere in Abbildungen nach der Natur, Schreber listed three items under the name "Lemur flavus Penn.": on page 145 is a short translation of Pennant's description of the yellow maucauco (later identified to be Lemur mongoz, presently known as the mongoose lemur) from his 1771 work A Synopsis of Quadrupeds (page 138, second figure on plate 16); on plate 42 is a depiction of the yellow maucauco by Schreber; the last item is a reference to A Synopsis of Quadrupeds itself. Husson noted that the last item is actually Pennant's description of an animal that is clearly a kinkajou. Husson therefore concluded that Lemur flavus is actually a "composite species" based on Schreber's specimen of the mongoose lemur and Pennant's specimen of the kinkajou, and identified the latter as the lectotype for the species. The type locality reported by Schreber for L. flavus ("the mountains in Jamaica") was clearly based on Pennant's description of the kinkajou, who claimed, however, that his specimen was "shown about three years ago in London: its keeper said it came from the mountains of Jamaica". This error was pointed out by Thomas in 1902, who corrected the type locality to Suriname. He used the name Potos flavus for the kinkajou. The genus Potos was erected by Saint-Hilaire and Cuvier in 1795, with the type species Viverra caudivolvula described by Schreber in 1778 (later identified as a synonym of Potos flavus). In 1977 the family Cercoleptidae was proposed with the kinkajou as the sole member, but this classification was later dismissed. Subspecies Eight subspecies have been proposed (type localities are listed alongside): P. f. chapadensis : Chapadas of Mato Grosso (Brazil) P. f. chiriquensis : Boquerón, Chiriquí Province (Panama) P. f. flavus : Suriname. Synonyms include Cercoleptes brachyotos, C. brachyotus, Mustela potto, and Viverra caudivolvula P. f. 
megalotus : Santa Marta (Colombia) P. f. meridensis : Mérida (Venezuela) P. f. modestus : Montes Balzar, Guayas Province (Ecuador) P. f. nocturnus : São Miguel dos Campos, Alagoas (Brazil) P. f. prehensilis : Veracruz (Mexico) A 2016 phylogenetic study based on mitochondrial gene cytochrome b analyzed kinkajou specimens from a variety of locations throughout most of their range. The results showed 27 haplotypes split into five clades corresponding to geographical divisions: Costa Rica (clade 1), northern Brazil and the Guianas (clade 2), northern Peru (clade 3), Ecuador and Panama (clade 4), interfluves between the Branco River and Rio Negro in the Brazilian Amazon, low-lying Amazonian areas (in Bolivia, western Brazil and Peru), and eastern Atlantic Forest (clade 5). Given the diverse clades, the researchers suggested that some of the subspecies might be independent species. Evolution A 2007 phylogenetic study showed that kinkajous form a basal lineage sister to the rest of the Procyonidae. They diverged 21.6–24 Mya. Two clades, one leading to Bassaricyon (olingos and the olinguito) plus Nasua (coatis), and one leading to Bassariscus (the ring-tailed cat and the cacomistle) plus Procyon (racoons), appeared subsequently and radiated during the Miocene (). Kinkajous are thought to have evolved in North America and invaded South America as part of the Great American Interchange that followed the formation of the Isthmus of Panama. The phylogenetic relationships obtained in the 2007 study are given below; these were supported by similar studies in the following years. Physical characteristics The kinkajou has a round head, large eyes, a short, pointed snout, short limbs, and a long prehensile tail. The total head-and-body length (including the tail) is between , and the tail measures . Its mature weight ranges from . Females are generally smaller than males. The short, rounded ears measure . The eyes reflect green or bright yellow against light. The long, thick tongue is highly extrudable. The snout is dark brown to black. The claws are sharp and short. The coat color varies throughout the range and at different times of the year. Several shades such as tawny olive, wood brown, and yellowish tawny have been reported for the upper part of the coat and the upper side of the tail, while the underparts and the lower side of the tail have been observed to be buff, tawny, or brownish yellow. Some individuals have a black stripe running along the midline of the back. The color seems to become lighter from the south to the north, though no seasonal trends have been observed. The fur is short, woolly and dense. Hairs are of two types - light yellowish and darker with brown tips. The darker hairs reflect light poorly relative to the lighter ones, often creating an illusion of spots and dark lines on the coat. The tail is covered with thick fur up to the end. The kinkajou is distinguished from other procyonids by its small, rounded ears, extensible tongue, and prehensile tail. Olingos are similar enough in appearance that many native cultures do not distinguish the two. Compared to olingos, kinkajous are larger, have foreshortened muzzles, and lack anal scent glands (in addition to the previously described differences). The binturong, a Southeast Asian viverrid, has similar limb proportions and is the only other carnivoran with a prehensile tail. 
The kinkajou resembles neotropical monkeys in having a prehensile tail and big, forward-facing eyes, but has a different dentition and heavy fur on the soles of the feet. Range and habitat Kinkajous range from east and south of the Sierra Madre in Mexico, throughout Central America to Bolivia east of the Andes and the Atlantic Forest of southeastern Brazil. Their altitudinal range is from sea level to 2,500 m. They are found in closed-canopy tropical forests, including lowland rainforest, montane forest, dry forest, gallery forest, and secondary forest. Deforestation is thus a potential threat to the species. Diet Although the kinkajou is classified in the order Carnivora and has sharp teeth, its omnivorous diet consists mainly of fruit, particularly figs. Some 90% of the diet consists of (primarily ripe) fruit. To eat softer fruits, kinkajous hold them with their forepaws, then scoop out the succulent pulp with their tongues. They may play an important role in seed dispersal. Leaves, flowers, nectar, and various herbs make up much of the other 10% of the diet. They sometimes eat insects, particularly ants. They may occasionally eat bird eggs and small vertebrates. Their frugivorous habits are actually convergent with those of (diurnal) spider monkeys. The kinkajou's slender 5-inch extrudable tongue helps the animal to obtain fruit and to lick nectar from flowers, so it sometimes acts as a pollinator. (Nectar is also sometimes obtained by eating entire flowers.) Although captive specimens avidly eat honey (hence the name "honey bear"), honey in the diet of wild kinkajous is not well reported. Behavior Kinkajous spend most of their lives in trees, to which they are particularly well adapted. Like those of raccoons, kinkajous' remarkable manipulatory abilities rival those of primates. The kinkajou has a short-haired, fully prehensile tail (like some New World monkeys), which it uses as a "fifth hand" in climbing. It does not use its tail for grasping food. It can rotate its ankles and feet 180°, making it easy for the animal to run backward over tree limbs and climb down trees headfirst. Scent glands near the mouth, on the throat, and on the belly allow kinkajous to mark their territory and their travel routes. Kinkajous sleep in family units and groom one another. While they are usually solitary when foraging, they occasionally forage in large groups, and sometimes associate with olingos (which are also nocturnal arboreal frugivores). The larger kinkajous are dominant and will drive olingos away when food is scarce. Kinkajous have a much more extensive range than olingos and tend to be more common. However, olingos may have greater agility, perhaps facilitating their sympatry with kinkajous. The kinkajou is nocturnal, with peak activity usually between about 7:00 pm and midnight, and again an hour before dawn. During daylight hours, kinkajous sleep in tree hollows or in shaded tangles of leaves, avoiding direct sunlight. Kinkajous breed throughout the year, giving birth to one or occasionally two small babies after a gestation period of 112 to 118 days. As pets Kinkajous are sometimes kept as exotic pets. They are playful, generally quiet, docile, and have little odor, but they can occasionally be aggressive. Kinkajous dislike sudden movements, noise, and being awake during the day. An agitated kinkajou may emit a scream and attack, usually clawing its victim and sometimes biting deeply. 
In 2011, the Centers for Disease Control and Prevention reported that pet kinkajous in the United States can be carriers (fecal–oral route) of the raccoon roundworm Baylisascaris procyonis, which is capable of causing severe morbidity and even death in humans if the brain is infected. In 2023, National Geographic reported that escaped kinkajou pets were living in Florida. In El Salvador, Guatemala, and Honduras, pet kinkajous are commonly called micoleón, meaning "lion monkey". In Peru, pet kinkajous are frequently referred to as lirón (the Spanish word for dormice), often described as a "bear-monkey". These names reflect its monkey-like body and obviously carnivoran head. They typically live about 23 years in captivity, with a maximum recorded lifespan of 42 years.
Biology and health sciences
Procyonidae
Animals
334990
https://en.wikipedia.org/wiki/Rope
Rope
A rope is a group of yarns, plies, fibres, or strands that are twisted or braided together into a larger and stronger form. Ropes have tensile strength and so can be used for dragging and lifting. Rope is thicker and stronger than similarly constructed cord, string, and twine. Construction Rope may be constructed of any long, stringy, fibrous material (e.g., rattan, a natural material), but generally is constructed of certain natural or synthetic fibres. Synthetic fibre ropes are significantly stronger than their natural fibre counterparts: they have a higher tensile strength, they are more resistant to rotting than ropes created from natural fibres, and they can be made to float on water. However, synthetic ropes also have certain disadvantages, including slipperiness, and some can be damaged more easily by UV light. Common natural fibres for rope are Manila hemp, hemp, linen, cotton, coir, jute, straw, and sisal. Synthetic fibres in use for rope-making include polypropylene, nylon, polyesters (e.g. PET, LCP, Vectran), polyethylene (e.g. Dyneema and Spectra), aramids (e.g. Twaron, Technora and Kevlar) and acrylics (e.g. Dralon). Some ropes are constructed of mixtures of several fibres or use co-polymer fibres. Wire rope is made of steel or other metal alloys. Ropes have been constructed of other fibrous materials such as silk, wool, and hair, but such ropes are not generally available. Rayon is a regenerated fibre used to make decorative rope. The twist of the strands in a twisted or braided rope serves not only to keep the rope together, but enables the rope to distribute tension more evenly among the individual strands. Without any twist in the rope, the shortest strand(s) would always be supporting a much higher proportion of the total load. Size measurement Because rope has a long history, many systems have been used to specify the size of a rope. In systems that use the inch (Imperial and US customary measurement systems), large ropes over diameter – such as those used on ships – are measured by their circumference in inches; smaller ropes have a nominal diameter based on the circumference divided by three (as a rough approximation of pi). In the metric system of measurement, the nominal diameter is given in millimetres. The current preferred international standard for rope sizes is to give the mass per unit length, in kilograms per metre. However, even sources otherwise using metric units may still give a "rope number" for large ropes, which is the circumference in inches. Use Rope has been used since prehistoric times. It is of paramount importance in fields as diverse as construction, seafaring, exploration, sports, theatre, and communications. Many types of knots have been developed to fasten with rope, join ropes, and utilize rope to generate mechanical advantage. Pulleys can redirect the pulling force of a rope in another direction, multiply its lifting or pulling power, and distribute a load over multiple parts of the same rope to increase safety and decrease wear. Winches and capstans are machines designed to pull ropes. Knotted ropes have historically been used for measurement and mathematics. For example, ancient Egyptian rope stretchers used knotted ropes to measure distances, medieval European shipbuilders and architects performed calculations using arithmetic ropes, and some pre-colonial South American cultures used quipu for numerical record-keeping. 
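The size-measurement passage above describes a traditional rule of thumb: a large rope's nominal diameter is its circumference in inches divided by three, with three standing in for pi. The short Python sketch below is purely illustrative of that arithmetic; actual trade conventions and rounding vary by manufacturer and standard.

```python
import math

def nominal_diameter_inches(circumference_in: float, rough: bool = True) -> float:
    """Nominal diameter from circumference.

    With rough=True this uses the traditional divide-by-three rule described
    above; with rough=False it uses the exact geometric relation d = c / pi.
    """
    divisor = 3.0 if rough else math.pi
    return circumference_in / divisor

# Example: a "rope number" 6 rope (6 in circumference)
print(nominal_diameter_inches(6))         # 2.0 in by the rule of thumb
print(nominal_diameter_inches(6, False))  # about 1.91 in using pi
```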
History The use of ropes for hunting, pulling, fastening, attaching, carrying, lifting, and climbing dates back to prehistoric times. It is likely that the earliest "ropes" were naturally occurring lengths of plant fibre, such as vines, followed soon by the first attempts at twisting and braiding these strands together to form the first proper ropes in the modern sense of the word. The earliest evidence of suspected rope is a very small fragment of three-ply cord from a Neanderthal site dated to 50,000 years ago. This item was so small that it could only be discovered and described with the help of a high-power microscope. It is slightly thicker than the average thumbnail, and would not stretch from edge to edge across a little fingernail. There are other ways fibres can twist in nature, without deliberate construction. A tool dated to between 35,000 and 40,000 years old, found in the Hohle Fels cave in south-western Germany, has been identified as a means for making rope. It is a strip of mammoth ivory with four holes drilled through it. Each hole is lined with precisely cut spiral incisions. The grooves on three of the holes spiral in a clockwise direction from each side of the strip. The grooves on one hole spiral clockwise on one side, but counter-clockwise on the other side. Plant fibres have been found on it that could have been left when fibres were fed through the holes and the tool was twisted, creating a single-ply yarn. Fibre-making experiments with a replica found that the perforations served as effective guides for raw fibres, making it easier to produce a strong, elastic rope than by simply twisting fibres by hand; the spiral incisions would have tended to keep the fibres in place. But the incisions cannot impart any twist to the fibres pulled through the holes. Other 15,000-year-old objects with holes with spiral incisions, made from reindeer antler and found across Europe, are thought to have been used to manipulate ropes, or perhaps for some other purpose. They were originally named "batons", and were thought possibly to have been carried as badges of rank. Impressions of cordage found on fired clay provide evidence of string- and rope-making technology in Europe dating back 28,000 years. Fossilized fragments of "probably two-ply laid rope of about diameter" were found in one of the caves at Lascaux, dating to approximately 15,000 BC. The ancient Egyptians were probably the first civilization to develop special tools to make rope. Egyptian rope dates back to 4000 to 3500 BC and was generally made of water reed fibres. Other rope in antiquity was made from the fibres of date palms, flax, grass, papyrus, leather, or animal hair. The use of such ropes pulled by thousands of workers allowed the Egyptians to move the heavy stones required to build their monuments. Starting from approximately 2800 BC, rope made of hemp fibres was in use in China. Rope and the craft of rope making spread throughout Asia, India, and Europe over the next several thousand years. From the Middle Ages until the 18th century, ropes in Europe were constructed in ropewalks, very long buildings where strands the full length of the rope were spread out and then laid up or twisted together to form the rope. The cable length was thus set by the length of the available ropewalk; this is related to the unit of length termed the cable length. This allowed for long ropes of up to long or longer to be made. These long ropes were necessary in shipping, as short ropes would require splicing to make them long enough to use for sheets and halyards. 
The strongest form of splicing is the short splice, which doubles the cross-sectional area of the rope at the splice and would therefore cause problems in running the line through pulleys. Any splice narrow enough to maintain smooth running would be less able to support the required weight. Rope intended for naval use would have a coloured yarn, known as the "rogue's yarn", included in the layup. This enabled the source of the rope to be identified and pilfering to be detected. Leonardo da Vinci drew sketches of a concept for a ropemaking machine, but it was never built. Remarkable feats of construction were accomplished using rope but without advanced technology: in 1586, Domenico Fontana erected the 327 ton obelisk on Rome's Saint Peter's Square with a concerted effort of 900 men, 75 horses, and countless pulleys and meters of rope. By the late 18th century several working machines had been built and patented. Some rope is still made from natural fibres, such as coir and sisal, despite the dominance of synthetic fibres such as nylon and polypropylene, which have become increasingly popular since the 1950s. Nylon was discovered in the late 1930s and was first introduced into fibre ropes during World War II. Indeed, the first synthetic fibre ropes were small braided parachute cords and three-strand tow ropes for gliders, made of nylon during World War II. Styles of rope Laid or twisted rope Laid rope, also called twisted rope, is historically the prevalent form of rope, at least in modern Western history. Common twisted rope generally consists of three strands and is normally right-laid, or given a final right-handed twist. The ISO 2 standard uses the uppercase letters S and Z to indicate the two possible directions of twist, as suggested by the direction of slant of the central portions of these two letters. The handedness of the twist is the direction of the twists as they progress away from an observer. Thus Z-twist rope is said to be right-handed, and S-twist to be left-handed. Twisted ropes are built up in three steps. First, fibres are gathered and spun into yarns. A number of these yarns are then formed into strands by twisting. The strands are then twisted together to lay the rope. The twist of the yarn is opposite to that of the strand, and that in turn is opposite to that of the rope. It is this counter-twist, introduced with each successive operation, which holds the final rope together as a stable, unified object. Traditionally, a three-strand laid rope is called plain- or hawser-laid, a four-strand rope is called shroud-laid, and a larger rope formed by counter-twisting three or more multi-strand ropes together is called cable-laid. Cable-laid rope is sometimes clamped to maintain a tight counter-twist, rendering the resulting cable virtually waterproof. Without this feature, deep water sailing (before the advent of steel chains and other lines) was largely impossible, as any appreciable length of rope for anchoring or ship-to-ship transfers would become too waterlogged – and therefore too heavy – to lift, even with the aid of a capstan or windlass. One property of laid rope is partial untwisting when used. This can cause spinning of suspended loads, or stretching, kinking, or hockling of the rope itself. An additional drawback of twisted construction is that every fibre is exposed to abrasion numerous times along the length of the rope. This means that the rope can degrade to numerous inch-long fibre fragments, which is not easily detected visually. 
Twisted ropes have a preferred direction for coiling. Normal right-laid rope should be coiled clockwise, to prevent kinking. Coiling this way imparts a twist to the rope. Rope of this type must be bound at its ends by some means to prevent untwisting. Braided rope While rope may be made from three or more strands, modern braided rope consists of a braided (tubular) jacket over strands of fibre (these may also be braided). Some forms of braided rope with untwisted cores have a particular advantage: they do not impart an additional twisting force when they are stressed. The lack of added twisting forces is an advantage when a load is freely suspended, as when a rope is used for rappelling or to suspend an arborist. Other specialized cores reduce the shock from arresting a fall when used as a part of a personal or group safety system. Braided ropes are generally made from nylon, polyester, polypropylene or high-performance fibres such as high modulus polyethylene (HMPE) and aramid. Nylon is chosen for its strength and elastic stretch properties. However, nylon absorbs water and is 10–15% weaker when wet. Polyester is about 90% as strong as nylon but stretches less under load and is not affected by water. It has somewhat better UV resistance and is more abrasion resistant. Polypropylene is preferred for low cost and light weight (it floats on water), but it has limited resistance to ultraviolet light, is susceptible to friction and has poor heat resistance. Braided ropes (and objects like garden hoses, fibre optic or coaxial cables, etc.) that have no lay (or inherent twist) uncoil better if each alternate loop is twisted in the opposite direction, such as in figure-eight coils, where the twist reverses regularly and essentially cancels out. Single braid consists of an even number of strands, eight or twelve being typical, braided into a circular pattern with half of the strands going clockwise and the other half going anticlockwise. The strands can interlock in either a twill or a panama (basket) pattern, or more rarely a plain weave. Kyosev introduced the German notation into English, in which the floating length (German: Flechtigkeit) and the number of yarns in a group (German: Fädigkeit) are used, as these are more natural for the braiding process than the pattern names used in weaving. The central void may be large or small; in the former case the term hollow braid is sometimes preferred. Double braid, also called braid on braid, consists of an inner braid filling the central void in an outer braid, which may be of the same or a different material. Often the inner braid fibre is chosen for strength while the outer braid fibre is chosen for abrasion resistance. In a solid braid (square braid, gasket, or form braid), there are at least three groups of yarns, interlacing in a complex (interlocking) structure. This construction is popular for gaskets and general-purpose utility rope but rare in specialized high-performance line. Kernmantle rope has a core (kern) of long twisted fibres in the center, with a braided outer sheath or mantle of woven fibres. The kern provides most of the strength (about 70%), while the mantle protects the kern and determines the handling properties of the rope (how easy it is to hold, to tie knots in, and so on). In dynamic climbing line, core fibres are usually twisted to make the rope more elastic. Static kernmantle ropes are made with untwisted core fibres and tighter braid, which causes them to be stiffer in addition to limiting the stretch. 
Other types Plaited rope is made by braiding twisted strands, and is also called square braid. It is not as round as twisted rope and is coarser to the touch. It is less prone to kinking than twisted rope and, depending on the material, very flexible and therefore easy to handle and knot. This construction exposes all fibres as well, with the same drawbacks as described above. Brait rope is a combination of braided and plaited, a non-rotating alternative to laid three-strand ropes. Due to its excellent energy-absorption characteristics, it is often used by arborists. It is also a popular rope for anchoring and can be used as a mooring warp. This type of construction was pioneered by Yale Cordage. Endless winding rope is made by winding single strands of high-performance yarns around two end terminations until the desired break strength or stiffness has been reached. This type of rope (often specified as cable to distinguish it from braided or twined constructions) has the advantage of having none of the construction stretch that occurs with the above constructions. Endless winding was pioneered by SmartRigging and FibreMax. Rock climbing The sport of rock climbing uses what is termed "dynamic" rope, an elastic rope which stretches under load to absorb the energy generated in arresting a fall without creating forces high enough to injure the climber. Such ropes are of kernmantle construction, as described above. Conversely, "static" ropes have minimal stretch and are not designed to arrest free falls. They are used in caving, rappelling, rescue applications, and industries such as window washing. The UIAA, in concert with the CEN, sets climbing-rope standards and oversees testing. Any rope bearing a UIAA or CE certification tag is suitable for climbing. Climbing ropes cut easily when under load, so keeping them away from sharp rock edges is imperative. Previous falls arrested by a rope, damage to its sheath, and contamination by dirt or solvents all weaken a rope and can render it unsuitable for further sport use. Rock climbing ropes are designated as suitable for single, double or twin use. A single rope is the most common, and is intended to be used by itself. These range in thickness from roughly . Smaller-diameter ropes are lighter, but wear out faster. Double ropes are thinner than single ropes, usually and under, and are intended for use in pairs. These offer a greater margin of safety against cutting, since it is unlikely that both ropes will be cut, but they complicate both belaying and leading. Double ropes may be clipped into alternating pieces of protection, allowing each to stay straighter and reducing both individual and total rope drag. Twin ropes are thin ropes which must be clipped into the same piece of protection, in effect being treated as a single strand. This adds security in situations where a rope may get cut. However, new lighter-weight ropes with greater safety have virtually replaced this type of rope. The butterfly and alpine coils are methods of coiling a rope for carrying. Gallery of μCT/micro-CT images and animations Handling Rope made from hemp, cotton, or nylon should generally be stored in a cool, dry place. To prevent kinking, it is usually coiled. To prevent fraying or unravelling, the ends of a rope are bound with twine (whipping), tape, or heat-shrink tubing. 
The ends of plastic fibre ropes are often melted and fused solid; however, the rope and knotting expert Geoffrey Budworth warns against this practice thus: Sealing rope ends this way is lazy and dangerous. A tugboat operator once sliced the palm of his hand open down to the sinews after the hardened (and obviously sharp) end of a rope that had been heat-sealed pulled through his grasp. There is no substitute for a properly made whipping. If a load-bearing rope receives a sharp or sudden jolt, or shows signs of deterioration, it is recommended that it be replaced immediately and either discarded or used only for non-load-bearing tasks. The average rope lifespan is five years; line should be given a serious inspection after that point. However, the use to which a rope is put affects the frequency of inspection. Rope used in mission-critical applications, such as mooring lines or running rigging, should be regularly inspected on a much shorter timescale than this, and rope used in life-critical applications such as mountain climbing should be inspected on a far more frequent basis, up to and including before each use. Stepping on climbing rope should be avoided, as this might force tiny pieces of rock through the sheath, which can eventually deteriorate the core of the rope. Ropes may be flemished into coils on deck for safety, presentation, and tidiness. Many types of filaments in ropes are weakened by corrosive liquids, solvents, and high temperatures. Such damage is particularly treacherous because it is often invisible to the eye. Shock loading should be avoided with general-use ropes, as it can damage them. All ropes should be used within a safe working load, which is much less than their breaking strength. A rope under tension – particularly if it has a great deal of elasticity – can be dangerous if parted. Care should be taken around lines under load. Terminology "Rope" refers both to a material and to a tool. When it is assigned a specific function, it is often referred to as a "line", especially in nautical usage. A line may be given a further distinction; for example, sail control lines are known as "sheets" (e.g. a jib sheet). A halyard is a line used to raise and lower a sail, typically with a shackle on its sail end. Other maritime examples of "lines" include the anchor line, mooring line, and fishing line. Common everyday examples include the clothesline and the chalk line. In some marine uses the term rope is retained, such as man rope, bolt rope, and bell rope.
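The Handling passage above notes that ropes should be used within a safe working load well below their breaking strength. A common way to express this is to divide the quoted breaking strength by a design (safety) factor; the sketch below illustrates that arithmetic only. The factor of 5 and the example strength are assumptions for illustration, not values from the article, and real factors depend on the application and the governing standard.

```python
def safe_working_load(breaking_strength_kg: float, design_factor: float = 5.0) -> float:
    """Illustrative working-load limit: breaking strength divided by an
    assumed design (safety) factor. Life-critical or shock-loaded uses
    call for larger factors than the round number used here."""
    return breaking_strength_kg / design_factor

# Example with an assumed 2,000 kg breaking strength
print(safe_working_load(2000))  # 400.0 kg under the assumed factor of 5
```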
Technology
Components_2
null
3034219
https://en.wikipedia.org/wiki/Tyto
Tyto
Tyto is a genus of birds consisting of true barn owls, grass owls and masked owls that collectively make up all the species within the subfamily Tytoninae of the barn owl family, Tytonidae. Taxonomy The genus Tyto was introduced in 1828 by the Swedish naturalist Gustaf Johan Billberg with the western barn owl as the type species. The name is from the Ancient Greek tutō meaning "owl". The barn owl (Tyto alba) was formerly considered to have a global distribution with around 28 subspecies. In the list of birds maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC) the barn owl is now split into four species: the western barn owl (Tyto alba) (10 subspecies), the American barn owl (Tyto furcata) (12 subspecies), the eastern barn owl (Tyto javanica) (7 subspecies) and the Andaman masked owl (Tyto deroepstorffi). This arrangement is followed here. Some support for this split was provided by a molecular phylogenetic study by Vera Uva and collaborators published in 2018 that compared the DNA sequences of three mitochondrial and one nuclear loci. This split has not been adopted by other taxonomic authorities such as the Clements Checklist of Birds of the World maintained by members of Cornell University or by the list maintained by BirdLife International that is used by the International Union for Conservation of Nature. The cladogram below is based on the 2018 phylogenetic study. The Andaman masked owl (Tyto deroepstorffi) and Itombwe owl (Tyto prigoginei) were not sampled. The Manus masked owl (Tyto manusi) was embedded in a clade with subspecies of the Australian masked owl. Throughout their evolutionary history, Tyto owls have shown a better capability to colonize islands than other owls. Several such island forms have become extinct, some long ago, but some in comparatively recent times. A number of insular barn owls from the Mediterranean and the Caribbean were very large or truly gigantic species. Extant species Seventeen species are recognized: Extinct species Known from ancient fossils Tyto sanctialbani (Middle - Late Miocene of Central Europe) - formerly in Strix; includes T. campiterrae Tyto robusta (Late Miocene/Early Pliocene of the Gargano Peninsula, Italy) Tyto gigantea (Late Miocene/Early Pliocene of the Gargano Peninsula, Italy) Tyto balearica (Late Miocene - Middle Pleistocene of the west-central Mediterranean) Tyto mourerchauvireae (Middle Pleistocene of Sicily, Mediterranean) Tyto jinniushanensis (Pleistocene of Jing Niu Shan, China) Tyto maniola – Cuban Dwarf Barn Owl (Late Pleistocene of Cuba) Tyto sp. 1 Tyto sp. 2 Late prehistoric extinctions usually known from subfossil remains Mussau barn owl (Tyto cf. novaehollandiae) found in Mussau New Ireland greater barn owl (Tyto cf. novaehollandiae) found in New Ireland New Ireland lesser barn owl (Tyto cf. alba/aurantiaca) found in New Ireland New Caledonian barn owl (Tyto letocarti) found in New Caledonia - tentatively placed here Puerto Rican barn owl (Tyto cavatica) found in Puerto Rico - may still have existed up to 1912; possibly a subspecies of the ashy-faced owl (Tyto glaucops) Noel's barn owl (Tyto noeli) found in Cuba Rivero's barn owl (Tyto riveroi) found in Cuba Cuban barn owl (Tyto sp.) 
found in Cuba Hispaniolan barn owl (Tyto ostologa) found in Hispaniola Bahaman barn owl (Tyto pollens) found in Little Exuma, New Providence, and maybe Andros Island, the Bahamas - may have survived into the 16th century Barbuda barn owl (Tyto neddi) found in Barbuda and possibly Antigua Maltese barn owl (Tyto melitensis) found in Malta - formerly in Strix; possibly a paleosubspecies of Tyto alba Former species A number of owl fossils were at one time assigned to the present genus, but are nowadays placed elsewhere. While there are clear differences in osteology between typical owls and barn owls, there has been parallel evolution to some degree, and thus isolated fossil bones cannot necessarily be assigned to either family without thorough study. Notably, the genus Strix has been misapplied by many early scientists as a "wastebasket taxon" for many owls, including Tyto. Tyto antiqua (Late Eocene/Early Oligocene of Quercy? - Early Miocene of France) was a barn owl of the prehistoric genus Prosybris; this taxon might be a nomen nudum; as the species was originally described in Strix, this requires confirmation Tyto edwardsi (Late Miocene of Grive-Saint-Alban, France) was a strigid owl, but has not yet been reliably identified to a genus; it might belong in Strix or the European Ninox-like group. Tyto ignota (Middle Miocene of Sansan, France) was a strigid owl of unclear affinities; while it might belong in Strix, this requires confirmation "TMT 164", a distal left tarsometatarsus of a supposed Tyto from the Middle Miocene of Grive-Saint-Alban (France), might also belong in Prosybris, as it is similar to Tyto antiqua Description They are darker on the back than on the front, usually an orange-brown colour, the front being a paler version of the back or mottled, although there is considerable variation even amongst species. Tyto owls have a divided, heart-shaped facial disc, and lack the ear-like tufts of feathers found in many other owls. Tyto owls tend to be larger than bay owls. The name tyto (τυτώ) is onomatopoeic Greek for owl.
Biology and health sciences
Strigiformes
null
3034286
https://en.wikipedia.org/wiki/Electronic%20lock
Electronic lock
An electronic lock (or electric lock) is a locking device which operates by means of electric current. Electric locks are sometimes stand-alone, with an electronic control assembly mounted directly to the lock. Electric locks may be connected to an access control system, the advantages of which include: key control, where keys can be added and removed without re-keying the lock cylinder; fine access control, where time and place are factors; and transaction logging, where activity is recorded. Electronic locks can also be remotely monitored and controlled, both to lock and to unlock. Operation Electric locks use magnets, solenoids, or motors to actuate the lock by either supplying or removing power. Operating the lock can be as simple as using a switch, for example an apartment intercom door release, or as complex as a biometric-based access control system. There are two basic types of locks: the "preventing mechanism" type and the "operation mechanism" type. Types Electromagnetic lock The most basic type of electronic lock is a magnetic lock (informally called a "mag lock"). A large electromagnet is mounted on the door frame and a corresponding armature is mounted on the door. When the magnet is powered and the door is closed, the armature is held fast to the magnet. Mag locks are simple to install and are very attack-resistant. One drawback is that improperly installed or maintained mag locks can fall on people, and another is that one must unlock the mag lock to both enter and leave. This has caused fire marshals to impose strict rules on the use of mag locks and on access control practice in general. Additionally, NFPA 101 (Standard for Life Safety and Security), as well as the ADA (Americans with Disabilities Act), require "no prior knowledge" and "one simple movement" to allow "free egress". This means that in an emergency, a person must be able to move to a door and immediately exit with one motion (with no push buttons to press, no need for another person to unlock the door, no sign to read, and no "special knowledge" required). Other problems include a lag time (delay), because the collapsing magnetic field holding the door shut does not release instantaneously. This lag time can cause a user to collide with the still-locked door. Finally, mag locks fail unlocked; in other words, if electrical power is removed, they unlock. This could be a problem where security is a primary concern. Additionally, power outages could affect mag locks installed on fire listed doors, which are required to remain latched at all times except when personnel are passing through. Most mag lock designs would not meet current fire codes as the primary means of securing a fire listed door to a frame. Because of this, many commercial doors (this typically does not apply to private residences) are moving over to stand-alone locks, or to electric locks installed under a Certified Personnel Program. The first mechanical recodable card lock was invented in 1976 by Tor Sørnes, who had worked for VingCard since the 1950s. The first card lock order was shipped in 1979 to the Westin Peachtree Plaza Hotel, Atlanta, US. This product triggered the evolution of electronic locks for the hospitality industry. Electronic strikes Electric strikes (also called electric latch release) replace a standard strike mounted on the door frame and receive the latch and latch bolt. Electric strikes can be simplest to install when they are designed for one-for-one drop-in replacement of a standard strike, but some electric strike designs require that the door frame be heavily modified. 
Installation of a strike into a fire listed door (for open-backed strikes on pairs of doors) or into the frame must be done under listing agency authority if any modifications to the frame are required (mostly for commercial doors and frames). In the US, since there is no current Certified Personnel Program to allow field installation of electric strikes into fire listed door openings, listing agency field evaluations would most likely require the door and frame to be de-listed and replaced. Electric strikes can allow mechanical free egress: a departing person operates the lockset in the door, not the electric strike in the door frame. Electric strikes can also be either "fail unlocked" (except in fire listed doors, as they must remain latched when power is not present), or the more secure "fail locked" design. Electric strikes are easier to attack than a mag lock. It is simple to lever the door open at the strike, as often there is an increased gap between the strike and the door latch. Latch guard plates are often used to cover this gap. Electronic deadbolts and latches Electric mortise and cylindrical locks are drop-in replacements for door-mounted mechanical locks. An additional hole must be drilled in the door for electric power wires. Also, a power transfer hinge is often used to get the power from the door frame to the door. Electric mortise and cylindrical locks allow mechanical free egress, and can be either fail unlocked or fail locked. In the US, UL-rated doors must retain their rating: in new construction, doors are cored and then rated, but in retrofits, the doors must be re-rated. Electrified exit hardware, sometimes called "panic hardware" or "crash bars", is used in fire exit applications. A person wishing to exit pushes against the bar to open the door, making it the easiest of the mechanical free egress methods. Electrified exit hardware can be either fail unlocked or fail locked. A drawback of electrified exit hardware is its complexity, which requires skill to install and maintenance to ensure proper function. Only hardware labeled "Fire Exit Hardware" can be installed on fire listed doors and frames, and it must meet both panic exit listing standards and fire listing standards. Motor-operated locks are used throughout Europe. A European motor-operated lock has two modes: a day mode, in which only the latch is electrically operated, and a night mode, in which the more secure deadbolt is electrically operated. In South Korea, most homes and apartments have electronic locks installed, and these are currently replacing the lock systems in older homes. South Korea mainly uses a lock system by Gateman. Passive electronic lock The "passive" in passive electronic locks means that the lock itself has no power supply. Like electronic deadbolts, they are drop-in replacements for mechanical locks, but the difference is that passive electronic locks do not require wiring and are easy to install. A passive electronic lock integrates a miniature single-chip microcomputer. There is no mechanical keyway; only three metal contacts are retained. To unlock, the electronic key is inserted into the keyhole of the passive electronic lock so that the three contacts on the head end of the key touch the three contacts on the lock. The key then supplies power to the passive electronic lock and at the same time reads the lock's ID number for verification. When verification passes, the key powers the coil in the passive electronic lock. 
The coil generates a magnetic field and drives the magnet in the passive electronic lock to unlock. At that moment, turning the key drives the mechanical structure in the passive electronic lock and unlocks the lock body. After successful unlocking, the key records the ID number of the passive electronic lock and also records the time of unlocking. Passive electronic locks can only be unlocked by a key with unlocking authority; unlocking fails if the key has no such authority. Passive electronic locks are currently used in a number of specialized fields, such as power utilities, water utilities, public safety, transportation, and data centers. Programmable lock A programmable electronic lock system is realized by programmable keys, electronic locks, and software. When the identification code of the key matches the identification code of the lock, the lock can be operated to unlock. The lock contains a cylinder with a contact (lock slot) that touches the key, together with an electronic control device that stores and verifies the received identification code and responds by unlocking or refusing. The key contains a power supply, usually a rechargeable or replaceable battery, that drives the system; it also includes an electronic storage and control device that holds the identification codes of the locks. The software is used to set and modify the data of each key and lock. Using this type of key-and-lock control system does not require users to change their habits. In addition, compared with a purely mechanical system, its advantage is that a single key can open multiple locks rather than requiring a separate key for each lock: a single key can hold many lock identification codes, and unlock permissions can be set for each user. Authentication methods A feature of electronic locks is that they can be deactivated or opened by authentication, without the use of a traditional physical key: Numerical codes, passwords, and passphrases Perhaps the most common form of electronic lock uses a keypad to enter a numerical code or password for authentication. Some feature an audible response to each press. Combinations are usually between four and six digits long. Security tokens Another means of authenticating users is to require them to scan or "swipe" a security token such as a smart card or similar, or to present a token to the lock in some other way. For example, some locks can access stored credentials on a personal digital assistant (PDA) or smartphone, using infrared, Bluetooth, or NFC data transfer methods. Biometrics As biometrics become more and more prominent as a recognized means of positive identification, their use in security systems increases. Some electronic locks take advantage of technologies such as fingerprint scanning, retinal scanning, iris scanning and voice print identification to authenticate users. RFID Radio-frequency identification (RFID) is the use of an object (typically referred to as an "RFID tag") applied to or incorporated into a product, animal, or person for the purpose of identification and tracking using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader. This technology is also used in some modern electronic locks.
The technology has been around since before the 1970s but has become much more prevalent in recent years due to its use in areas such as global supply chain management and pet microchipping.
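Whatever the credential type (keypad code, token, biometric template, or RFID tag), the core of the authentication step described above reduces to comparing a presented credential against a stored set and logging the result. Below is a minimal Python sketch of that logic, with hypothetical class and method names rather than any vendor's implementation:

```python
import hmac
from datetime import datetime, timezone

class ElectronicLock:
    """Toy model of an electronic lock: stored credentials, a comparison,
    and a transaction log, as described in the sections above."""

    def __init__(self, authorized_codes):
        self.authorized_codes = set(authorized_codes)
        self.log = []  # transaction logging: (timestamp, code, granted)

    def present_credential(self, code: str) -> bool:
        # Constant-time comparison to avoid leaking code contents via timing.
        granted = any(hmac.compare_digest(code, good) for good in self.authorized_codes)
        self.log.append((datetime.now(timezone.utc), code, granted))
        return granted  # True -> release the strike or drop power to a mag lock

lock = ElectronicLock(authorized_codes={"4821", "907734"})
print(lock.present_credential("4821"))  # True: unlock
print(lock.present_credential("0000"))  # False: stay locked
```

A real access control system would typically also apply per-credential time and place restrictions (the "fine access control" mentioned earlier) before granting access.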
Technology
Mechanisms
null
3036155
https://en.wikipedia.org/wiki/Parmeliaceae
Parmeliaceae
The Parmeliaceae is a large and diverse family of Lecanoromycetes. With over 2700 species in 71 genera, it is the largest family of lichen-forming fungi. The most speciose genera in the family are the well-known groups: Xanthoparmelia (822 species), Usnea (355 species), Parmotrema (255 species), and Hypotrachyna (262 species). Nearly all members of the family have a symbiotic association with a green alga (most often Trebouxia spp., but Asterochloris spp. are known to associate with some species). The majority of Parmeliaceae species have a foliose, fruticose, or subfruticose growth form. The morphological diversity and complexity exhibited by this group is enormous, and many specimens are exceedingly difficult to identify down to the species level. The family has a cosmopolitan distribution, and is present in a wide range of habitats and climatic regions. This includes everywhere from roadside pavement to alpine rocks, from tropical rainforest trees to subshrubs in the Arctic tundra. Members of the Parmeliaceae are found in most terrestrial environments. Several Parmeliaceae species have been assessed for the global IUCN Red List. Taxonomy Based on several molecular phylogenetic studies, the Parmeliaceae as currently circumscribed has been shown to be a monophyletic group. This circumscription is inclusive of the previously described families Alectoriaceae, Anziaceae, Hypogymniaceae, and Usneaceae, which are all no longer recognised by most lichen systematists. However, despite the family being one of the most thoroughly studied groups of lichens, several relationships within the family still remain unclear. Phylogenetic analysis supports the existence of seven distinct clades in the family. The Parmelioid clade is the largest, containing 27 genera and about 1850 species – about two-thirds of the species in the family. Alectorioid clade (5 genera) Cetrarioid clade (17 genera) Hypogymnioid clade (4 genera) Letharioid clade (2 genera) Parmelioid clade (27 genera) Psiloparmelioid clade (2 genera) Usneoid clade (1 genus) Many Parmeliaceae genera do not group phylogenetically into any of these clades, and these, along with genera that have not yet had their DNA studied, are classed as "genera with uncertain affinities". The Parmeliaceae has been divided into two subfamilies, Protoparmelioideae and Parmelioideae. The diversification of various Parmelioideae lineages may have been a result of gaining innovations that provided adaptive advantages, such as melanin production in the genus Melanohalea. Diversification of the Protoparmelioideae occurred during the Miocene. The Parmelioid clade is the largest in the Parmeliaceae, with more than 1800 species and a centre of distribution in the Southern Hemisphere. Generic classification The classification history of Parmeliaceae reflects evolving approaches to fungal taxonomy over two centuries. When Erik Acharius first described Parmelia in 1803, it encompassed a broad range of foliose lichens with rounded apothecia. By the mid-1800s, researchers began segregating genera based on ascospore characteristics, leading to the recognition of distinct groups like Physcia and Xanthoria. The most dramatic period of generic splitting occurred in the 1970s and 1980s, when Mason Hale and others proposed numerous new genera based primarily on morphological features such as shapes, rhizine types, and cortical chemistry. 
The advent of molecular phylogenetics techniques in the late 1990s provided new tools for evaluating which morphological and chemical characters were most reliable for defining genera. These studies led to significant refinements in generic concepts, supporting some previously proposed splits while showing others to be artificial. For example, molecular data revealed that the brown-fruited genus Neofuscelia needed to be merged into Xanthoparmelia, while confirming that groups like Parmotrema and Cetrelia represented distinct evolutionary lineages. Current understanding of generic relationships in Parmeliaceae emphasises the importance of reproductive characters over vegetative features. Characters of the ascomata (especially anatomy and characteristics), conidial types, and cell wall polysaccharides have proven particularly valuable for defining natural groups. In contrast, some previously emphasised features such as thallus growth form and the presence of specific cortical substances have been shown to be more variable within lineages than previously thought. Modern molecular studies have established that approximately 75% of Parmeliaceae species belong to well-defined major clades, including groups like Xanthoparmelia, Parmotrema, and their close relatives. The relationships among the remaining genera continue to be refined through ongoing research. Rather than being defined by single diagnostic features, most genera are now recognised as monophyletic groups characterised by unique combinations of multiple morphological, chemical, and anatomical traits. Evolutionary history Although fossil records of extant lichen species are scarce, the existence of some amber inclusions has allowed for a rough estimate of the divergence of the Parmeliaceae from its most recent common ancestor. An Anzia inclusion from 35–40 Myr-old Baltic amber and Parmelia from 15–45 Myr-old Dominican amber suggest a minimum age estimate for the Parmeliaceae of about 40 Myr. A fossil-calibrated phylogeny has estimated the Parmeliaceae to have diversified much earlier, around the Cretaceous–Paleogene boundary, 58–74 Myr ago. Characteristics Thallus Parmeliaceae thalli are most often foliose, fruticose or subfruticose, but can be umblicate, peltate, caespitose, crustose, or subcrustose. Two genera, Nesolechia and Raesaenenia, contain lichenicolous fungi. They can be a variety of colours, from whitish to grey, green to yellow, or brown to blackish (or any combination therein). Many genera are lobe forming, and nearly all are heteromerous (which are corticate on both sides). Species are usually rhizinate on the lower surface, occasionally with holdfasts, rhizohyphae, or a hypothallus. Only a few genera have a naked lower surface (for example Usnea, Hypogymnia and Menegazzia). The upper surface has a pored or non-pored epicortex. Medulla is solid, but often loosely woven. Apothecia Apothecia are lecanorine, produced along the lamina or margin, and sessile to pedicellate (or less often sunken). Thalline exciple is concolorous with the thallus. Asci are amyloid, and the vast majority of species have eight spores per ascus, though a few species are many-spored, and several Menegazzia species have two spores per ascus. Spores Ascospores are simple, hyaline, and often small. Conidia generally arise laterally from the joints of conidiogenous hyphae (Parmelia-type), but arise terminally from these joints in a small number of species (Psora-type). 
The conidia can have a broad range of shapes: cylindrical to bacilliform, bifusiform, fusiform, sublageniform, unciform, filiform, or curved. Pycnidia are immersed or rarely emergent from the upper cortex, are produced along the lamina or margins, pyriform in shape, and dark-brown to black in colour. Chemistry Members of the Parmeliaceae exhibit a diverse chemistry, with several types of lichenan (Xanthoparmelia-type, Cetraria-type, intermediate-type), isolichenan and/or other polysaccharides being known from the cell walls of many species. The wide diversity in the types of chemical compounds includes depsides, depsidones, aliphatic acids, triterpenes, anthraquinones, secalonic acids, pulvinic acid derivatives, and xanthones. The compounds usnic acid and atranorin, which are found exclusively in the Parmeliaceae, are of great importance in the systematics of the family, and the presence or absence of these chemicals have been used in several instances to help define genera. Parmelia and Usnea are the best chemically characterized genera, while the species Cetraria islandica and Evernia prunastri have attracted considerable research attention for their bioactive compounds. A study of three parmelioid lichens (Bulbothrix setschwanensis, Hypotrachyna cirrhata, and Parmotrema reticulatum) collected from high-altitude areas of Garhwal Himalaya, showed considerable variation in the chemical content with the rising altitude. This suggests that there is a prominent role for secondary metabolites in the wider ecological distribution of Parmelioid lichens at higher altitudes. Photobiont The main photobiont genus that associates with Parmeliaceae species is the chlorophyte Trebouxia. In particular, the species Trebouxia jamesii appears to be especially prominent. Some Parmeliaceae genera are also known to associate with Asterochloris, but the frequency of this association is not yet known. In general, photobiont diversity within the Parmeliaceae is a little studied subject, and much is left to discover here. Genera These are the genera that are in the Parmeliaceae (including estimated number of species in each genus). Following the genus name is the taxonomic authority (those who first circumscribed the genus; standardised author abbreviations are used), year of publication, and the estimated number of species. Ahtiana – 1 sp. Alectoria – 9 spp. Allantoparmelia – 3 spp. Allocetraria – 12 spp. Anzia – 34 spp. Arctocetraria Arctoparmelia – 5 spp. Asahinea – 2 spp. Austromelanelixia – 5 spp. Austroparmelina – 13 spp. Brodoa – 3 spp. Bryocaulon – 4 spp. Bryoria Bulborrhizina – 1 sp. Bulbothrix – 62 Canoparmelia – 35 spp. Cetraria – 35 spp. Cetrariella – 3 spp. Cetrariopsis – 3 spp. Cetrelia – 19 spp. Coelopogon – 2 spp. Cladocetraria – 1 sp. Cornicularia – 1 sp. Crespoa – 5 spp. Dactylina – 2 spp. Davidgallowaya – 1 sp. Dolichousnea – 3 spp. Emodomelanelia – 1 sp. Esslingeriana – 1 sp. Evernia – 10 spp. Everniopsis – 1 sp. Flavocetraria – 1 spp. Flavocetrariella Flavoparmelia – 32 spp. Flavopunctelia – 5 spp. Gowardia – 3 spp. Himantormia – 2 spp. Hypogymnia – 90 spp. Hypotrachyna – 262 spp. Imshaugia – 1 sp. Kaernefeltia – 3 spp. Letharia – 9 spp. Lethariella – 11 spp. Maronina – 3 spp. Masonhalea – 2 spp. Melanelia – 2 spp. Melanelixia – 11 spp. Melanohalea – 22 spp. Menegazzia – 70 spp. Montanelia – 5 spp. Myelochroa – 30 spp. Neoprotoparmelia – 14 spp. Nephromopsis – 62 spp.? Nesolechia – 2 spp. Nipponoparmelia – 4 spp. Nodobryoria – 3 spp. Notoparmelia – 16 spp. Omphalodium – 4 spp. 
Omphalora – 1 sp. Oropogon – 42 spp. Pannoparmelia – 5 spp. Parmelia – 43 spp. Parmelina – 10 spp. Parmelinella – 8 spp. Parmeliopsis – 3 spp. Parmotrema – 255 spp. Parmotremopsis – 2 spp Phacopsis – 10 spp. Platismatia – 11 spp. Pleurosticta – 2 spp. Protoparmelia – 11 spp. Protousnea – 8 spp. Pseudephebe – 2 spp. Pseudevernia – 4 spp. Pseudoparmelia – 15 spp. Psiloparmelia – 13 spp. Punctelia – 48 spp. Relicina – 59 spp. Remototrachyna – 19 spp. Raesaenenia – 1 sp. Sulcaria – 5 spp. Tuckermanella – 7 spp. Tuckermannopsis – 12 spp. Tuckneraria – 3 spp. Usnea – 355 spp. Usnocetraria – 2 spp. Vulpicida – 6 spp. Xanthoparmelia – 822 spp. A genus Foveolaria was proposed in 2023 to contain the species historically known as Cetraria nivalis and transferred to several genera (including Allocetraria, Flavocetraria, and Nephromopsis), but this naming proposal was not valid, as the name has already been used for a plant genus; its current taxonomic status is unclear. Conservation Parmeliaceae species that have been assessed for the global IUCN Red List include the following: Anzia centrifuga (vulnerable, 2014); Sulcaria badia (endangered, 2019); Lethariella togashii (vulnerable, 2017); Hypotrachyna virginica (critically endangered, 2020); Sulcaria isidiifera (critically endangered, 2017); Sulcaria spiralifera (endangered, 2020); and Xanthoparmelia beccae (vulnerable, 2017). Image gallery
Biology and health sciences
Lichens
Plants
164217
https://en.wikipedia.org/wiki/Tobacco%20mosaic%20virus
Tobacco mosaic virus
Tobacco mosaic virus (TMV) is a positive-sense single-stranded RNA virus species in the genus Tobamovirus that infects a wide range of plants, especially tobacco and other members of the family Solanaceae. The infection causes characteristic patterns, such as "mosaic"-like mottling and discoloration on the leaves (hence the name). TMV was the first virus to be discovered. Although it was known from the late 19th century that a non-bacterial infectious disease was damaging tobacco crops, it was not until 1930 that the infectious agent was determined to be a virus; it is thus the first pathogen identified as a virus. The virus was crystallised by Wendell Meredith Stanley. It is comparable in length and diameter to PG5, the largest known synthetic molecule. History In 1886, Adolf Mayer first described the tobacco mosaic disease that could be transferred between plants, similar to bacterial infections. In 1892, Dmitri Ivanovsky gave the first concrete evidence for the existence of a non-bacterial infectious agent, showing that infected sap remained infectious even after filtering through the finest Chamberland filters. Later, in 1903, Ivanovsky published a paper describing abnormal crystal intracellular inclusions in the host cells of the affected tobacco plants and argued for a connection between these inclusions and the infectious agent. However, Ivanovsky remained convinced, despite repeated failures to produce evidence, that the causal agent was an unculturable bacterium, too small to be retained on the employed Chamberland filters and to be detected in the light microscope. In 1898, Martinus Beijerinck independently replicated Ivanovsky's filtration experiments and then showed that the infectious agent was able to reproduce and multiply in the host cells of the tobacco plant. Beijerinck adopted the term "virus" to indicate that the causal agent of tobacco mosaic disease was of non-bacterial nature. Tobacco mosaic virus was the first virus to be crystallized; it exhibits liquid crystal phases above a critical density. Crystallization was achieved by Wendell Meredith Stanley in 1935, who also showed that TMV remains active even after crystallization. For his work, he was awarded 1/4 of the Nobel Prize in Chemistry in 1946, even though it was later shown that some of his conclusions (in particular, that the crystals were pure protein, and assembled by autocatalysis) were incorrect. The first electron microscope images of TMV were made in 1939 by Gustav Kausche, Edgar Pfankuch and Helmut Ruska – the brother of Nobel Prize winner Ernst Ruska. In 1955, Heinz Fraenkel-Conrat and Robley Williams showed that purified TMV RNA and its capsid (coat) protein assemble by themselves into functional viruses, indicating that this is the most stable structure (the one with the lowest free energy). The crystallographer Rosalind Franklin worked for Stanley for about a month at Berkeley, and later designed and built a model of TMV for the 1958 World's Fair at Brussels. In 1958, she speculated that the virus was hollow, not solid, and hypothesized that the RNA of TMV is single-stranded. This conjecture was proven correct after her death; the RNA is now known to be the + strand. The investigations of tobacco mosaic disease and the subsequent discovery of its viral nature were instrumental in the establishment of the general concepts of virology. Structure Tobacco mosaic virus has a rod-like appearance.
Its capsid is made from 2130 molecules of coat protein and one molecule of genomic single-stranded RNA about 6,400 bases long. The coat protein self-assembles into the rod-like helical structure (16.3 proteins per helix turn) around the RNA, which forms a hairpin loop structure. This structural organization gives the virus its stability. The protein monomer consists of 158 amino acids which are assembled into four main alpha-helices, which are joined by a prominent loop proximal to the axis of the virion. Virions are ~300 nm in length and ~18 nm in diameter. Negatively stained electron microphotographs show a distinct inner channel of radius ~2 nm. The RNA is located at a radius of ~4 nm and is protected from the action of cellular enzymes by the coat protein. The X-ray fiber diffraction structure of the intact virus has been studied from an electron density map at 3.6 Å resolution. Inside the capsid helix, near the core, is the coiled RNA molecule, which is made up of 6,395 ±10 nucleotides. The structure of the virus plays an important role in the recognition of the viral RNA: the formation of an obligatory protein intermediate allows the virus to recognize a specific RNA hairpin structure, and this intermediate induces the nucleation of TMV self-assembly by binding with the hairpin structure. Genome The TMV genome consists of a 6.3–6.5 kb single-stranded (ss) RNA. The 3'-terminus has a tRNA-like structure, and the 5'-terminus has a methylated nucleotide cap (m7G5'pppG). The genome encodes four open reading frames (ORFs), two of which produce a single protein due to ribosomal readthrough of a leaky UAG stop codon. The four genes encode a replicase (with methyltransferase [MT] and RNA helicase [Hel] domains), an RNA-dependent RNA polymerase, a so-called movement protein (MP) and a capsid protein (CP). The coding sequence starts with the first reading frame, which is 69 nucleotides away from the 5' end of the RNA. The noncoding region at the 5' end can vary between individual virions, but no variation has been found between virions in the noncoding region at the 3' end. Physicochemical properties TMV is a thermostable virus. On a dried leaf, it can withstand up to 50 °C (122 °F) for 30 minutes. TMV has an index of refraction of about 1.57.
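These structural and genomic figures are mutually consistent, as a quick arithmetic check shows. The sketch below uses only the numbers quoted above; the resulting values of roughly three nucleotides per coat-protein subunit and a helical pitch of about 2.3 nm are derived here for illustration rather than quoted from the source.

```python
# Consistency check of the TMV structural parameters quoted above.
coat_proteins = 2130        # coat-protein subunits per virion
nucleotides = 6395          # genomic RNA length in nucleotides (±10)
proteins_per_turn = 16.3    # subunits per helix turn
virion_length_nm = 300.0    # approximate rod length in nm

nt_per_protein = nucleotides / coat_proteins      # ≈ 3.0 nucleotides bound per subunit
helix_turns = coat_proteins / proteins_per_turn   # ≈ 130.7 turns along the rod
pitch_nm = virion_length_nm / helix_turns         # ≈ 2.3 nm rise per helix turn

print(f"nucleotides per coat protein: {nt_per_protein:.2f}")
print(f"helix turns per virion:       {helix_turns:.1f}")
print(f"helical pitch:                {pitch_nm:.2f} nm")
```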
Disease cycle TMV does not have a distinct overwintering structure. Rather, it will over-winter in infected tobacco stalks and leaves in the soil, on the surface of contaminated seed (TMV can even survive in contaminated tobacco products for many years, so smokers can accidentally transmit it by touch, although not in the smoke itself). With direct contact with host plants through its vectors (normally insects such as aphids and leafhoppers), TMV will go through the infection process and then the replication process. Infection and transmission After its multiplication, it enters the neighboring cells through plasmodesmata. The infection does not spread through contact with insects, but instead spreads by direct contact to the neighboring cells. For its smooth entry, TMV produces a 30 kDa movement protein called P30 which enlarges the plasmodesmata. TMV most likely moves from cell to cell as a complex of the RNA, P30, and replicase proteins. It can also spread through the phloem for longer-distance movement within the plant. Moreover, TMV can be transmitted from one plant to another by direct contact. Although TMV does not have defined transmission vectors, the virus can be easily transmitted from infected hosts to healthy plants by human handling. Replication Following entry into its host via mechanical inoculation, TMV uncoats itself to release its viral [+]RNA strand. As uncoating occurs, the MetHel:Pol gene is translated to make the capping enzyme MetHel and the RNA polymerase. Then the viral genome will further replicate to produce multiple mRNAs via a [-]RNA intermediate primed by the tRNAHis at the [+]RNA 3' end. The resulting mRNAs encode several proteins, including the coat protein and an RNA-dependent RNA polymerase (RdRp), as well as the movement protein. Thus TMV can replicate its own genome. After the coat protein and RNA genome of TMV have been synthesized, they spontaneously assemble into complete TMV virions in a highly organized process. The protomers come together to form disks or 'lockwashers' composed of two layers of protomers arranged in a helix. The helical capsid grows by the addition of protomers to the end of the rod. As the rod lengthens, the RNA passes through a channel in its center and forms a loop at the growing end. In this way the RNA can easily fit as a spiral into the interior of the helical capsid. Host and symptoms Like other plant pathogenic viruses, TMV has a very wide host range and has different effects depending on the host being infected. Tobacco mosaic virus has been known to cause a production loss of up to two percent for flue-cured tobacco in North Carolina. It is known to infect members of nine plant families, and at least 125 individual species, including tobacco, tomato, pepper (all members of the Solanaceae), cucumbers, a number of ornamental flowers, and beans including Phaseolus vulgaris and Vigna unguiculata. There are many different strains. The first symptom of this virus disease is a light green coloration between the veins of young leaves. This is followed quickly by the development of a "mosaic" or mottled pattern of light and dark green areas in the leaves. Rugosity may also be seen, where the infected plant leaves display small localized random wrinkles. These symptoms develop quickly and are more pronounced on younger leaves. Infection does not result in plant death, but if infection occurs early in the season, plants are stunted. Lower leaves are subject to "mosaic burn", especially during periods of hot and dry weather; in these cases, large dead areas develop in the leaves, which constitutes one of the most destructive phases of Tobacco mosaic virus infection. Infected leaves may be crinkled, puckered, or elongated. However, if TMV infects crops like grape and apple, it is almost symptomless. TMV is also able to infect and complete its replication cycle in plant-pathogenic fungi: it can enter and replicate in cells of C. acutatum, C. clavatum, and C. theobromicola, which may not be exceptional hosts, although such infection has neither been found nor, probably, been searched for in nature. Environment TMV is one of the most stable viruses and has a wide survival range. As long as the surrounding temperature remains below approximately 40 degrees Celsius, TMV can sustain its stable form; all it needs is a host to infect. Greenhouses and botanical gardens provide the most favorable conditions for TMV to spread, due to the high population density of possible hosts and the constant temperature throughout the year. TMV can also be maintained in vitro in sap, in which it can survive for up to 3,000 days.
Treatment and management One of the common control methods for TMV is sanitation, which includes removing infected plants and washing hands between plantings. Crop rotation should also be employed to avoid infected soil or seed beds for at least two years. As with any plant disease, the use of cultivars resistant to TMV is also advised. Furthermore, the cross protection method can be used, in which infection by a severe strain of TMV is inhibited by first infecting the host plant with a mild strain, similar in effect to a vaccine. In the past ten years, genetic engineering of host plant genomes has been developed to allow host plants to produce the TMV coat protein within their cells. It was hypothesized that the TMV genome would be re-coated rapidly upon entering the host cell, thereby preventing the initiation of TMV replication. Later it was found that the mechanism protecting the host from viral genome insertion operates through gene silencing. TMV is inhibited by a product of the myxomycete slime mold Physarum polycephalum. Both tobacco and the beans P. vulgaris and V. sinensis suffered almost no lesioning in vitro from TMV when treated with a P. polycephalum extract. Research has shown that Bacillus spp. can be used to reduce the severity of symptoms from TMV in tobacco plants. In the study, treated tobacco plants showed more growth and less build-up of TMV virions than untreated plants. Research by H. Fraenkel-Conrat examined the influence of acetic acid on Tobacco mosaic virus; treatment with 67% acetic acid resulted in degradation of the virus. Another possible source of prevention for TMV is the use of salicylic acid. A study completed by a research team at the University of Cambridge found that treating plants with salicylic acid reduced the amount of TMV viral RNAs and viral coat protein present in the tobacco plants. Their research showed that salicylic acid most likely disrupts replication and transcription, and more specifically the RdRp complex. Research has also revealed that humans have antibodies against Tobacco mosaic virus. Scientific and environmental impact The large amount of literature about TMV and its choice for many pioneering investigations in structural biology (including X-ray diffraction and X-ray crystallography), virus assembly and disassembly, and so on, are fundamentally due to the large quantities that can be obtained, plus the fact that it does not infect animals. After growing several hundred infected tobacco plants in a greenhouse, followed by a few simple laboratory procedures, a scientist can produce several grams of the virus. In fact, Tobacco mosaic virus is so abundant in infected tissue that the inclusion bodies can be seen with only a light microscope. James D. Watson, in his memoir The Double Helix, cites his X-ray investigation of TMV's helical structure as an important step in deducing the nature of the DNA molecule. Applications Plant viruses can be used to engineer viral vectors, tools commonly used by molecular biologists to deliver genetic material into plant cells; they are also sources of biomaterials and nanotechnology devices. Viral vectors based on TMV include those of the magnICON and TRBO plant expression technologies. Due to its cylindrical shape, high aspect ratio, self-assembling nature, and ability to incorporate metal coatings (nickel and cobalt) into its shell, TMV is an ideal candidate to be incorporated into battery electrodes.
Addition of TMV to a battery electrode increases the reactive surface area by an order of magnitude, resulting in an increase in the battery's capacity of up to six times compared with a planar electrode geometry. TMV-based vectors have also been used in the phytopathogenic fungus Colletotrichum acutatum: in a recombinant TMV vector, the ORF of the gene encoding the green fluorescent protein (GFP) was transcribed in fungal cells from a duplicate of the TMV coat protein (CP) subgenomic mRNA promoter, enabling C. acutatum to transiently express exogenous GFP for up to six subcultures and for at least two months after infection, without the need to develop transformation technology. This demonstrated that the approach can be used to obtain foreign protein expression in fungi, and RNAi can likewise be expressed in C. acutatum by virus-induced gene silencing (VIGS) using such a vector.
Biology and health sciences
Infectious disease
null
164331
https://en.wikipedia.org/wiki/Cauliflower
Cauliflower
Cauliflower is one of several vegetables cultivated from the species Brassica oleracea in the genus Brassica, which is in the Brassicaceae (or mustard) family. An annual plant that reproduces by seed, the cauliflower head is composed of a (generally) white inflorescence meristem. Cauliflower heads resemble those of broccoli, which differs in having flower buds as the edible portion. Only the head is typically eaten; the edible white flesh is sometimes called "curd". The global cauliflower and broccoli production in 2020 was over 25.5 million tons, worth 14.1 billion US dollars. Description There are four major groups of cauliflower. Italian: This group is diverse in appearance and includes both biennial and annual types. It includes white, Romanesco, and various brown, green, purple, and yellow cultivars. This type is the ancestral form from which the others were derived. Northern European annuals: These are used in Europe and North America for summer and fall harvests. They were developed in Germany in the 18th century and include the old cultivars Erfurt and Snowball. Northwest biennial: Used in Europe for winter and early spring harvest, developed in France in the 19th century and includes the old cultivars Angers and Roscoff. Asian: A tropical cauliflower used in China and India, it was developed in India during the 19th century from the now-abandoned Cornish type and includes the old varieties Early Benaras and Early Patna. Domestication Cauliflowers are an 'arrested inflorescence' subspecies of B. oleracea that arose around 2,500 years ago. Genomic analysis indicates that cauliflower initially evolved from broccoli, with three MADS-box genes playing roles in its curd formation. Nine loci and candidate genes are linked with morphological and biological characters. Varieties There are hundreds of historic and current commercial varieties used around the world. A comprehensive list of about 80 North American varieties is maintained at North Carolina State University. Colors White White cauliflower is the most common color of cauliflower, having a contrasting white head (also called "curd", having a similar appearance to cheese curd), surrounded by green leaves. Orange Orange cauliflower contains beta-carotene as the orange pigment, a provitamin A compound. This orange trait originated from a natural mutant found in a cauliflower field in Canada. Cultivars include 'Cheddar' and 'Orange Bouquet'. Green Green cauliflower in the B. oleracea Botrytis Group is sometimes called broccoflower. It is available in the normal curd (head) shape and with a fractal spiral curd called Romanesco broccoli. Both have been commercially available in the U.S. and Europe since the early 1990s. Green-headed varieties include 'Alverda', 'Green Goddess', and 'Vorda'. Romanesco varieties include 'Minaret' and 'Veronica'. Purple The purple color in this cauliflower is caused by the presence of anthocyanins, water-soluble pigments that are found in many other plants and plant-based products, such as red cabbage and red wine. Varieties include 'Graffiti' and 'Purple Cape'. In Great Britain and southern Italy, a broccoli with tiny flower buds is sold as a vegetable under the name "purple cauliflower"; it is not the same as standard cauliflower with a purple head. Phytochemicals Cauliflower contains several non-nutrient phytochemicals common in the cabbage family that are under preliminary research for their potential properties, including isothiocyanates and glucosinolates.
Boiling reduces the levels of cauliflower glucosinolates, while other cooking methods, such as steaming, microwaving, and stir frying, have no significant effect on glucosinolate levels. Etymology The word "cauliflower" derives from the Italian cavolfiore, meaning "cabbage flower". The ultimate origin of the name is from the Latin words caulis (cabbage) and flōs (flower). Cultivation History Cauliflower is the result of selective breeding and likely arose in the Mediterranean region, possibly from broccoli. Pliny the Elder included cyma among cultivated plants he described in Natural History: "Ex omnibus brassicae generibus suavissima est cyma" ("Of all the varieties of cabbage the most pleasant-tasted is cyma"). Pliny's description likely refers to the flowering heads of an earlier cultivated variety of Brassica oleracea. In the Middle Ages, early forms of cauliflower were associated with the island of Cyprus, with the 12th- and 13th-century Arab botanists Ibn al-'Awwam and Ibn al-Baitar claiming its origin to be Cyprus. This association continued into Western Europe, where cauliflowers were sometimes known as Cyprus colewort, and there was extensive trade in Western Europe in cauliflower seeds from Cyprus, under the French Lusignan rulers of the island, until well into the 16th century. It is thought to have been introduced into Italy from Cyprus or the east coast of the Mediterranean around 1490 and then spread to other European countries in the following centuries. François Pierre La Varenne employed chouxfleurs in Le cuisinier françois. They were introduced to France from Genoa in the 16th century and are featured in Olivier de Serres' Théâtre de l'agriculture (1600), as cauli-fiori "as the Italians call it, which are still rather rare in France; they hold an honorable place in the garden because of their delicacy", but they did not commonly appear on grand tables until the time of Louis XIV. It was introduced to India in 1822 by the British. Production In 2020, global production of cauliflowers (combined for production reports with broccoli) was 25.5 million tonnes, led by China and India which, combined, had 72% of the world total. Secondary producers, having 0.4–1.3 million tonnes annually, were the United States, Spain, Mexico, and Italy. Horticulture Cauliflower is relatively difficult to grow compared to cabbage, with common problems such as an underdeveloped head and poor curd quality. Climate Because the weather is a limiting factor for producing cauliflower, the plant grows best in moderate daytime temperatures , with plentiful sun and moist soil conditions high in organic matter and sandy soils. The earliest maturity possible for cauliflower is 7 to 12 weeks from transplanting. In the northern hemisphere, fall season plantings in July may enable harvesting before autumn frost. Long periods of sun exposure in hot summer weather may cause cauliflower heads to discolor with a red-purple hue. Seeding and transplanting Transplantable cauliflowers can be produced in containers such as flats, hotbeds, or fields. In soil that is loose, well-drained, and fertile, field seedlings are shallow-planted and thinned by ample space – about 12 plants per . Ideal growing temperatures are about when seedlings are 25 to 35 days old. Applications of fertilizer to developing seedlings begin when leaves appear, usually with a starter solution weekly. Transplanting to the field normally begins in late spring and may continue until mid-summer. Row spacing is about . 
Rapid vegetative growth after transplanting may benefit from such procedures as avoiding spring frosts, using starter solutions high in phosphorus, irrigating weekly, and applying fertilizer. Disorders, pests, and diseases The most important disorders affecting cauliflower quality are a hollow stem, stunted head growth or buttoning, ricing, browning, and leaf-tip burn. Among major pests affecting cauliflower are aphids, root maggots, cutworms, moths, and flea beetles. The plant is susceptible to black rot, black leg, club root, black leaf spot, and downy mildew. Harvesting When cauliflower is mature, heads appear clear white, compact, and in diameter, and should be cooled shortly after harvest. Forced air cooling to remove heat from the field during hot weather may be needed for optimal preservation. Short-term storage is possible using cool, high-humidity storage conditions. Pollination Many species of blowflies, including Calliphora vomitoria, are known pollinators of cauliflower. Uses Culinary Cauliflower heads can be roasted, grilled, boiled, fried, steamed, pickled, or eaten raw. When cooking, the outer leaves and thick stalks are typically removed, leaving only the florets (the edible "curd" or "head"). The leaves are also edible but are often discarded. Cauliflower can be used as a low-calorie, gluten-free alternative to rice and flour. Between 2012 and 2016, cauliflower production in the United States increased by 63%, and cauliflower-based product sales increased by 71% between 2017 and 2018. Cauliflower rice is made by pulsing cauliflower florets and cooking the result in oil. Cauliflower pizza crust is made from cauliflower flour and is popular in pizza restaurants. Mashed cauliflower is a low-carbohydrate alternative to mashed potatoes. Nutrition Raw cauliflower is 92% water, 5% carbohydrates, 2% protein, and contains negligible fat (see table). A reference amount of raw cauliflower provides of food energy, and has a high content (20% or more of the Daily Value, DV) of vitamin C (58% DV) and moderate levels of several B vitamins and vitamin K (13–15% DV; table). Contents of dietary minerals are low (7% DV or less). In culture Cauliflower has been noticed by mathematicians for its distinct fractal dimension, calculated to be roughly 2.8. One of the fractal properties of cauliflower is that every branch, or "module", is similar to the entire cauliflower. Another quality, also present in other plant species, is that the angle between "modules", as they become more distant from the center, is 360 degrees divided by the golden ratio. The fancied resemblance of the shape of a boxer's ear to a cauliflower gave rise to the term "cauliflower ear". Photos
Biology and health sciences
Brassicales
null
164402
https://en.wikipedia.org/wiki/Magnetic%20dipole
Magnetic dipole
In electromagnetism, a magnetic dipole is the limit of either a closed loop of electric current or a pair of poles as the size of the source is reduced to zero while keeping the magnetic moment constant. It is a magnetic analogue of the electric dipole, but the analogy is not perfect. In particular, a true magnetic monopole, the magnetic analogue of an electric charge, has never been observed in nature. However, magnetic monopole quasiparticles have been observed as emergent properties of certain condensed matter systems. Moreover, one form of magnetic dipole moment is associated with a fundamental quantum property—the spin of elementary particles. Because magnetic monopoles do not exist, the magnetic field at a large distance from any static magnetic source looks like the field of a dipole with the same dipole moment. For higher-order sources (e.g. quadrupoles) with no dipole moment, their field decays towards zero with distance faster than a dipole field does. External magnetic field produced by a magnetic dipole moment In classical physics, the magnetic field of a dipole is calculated as the limit of either a current loop or a pair of charges as the source shrinks to a point while keeping the magnetic moment m constant. For the current loop, this limit is most easily derived from the vector potential: A(r) = (μ0/4π) (m × r̂)/|r|², where μ0 is the vacuum permeability constant and 4π|r|² is the surface of a sphere of radius |r|. The magnetic flux density (strength of the B-field) is then B(r) = ∇ × A = (μ0/4π) [3r̂(r̂ · m) − m]/|r|³. Alternatively one can obtain the scalar potential first from the magnetic pole limit, ψ(r) = (m · r̂)/(4π|r|²), and hence the magnetic field strength (or strength of the H-field) is H(r) = −∇ψ = [3r̂(r̂ · m) − m]/(4π|r|³). The magnetic field strength is symmetric under rotations about the axis of the magnetic moment. In spherical coordinates, with r = |r|, and with the magnetic moment aligned with the z-axis, the field strength can more simply be expressed as H(r) = (m/4πr³)(2 cos θ r̂ + sin θ θ̂). Internal magnetic field of a dipole The two models for a dipole (current loop and magnetic poles) give the same predictions for the magnetic field far from the source. However, inside the source region they give different predictions. The magnetic field between poles is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction. Clearly, the limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material. If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is B(r) = (μ0/4π) [3r̂(r̂ · m) − m]/|r|³ + (2μ0/3) m δ(r), where δ(r) is the Dirac delta function in three dimensions. Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole. If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole-charge and distance constant, the limiting field is H(r) = (1/4π) [3r̂(r̂ · m) − m]/|r|³ − (1/3) m δ(r). These fields are related by B = μ0(H + M), where M(r) = m δ(r) is the magnetization. Forces between two magnetic dipoles The force exerted by one dipole moment m1 on another dipole moment m2 separated in space by a vector r can be calculated using: F = ∇(m2 · B1(r)), or F = (3μ0/4π|r|⁴)[(r̂ × m1) × m2 + (r̂ × m2) × m1 − 2r̂(m1 · m2) + 5r̂((r̂ × m1) · (r̂ × m2))], where |r| is the distance between dipoles and B1 is the field of the first dipole. The force acting on m1 is in the opposite direction.
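As a quick numerical illustration of the external-field expressions above, the sketch below evaluates the dipole field magnitude at the surface of the Earth, treating the geomagnetic field as a pure dipole; the moment value of about 8×10^22 A·m² is an approximate textbook figure, not taken from this article.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def dipole_field_magnitude(m, r, theta):
    """|B| of a point dipole of moment m (A·m²) at distance r (m) and polar
    angle theta (rad) from the dipole axis:
    |B| = (mu0 * m / (4 * pi * r**3)) * sqrt(1 + 3*cos(theta)**2)."""
    return MU0 * m / (4 * math.pi * r**3) * math.sqrt(1 + 3 * math.cos(theta) ** 2)

# Earth treated as a dipole (approximate values, for illustration only).
m_earth = 8.0e22      # A·m², approximate geomagnetic dipole moment
r_surface = 6.37e6    # m, mean Earth radius

print(dipole_field_magnitude(m_earth, r_surface, math.pi / 2))  # equator: ~3e-5 T (~0.3 gauss)
print(dipole_field_magnitude(m_earth, r_surface, 0.0))          # pole:    ~6e-5 T (~0.6 gauss)
```

The factor-of-two difference between the polar and equatorial values follows directly from the spherical-coordinate expression for the field given above.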
The torque of one dipole on the other can be obtained from the formula τ = m2 × B1(r). Dipolar fields from finite sources The magnetic scalar potential ψ produced by a finite source, but external to it, can be represented by a multipole expansion. Each term in the expansion is associated with a characteristic moment and a potential having a characteristic rate of decrease with distance r from the source. Monopole moments have a 1/r rate of decrease, dipole moments have a 1/r² rate, quadrupole moments have a 1/r³ rate, and so on. The higher the order, the faster the potential drops off. Since the lowest-order term observed in magnetic sources is the dipole term, it dominates at large distances. Therefore, at large distances any magnetic source looks like a dipole of the same magnetic moment.
Physical sciences
Magnetostatics
Physics
164483
https://en.wikipedia.org/wiki/Scattering
Scattering
In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena. Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors. The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory. Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg. Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path. Single and multiple scattering When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. 
The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory. Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions. With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization. Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft. Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately. The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality. Theory Scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future". 
The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object. Attenuation due to scattering When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time I, i.e. that dI/dx = −Q I, where Q is an interaction coefficient and x is the distance traveled in the target. The above ordinary first-order differential equation has solutions of the form: I = I₀ e^(−Q Δx) = I₀ e^(−Δx/λ) = I₀ e^(−η σ Δx) = I₀ e^(−ρ Δx/τ), where I₀ is the initial flux, path length Δx ≡ x − x₀, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = ησ = ρ/τ. In electromagnetic absorption spectroscopy, for example, the interaction coefficient (e.g. Q in cm⁻¹) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10⁻²⁴ cm²), density mean free path (e.g. τ in grams/cm²), and its reciprocal the mass attenuation coefficient (e.g. in cm²/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed instead.
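A minimal numerical sketch of these conversions and of the exponential attenuation law follows; the cross-section, number density, and mass density values are illustrative placeholders rather than data from the text.

```python
import math

# Illustrative values for a hypothetical target material.
sigma = 2.0e-24   # area cross-section per scattering center, cm^2
eta = 5.0e22      # scattering centers per unit volume, cm^-3
rho = 2.5         # target mass density, g/cm^3

Q = eta * sigma   # interaction coefficient, cm^-1  (Q = eta * sigma)
lam = 1.0 / Q     # interaction mean free path, cm  (lambda = 1/Q)
tau = rho / Q     # density mean free path, g/cm^2  (tau = rho/Q)

def transmitted_fraction(dx_cm):
    """Unscattered fraction I/I0 after a path length dx: exp(-Q * dx)."""
    return math.exp(-Q * dx_cm)

print(f"Q = {Q:.3f} cm^-1, lambda = {lam:.1f} cm, tau = {tau:.1f} g/cm^2")
print(f"I/I0 after 5 cm of target: {transmitted_fraction(5.0):.3f}")
```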
One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future". Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together. An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models. Theoretical physics In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles. In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation. Electromagnetics Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering. 
Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone. Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006). Models of light scattering can be divided into three domains based on a dimensionless size parameter α, which is defined as α = πDp/λ, where πDp is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are: α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light); α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres); α ≫ 1: geometric scattering (particle much larger than wavelength of light). Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence. 
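A minimal Python sketch of the size-parameter classification just described follows. The particle diameters and wavelength used are arbitrary illustrations, and the numeric cut-offs (0.1 and 10) are only rough conventional stand-ins for α ≪ 1 and α ≫ 1, not sharp physical boundaries.

```python
import math

def size_parameter(diameter_m: float, wavelength_m: float) -> float:
    """Dimensionless size parameter: alpha = (pi * D_p) / lambda."""
    return math.pi * diameter_m / wavelength_m

def scattering_regime(alpha: float) -> str:
    # Rough conventional cut-offs for "alpha << 1" and "alpha >> 1";
    # the exact boundaries are a matter of convention.
    if alpha < 0.1:
        return "Rayleigh"
    if alpha < 10:
        return "Mie"
    return "geometric"

wavelength = 532e-9  # green light, metres
for name, diameter in [("N2 molecule (~0.3 nm)", 0.3e-9),
                       ("100 nm aerosol", 100e-9),
                       ("10 um cloud droplet", 10e-6)]:
    a = size_parameter(diameter, wavelength)
    print(f"{name}: alpha = {a:.3g} -> {scattering_regime(a)} scattering")
```

Run as written, the sketch places the gas molecule in the Rayleigh regime, the aerosol in the Mie regime, and the cloud droplet in the geometric regime, consistent with the qualitative discussion above.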
For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes. Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar. This shift involves a slight change in energy. At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy. For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer. Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves passing an electric field through a liquid which makes particles move. The bigger the charge is on the particles, the faster they are able to move.
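Returning to the attenuation relation Q = 1/λ = ησ = ρ/τ from the "Attenuation due to scattering" section above, the following minimal Python sketch converts between the interaction coefficient, mean free path, and cross-section, and evaluates the surviving beam fraction. The numerical values (10^22 targets per cm3, each with a 1-barn cross-section) are illustrative assumptions, not figures from this article.

```python
import math

def transmitted_fraction(Q_per_cm: float, path_cm: float) -> float:
    """Surviving fraction of the unscattered beam: I / Io = exp(-Q * x)."""
    return math.exp(-Q_per_cm * path_cm)

# Hypothetical target: 1e22 scattering centers per cm^3, each presenting a
# cross-section of 1 barn (1e-24 cm^2).
eta = 1e22                  # targets per unit volume, cm^-3
sigma = 1e-24               # area cross-section, cm^2
Q = eta * sigma             # interaction coefficient, cm^-1 (Q = eta * sigma)
mean_free_path = 1.0 / Q    # lambda = 1 / Q, cm

print(f"Q = {Q:.3g} cm^-1, mean free path = {mean_free_path:.3g} cm")
print(f"I/Io after one mean free path: {transmitted_fraction(Q, mean_free_path):.3f}")
```

As expected from the exponential solution, the transmitted fraction after one mean free path is 1/e, or about 0.37.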
Physical sciences
Particle physics: General
null
164491
https://en.wikipedia.org/wiki/Cucumber
Cucumber
The cucumber (Cucumis sativus) is a widely cultivated creeping vine plant in the family Cucurbitaceae that bears cylindrical to spherical fruits, which are used as culinary vegetables. Considered an annual plant, the cucumber has three main types—slicing, pickling, and seedless—within which several cultivars have been created. The cucumber originates in Asia, in a region extending from India, Nepal, Bangladesh, China (Yunnan, Guizhou, Guangxi), and Northern Thailand, but now grows on most continents, and many different types of cucumber are grown commercially and traded on the global market. In North America, the term wild cucumber refers to plants in the genera Echinocystis and Marah, though the two are not closely related. Description The cucumber is a creeping vine that roots in the ground and grows up trellises or other supporting frames, wrapping around supports with thin, spiraling tendrils. The plant may also root in a soilless medium, whereby it will sprawl along the ground in lieu of a supporting structure. The vine has large leaves that form a canopy over the fruits. The fruit of typical cultivars of cucumber is roughly cylindrical, but elongated with tapered ends, and may be as large as long and in diameter. Cucumber fruits consist of 95% water (see nutrition table). In botanical terms, the cucumber is classified as a pepo, a type of botanical berry with a hard outer rind and no internal divisions. However, much like tomatoes and squashes, it is often perceived, prepared, and eaten as a vegetable. Flowering and pollination Most cucumber cultivars are seeded and require pollination. For this purpose, thousands of honey beehives are annually carried to cucumber fields just before bloom. Cucumbers may also be pollinated via bumblebees and several other bee species. Most cucumbers that require pollination are self-incompatible, thus requiring the pollen of another plant in order to form seeds and fruit. Some self-compatible cultivars exist that are related to the 'Lemon cucumber' cultivar. A few cultivars of cucumber are parthenocarpic, meaning their blossoms create seedless fruit without pollination; pollination degrades the eating quality of these cultivars. In the United States, these are usually grown in greenhouses, where bees are excluded. In Europe, they are grown outdoors in some regions, where bees are likewise excluded. Traditional cultivars produce male blossoms first, then female, in about equivalent numbers. Newer gynoecious hybrid cultivars produce almost all female blossoms. They may have a pollenizer cultivar interplanted, and the number of beehives per unit area is increased, but temperature changes induce male flowers even on these plants, which may be sufficient for pollination to occur. In 2009, an international team of researchers announced they had sequenced the cucumber genome. A study of genetic recombination during meiosis in cucumber provided a high resolution landscape of meiotic DNA double strand-breaks and genetic crossovers. The average number of crossovers per chromosome per meiosis was 0.92 to 0.99. Herbivore defense Phytochemicals in cucumbers may discourage natural foraging by herbivores, such as insects, nematodes or wildlife. As a possible defense mechanism, cucumbers produce cucurbitacin C, which causes a bitter taste in some cucumber varieties. 
This potential mechanism is under preliminary research to identify whether cucumbers are able to deter herbivores and environmental stresses by using an intrinsic chemical defense, particularly in the leaves, cotyledons, pedicel, carpopodium, and fruit. Nutrition, aroma, and taste Raw cucumber (with peel) is 95% water, 4% carbohydrates, 1% protein, and contains negligible fat. A reference serving provides of food energy. It has a low content of micronutrients: it is notable only for vitamin K, at 14% of the Daily Value (table). Depending on variety, cucumbers may have a mild melon aroma and flavor, in part resulting from unsaturated aldehydes, such as , and the cis- and trans- isomers of 2-nonenal. The slightly bitter taste of cucumber rind results from cucurbitacins. Varieties In general cultivation, cucumbers are classified into three main cultivar groups: slicing, pickled, and seedless/burpless. Culinary uses Fruit Slicing Cucumbers grown to eat fresh are called slicing cucumbers. The main varieties of slicers mature on vines with large leaves that provide shading. Slicers grown commercially for the North American market are generally longer, smoother, more uniform in color, and have much tougher skin. In contrast, those in other countries, often called European cucumbers, are smaller and have thinner, more delicate skin, often with fewer seeds, thus are often sold in plastic skin for protection. This variety may also be called a telegraph cucumber, particularly in Australasia. Pickling Pickling with brine, sugar, vinegar, and spices creates various flavored products from cucumbers and other foods. Although any cucumber can be pickled, commercial pickles are made from cucumbers specially bred for uniformity of length-to-diameter ratio and lack of voids in the flesh. Those cucumbers intended for pickling, called picklers, grow to about long and wide. Compared to slicers, picklers tend to be shorter, thicker, less-regularly shaped, and have bumpy skin with tiny white or black-dotted spines. Color can vary from creamy yellow to pale or dark green. Gherkin Gherkins, also called cornichons, or baby pickles, are small cucumbers, typically those in length, often with bumpy skin, which are typically used for pickling. The word gherkin comes from the early modern Dutch gurken or augurken ('small pickled cucumber'). The term is also used in the name for Cucumis anguria, the West Indian gherkin, a closely related species. Burpless Burpless cucumbers are sweeter and have a thinner skin than other varieties of cucumber. They are reputed to be easy to digest and to have a pleasant taste. They can grow as long as , are nearly seedless, and have a delicate skin. Most commonly grown in greenhouses, these parthenocarpic cucumbers are often found in grocery markets, shrink-wrapped in plastic. They are marketed as either burpless or seedless, as the seeds and skin of other varieties of cucumbers are said to give some people gas. Shoots Cucumber shoots are regularly consumed as a vegetable, especially in rural areas. In Thailand they are often served with a crab meat sauce. They can also be stir fried or used in soups. Production In 2022, world production of cucumbers and gherkins was 95 million tonnes, led by China with 82% of the total. Cultivation history Cultivated for at least 3,000 years, the cultivated cucumbers "Cucumis sativus" were domesticated in India from wild "C. sativus var. hardwickii". where a great many varieties have been observed, along with its closest living relative, Cucumis hystrix. 
Three main cultivar groups of cucumber are namely Eurasian cucumbers (slicing cucumbers eaten raw and immature), East Asian cucumbers (pickling cucumbers) and Xishuangbanna cucumbers. Based on demographic modelling, the East Asian C. sativus cultivars diverged from the Indian cultivars c. 2500 years ago. It was probably introduced to Europe by the Greeks or Romans. Records of cucumber cultivation appear in France in the 9th century, England in the 14th century, and in North America by the mid-16th century. Roman Empire According to Pliny the Elder, the Emperor Tiberius had the cucumber on his table daily during summer and winter. In order to have it available for his table every day of the year, the Romans reportedly used artificial methods of growing (similar to the greenhouse system), whereby mirrorstone refers to Pliny's lapis specularis, believed to have been sheet mica: Reportedly, they were also cultivated in specularia, cucumber houses glazed with oiled cloth. Pliny describes the Italian fruit as very small, probably like a gherkin. He also describes the preparation of a medication known as elaterium. However, some scholars believe that he was instead referring to Ecballium elaterium, known in pre-Linnean times as Cucumis silvestris or Cucumis asininus ('wild cucumber' or 'donkey cucumber'), a species different from the common cucumber. Pliny also writes about several other varieties of cucumber, including the cultivated cucumber, and remedies from the different types (9 from the cultivated; 5 from the "anguine;" and 26 from the "wild"). Middle Ages Charlemagne had cucumbers grown in his gardens in the 8th/9th century. They were reportedly introduced into England in the early 14th century, lost, then reintroduced approximately 250 years later. The Spaniards (through the Italian Christopher Columbus) brought cucumbers to Haiti in 1494. In 1535, Jacques Cartier, a French explorer, found "very great cucumbers" grown on the site of what is now Montreal. Early-modern age Throughout the 16th century, European trappers, traders, bison hunters, and explorers bartered for the products of American Indian agriculture. The tribes of the Great Plains and the Rocky Mountains learned from the Spanish how to grow European crops. The farmers on the Great Plains included the Mandan and Abenaki. They obtained cucumbers and watermelons from the Spanish, and added them to the crops they were already growing, including several varieties of corn and beans, pumpkins, squash, and gourd plants. The Iroquois were also growing them when the first Europeans visited them. In 1630, the Reverend Francis Higginson produced a book called New-Englands Plantation in which, describing a garden on Conant's Island in Boston Harbor known as The Governor's Garden, he states:The countrie aboundeth naturally with store of roots of great and good to eat. Our turnips, parsnips, and carrots are here both bigger and sweeter than is ordinary to be found in England. Here are store of pompions, cowcumbers, and other things of that nature which I know not...In New England Prospect (1633, England), William Wood published observations he made in 1629 in America: Age of Enlightenment and later In the later 17th century, a prejudice developed against uncooked vegetables and fruits. A number of articles in contemporary health publications stated that uncooked plants brought on summer diseases and should be forbidden to children. 
The cucumber kept this reputation for an inordinate period of time, "fit only for consumption by cows," which some believe is why it gained the name, cowcumber. Samuel Pepys wrote in his diary on 22 August 1663:[T]his day Sir W. Batten tells me that Mr. Newburne is dead of eating cowcumbers, of which the other day I heard of another, I think. John Evelyn in 1699 wrote that the cucumber, 'however dress'd, was thought fit to be thrown away, being accounted little better than poyson (poison)'. According to 18th-century British writer Samuel Johnson, it was commonly said among English physicians that a cucumber "should be well sliced, and dressed with pepper and vinegar, and then thrown out, as good for nothing." A copper etching made by Maddalena Bouchard between 1772 and 1793 shows this plant to have smaller, almost bean-shaped fruits, and small yellow flowers. The small form of the cucumber is figured in Herbals of the 16th century, however stating that "[i]f hung in a tube while in blossom, the Cucumber will grow to a most surprising length." Gallery
Biology and health sciences
Cucurbitales
null
164497
https://en.wikipedia.org/wiki/Radish
Radish
The radish (Raphanus sativus) is a flowering plant in the mustard family, Brassicaceae. Its large taproot is commonly used as a root vegetable, although the entire plant is edible and its leaves are sometimes used as a leaf vegetable. Originally domesticated in Asia, radishes are now grown and consumed throughout the world. The radish is sometimes considered to form a species complex with the wild radish, and instead given the trinomial name Raphanus raphanistrum subsp. sativus. Radishes are often used raw as a crunchy salad vegetable with a pungent, slightly spicy flavor, varying in intensity depending on its growing environment. There are numerous varieties varying in size, flavor, color, and length of time they take to mature. Radishes owe their sharp flavor to the various chemical compounds produced by the plants, including glucosinolate, myrosinase, and isothiocyanate. They are sometimes grown as companion plants and suffer from few pests and diseases. They germinate quickly and grow rapidly, common smaller varieties being ready for consumption within a month, while larger daikon varieties take several weeks. Being relatively easy to grow and quick to harvest, radishes are often planted by novice gardeners. Another use of radish is as a cover or catch crop in winter, or as a forage crop. Some radishes are grown for their seeds; others, such as daikon, may be grown for oil production. Others are used for sprouting. History Varieties of radish are now broadly distributed around the world, but almost no archeological records are available to help determine their early history and domestication. However, scientists have tentatively located the origin of Raphanus sativus in Southeast Asia, as this is the only region where truly wild forms have been discovered. India, central China, and Central Asia appear to have been secondary centers where differing forms were developed. Radishes enter the historical record in . Greek and Roman agriculturalists of the gave details of small, large, round, long, mild, and sharp varieties. The radish seems to have been one of the first European crops introduced to the Americas. A German botanist reported radishes of and roughly in length in 1544, although the only variety of that size today is the Japanese Sakurajima radish. The large, mild, and white East Asian form was developed in China, though it is mostly associated in the West with the Japanese daikon, owing to Japanese agricultural development and larger exports. Folklore Asaph the Jew noted that the radish, particularly its leaves, may be useful in traditional medicine to increase mucus. During the Middle Ages, Ibn Wahshiyya considered it a component of poison antidotes, while Maimonides highlighted its possible uses as a treatment. Al-Warraq's 10th century cookbook includes radish as a side dish for ostrich meat and an ingredient in a chicken dish called kardanāj. Description Radishes are annual or biennial brassicaceous crops grown for their swollen tap roots which can be globular, tapering, or cylindrical. The root skin colour ranges from white through pink, red, purple, yellow, and green to black, but the flesh is usually white. The roots obtain their color from anthocyanins. Red varieties use the anthocyanin pelargonidin as a pigment, and purple cultivars obtain their color from cyanidin. Smaller types have a few leaves about long with round roots up to in diameter or more slender, long roots up to long. Both of these are normally eaten raw in salads. 
A longer root form, including oriental radishes, daikon or mooli, and winter radishes, grows up to long with foliage about high with a spread of . The flesh of radishes harvested timely is crisp and sweet, but becomes bitter and tough if the vegetable is left in the ground too long. Leaves are arranged in a rosette. They have a lyrate shape, meaning they are divided pinnately with an enlarged terminal lobe and smaller lateral lobes. The white flowers are borne on a racemose inflorescence. The fruits are small pods which can be eaten when young. The radish is a diploid species, and has 18 chromosomes (2n=18). It is estimated that the radish genome contains between 526 and 574 Mb. Varieties Cultivation Radishes are a fast-growing, annual, cool-season crop. The seed germinates in three to four days in moist conditions with soil temperatures between . Best quality roots are obtained under moderate day lengths with air temperatures in the range . Under average conditions, the crop matures in 3–4 weeks, but in colder weather, 6–7 weeks may be required. Homegrown varieties can be significantly sharper. Radishes grow best in full sun in light, sandy loams, with a soil pH 6.5 to 7.0, but for late-season crops, a clayey-loam is ideal. Soils that bake dry and form a crust in dry weather are unsuitable and can impair germination. Harvesting periods can be extended by making repeat plantings, spaced a week or two apart. In warmer climates, radishes are normally planted in the autumn. The depth at which seeds are planted affects the size of the root, from deep recommended for small radishes to for large radishes. During the growing period, the crop needs to be thinned and weeds controlled, and irrigation may be required. Radishes are a common garden crop in many parts of the world, and the fast harvest cycle makes them particularly suitable for children's gardens. After harvesting, radishes can be stored without loss of quality for two or three days at room temperature, and about two months at with a relative humidity of 90–95%. Companion plant Radishes can be useful as companion plants for many other crops, probably because their pungent odour deters such insect pests as aphids, cucumber beetles, tomato hornworms, squash bugs, and ants. They can also function as a trap crop, luring insect pests away from the main crop. Cucumbers and radishes seem to thrive when grown in close association with each other, and radishes also grow well with chervil, lettuce, peas, and nasturtiums. However, they react adversely to growing in close association with hyssop. Pests As a fast-growing plant, diseases are not generally a problem with radishes, but some insect pests can be a nuisance. The larvae of flea beetles live in the soil, but the adult beetles cause damage to the crop, biting small "shot holes" in the leaves, especially of seedlings. The swede midge (Contarinia nasturtii) attacks the foliage and growing tip of the plant and causes distortion, multiple (or no) growing tips, and swollen or crinkled leaves and stems. The larvae of the cabbage root fly sometimes attack the roots. The foliage droops and becomes discoloured, and small, white maggots tunnel through the root, making it unattractive or inedible. Varieties Broadly speaking, radishes can be categorized into four main types according to the seasons when they are grown and a variety of shapes, lengths, colors, and sizes, such as red, pink, white, gray-black, or yellow radishes, with round or elongated roots that can grow longer than a parsnip. 
Spring or summer radishes Sometimes referred to as European radishes or spring radishes if they are planted in cooler weather, summer radishes are generally small and have a relatively short three- to four-week cultivation time. The 'April Cross' is a giant white radish hybrid that bolts very slowly. 'Bunny Tail' is an heirloom variety from Italy, where it is known as Rosso Tondo A Piccola Punta Bianca. It is slightly oblong, mostly red, with a white tip. 'Cherry Belle' is a bright red-skinned round variety with a white interior. It is familiar in North American supermarkets. 'Champion' is round and red-skinned like the 'Cherry Belle', but with slightly larger roots, up to , and a milder flavor. 'Red King' has a mild flavor, with good resistance to club root, a problem that can arise from poor drainage. 'Sicily Giant' is a large heirloom variety from Sicily. It can reach up to 5 cm (2 in) in diameter. 'Snow Belle' is an all-white variety of radish, similar in shape to the 'Cherry Belle'. 'White Icicle' or 'Icicle' is a white carrot-shaped variety, around long, dating back to the 16th century. It slices easily, and has better than average resistance to pithiness. 'French Breakfast' is an elongated, red-skinned radish with a white splash at the root end. It is typically slightly milder than other summer varieties, but is among the quickest to turn pithy. 'Plum Purple', a purple-fuchsia radish, tends to stay crisp longer than average. 'Gala' and 'Roodbol' are two varieties popular in the Netherlands in a breakfast dish, thinly sliced on buttered bread. 'Easter Egg' is not an actual variety, but a mix of varieties with different skin colors, typically including white, pink, red, and purple radishes. Sold in markets or seed packets under the name, the seed mixes can extend harvesting duration from a single planting, as different varieties may mature at different times. Winter varieties 'Black Spanish' or 'Black Spanish Round' occur in both round and elongated forms, and are sometimes simply called the black radish (Raphanus sativus L. var. niger (M.) S.K. or L. ssp. niger (M.). D.C. var. albus D.C) or known by the French name Gros Noir d'Hiver. It dates in Europe to 1548, and was a common garden variety in England and France during the early 19th century. It has a rough, black skin with hot-flavored, white flesh, is round or irregularly pear shaped, and grows to around in diameter. Daikon refers to a wide variety of winter oilseed radishes from Asia. While the Japanese name daikon has been adopted in English, it is also sometimes called the Japanese radish, Chinese radish, Oriental radish or mooli (in India and South Asia). Daikons commonly have elongated white roots, although many varieties of daikon exist. One well-known variety is 'April Cross', with smooth white roots. The New York Times describes 'Masato Red' and 'Masato Green' varieties as extremely long, well-suited for fall planting and winter storage. The Sakurajima radish is a hot-flavored variety which is typically grown to around , but which can grow to when left in the ground. Korean radish, also called mu(), is a variety of white radish with firm crunchy texture. Although mu is also a generic term for radishes in Korean (as daikon is a generic term for radishes in Japanese), the word is usually used in its narrow sense, referring to Joseon radish(, Joseonmu). In Korean cuisine context, the word Joseon is often used in contrast to Wae, to distinguish Korean varieties from Japanese ones. 
The longer, thinner, and waterier Japanese daikon cultivated mainly for danmuji is referred to as Wae radish(, Waemu) in Korea. Korean radishes are generally shorter, stouter, and sturdier than daikon, and have pale green shade halfway down from the top. They also have stronger flavour, denser flesh and softer leaves. The greens of Korean radishes are called mucheong() and used as vegetable in various dishes. Seed pod varieties The seeds of radishes grow in siliques (widely referred to as "pods"), following flowering that happens when left to grow past their normal harvesting period. The seeds are edible, and are sometimes used as a crunchy, sharp addition to salads. Some varieties are grown specifically for their seeds or seed pods, rather than their roots. The rat-tailed radish, an old European variety thought to have come from East Asia centuries ago, has long, thin, curly pods which can exceed in length. In the 17th century, the pods were often pickled and served with meat. The 'München Bier' variety supplies seed pods that are sometimes served raw as an accompaniment to beer in Germany. Production Using 2003–4 data, several sources report annual world production of radishes to be about 7 million tonnes, produced mainly by China, Japan, and South Korea, and representing roughly 2% of global vegetable production. Nutritional value In a reference serving, raw radishes provide of food energy and have a moderate amount of vitamin C (18% of Daily Value), with other essential nutrients in low content (table). A raw radish is 95% water, 3% carbohydrates, 1% protein, and has negligible fat. Uses Cooking The most commonly eaten portion is the napiform or fusiform taproot, although the entire plant is edible and the tops can be used as a leaf vegetable. The seed can also be sprouted and eaten raw in a similar way to a mung bean. The root of the radish is usually eaten raw, although tougher specimens can be steamed. The raw flesh has a crisp texture and a pungent, peppery flavor, caused by glucosinolates and the enzyme myrosinase, which combine when chewed to form allyl isothiocyanates, also present in mustard, horseradish, and wasabi. Radishes are mostly used in salads, but also appear in many European dishes. In Mexican cuisine, sliced radishes are used in combination with shredded lettuce as garnish for traditional dishes such as tostadas, sopes, enchiladas and pozole. Radish greens are usually discarded, but are edible and nutritious, and can be prepared in a variety of ways. The leaves are sometimes used in recipes, like potato soup or as a sauteed side dish. They are also found blended with fruit juices in some recipes. In Indian cuisine the seed pods are called "moongra" or "mogri" and can be used in many dishes. Other uses The seeds of radishes can be pressed to extract radish seed oil. Wild radish seeds contain up to 48% oil, and while not suitable for human consumption, this oil is a potential source of biofuel. The daikon grows well in cool climates and, apart from its industrial use, can be used as a cover crop, grown to increase soil fertility, to scavenge nutrients, suppress weeds, help alleviate soil compaction, and prevent winter erosion of the soil. "Radi", a spiral-cut radish, served with salt and occasionally chives, is traditionally served with beer at the Bavarian Oktoberfest. Culture The daikon varieties of radish are important parts of East, Southeast, and South Asian cuisine. In Japan and Korea, radish dolls are sometimes made as children's toys. 
Daikon is also one of the plants that make up the Japanese Festival of Seven Herbs (Nanakusa no sekku) on the seventh day after the new year. Citizens of Oaxaca, Mexico, celebrate the Night of the Radishes (Noche de los rábanos) on December 23 as a part of Christmas celebrations. This folk art competition uses a large type of radish up to long and weighing up to . Great skill and ingenuity are used to carve these into religious and popular figures, buildings, and other objects, and they are displayed in the town square. Gallery
Biology and health sciences
Brassicales
null
164544
https://en.wikipedia.org/wiki/Soil%20formation
Soil formation
Soil formation, also known as pedogenesis, is the process of soil genesis as regulated by the effects of place, environment, and history. Biogeochemical processes act to both create and destroy order (anisotropy) within soils. These alterations lead to the development of layers, termed soil horizons, distinguished by differences in color, structure, texture, and chemistry. These features occur in patterns of soil type distribution, forming in response to differences in soil forming factors. Pedogenesis is studied as a branch of pedology, the study of soil in its natural environment. Other branches of pedology are the study of soil morphology and soil classification. The study of pedogenesis is important to understanding soil distribution patterns in current (soil geography) and past (paleopedology) geologic periods. Overview Soil develops through a series of changes. The starting point is weathering of freshly accumulated parent material. A variety of soil microbes (bacteria, archaea, fungi) feed on simple compounds (nutrients) released by weathering and produce organic acids and specialized proteins which contribute in turn to mineral weathering. They also leave behind organic residues which contribute to humus formation. Plant roots with their symbiotic mycorrhizal fungi are also able to extract nutrients from rocks. New soils increase in depth by a combination of weathering and further deposition. The soil production rate due to weathering is approximately 1/10 mm per year. New soils can also deepen from dust deposition. Gradually soil is able to support higher forms of plants and animals, starting with pioneer species and proceeding along ecological succession to more complex plant and animal communities. Topsoils deepen with the accumulation of humus originating from dead remains of higher plants and soil microbes. They also deepen through mixing of organic matter with weathered minerals. As soils mature, they develop soil horizons as organic matter accumulates and mineral weathering and leaching take place. Factors Soil formation is influenced by at least five classic factors that are intertwined in the evolution of a soil. They are: parent material, climate, topography (relief), organisms, and time. When reordered to climate, organisms, relief, parent material, and time, they form the acronym CLORPT. Parent material The mineral material from which a soil forms is called parent material. Rock, whether its origin is igneous, sedimentary, or metamorphic, is the source of all soil mineral materials and the origin of all plant nutrients with the exceptions of nitrogen, hydrogen and carbon. As the parent rock is chemically and physically weathered, transported, deposited and precipitated, it is transformed into a soil. Typical soil parent mineral materials are: Quartz: SiO2 Calcite: CaCO3 Feldspar: KAlSi3O8 Mica (biotite): Parent materials are classified according to how they came to be deposited. Residual materials are mineral materials that have weathered in place from primary bedrock. Transported materials are those that have been deposited by water, wind, ice or gravity. Cumulose material is organic matter that has grown and accumulates in place. Residual soils are soils that develop from their underlying parent rocks and have the same general chemistry as those rocks. The soils found on mesas, plateaux, and plains are residual soils. In the United States as little as three percent of the soils are residual. 
Most soils derive from transported materials that have been moved many miles by wind, water, ice and gravity: Aeolian processes (movement by wind) are capable of moving silt and fine sand many hundreds of miles, forming loess soils (60–90 percent silt), common in the Midwestern United States and Canada, north-western Europe, Argentina and Central Asia. Clay is seldom moved by wind as it forms stable aggregates. Water-transported materials are classed as either alluvial, lacustrine, or marine. Alluvial materials are those moved and deposited by flowing water. Sedimentary deposits settled in lakes are called lacustrine. Lake Bonneville and many soils around the Great Lakes are examples. Marine deposits, such as soils along the Atlantic and Gulf Coasts and in the Imperial Valley of California are the beds of ancient seas that have been revealed as the land uplifted. Ice moves parent material and makes deposits in the form of terminal and lateral moraines in the case of stationary glaciers. Retreating glaciers leave smoother ground moraines, and in all cases outwash plains are left as alluvial deposits are moved downstream from the glacier. Parent material moved by gravity is obvious at the base of steep slopes as talus cones and is called colluvial material. Cumulose parent material is not moved but originates from deposited organic material. This includes peat and muck soils and results from preservation of plant residues by the low oxygen content of a high water table. While peat may form sterile soils, muck soils may be very fertile. Weathering The weathering of parent material takes the form of physical weathering (disintegration), chemical weathering (decomposition) and chemical transformation. Weathering is usually confined to the top few meters of geologic material, because physical, chemical, and biological stresses and fluctuations generally decrease with depth. Physical disintegration begins as rocks that have solidified deep in the Earth are exposed to lower pressure near the surface and swell and become mechanically unstable. Chemical decomposition is a function of mineral solubility, the rate of which doubles with each 10 °C rise in temperature but is strongly dependent on water to effect chemical changes. Rocks that will decompose in a few years in tropical climates will remain unaltered for millennia in deserts. Structural changes are the result of hydration, oxidation, and reduction. Chemical weathering mainly results from the excretion of organic acids and chelating compounds by bacteria and fungi, thought to increase under greenhouse effect. Physical disintegration is the first stage in the transformation of parent material into soil. Temperature fluctuations cause expansion and contraction of the rock, splitting it along lines of weakness. Water may then enter the cracks and freeze and cause the physical splitting of material along a path toward the center of the rock, while temperature gradients within the rock can cause exfoliation of "shells". Cycles of wetting and drying cause soil particles to be abraded to a finer size, as does the physical rubbing of material as it is moved by wind, water, and gravity. Organisms may reduce parent material size and create crevices and pores through the mechanical action of plant roots and the digging activity of animals. Chemical decomposition and structural changes result when minerals are made soluble by water or are changed in structure. 
The first three of the following list are solubility changes, and the last three are structural changes. The solution of salts in water results from the action of bipolar water molecules on ionic salt compounds producing a solution of ions and water, removing those minerals and reducing the rock's integrity, at a rate depending on water flow and pore channels. Hydrolysis is the transformation of minerals into polar molecules by the splitting of intervening water. This results in soluble acid-base pairs. For example, the hydrolysis of orthoclase-feldspar transforms it to acid silicate clay and basic potassium hydroxide, both of which are more soluble. In carbonation, the solution of carbon dioxide in water forms carbonic acid. Carbonic acid will transform calcite into more soluble calcium bicarbonate. Hydration is the inclusion of water in a mineral structure, causing it to swell and leaving it stressed and easily decomposed. Oxidation of a mineral compound is the inclusion of oxygen in a mineral, causing it to increase its oxidation number and swell due to the relatively large size of oxygen, leaving it stressed and more easily attacked by water (hydrolysis) or carbonic acid (carbonation). Reduction, the opposite of oxidation, means the removal of oxygen, hence the oxidation number of some part of the mineral is reduced, which occurs when oxygen is scarce. The reduction of minerals leaves them electrically unstable, more soluble and internally stressed and easily decomposed. It mainly occurs in waterlogged conditions. Of the above, hydrolysis and carbonation are the most effective, in particular in regions of high rainfall, temperature and physical erosion. Chemical weathering becomes more effective as the surface area of the rock increases, thus is favoured by physical disintegration. This stems in latitudinal and altitudinal climate gradients in regolith formation. Saprolite is a particular example of a residual soil formed from the transformation of granite, metamorphic and other types of bedrock into clay minerals. Often called weathered granite, saprolite is the result of weathering processes that include: hydrolysis, chelation from organic compounds, hydration and physical processes that include freezing and thawing. The mineralogical and chemical composition of the primary bedrock material, its physical features (including grain size and degree of consolidation), and the rate and type of weathering transforms the parent material into a different mineral. The texture, pH and mineral constituents of saprolite are inherited from its parent material. This process is also called arenization, resulting in the formation of sandy soils, thanks to the much higher resistance of quartz compared to other mineral components of granite (e.g., mica, amphibole, feldspar). Climate The principal climatic variables influencing soil formation are effective precipitation (i.e., precipitation minus evapotranspiration) and temperature, both of which affect the rates of chemical, physical, and biological processes. Temperature and moisture both influence the organic matter content of soil through their effects on the balance between primary production and decomposition: the colder or drier the climate the lesser atmospheric carbon is fixed as organic matter while the lesser organic matter is decomposed. Climate also indirectly influences soil formation through the effects of vegetation cover and biological activity, which modify the rates of chemical reactions in the soil. 
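The carbonation and hydrolysis processes described in the weathering list above correspond to standard textbook reaction equations. The following block is an illustrative summary only; the specific balanced forms are conventional examples and are not taken from this article.

```latex
% Carbonation: dissolved CO2 forms carbonic acid, which converts calcite into
% the more soluble calcium bicarbonate.
\begin{align}
  \mathrm{CO_2 + H_2O} &\rightleftharpoons \mathrm{H_2CO_3}\\
  \mathrm{CaCO_3 + H_2CO_3} &\rightarrow \mathrm{Ca(HCO_3)_2}
\end{align}
% Hydrolysis: orthoclase feldspar plus water yields an acid silicate clay and
% basic potassium hydroxide, both more soluble than the parent mineral.
\begin{equation}
  \mathrm{KAlSi_3O_8 + H_2O \rightarrow HAlSi_3O_8 + KOH}
\end{equation}
```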
Climate is the dominant factor in soil formation, and soils show the distinctive characteristics of the climate zones in which they form, with a feedback to climate through transfer of carbon stocked in soil horizons back to the atmosphere. If warm temperatures and abundant water are present in the profile at the same time, the processes of weathering, leaching, and plant growth will be maximized. According to the climatic determination of biomes, humid climates favor the growth of trees. In contrast, grasses are the dominant native vegetation in subhumid and semiarid regions, while shrubs and brush of various kinds dominate in arid areas. Water is essential for all the major chemical weathering reactions. To be effective in soil formation, water must penetrate the regolith. The seasonal rainfall distribution, evaporative losses, site topography, and soil permeability interact to determine how effectively precipitation can influence soil formation. The greater the depth of water penetration, the greater the depth of weathering of the soil and its development. Surplus water percolating through the soil profile transports soluble and suspended materials from the upper layers (eluviation) to the lower layers (illuviation), including clay particles and dissolved organic matter. It may also carry away soluble materials in the surface drainage waters. Thus, percolating water stimulates weathering reactions and helps differentiate soil horizons. Likewise, a deficiency of water is a major factor in determining the characteristics of soils of dry regions. Soluble salts are not leached from these soils, and in some cases they build up to levels that curtail plant and microbial growth. Soil profiles in arid and semi-arid regions are also apt to accumulate carbonates and certain types of expansive clays (calcrete or caliche horizons). In tropical soils, when the soil has been deprived of vegetation (e.g. by deforestation) and thereby is submitted to intense evaporation, the upward capillary movement of water, which has dissolved iron and aluminum salts, is responsible for the formation of a superficial hard pan of laterite or bauxite, respectively, which is improper for cultivation, a known case of irreversible soil degradation. The direct influences of climate include: A shallow accumulation of lime in low rainfall areas as caliche Formation of acid soils in humid areas Erosion of soils on steep hillsides Deposition of eroded materials downstream Very intense chemical weathering, leaching, and erosion in warm and humid regions where soil does not freeze Climate directly affects the rate of weathering and leaching. Wind moves sand and smaller particles (dust), especially in arid regions where there is little plant cover, depositing it close to or far from the entrainment source. The type and amount of precipitation influence soil formation by affecting the movement of ions and particles through the soil, and aid in the development of different soil profiles. Soil profiles are more distinct in wet and cool climates, where organic materials may accumulate, than in wet and warm climates, where organic materials are rapidly consumed. The effectiveness of water in weathering parent rock material depends on seasonal and daily temperature fluctuations, which favour tensile stresses in rock minerals, and thus their mechanical disaggregation, a process called thermal fatigue. By the same process freeze-thaw cycles are an effective mechanism which breaks up rocks and other consolidated materials. 
Topography The topography, or relief, is characterized by the inclination (slope), elevation, and orientation of the terrain (aspect). Topography determines the rate of precipitation or runoff and the rate of formation or erosion of the surface soil profile. The topographical setting may either hasten or retard the work of climatic forces. Steep slopes encourage rapid soil loss by erosion and allow less rainfall to enter the soil before running off and hence, little mineral deposition in lower profiles (illuviation). In semiarid regions, the lower effective rainfall on steeper slopes also results in less complete vegetative cover, so there is less plant contribution to soil formation. For all of these reasons, steep slopes prevent the formation of soil from getting very far ahead of soil destruction. Therefore, soils on steep terrain tend to have rather shallow, poorly developed profiles in comparison to soils on nearby, more level sites. Topography determines exposure to weather, fire, and other forces of man and nature. Mineral accumulations, plant nutrients, type of vegetation, vegetation growth, erosion, and water drainage are dependent on topographic relief. Soils at the bottom of a hill will get more water than soils on the slopes, and soils on the slopes that face the sun's path will be drier than soils on slopes that do not. In swales and depressions where runoff water tends to concentrate, the regolith is usually more deeply weathered, and soil profile development is more advanced. However, in the lowest landscape positions, water may saturate the regolith to such a degree that drainage and aeration are restricted. Here, the weathering of some minerals and the decomposition of organic matter are retarded, while the loss of iron and manganese is accelerated. In such low-lying topography, special profile features characteristic of wetland soils may develop. Depressions allow the accumulation of water, minerals and organic matter, and in the extreme, the resulting soils will be saline marshes or peat bogs. Recurring patterns of topography result in toposequences or soil catenas. These patterns emerge from topographic differences in erosion, deposition, fertility, soil moisture, plant cover, soil biology, fire history, and exposure to the elements. Gravity transports water downslope, together with mineral and organic solutes and colloids, increasing particulate and base content at the foot of hills and mountains. However, many other factors like drainage and erosion interact with slope position, blurring its expected influence on crop yield. Organisms Each soil has a unique combination of microbial, plant, animal and human influences acting upon it. Microorganisms are particularly influential in the mineral transformations critical to the soil forming process. Additionally, some bacteria can fix atmospheric nitrogen, and some fungi are efficient at extracting deep soil phosphorus and increasing soil carbon levels in the form of glomalin. Plants hold soil against erosion, and accumulated plant material build soil humus levels. Plant root exudation supports microbial activity. Animals serve to decompose plant materials and mix soil through bioturbation. Soil is the most speciose (species-rich) ecosystem on Earth, but the vast majority of organisms in soil are microbes, a great many of which have not been described. 
There may be a population limit of around one billion cells per gram of soil, but estimates of the number of species vary widely from 50,000 per gram to over a million per gram of soil. The number of organisms and species can vary widely according to soil type, location, and depth. Plants, animals, fungi, bacteria and humans affect soil formation (see soil biomantle and stonelayer). Soil animals, including fauna and soil mesofauna, mix soils as they form burrows and pores, allowing moisture and gases to move about, a process called bioturbation. In the same way, plant roots penetrate soil horizons and open channels upon decomposition. Plants with deep taproots can penetrate many metres through the different soil layers to bring up nutrients from deeper in the profile. Plants have fine roots that excrete organic compounds (sugars, organic acids, mucilage), slough off cells (in particular at their tip), and are easily decomposed, adding organic matter to soil, a process called rhizodeposition. Microorganisms, including fungi and bacteria, effect chemical exchanges between roots and soil and act as a reserve of nutrients in a soil biological hotspot called rhizosphere. The growth of roots through the soil stimulates microbial populations, stimulating in turn the activity of their predators (notably amoeba), thereby increasing the mineralization rate, and in last turn root growth, a positive feedback called the soil microbial loop. Out of root influence, in the bulk soil most bacteria are in a quiescent stage, forming micro-aggregates, i.e. mucilaginous colonies to which clay particles are glued, offering them a protection against desiccation and predation by soil microfauna (bacteriophagous protozoa and nematodes). Microaggregates (20–250 μm) are ingested by soil mesofauna and fauna, and bacterial bodies are partly or totally digested in their guts. Humans impact soil formation by removing vegetation cover through tillage, application of biocides, fire and leaving soils bare. This can lead to erosion, waterlogging, lateritization or podzolization (according to climate and topography). Tillage mixes the different soil layers, restarting the soil formation process as less weathered material is mixed with the more developed upper layers, resulting in net increased rate of mineral weathering. Earthworms, ants, termites, moles, gophers, as well as some millipedes and tenebrionid beetles, mix the soil as they burrow, significantly affecting soil formation. Earthworms ingest soil particles and organic residues, enhancing the availability of plant nutrients in the material that passes through their bodies. They aerate and stir the soil and create stable soil aggregates, after having disrupted links between soil particles during the intestinal transit of ingested soil, thereby assuring ready infiltration of water. As ants and termites build mounds, earthworms transport soil materials from one horizon to another. Other important functions are fulfilled by earthworms in the soil ecosystem, in particular their intense mucus production, both within the intestine and as a lining in their galleries, exert a priming effect on soil microflora, giving them the status of ecosystem engineers, which they share with ants and termites. In general, the mixing of the soil by the activities of animals, sometimes called pedoturbation, tends to undo or counteract the tendency of other soil-forming processes that create distinct horizons. 
Termites and ants may also retard soil profile development by denuding large areas of soil around their nests, leading to increased loss of soil by erosion. Large animals such as gophers, moles, and prairie dogs bore into the lower soil horizons, bringing materials to the surface. Their tunnels are often open to the surface, encouraging the movement of water and air into the subsurface layers. In localized areas, they enhance mixing of the lower and upper horizons by creating and later refilling the tunnels. Old animal burrows in the lower horizons often become filled with soil material from the overlying A horizon, creating profile features known as crotovinas. Vegetation impacts soils in numerous ways. It can prevent erosion caused by excessive rain that might result from surface runoff. Plants shade soils, keeping them cooler and slowing evaporation of soil moisture. Conversely, by way of transpiration, plants can cause soils to lose moisture, resulting in complex and highly variable relationships between leaf area index (measuring light interception) and moisture loss: more generally plants prevent soil from desiccation during driest months while they dry it during moister months, thereby acting as a buffer against strong moisture variation. Plants can form new chemicals that can break down minerals, both directly and indirectly through mycorrhizal fungi and rhizosphere bacteria, and improve the soil structure. The type and amount of vegetation depend on climate, topography, soil characteristics and biological factors, mediated or not by human activities. Soil factors such as density, depth, chemistry, pH, temperature and moisture greatly affect the type of plants that can grow in a given location. Dead plants and fallen leaves and stems begin their decomposition on the surface. There, organisms feed on them and mix the organic material with the upper soil layers; these added organic compounds become part of the soil formation process. The influence of humans, and by association, fire, are state factors placed within the organisms state factor. Humans can import or extract nutrients and energy in ways that dramatically change soil formation. Accelerated soil erosion from overgrazing, and Pre-Columbian terraforming the Amazon basin resulting in terra preta are two examples of the effects of human management. It is believed that Native Americans regularly set fires to maintain several large areas of prairie grasslands in Indiana and Michigan, although climate and mammalian grazers (e.g. bisons) are also advocated to explain the maintenance of the Great Plains of North America. In more recent times, human destruction of natural vegetation and subsequent tillage of the soil for crop production has abruptly modified soil formation. Likewise, irrigating soil in an arid region drastically influences soil-forming factors, as does adding fertilizer and lime to soils of low fertility. Distinct ecosystems produce distinct soils, sometimes in easily observable ways. For example, three species of land snails in the genus Euchondrus in the Negev desert are noted for eating lichens growing under the surface limestone rocks and slabs (endolithic lichens). The grazing activity of these ecosystem engineers disrupts the limestone, resulting in the weathering and the subsequent formation of soil. They have a significant effect on the region: the population of snails is estimated to process between 0.7 and 1.1 metric ton per hectare per year of limestone in the Negev desert. 
The effects of ancient ecosystems are not as easily observed, and this challenges the understanding of soil formation. For example, the chernozems of the North American tallgrass prairie have a humus fraction nearly half of which is charcoal. This outcome was not anticipated because the antecedent prairie fire ecology capable of producing these distinct deep rich black soils is not easily observed. Time Time is a factor in the interactions of all the above. While a mixture of sand, silt and clay constitutes the texture of a soil and the aggregation of those components produces peds, the development of a distinct B horizon marks the development of a soil or pedogenesis. With time, soils will evolve features that depend on the interplay of the prior listed soil-forming factors. It takes decades to several thousand years for a soil to develop a profile, although the notion of soil development has been criticized, soil being in a constant state of change under the influence of fluctuating soil-forming factors. That time period depends strongly on climate, parent material, relief, and biotic activity. For example, recently deposited material from a flood exhibits no soil development as there has not been enough time for the material to form a structure that further defines soil. The original soil surface is buried, and the formation process must begin anew for this deposit. Over time the soil will develop a profile that depends on the intensities of biota and climate. While a soil can achieve relative stability of its properties for extended periods, the soil life cycle ultimately ends in soil conditions that leave it vulnerable to erosion. Despite the inevitability of soil retrogression and degradation, most soil cycles are long. Soil-forming factors continue to affect soils during their existence, even on stable landscapes that are long-enduring, some for millions of years. Materials are deposited on top or are blown or washed from the surface. With additions, removals and alterations, soils are always subject to new conditions. Whether these are slow or rapid changes depends on climate, topography and biological activity. Time as a soil-forming factor may be investigated by studying soil chronosequences, in which soils of different ages but with minor differences in other soil-forming factors can be compared. Paleosols are soils formed during previous soil forming conditions. History of research Dokuchaev's equation Russian geologist Vasily Dokuchaev, commonly regarded as the father of pedology, determined in 1883 that soil formation occurs over time under the influence of climate, vegetation, topography, and parent material. He demonstrated this in 1898 using the soil forming equation: soil = f(cl, o, p) tr, where cl or c = climate, o = biological processes, p = parent material, and tr = relative time (young, mature, old). Hans Jenny's state equation American soil scientist Hans Jenny published in 1941 a state equation for the factors influencing soil formation: S = f(cl, o, r, p, t, ...), where S = soil formation, cl (sometimes c) = climate, o = organisms (soil microbiology, soil mesofauna, soil biology), r = relief, p = parent material, and t = time. This is often remembered with the mnemonic Clorpt. Jenny's state equation in Factors of Soil Formation differs from the Vasily Dokuchaev equation, treating time (t) as a factor, adding topographic relief (r), and pointedly leaving the ellipsis "open" for more factors (state variables) to be added as our understanding becomes more refined. 
There are two principal methods by which the state equation may be solved: first in a theoretical or conceptual manner by logical deductions from certain premises, and second empirically by experimentation or field observation. The empirical method is still mostly employed today, and soil formation can be studied by varying a single factor while keeping the other factors constant. This has led to the development of empirical models to describe pedogenesis, such as climofunctions, biofunctions, topofunctions, lithofunctions, and chronofunctions. Since Jenny published his formulation in 1941, it has been used by innumerable soil surveyors all over the world as a qualitative list for understanding the factors that may be important for producing the soil pattern within a region. Example An example of the evolution of soils in prehistoric lake beds is in the Makgadikgadi Pans of the Kalahari Desert, where change in an ancient river course led to millennia of salinity buildup and formation of calcretes and silcretes.
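To make the idea of a chronofunction concrete, the sketch below fits a simple logarithmic model of a soil property (here, A-horizon thickness) against surface age across a hypothetical chronosequence. The data values, the property chosen, and the logarithmic form are illustrative assumptions for a toy example, not measurements from any study.

```python
import numpy as np

# Hypothetical chronosequence: surface ages (years) and A-horizon thickness (cm).
# Values are invented for illustration only.
ages = np.array([100, 500, 1000, 5000, 10000, 50000], dtype=float)
thickness = np.array([2.0, 5.5, 7.0, 12.0, 14.5, 20.0])

# Chronofunction of the form thickness = a + b * ln(age),
# i.e. the other state factors (cl, o, r, p) are assumed roughly constant.
b, a = np.polyfit(np.log(ages), thickness, 1)
print(f"thickness ~= {a:.2f} + {b:.2f} * ln(age in years)")

# Predicted thickness for a 2,000-year-old surface under this toy model.
print(f"predicted at 2,000 yr: {a + b * np.log(2000):.1f} cm")
```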
Physical sciences
Pedology
null
164570
https://en.wikipedia.org/wiki/Wavenumber
Wavenumber
In the physical sciences, the wavenumber (or wave number), also known as repetency, is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber). It is analogous to temporal frequency, which is defined as the number of wave cycles per unit time (ordinary frequency) or radians per unit time (angular frequency). In multidimensional systems, the wavenumber is the magnitude of the wave vector. The space of wave vectors is called reciprocal space. Wave numbers and wave vectors play an essential role in optics and the physics of wave scattering, such as X-ray diffraction, neutron diffraction, electron diffraction, and elementary particle physics. For quantum mechanical waves, the wavenumber multiplied by the reduced Planck constant is the canonical momentum. Wavenumber can be used to specify quantities other than spatial frequency. For example, in optical spectroscopy, it is often used as a unit of temporal frequency assuming a certain speed of light. Definition Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically reciprocal centimeters (cm−1): ν̃ = 1/λ, where λ is the wavelength. It is sometimes called the "spectroscopic wavenumber". It equals the spatial frequency. For example, a wavenumber in inverse centimeters can be converted to a frequency expressed in the unit gigahertz by multiplying by 29.9792458 (the speed of light, in centimeters per nanosecond); conversely, an electromagnetic wave at 29.9792458 GHz has a wavelength of 1 cm in free space. In theoretical physics, a wave number, defined as the number of radians per unit distance, sometimes called "angular wavenumber", is more often used: k = 2π/λ. When wavenumber is represented by the symbol ν̃, a frequency is still being represented, albeit indirectly. As described in the spectroscopy section, this is done through the relationship ν̃ = νs/c, where νs is a frequency expressed in the unit hertz. This is done for convenience as frequencies tend to be very large. Wavenumber has dimensions of reciprocal length, so its SI unit is the reciprocal of meters (m−1). In spectroscopy it is usual to give wavenumbers in CGS units (i.e., reciprocal centimeters; cm−1); in this context, the wavenumber was formerly called the kayser, after Heinrich Kayser (some older scientific papers used this unit, abbreviated as K, where 1 K = 1 cm−1). The angular wavenumber may be expressed in the unit radian per meter (rad⋅m−1), or as above, since the radian is dimensionless. For electromagnetic radiation in vacuum, wavenumber is directly proportional to frequency and to photon energy. Because of this, wavenumbers are used as a convenient unit of energy in spectroscopy. Complex A complex-valued wavenumber can be defined for a medium with complex-valued relative permittivity εr, relative permeability μr and refractive index n as: k = k0 √(εr μr) = k0 n, where k0 is the free-space wavenumber, as above. The imaginary part of the wavenumber expresses attenuation per unit distance and is useful in the study of exponentially decaying evanescent fields. 
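As a quick numerical check of these relationships, the short sketch below converts a spectroscopic wavenumber in cm−1 to frequency, vacuum wavelength, and photon energy. It assumes nothing beyond the standard constants (speed of light, Planck constant) and the formulas quoted above; the 1700 cm−1 input is just a familiar mid-infrared value chosen for illustration.

```python
# Convert a spectroscopic wavenumber (cm^-1) to frequency, wavelength and photon energy.
C_CM_PER_S = 2.99792458e10      # speed of light in cm/s
H_J_S = 6.62607015e-34          # Planck constant in J*s
EV_PER_J = 1.0 / 1.602176634e-19

def from_wavenumber(nu_tilde_cm):
    freq_hz = nu_tilde_cm * C_CM_PER_S          # nu = c * nu_tilde
    wavelength_nm = 1.0e7 / nu_tilde_cm         # lambda = 1 / nu_tilde (1 cm = 1e7 nm)
    energy_ev = H_J_S * freq_hz * EV_PER_J      # E = h * c * nu_tilde
    return freq_hz, wavelength_nm, energy_ev

# 1 cm^-1 corresponds to about 29.98 GHz, as stated in the text.
f, lam, e = from_wavenumber(1.0)
print(f"1 cm^-1     ->  {f/1e9:.4f} GHz, {lam:.0f} nm, {e*1000:.4f} meV")

# A typical mid-infrared absorption near 1700 cm^-1.
f, lam, e = from_wavenumber(1700.0)
print(f"1700 cm^-1  ->  {f/1e12:.2f} THz, {lam:.0f} nm, {e:.3f} eV")
```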
Plane waves in linear media The propagation factor of a sinusoidal plane wave propagating in the positive x direction in a linear material is given by P = e^(−jkx), where the complex wavenumber k = k′ − jk″ = ω√(με) √(1 − jσ/(ωε)), with k′ the phase constant in units of radians/meter, k″ the attenuation constant in units of nepers/meter, ω the angular frequency, x the distance traveled in the x direction, σ the conductivity in siemens/meter, ε the complex permittivity, and μ the complex permeability. The sign convention is chosen for consistency with propagation in lossy media. If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the x-direction. Wavelength, phase velocity, and skin depth have simple relationships to the components of the wavenumber: λ = 2π/k′, vp = ω/k′ and δ = 1/k″. In wave equations Here we assume that the wave is regular in the sense that the different quantities describing the wave, such as the wavelength, frequency and thus the wavenumber, are constants. See wavepacket for discussion of the case when these quantities are not constant. In general, the angular wavenumber k (i.e. the magnitude of the wave vector) is given by k = 2π/λ = 2πν/vp = ω/vp, where ν is the frequency of the wave, λ is the wavelength, ω = 2πν is the angular frequency of the wave, and vp is the phase velocity of the wave. The dependence of the wavenumber on the frequency (or more commonly the frequency on the wavenumber) is known as a dispersion relation. For the special case of an electromagnetic wave in a vacuum, in which the wave propagates at the speed of light, k is given by: k = ω/c = 2πν/c = 2π/λ = E/(ħc), where E is the energy of the wave, ħ is the reduced Planck constant, and c is the speed of light in a vacuum. For the special case of a matter wave, for example an electron wave, in the non-relativistic approximation (in the case of a free particle, that is, the particle has no potential energy): k = p/ħ = √(2mE)/ħ. Here p is the momentum of the particle, m is the mass of the particle, E is the kinetic energy of the particle, and ħ is the reduced Planck constant. Wavenumber is also used to define the group velocity. In spectroscopy In spectroscopy, "wavenumber" (in reciprocal centimeters, cm−1) refers to a temporal frequency (in hertz) which has been divided by the speed of light in vacuum (usually in centimeters per second, cm⋅s−1): ν̃ = ν/c. The historical reason for using this spectroscopic wavenumber rather than frequency is that it is a convenient unit when studying atomic spectra by counting fringes per cm with an interferometer: the spectroscopic wavenumber is the reciprocal of the wavelength of light in vacuum, ν̃ = 1/λ, which remains essentially the same in air, and so the spectroscopic wavenumber is directly related to the angles of light scattered from diffraction gratings and the distance between fringes in interferometers, when those instruments are operated in air or vacuum. Such wavenumbers were first used in the calculations of Johannes Rydberg in the 1880s. The Rydberg–Ritz combination principle of 1908 was also formulated in terms of wavenumbers. A few years later spectral lines could be understood in quantum theory as differences between energy levels, energy being proportional to wavenumber, or frequency. However, spectroscopic data kept being tabulated in terms of spectroscopic wavenumber rather than frequency or energy. For example, the spectroscopic wavenumbers of the emission spectrum of atomic hydrogen are given by the Rydberg formula: ν̃ = R(1/nf² − 1/ni²), where R is the Rydberg constant, and ni and nf are the principal quantum numbers of the initial and final levels respectively (ni is greater than nf for emission). 
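As an illustration of the Rydberg formula in wavenumber form, the sketch below computes the spectroscopic wavenumbers and corresponding vacuum wavelengths of the first few Balmer lines of hydrogen (nf = 2). It uses the standard Rydberg constant R∞ ≈ 109737.3157 cm−1 for simplicity, so the results differ slightly from tabulated hydrogen lines because the reduced-mass correction is neglected.

```python
# Rydberg formula in wavenumber form: nu_tilde = R * (1/nf^2 - 1/ni^2)
R_INF_CM = 109737.31568    # Rydberg constant R_infinity in cm^-1

def hydrogen_line(ni, nf):
    """Return (wavenumber in cm^-1, vacuum wavelength in nm) for a transition ni -> nf."""
    nu_tilde = R_INF_CM * (1.0 / nf**2 - 1.0 / ni**2)
    return nu_tilde, 1.0e7 / nu_tilde

# First few Balmer lines (transitions down to nf = 2).
for ni in (3, 4, 5, 6):
    wn, lam = hydrogen_line(ni, 2)
    print(f"n = {ni} -> 2 : {wn:8.1f} cm^-1  ({lam:6.1f} nm)")
```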
A spectroscopic wavenumber ν̃ can be converted into energy per photon E by Planck's relation: E = hcν̃. It can also be converted into the wavelength of light: λ = 1/(n ν̃), where n is the refractive index of the medium. Note that the wavelength of light changes as it passes through different media; however, the spectroscopic wavenumber (i.e., frequency) remains constant. Often spatial frequencies are stated by some authors "in wavenumbers", incorrectly transferring the name of the quantity to the CGS unit cm−1 itself.
Physical sciences
Waves
Physics
164610
https://en.wikipedia.org/wiki/Latent%20heat
Latent heat
Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition, like melting or condensation. Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas). The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant. In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body. Usage The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas). Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume. The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics. When a body is heated at constant temperature by thermal radiation in a microwave field for example, it may expand by an amount described by its latent heat with respect to volume or latent heat of expansion, or increase its pressure by an amount described by its latent heat with respect to pressure. Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas. In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor. If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface. The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous. 
Meteorology In meteorology, latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has commonly been measured with the Bowen ratio technique, or, more recently (since the mid-1900s), by the eddy covariance method. History Background Evaporative cooling In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. Although no heat was withdrawn from the ether, the ether boiled and its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching a reading well below the freezing point of water. Another thermometer showed that the room temperature remained constant. In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day." Latent heat The English word latent comes from Latin latēns, meaning lying hidden. The term latent heat was introduced into calorimetry around 1750 by Joseph Black, who had been commissioned by producers of Scotch whisky, in search of ideal quantities of fuel and water for their distilling process, to study system changes, such as those of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath. It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black therefore started to investigate whether heat was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone. Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which had been, say, melted from ice, whereas the other had been heated from a merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than the other sample; thus the melting of the ice had absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied, and so it was "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required). Quantifying latent heat In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. 
Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice now stored, as it were, an additional 8 “degrees of heat” in a form which Black called sensible heat, manifested as temperature, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were, so to speak, stored as latent heat, not manifesting itself. (In modern thermodynamics the idea of heat contained has been abandoned, so sensible heat and latent heat have been redefined. They do not reside anywhere.) Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”). Finally, Black increased the temperature of, and vaporized, two equal masses of water, respectively, through even heating. He showed that 830 “degrees of heat” was needed for the vaporization, again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale. James Prescott Joule Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and the sensible heat as an energy that was indicated by the thermometer, relating the latter to thermal energy. Specific latent heat A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property: L = Q/m. Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances. From this definition, the latent heat for a given mass of a substance is calculated by Q = mL, where: Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU), m is the mass of the substance (in kg or in lb), and L is the specific latent heat for a particular substance (in kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization. Table of specific latent heats The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases. Specific latent heat for condensation of water in clouds The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function: Lwater(T) ≈ 2500.8 − 2.36 T + 0.0016 T² − 0.00006 T³ (in kJ/kg), where the temperature T is taken to be the numerical value in °C. For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function: Lice(T) ≈ 2834.1 − 0.29 T − 0.004 T² (in kJ/kg). Variation with temperature (or pressure) As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
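A brief numerical sketch of these relationships follows. It uses the Q = mL relation with commonly tabulated values for water (Lf ≈ 334 kJ/kg at 0 °C, Lv ≈ 2257 kJ/kg at 100 °C) and the empirical polynomial fits quoted above; the exact coefficients should be treated as approximations rather than authoritative data.

```python
# Latent heat calculations for water, using Q = m * L.
L_FUSION_KJ_PER_KG = 334.0        # commonly tabulated value at 0 degC
L_VAPORIZATION_KJ_PER_KG = 2257.0 # commonly tabulated value at 100 degC

def heat_for_phase_change(mass_kg, specific_latent_heat_kj_per_kg):
    """Energy (kJ) absorbed or released when mass_kg completely changes phase."""
    return mass_kg * specific_latent_heat_kj_per_kg

print("Melting 1.5 kg of ice:     ", heat_for_phase_change(1.5, L_FUSION_KJ_PER_KG), "kJ")
print("Vaporizing 1.5 kg of water:", heat_for_phase_change(1.5, L_VAPORIZATION_KJ_PER_KG), "kJ")

def latent_heat_condensation(t_celsius):
    """Empirical cubic fit (kJ/kg), valid roughly from -25 degC to 40 degC."""
    t = t_celsius
    return 2500.8 - 2.36 * t + 0.0016 * t**2 - 0.00006 * t**3

def latent_heat_sublimation(t_celsius):
    """Empirical quadratic fit (kJ/kg), valid roughly from -40 degC to 0 degC."""
    t = t_celsius
    return 2834.1 - 0.29 * t - 0.004 * t**2

print("L_condensation at 20 degC: ", round(latent_heat_condensation(20.0), 1), "kJ/kg")
print("L_sublimation at -10 degC: ", round(latent_heat_sublimation(-10.0), 1), "kJ/kg")
```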
Physical sciences
Thermodynamics
Physics
164633
https://en.wikipedia.org/wiki/Atmospheric%20science
Atmospheric science
Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric changes (both long and short-term) that define average climates and their change over time, i.e. climate variability. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System. Experimental instruments used in atmospheric science include satellites, rocketsondes, radiosondes, weather balloons, radars, and lasers. The term aerology (from Greek ἀήρ, aēr, "air"; and -λογία, -logia) is sometimes used as an alternative term for the study of Earth's atmosphere; in other definitions, aerology is restricted to the free atmosphere, the region above the planetary boundary layer. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. Atmospheric chemistry Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology, volcanology and other disciplines. Research is increasingly connected with other areas of study such as climatology. The composition and chemistry of the atmosphere is of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere has been changed by human activity and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, photochemical smog and global warming. Atmospheric chemistry seeks to understand the causes of these problems and, by obtaining a theoretical understanding of them, to allow possible solutions to be tested and the effects of changes in government policy evaluated. Atmospheric dynamics Atmospheric dynamics is the study of motion systems of meteorological importance, integrating observations at multiple locations and times with theory. Common topics studied include diverse phenomena such as thunderstorms, tornadoes, gravity waves, tropical cyclones, extratropical cyclones, jet streams, and global-scale circulations. The goal of dynamical studies is to explain the observed circulations on the basis of fundamental principles from physics. The objectives of such studies include improving weather forecasting, developing methods for predicting seasonal and interannual climate fluctuations, and understanding the implications of human-induced perturbations (e.g., increased carbon dioxide concentrations or depletion of the ozone layer) on the global climate. Atmospheric physics Atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, chemical models, radiation balancing, and energy transfer processes in the atmosphere and underlying oceans and land. 
In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics, each of which incorporates high levels of mathematics and physics. Atmospheric physics has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. In the United Kingdom, atmospheric studies are underpinned by the Meteorological Office. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The U.S. National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. The Earth's magnetic field and the solar wind interact with the atmosphere, creating the ionosphere, Van Allen radiation belts, telluric currents, and radiant energy. Climatology Climatology is a science that bases its more general knowledge on the more specialized disciplines of meteorology, oceanography, geology, and astronomy, which in turn are based on the basic sciences of physics, chemistry, and mathematics. In contrast to meteorology, which studies short term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns, in relation to atmospheric conditions. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and tries to predict future climate change. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere. Related disciplines include astrophysics, atmospheric physics, chemistry, ecology, physical geography, geology, geophysics, glaciology, hydrology, oceanography, and volcanology. Aeronomy Aeronomy is the scientific study of the upper atmosphere of the Earth — the atmospheric layers above the stratopause — and corresponding regions of the atmospheres of other planets, where the entire atmosphere may correspond to the Earth's upper atmosphere or a portion of it. A branch of both atmospheric chemistry and atmospheric physics, aeronomy contrasts with meteorology, which focuses on the layers of the atmosphere below the stratopause. In atmospheric regions studied by aeronomers, chemical dissociation and ionization are important phenomena. Atmospheres on other celestial bodies All of the Solar System's planets have atmospheres. This is because their gravity is strong enough to keep gaseous particles close to the surface. Larger gas giants are massive enough to keep large amounts of the light gases hydrogen and helium close by, while the smaller planets lose these gases into space. The composition of the Earth's atmosphere is different from those of the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Much of Mercury's atmosphere has been blasted away by the solar wind. 
The only moon that has retained a dense atmosphere is Titan. There is a thin atmosphere on Triton, and a trace of an atmosphere on the Moon. Planetary atmospheres are affected by the varying degrees of energy received from either the Sun or their interiors, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), an Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to possess such a weather system, similar to the Great Red Spot but twice as large. Hot Jupiters have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides which produce supersonic winds, although the day and night sides of HD 189733b appear to have very similar temperatures, indicating that planet's atmosphere effectively redistributes the star's energy around the planet.
Physical sciences
Atmosphere
null
164653
https://en.wikipedia.org/wiki/Physical%20restraint
Physical restraint
Physical restraint refers to means of purposely limiting or obstructing the freedom of a person's or an animal's bodily movement. Basic methods Usually, binding objects such as handcuffs, legcuffs, ropes, chains, straps or straitjackets are used for this purpose. Alternatively different kinds of arm locks deriving from unarmed combat methods or martial arts are used to restrain a person, which are predominantly used by trained police or correctional officers. This less commonly also extends to joint locks and pinning techniques. Purpose in humans Physical restraints are used: primarily by police and prison authorities to obstruct delinquents and prisoners from escaping or resisting British Police officers are authorised to use leg and arm restraints, if they have been instructed in their use. Guidelines set out by the Association of Chief Police Officers dictate that restraints are only to be used on subjects who are violent while being transported, restraining the use of their arms and legs, minimising the risk of punching and kicking. Pouches carrying restraints are usually carried on the duty belt, and in some cases carried in police vans. to enforce corporal punishment (typically a form of flagellation) by impeding motions of the target (usually prisoner), as is still practiced in penal functions of several countries by specially-trained teachers or teaching assistants to restrain children and teenagers with severe behavioral problems or disorders like autism or Tourette syndrome, to prevent hurting others or themselves approximately 70% of teachers who work with students with behavioral disabilities use a type of physical restraint (Goldstein & Brooks, 2007) often used in emergency situations or for de-escalation purposes (Ryan & Peterson, 2004) many educators believe restraints are used to maintain the safety and order of the classroom and students, while those who oppose their use believe they are dangerous to the physical and mental health of children and may result in death (McAfee, Schwilk & Miltruski, 2006) and (Kutz, 2009). Individuals with Disabilities Education Act has stated that "Restraints may not be used as an alternative to adequate staff" (McAfee, Schwilk & Miltruski, 2006, p. 713). Also, "restraint may be used only when aggressive behavior interferes with an individual's own ability to benefit from programming or poses physical threat to others" (McAfee, Schwilk & Miltruski, 2006, p. 713). by escapologists, illusionists and stunt performers to restrain people who are suffering from involuntary physical spasms, to prevent them from hurting themselves (see medical restraints) controversially, in psychiatric hospitals restraints were developed during the 1700s by Philippe Pinel and performed with his assistant, Jean-Baptiste Pussin in hospitals in France by a kidnapper (stereotypically with rope or duct tape and a gag) or other material for eroticism Misuse and risks Restraining someone against their will is generally a crime in most jurisdictions, unless it is explicitly sanctioned by law. (See false arrest, false imprisonment). Restraint has been misused in special education settings resulting in severe injury and trauma of students and lack of education from spending school hours restrained. The misuse of physical restraint has resulted in many deaths. Physical restraint can be dangerous, sometimes in unexpected ways. 
Examples include: postural asphyxia unintended strangulation death due to choking or vomiting and being unable to clear the airway death due to inability to escape in the event of fire or other disaster death due to dehydration or starvation due to the inability to escape cutting off of blood circulation by restraints nerve damage by restraints cutting of blood vessels by struggling against restraints, resulting in death by loss of blood death by hypothermia or hyperthermia whilst unable to escape death from deep vein thrombosis and pulmonary embolism due to lack of movement For these and many other reasons, extreme caution is needed in the use of physical restraint. Gagging a restrained person is highly risky, as it involves a substantial risk of asphyxia, both from the gag itself, and also from choking or vomiting and being unable to clear the airway. In practice, simple gags do not restrict communication much; however, this means that gags that are effective enough to prevent communication are generally also potentially effective at restricting breathing. Gags that prevent communication may also prevent the communication of distress that might otherwise prevent injury. Medical restraints A survey in the US in 1998 reported an estimated 150 restraint related deaths in care environments (Weiss, 1998). Low frequency fatalities occur with some degree of regularity. An investigation of 45 restraint related deaths in US childcare settings showed 28 of these deaths were reported to have occurred in the prone position. In the UK restraint related deaths would appear to be reported less often. The evidence for effective staff training in the use of medical restraints is at best crude, with evaluation of training programmes being the exception rather than the rule. Vast numbers of care staff are trained in 'physical interventions' including physical restraint, although they rarely employ them in practice. It is accepted that staff training in physical interventions can increase carer confidence. Japan Japanese law states that psychiatric hospitals may use restraints on patients only if there is a danger that the patients will harm themselves. The law also states that a designated psychiatrist must approve the use of restraints and examine the patient at least every 12 hours to determine whether the situation has changed and the patient should be removed from restraints. However, in practice, Japanese psychiatric hospitals use restraints fairly often and for long periods. Despite being required to certify every 12 hours whether a patient still needs restraints, Japanese psychiatric hospitals keep patients in restraints for a much longer time than hospitals in other countries. According to a survey conducted on 689 patients in 11 psychiatric hospitals in Japan, the average time spent in physical restraints is 96 days. Meanwhile, the average time in most other developed countries is at most several hours to tens of hours. The number of people who are physically restrained in Japanese psychiatric hospitals continues to increase. In 2014 more than 10,000 people were restrained-the highest ever recorded, and more than double the number a decade earlier. It is thought that some of that increase includes older patients with dementia. As a result, the Japanese Ministry of Health, Labour and Welfare has revised its guidelines for elderly people in nursing homes to have more restrictions against body restraints. The changes will take effect on 1 April 2018. 
Deaths have been reported from their use, including that of Kelly Savage, an Assistant Language Teacher from New Zealand, in 2017. United Kingdom The Millfields Charter is an electronic charter which promotes an end to the teaching to frontline healthcare staff of all prone (face down) restraint holds. Despite a UK government statement in 2013 that it was minded to impose a ban on such techniques in mental health facilities, by 2017 the use of restraints in UK psychiatric facilities had increased. Face down restraints are used more often on women and girls than on men. 51 out of 58 mental health trusts use restraints unnecessarily when other techniques would work. Organisations opposed to restraints include Mind and Rethink Mental Illness. YoungMinds and Agenda claim restraints are "frightening and humiliating" and "re-traumatise" patients, especially women and girls who have previously been victims of physical and/or sexual abuse. The charities sent an open letter to the health secretary, Jeremy Hunt, showing evidence from 'Agenda, the alliance for women and girls at risk', revealing that patients are routinely restrained in some mental health units while others use non-physical ways to calm patients or stop self-harm. According to the letter, over half of women with psychiatric problems have suffered abuse; restraint can cause physical harm and can frighten and humiliate the victim. Restraint, especially face-down restraint, can re-traumatise patients who previously suffered violence and abuse. "Mental health units are meant to be caring, therapeutic environments, for people feeling at their most vulnerable, not places where physical force is routine". Government guidelines state that face down restraint should not be used at all and that other types of physical restraint are only a last resort. Research by Agenda found one fifth of women and girl patients in mental health units had suffered physical restraint. Some trusts averaged over twelve face down restraints per female patient. Over 6% of women, close to 2,000, were restrained face-down, in total more than 4,000 times. The figures vary widely between regions. Some trusts hardly use restraints, others use them routinely. One woman patient, who spent a decade in and out of several hospitals and units with mental health issues, said that in some units she suffered restraints two or three times daily. Katharine Sacks-Jones, director of Agenda, maintains trusts use restraint when alternatives would work. Sacks-Jones maintains that women her group speaks to repeatedly describe face down restraint as a traumatic experience. On occasion, male nurses have used it when a woman did not want her medication. "If you are a woman who has been sexually or physically abused, and mental health problems in women often have close links to violence and abuse, then a safer environment has to be just that: safe and not a re-traumatising experience. (...) Face-down restraint hurts, it is dangerous, and there are some big questions around why it is used more on women than men".
Technology
Law enforcement equipment
null
164656
https://en.wikipedia.org/wiki/Jet%20aircraft
Jet aircraft
A jet aircraft (or simply jet) is an aircraft (nearly always a fixed-wing aircraft) propelled by one or more jet engines. Whereas the engines in propeller-powered aircraft generally achieve their maximum efficiency at much lower speeds and altitudes, jet engines achieve maximum efficiency at speeds close to or even well above the speed of sound. Jet aircraft generally cruise most efficiently at about Mach 0.8 () and at altitudes around or more. The idea of the jet engine was not new, but the technical problems involved did not begin to be solved until the 1930s. Frank Whittle, an English inventor and RAF officer, began development of a viable jet engine in 1928, and Hans von Ohain in Germany began work independently in the early 1930s. In August 1939 the turbojet powered Heinkel He 178, the world's first jet aircraft, made its first flight. A wide range of different types of jet aircraft exist, both for civilian and military purposes. History After the first instance of powered flight, a large number of jet engine designs were suggested. René Lorin, Morize, Harris proposed systems for creating a jet efflux. After other jet engines had been run, Romanian inventor Henri Coandă claimed to have built a jet-powered aircraft in 1910, the Coandă-1910. However, to support this claim, he had to make substantial alterations to the drawings which he used to support his subsequently debunked claims. In fact the ducted-fan engine backfired, setting the aircraft on fire before any flights were ever made, and it lacked nearly all of the features necessary for a jet engine - including a lack of fuel injection, and any concern about hot jet efflux being directed at a highly flammable fabric surface. During the 1920s and 1930s a number of approaches were tried. A variety of motorjet, turboprop, pulsejet and rocket powered aircraft were designed. Rocket-engine research was being carried out in Germany and the first aircraft to fly under rocket power was the Lippisch Ente, in 1928. The Ente had previously been flown as a glider. The next year, in 1929, the Opel RAK.1 became the first purpose-built rocket aircraft to fly. The turbojet was invented in the 1930s, independently by Frank Whittle and later Hans von Ohain. The first turbojet aircraft to fly was the Heinkel He 178, on August 27, 1939 in Rostock (Germany), powered by von Ohain's design. This was largely a proof of concept, as the problem of "creep" (metal fatigue caused by the high temperatures within the engine) had not been solved, and the engine quickly burned out. Von Ohain's design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s. The first flight of a jet-propelled aircraft to come to public attention was the Italian Caproni Campini N.1 motorjet prototype which flew on August 27, 1940. It was the first jet aircraft recognised by the Fédération Aéronautique Internationale (at the time the German He 178 program was still kept secret). Campini began development of the motorjet in 1932; it differed from a true turbojet in that the turbine was driven by a piston engine, rather than combustion of the turbine gases - which was a much more complex solution. The British experimental Gloster E.28/39 first flew on May 15, 1941, powered by Sir Frank Whittle's turbojet. The United States Bell XP-59A flew on October 1, 1942, using two examples of a version of the Whittle engine built by General Electric. 
The Meteor was the first production jet, with the first orders for production examples being made on 8 August 1941, the prototype first flying on 5 March 1943 and the first production aircraft flying on 12 January 1944, while the first orders for production Me 262 aircraft were not issued until 25 May 1943, and the first production Me 262 did not fly until 28 March 1944 despite the Me 262 program having started earlier than that of the Meteor, as Projekt 1065, with initial plans drawn up by Waldemar Voigt's design team in April 1939. The Messerschmitt Me 262 was the first operational jet fighter, manufactured by Germany during World War II and entering service on 19 April 1944 with Erprobungskommando 262 at Lechfeld just south of Augsburg. An Me 262 scored the first combat victory for a jet fighter on 26 July 1944, the day before the British Gloster Meteor entered operational service. The Me 262 had first flown on April 18, 1941, but mass production did not start until early 1944, with the first squadrons operational that year, too late for any effect on the outcome of the World War II. While only around 15 Meteors were operational during WW2, up to 1,400 Me 262 were produced, with 300 entering combat. Only the rocket-propelled Messerschmitt Me 163 Komet was a faster operational aircraft during the war. Around this time, mid 1944, the United Kingdom's Meteor was being used for defence of the UK against the V-1 flying bomb – the V-1 itself a pulsejet-powered aircraft and direct ancestor of the cruise missile – and then ground-attack operations over Europe in the last months of the war. In 1944 Germany introduced the Arado Ar 234 jet reconnaissance and bomber aircraft into service, though chiefly used in the former role, with the Heinkel He 162 Spatz single-jet light fighter appearing at the end of 1944. USSR tested its own Bereznyak-Isayev BI-1 in 1942, but the project was scrapped by leader Joseph Stalin in 1945. The Imperial Japanese Navy also developed jet aircraft in 1945, including the Nakajima J9Y Kikka, a modified, and slightly smaller version of the Me 262 that had folding wings. By the end of 1945, the US had introduced their first jet fighter, the Lockheed P-80 Shooting Star into service and the UK its second fighter design, the de Havilland Vampire. The US introduced the North American B-45 Tornado, their first jet bomber, into service in 1948. It was capable of carrying nuclear weapons, but was used for reconnaissance over Korea. On November 8, 1950, during the Korean War, United States Air Force Lt. Russell J. Brown, flying in an F-80, intercepted two North Korean MiG-15s near the Yalu River and shot them down in the first jet-to-jet dogfight in history. The UK put the English Electric Canberra into service in 1951 as a light bomber. It was designed to fly higher and faster than any interceptor. BOAC operated the first commercial jet service, from London to Johannesburg, in 1952 with the de Havilland Comet jetliner. This highly innovative aircraft travelled far faster and higher than propeller aircraft, was much quieter, smoother, and had stylish blended wings containing hidden jet engines. However, due to a design defect, and use of aluminium alloys, the aircraft suffered catastrophic metal fatigue which led to several crashes, which gave time for the Boeing 707 to enter service in 1958 and thus to dominate the market for civilian airliners. 
The underslung engines were found to be advantageous in the event of a fuel leak, and so the 707 looked rather different from the Comet: the 707 has a shape that is effectively the same as that of contemporary aircraft, with marked commonality still evident today for example with the 737 (fuselage) and A340 (single deck, swept wing, four below-wing engines). Turbofan aircraft with far greater fuel efficiency began entering service in the 1950s and 1960s, and became the most commonly used type of jet. The Tu-144 supersonic transport was the fastest commercial jet aircraft at Mach 2.35. It went into service in 1975, but was withdrawn from commercial service shortly afterwards. The Mach 2 Concorde entered service in 1976 and flew for 27 years. The fastest military jet aircraft was the SR-71 Blackbird at Mach 3.35. Other jets Most people use the term 'jet aircraft' to denote aircraft powered by gas turbine based airbreathing jet engines, but rockets and scramjets are both also propelled by jet propulsion. Cruise missiles are single-use unmanned jet aircraft, powered predominantly by ramjets or turbojets or sometimes turbofans, but they will often have a rocket propulsion system for initial propulsion. The fastest airbreathing jet aircraft is the unmanned X-43 scramjet at around Mach 9–10. The fastest manned (rocket) aircraft is the X-15 at Mach 6.85. The Space Shuttle, while far faster than the X-43 or X-15, was not regarded as an aircraft during ascent as it was carried ballistically by rocket thrust rather than supported by the air. During re-entry it was classed (like a glider) as an unpowered aircraft. The first flight was in 1981. The Bell 533 (1964), Lockheed XH-51 (1965), and Sikorsky S-69 (1977-1981) are examples of compound helicopter designs where jet exhaust added to forward thrust. The Hiller YH-32 Hornet and Fairey Ultra-light Helicopter were among the many helicopters where the rotors were driven by tip jets. Jet-powered wingsuits exist – powered by model aircraft jet engines – but they are of short duration and need to be launched at height. Aerodynamics Because of the way they work, the typical exhaust speed of jet engines is transonic or faster; therefore most jet aircraft need to fly at high speeds, either supersonic or speeds just below the speed of sound ("transonic"), so as to achieve efficient flight. Aerodynamics is therefore an important consideration. Jet aircraft are usually designed using the Whitcomb area rule, which says that the total area of cross-section of the aircraft at any point along the aircraft from the nose must be approximately the same as that of a Sears-Haack body. A shape with that property minimises the production of shockwaves which would waste energy. Jet engines There are several types of engine which operate by expelling hot gas: turbojet turbofan (which come in two main forms, low bypass turbofan and high bypass turbofan) ramjet turboprop The different types are used for different purposes. Rockets are the oldest type, and are mainly used when extremely high speeds are needed, or operation at extremely high altitudes where there is insufficient air to operate a jet engine. Due to the extreme, typically hypersonic, exhaust velocity and the necessity of oxidiser being carried on board, they consume propellant extremely quickly, making them impractical for routine transportation. Turbojets are the second oldest type; they have a high, usually supersonic, exhaust speed and low frontal cross-section, and so are best suited to high-speed, usually supersonic, flight. 
Although once widely used, they are relatively inefficient compared to turboprops and turbofans for subsonic flight. The last major aircraft to use turbojets were the Concorde and Tu-144 supersonic transports. Low bypass turbofans have a lower exhaust speed than turbojets, and are mostly used for high sonic, transonic, and low supersonic speeds. High bypass turbofans are relatively efficient, and are used by subsonic aircraft such as airliners. Flying characteristics Jet aircraft fly considerably differently from propeller aircraft. One difference is that jet engines respond relatively slowly. This complicates takeoff and landing maneuvers. In particular, during takeoff, propeller aircraft engines blow air over their wings and that gives more lift and a shorter takeoff. These differences caught out some early BOAC Comet pilots. Propulsive efficiency In aircraft, overall propulsive efficiency is the efficiency, in percent, with which the energy contained in a vehicle's propellant is converted into useful energy, to replace losses due to air drag, gravity, and acceleration. It can also be stated as the proportion of the mechanical energy actually used to propel the aircraft. It is always less than 100% because of kinetic energy loss to the exhaust, and less-than-ideal efficiency of the propulsive mechanism, whether a propeller, a jet exhaust, or a fan. In addition, propulsive efficiency is greatly dependent on air density and airspeed. Mathematically, it is represented as η = ηc ηp, where ηc is the cycle efficiency and ηp is the propulsive efficiency. The cycle efficiency, in percent, is the proportion of energy that can be derived from the energy source that is converted to mechanical energy by the engine. For jet aircraft the propulsive efficiency (essentially energy efficiency) is highest when the engine emits an exhaust jet at a speed that is the same as, or nearly the same as, the vehicle velocity. The exact formula for air-breathing engines, as given in the literature, is ηp = 2 / (1 + c/v), where c is the exhaust speed and v is the speed of the aircraft. Range For a long range jet operating in the stratosphere, the speed of sound is constant, hence flying at fixed angle of attack and constant Mach number causes the aircraft to climb, without changing the value of the local speed of sound. In this case: V = aM, where M is the cruise Mach number and a the local speed of sound. The range equation can be shown to be: R = (aM / (g cT)) (L/D) ln(W1/W2), which is known as the Breguet range equation after the French aviation pioneer Louis Charles Breguet. Here cT is the thrust-specific fuel consumption, L/D the lift-to-drag ratio, and W1 and W2 the initial and final weights of the aircraft.
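To make the Breguet relation concrete, the sketch below evaluates R = (aM / (g cT)) (L/D) ln(W1/W2) for a hypothetical subsonic airliner. Every input value (Mach number, lift-to-drag ratio, specific fuel consumption, weights) is an illustrative assumption chosen to be of the right order of magnitude, not data for any particular aircraft.

```python
import math

def breguet_range_m(a, mach, lift_to_drag, tsfc_kg_per_N_s, w_initial_kg, w_final_kg, g=9.81):
    """Breguet range (metres) for a jet cruising at constant Mach number and angle of attack.

    a                : local speed of sound (m/s)
    mach             : cruise Mach number
    lift_to_drag     : L/D ratio
    tsfc_kg_per_N_s  : thrust-specific fuel consumption (kg of fuel per newton of thrust per second)
    w_initial_kg, w_final_kg : aircraft mass at start and end of cruise
    """
    return (a * mach) / (g * tsfc_kg_per_N_s) * lift_to_drag * math.log(w_initial_kg / w_final_kg)

# Hypothetical airliner-like numbers (illustrative only).
r = breguet_range_m(
    a=295.0,                 # speed of sound at roughly 11 km altitude
    mach=0.8,
    lift_to_drag=17.0,
    tsfc_kg_per_N_s=1.6e-5,  # about 16 g/(kN*s), a typical order of magnitude for cruise
    w_initial_kg=220_000,
    w_final_kg=160_000,
)
print(f"Estimated still-air cruise range: {r/1000:.0f} km")
```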
Technology
Aviation
null
164901
https://en.wikipedia.org/wiki/Post-translational%20modification
Post-translational modification
In molecular biology, post-translational modification (PTM) is the covalent process of changing proteins following protein biosynthesis. PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes, which translate mRNA into polypeptide chains, which may then change to form the mature protein product. PTMs are important components in cell signalling, as for example when prohormones are converted to hormones. Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N- termini. They can expand the chemical set of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling the enzyme activity and is the most common change after translation. Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation, which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation, often targets a protein or part of a protein attached to the cell membrane. Other forms of post-translational modification consist of cleaving peptide bonds, as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds. Some types of post-translational modification are consequences of oxidative stress. Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. Specific amino acid modifications can be used as biomarkers indicating oxidative damage. Sites that often undergo post-translational modification are those that have a functional group that can serve as a nucleophile in the reaction: the hydroxyl groups of serine, threonine, and tyrosine; the amine forms of lysine, arginine, and histidine; the thiolate anion of cysteine; the carboxylates of aspartate and glutamate; and the N- and C-termini. In addition, although the amide of asparagine is a weak nucleophile, it can serve as an attachment point for glycans. Rarer modifications can occur at oxidized methionines and at some methylene groups in side chains. Post-translational modification of proteins can be experimentally detected by a variety of techniques, including mass spectrometry, Eastern blotting, and Western blotting. PTMs involving addition of functional groups Addition by an enzyme in vivo Hydrophobic groups for membrane localization myristoylation (a type of acylation), attachment of myristate, a C14 saturated acid palmitoylation (a type of acylation), attachment of palmitate, a C16 saturated acid isoprenylation or prenylation, the addition of an isoprenoid group (e.g. 
farnesol and geranylgeraniol) farnesylation geranylgeranylation glypiation, glycosylphosphatidylinositol (GPI) anchor formation via an amide bond to C-terminal tail Cofactors for enhanced enzymatic activity lipoylation (a type of acylation), attachment of a lipoate (C8) functional group flavin moiety (FMN or FAD) may be covalently attached heme C attachment via thioether bonds with cysteines phosphopantetheinylation, the addition of a 4'-phosphopantetheinyl moiety from coenzyme A, as in fatty acid, polyketide, non-ribosomal peptide and leucine biosynthesis retinylidene Schiff base formation Modifications of translation factors diphthamide formation (on a histidine found in eEF2) ethanolamine phosphoglycerol attachment (on glutamate found in eEF1α) hypusine formation (on conserved lysine of eIF5A (eukaryotic) and aIF5A (archaeal)) beta-Lysine addition on a conserved lysine of the elongation factor P (EFP) in most bacteria. EFP is a homolog to eIF5A (eukaryotic) and aIF5A (archaeal) (see above). Smaller chemical groups acylation, e.g. O-acylation (esters), N-acylation (amides), S-acylation (thioesters) acetylation, the addition of an acetyl group, either at the N-terminus of the protein or at lysine residues. The reverse is called deacetylation. formylation alkylation, the addition of an alkyl group, e.g. methyl, ethyl methylation the addition of a methyl group, usually at lysine or arginine residues. The reverse is called demethylation. amidation at C-terminus. Formed by oxidative dissociation of a C-terminal Gly residue. amide bond formation amino acid addition arginylation, a tRNA-mediation addition polyglutamylation, covalent linkage of glutamic acid residues to the N-terminus of tubulin and some other proteins. (See tubulin polyglutamylase) polyglycylation, covalent linkage of one to more than 40 glycine residues to the tubulin C-terminal tail butyrylation gamma-carboxylation dependent on Vitamin K glycosylation, the addition of a glycosyl group to either arginine, asparagine, cysteine, hydroxylysine, serine, threonine, tyrosine, or tryptophan resulting in a glycoprotein. Distinct from glycation, which is regarded as a nonenzymatic attachment of sugars. O-GlcNAc, addition of N-acetylglucosamine to serine or threonine residues in a β-glycosidic linkage polysialylation, addition of polysialic acid, PSA, to NCAM malonylation hydroxylation: addition of an oxygen atom to the side-chain of a Pro or Lys residue iodination: addition of an iodine atom to the aromatic ring of a tyrosine residue (e.g. in thyroglobulin) nucleotide addition such as ADP-ribosylation phosphate ester (O-linked) or phosphoramidate (N-linked) formation phosphorylation, the addition of a phosphate group, usually to serine, threonine, and tyrosine (O-linked), or histidine (N-linked) adenylylation, the addition of an adenylyl moiety, usually to tyrosine (O-linked), or histidine and lysine (N-linked) uridylylation, the addition of an uridylyl-group (i.e. 
uridine monophosphate, UMP), usually to tyrosine propionylation pyroglutamate formation S-glutathionylation S-nitrosylation S-sulfenylation (aka S-sulphenylation), reversible covalent addition of one oxygen atom to the thiol group of a cysteine residue S-sulfinylation, normally irreversible covalent addition of two oxygen atoms to the thiol group of a cysteine residue S-sulfonylation, normally irreversible covalent addition of three oxygen atoms to the thiol group of a cysteine residue, resulting in the formation of a cysteic acid residue succinylation addition of a succinyl group to lysine sulfation, the addition of a sulfate group to a tyrosine. Non-enzymatic modifications in vivo Examples of non-enzymatic PTMs are glycation, glycoxidation, nitrosylation, oxidation, succination, and lipoxidation. glycation, the addition of a sugar molecule to a protein without the controlling action of an enzyme. carbamylation the addition of Isocyanic acid to a protein's N-terminus or the side-chain of Lys. carbonylation the addition of carbon monoxide to other organic/inorganic compounds. spontaneous isopeptide bond formation, as found in many surface proteins of Gram-positive bacteria. Non-enzymatic additions in vitro biotinylation: covalent attachment of a biotin moiety using a biotinylation reagent, typically for the purpose of labeling a protein. carbamylation: the addition of Isocyanic acid to a protein's N-terminus or the side-chain of Lys or Cys residues, typically resulting from exposure to urea solutions. oxidation: addition of one or more Oxygen atoms to a susceptible side-chain, principally of Met, Trp, His or Cys residues. Formation of disulfide bonds between Cys residues. pegylation: covalent attachment of polyethylene glycol (PEG) using a pegylation reagent, typically to the N-terminus or the side-chains of Lys residues. Pegylation is used to improve the efficacy of protein pharmaceuticals. Conjugation with other proteins or peptides ubiquitination, the covalent linkage to the protein ubiquitin. SUMOylation, the covalent linkage to the SUMO protein (Small Ubiquitin-related MOdifier) neddylation, the covalent linkage to the Nedd protein ISGylation, the covalent linkage to the ISG15 protein (Interferon-Stimulated Gene 15) pupylation, the covalent linkage to the prokaryotic ubiquitin-like protein Chemical modification of amino acids citrullination, or deimination, the conversion of arginine to citrulline deamidation, the conversion of glutamine to glutamic acid or asparagine to aspartic acid eliminylation, the conversion to an alkene by beta-elimination of phosphothreonine and phosphoserine, or dehydration of threonine and serine Structural changes disulfide bridges, the covalent linkage of two cysteine amino acids lysine-cysteine bridges, the covalent linkage of 1 lysine and 1 or 2 cystine residues via an oxygen atom (NOS and SONOS bridges) proteolytic cleavage, cleavage of a protein at a peptide bond isoaspartate formation, via the cyclisation of asparagine or aspartic acid amino-acid residues racemization of serine by protein-serine epimerase of alanine in dermorphin, a frog opioid peptide of methionine in deltorphin, also a frog opioid peptide protein splicing, self-catalytic removal of inteins analogous to mRNA processing Statistics Common PTMs by frequency In 2011, statistics of each post-translational modification experimentally and putatively detected have been compiled using proteome-wide information from the Swiss-Prot database. 
The 10 most common experimentally found modifications were as follows: Common PTMs by residue Some common post-translational modifications to specific amino-acid residues are shown below. Modifications occur on the side-chain unless indicated otherwise. Databases and tools Protein sequences contain sequence motifs that are recognized by modifying enzymes, and which can be documented or predicted in PTM databases. Such motifs can often be expressed as simple consensus patterns; a minimal pattern-matching sketch is given below, after the case examples. With the large number of different modifications being discovered, there is a need to document this sort of information in databases. PTM information can be collected through experimental means or predicted from high-quality, manually curated data. Numerous databases have been created, often with a focus on certain taxonomic groups (e.g. human proteins) or other features. List of resources PhosphoSitePlus – A database of comprehensive information and tools for the study of mammalian protein post-translational modification ProteomeScout – A database of proteins and post-translational modifications observed experimentally Human Protein Reference Database – A database of different modifications, used to understand proteins, their class, and the functions/processes related to disease-causing proteins PROSITE – A database of consensus patterns for many types of PTMs, including modification sites RESID – A database consisting of a collection of annotations and structures for PTMs. iPTMnet – A database that integrates PTM information from several knowledgebases and text-mining results. dbPTM – A database that shows different PTMs together with information on their chemical components/structures and the frequency of each modified amino-acid site UniProt has PTM information, although it may be less comprehensive than that in more specialized databases. The O-GlcNAc Database – A curated database for protein O-GlcNAcylation, referencing more than 14 000 protein entries and 10 000 O-GlcNAc sites. Tools List of software for visualization of proteins and their PTMs PyMOL – introduces a set of common PTMs into protein models AWESOME – Interactive tool to see the role of single-nucleotide polymorphisms on PTMs Chimera – Interactive tool to visualize molecules Case examples Cleavage and formation of disulfide bridges during the production of insulin PTM of histones as regulation of transcription: RNA polymerase control by chromatin structure PTM of RNA polymerase II as regulation of transcription Cleavage of polypeptide chains as crucial for lectin specificity
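Consensus patterns such as those catalogued in PROSITE can be applied to a protein sequence with ordinary pattern matching. The following minimal Python sketch scans a sequence for the widely cited N-glycosylation sequon N-{P}-[ST]-{P} (an asparagine not followed by proline, then serine or threonine, then not proline); the example peptide and function name are purely illustrative, and a real predictor would also weigh structural accessibility and other context.

import re

# The classic N-glycosylation sequon, written PROSITE-style as N-{P}-[ST]-{P},
# translated into a regular expression; the lookahead allows overlapping matches.
SEQUON = re.compile(r"(?=(N[^P][ST][^P]))")

def find_sequons(sequence: str):
    """Return 1-based positions of candidate N-glycosylation sites."""
    sequence = sequence.upper()
    return [(m.start() + 1, m.group(1)) for m in SEQUON.finditer(sequence)]

if __name__ == "__main__":
    peptide = "MKANLSTQPNGSWVNPTR"  # hypothetical peptide used only for illustration
    for pos, site in find_sequons(peptide):
        print(f"possible N-glycosylation site at Asn {pos}: {site}")

A match is only a candidate site: whether a given sequon is actually modified in vivo depends on the enzymatic machinery present and on whether the residue is accessible in the folded protein.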
Biology and health sciences
Molecular biology
Biology
164912
https://en.wikipedia.org/wiki/Glycoprotein
Glycoprotein
Glycoproteins are proteins which contain oligosaccharide (sugar) chains covalently attached to amino acid side-chains. The carbohydrate is attached to the protein in a cotranslational or posttranslational modification. This process is known as glycosylation. Secreted extracellular proteins are often glycosylated. In proteins that have segments extending extracellularly, the extracellular segments are also often glycosylated. Glycoproteins are also often important integral membrane proteins, where they play a role in cell–cell interactions. It is important to distinguish endoplasmic reticulum-based glycosylation of the secretory system from reversible cytosolic-nuclear glycosylation. Glycoproteins of the cytosol and nucleus can be modified through the reversible addition of a single GlcNAc residue that is considered reciprocal to phosphorylation; this modification likely serves as an additional regulatory mechanism that controls phosphorylation-based signalling. In contrast, classical secretory glycosylation can be structurally essential. For example, inhibition of asparagine-linked, i.e. N-linked, glycosylation can prevent proper glycoprotein folding, and full inhibition can be toxic to an individual cell. In contrast, perturbation of glycan processing (enzymatic removal/addition of carbohydrate residues to the glycan), which occurs in both the endoplasmic reticulum and Golgi apparatus, is dispensable for isolated cells (as evidenced by survival with glycosidase inhibitors) but can lead to human disease (congenital disorders of glycosylation) and can be lethal in animal models. It is therefore likely that the fine processing of glycans is important for endogenous functionality, such as cell trafficking, but that this is probably secondary to its role in host-pathogen interactions. A famous example of this latter effect is the ABO blood group system. Though there are different types of glycoproteins, the most common are N-linked and O-linked glycoproteins. These two types of glycoproteins are distinguished by structural differences that give them their names. Glycoproteins vary greatly in composition, making many different compounds such as antibodies or hormones. Due to the wide array of functions within the body, interest in glycoprotein synthesis for medical use has increased. There are now several methods to synthesize glycoproteins, including recombination and glycosylation of proteins. Glycosylation is also known to occur on nucleocytoplasmic proteins in the form of O-GlcNAc. Types of glycosylation There are several types of glycosylation, although the first two are the most common. In N-glycosylation, sugars are attached to nitrogen, typically on the amide side-chain of asparagine. In O-glycosylation, sugars are attached to oxygen, typically on serine or threonine, but also on tyrosine or non-canonical amino acids such as hydroxylysine and hydroxyproline. In P-glycosylation, sugars are attached to phosphorus on a phosphoserine. In C-glycosylation, sugars are attached directly to carbon, such as in the addition of mannose to tryptophan. In S-glycosylation, a beta-GlcNAc is attached to the sulfur atom of a cysteine residue. In glypiation, a GPI glycolipid is attached to the C-terminus of a polypeptide, serving as a membrane anchor. In glycation, also known as non-enzymatic glycosylation, sugars are covalently bonded to a protein or lipid molecule, without the controlling action of an enzyme, but through a Maillard reaction. 
Monosaccharides Monosaccharides commonly found in eukaryotic glycoproteins include those listed below under Structure. The sugar group(s) can assist in protein folding, improve protein stability and are involved in cell signalling. Structure The critical structural element of all glycoproteins is having oligosaccharides bonded covalently to a protein. There are 10 common monosaccharides in mammalian glycans: glucose (Glc), fucose (Fuc), xylose (Xyl), mannose (Man), galactose (Gal), N-acetylglucosamine (GlcNAc), glucuronic acid (GlcA), iduronic acid (IdoA), N-acetylgalactosamine (GalNAc), and sialic acid (5-N-acetylneuraminic acid, Neu5Ac). These glycans link themselves to specific areas of the protein amino acid chain. The two most common linkages in glycoproteins give rise to N-linked and O-linked glycoproteins. An N-linked glycoprotein has the glycan bonded to the side-chain nitrogen of an asparagine within the protein sequence. An O-linked glycoprotein has the sugar bonded to an oxygen atom of a serine or threonine amino acid in the protein. Glycoprotein size and composition can vary greatly, with the carbohydrate content ranging from 1% to 70% of the total mass of the glycoprotein. They appear in the blood, in the extracellular matrix, and on the outer surface of the plasma membrane, and make up a large portion of the proteins secreted by eukaryotic cells. They are very broad in their applications and can function as a variety of chemicals from antibodies to hormones. Glycomics Glycomics is the study of the carbohydrate components of cells. Though not exclusive to glycoproteins, it can reveal more information about different glycoproteins and their structure. One of the purposes of this field of study is to determine which proteins are glycosylated and where in the amino acid sequence the glycosylation occurs. Historically, mass spectrometry has been used to identify the structure of glycoproteins and characterize the carbohydrate chains attached (a small mass-calculation sketch is given at the end of this article). Examples The unique interactions of the oligosaccharide chains have different applications. First, they aid in quality control by identifying misfolded proteins. The oligosaccharide chains also change the solubility and polarity of the proteins that they are bonded to. For example, if the oligosaccharide chains are negatively charged, with enough density around the protein, they can repel proteolytic enzymes from the bonded protein. The diversity in interactions lends itself to different types of glycoproteins with different structures and functions. One example of glycoproteins found in the body is mucins, which are secreted in the mucus of the respiratory and digestive tracts. The sugars, when attached to mucins, give them considerable water-holding capacity and also make them resistant to proteolysis by digestive enzymes. Glycoproteins are important for white blood cell recognition. Examples of glycoproteins in the immune system are: molecules such as antibodies (immunoglobulins), which interact directly with antigens. molecules of the major histocompatibility complex (or MHC), which are expressed on the surface of cells and interact with T cells as part of the adaptive immune response. sialyl Lewis X antigen on the surface of leukocytes. H antigen of the ABO blood compatibility antigens. Other examples of glycoproteins include: gonadotropins (luteinizing hormone and follicle-stimulating hormone) glycoprotein IIb/IIIa, an integrin found on platelets that is required for normal platelet aggregation and adherence to the endothelium. 
components of the zona pellucida, which surrounds the oocyte, and is important for sperm-egg interaction. structural glycoproteins, which occur in connective tissue. These help bind together the fibers, cells, and ground substance of connective tissue. They may also help components of the tissue bind to inorganic substances, such as calcium in bone. Glycoprotein-41 (gp41) and glycoprotein-120 (gp120) are HIV viral coat proteins. Soluble glycoproteins often show a high viscosity, for example, in egg white and blood plasma. Miraculin is a glycoprotein extracted from Synsepalum dulcificum, a berry which alters human tongue receptors to recognize sour foods as sweet. Variable surface glycoproteins allow the sleeping-sickness parasite Trypanosoma to escape the immune response of the host. The viral spike of the human immunodeficiency virus is heavily glycosylated. Approximately half the mass of the spike is glycosylation, and the glycans act to limit antibody recognition as the glycans are assembled by the host cell and so are largely 'self'. Over time, some patients can evolve antibodies to recognise the HIV glycans, and almost all so-called 'broadly neutralising antibodies' (bnAbs) recognise some glycans. This is possible mainly because the unusually high density of glycans hinders normal glycan maturation and they are therefore trapped in the premature, high-mannose state. This provides a window for immune recognition. In addition, as these glycans are much less variable than the underlying protein, they have emerged as promising targets for vaccine design. P-glycoprotein is critical for antitumor research due to its ability to block the effects of antitumor drugs. P-glycoprotein, or multidrug transporter (MDR1), is a type of ABC transporter that transports compounds out of cells. This transportation of compounds out of cells includes drugs made to be delivered to the cell, causing a decrease in drug effectiveness. Therefore, being able to inhibit this behavior would decrease P-glycoprotein interference in drug delivery, making this an important topic in drug discovery. For example, P-glycoprotein causes a decrease in anti-cancer drug accumulation within tumor cells, limiting the effectiveness of chemotherapies used to treat cancer. Hormones Hormones that are glycoproteins include: Follicle-stimulating hormone Luteinizing hormone Thyroid-stimulating hormone Human chorionic gonadotropin Alpha-fetoprotein Erythropoietin (EPO) Distinction between glycoproteins and proteoglycans Functions Analysis A variety of methods are used in the detection, purification, and structural analysis of glycoproteins. Synthesis The glycosylation of proteins has an array of different applications, from influencing cell-to-cell communication to changing the thermal stability and the folding of proteins. Due to the unique abilities of glycoproteins, they can be used in many therapies. By understanding glycoproteins and their synthesis, they can be made to treat cancer, Crohn's disease, high cholesterol, and more. The process of glycosylation (binding a carbohydrate to a protein) is a post-translational modification, meaning it happens after the production of the protein. Glycosylation is a process that roughly half of all human proteins undergo and heavily influences the properties and functions of the protein. Within the cell, glycosylation occurs in the endoplasmic reticulum. Recombination There are several techniques for the assembly of glycoproteins. One technique utilizes recombination. 
The first consideration for this method is the choice of host, as there are many different factors that can influence the success of glycoprotein recombination, such as cost, the host environment, the efficacy of the process, and other considerations. Some examples of host cells include E. coli, yeast, plant cells, insect cells, and mammalian cells. Of these options, mammalian cells are the most common because their use does not face the same challenges that other host cells do, such as different glycan structures, shorter half-life, and potential unwanted immune responses in humans. Of mammalian cells, the most common cell line used for recombinant glycoprotein production is the Chinese hamster ovary line. However, as technologies develop, the most promising cell lines for recombinant glycoprotein production are human cell lines. Glycosylation The formation of the link between the glycan and the protein is a key element of the synthesis of glycoproteins. The most common method of glycosylation of N-linked glycoproteins is through the reaction between a protected glycan and a protected asparagine. Similarly, an O-linked glycoprotein can be formed through the addition of a glycosyl donor to a protected serine or threonine. These two methods are examples of natural linkage. However, there are also methods that form unnatural linkages. Some methods include ligation and a reaction between a serine-derived sulfamidate and thiohexoses in water. Once this linkage is complete, the amino acid sequence can be expanded upon using solid-phase peptide synthesis.
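Because mass spectrometry is used to characterize the attached carbohydrate chains, a routine piece of arithmetic in glycomics is relating a monosaccharide composition to a glycan mass. The short Python sketch below illustrates the calculation in its simplest form, summing standard monoisotopic residue masses and adding back one water for the free glycan; the dictionary keys, function name and example composition (resembling a disialylated biantennary N-glycan) are chosen purely for illustration.

# Approximate monoisotopic residue masses (Da) of monosaccharides commonly
# reported in glycomics; each value is the free sugar minus one water,
# since every glycosidic bond releases a water molecule.
RESIDUE_MASS = {
    "Hex": 162.0528,     # hexoses such as glucose, galactose, mannose
    "HexNAc": 203.0794,  # N-acetylhexosamines such as GlcNAc, GalNAc
    "dHex": 146.0579,    # deoxyhexoses such as fucose
    "NeuAc": 291.0954,   # sialic acid (Neu5Ac)
}
WATER = 18.0106  # one water added back for the free, released glycan

def glycan_mass(composition: dict) -> float:
    """Approximate monoisotopic mass (Da) of a free glycan from its residue composition."""
    return sum(RESIDUE_MASS[sugar] * count for sugar, count in composition.items()) + WATER

if __name__ == "__main__":
    composition = {"Hex": 5, "HexNAc": 4, "NeuAc": 2}  # illustrative composition only
    print(f"calculated mass: {glycan_mass(composition):.3f} Da")  # about 2222.8 Da

In real glycomics workflows the calculation is typically run in the other direction: an observed glycopeptide mass is matched against candidate compositions, and the assignment is then confirmed with fragmentation data.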
Biology and health sciences
Proteins
Biology
164922
https://en.wikipedia.org/wiki/Strike%20fighter
Strike fighter
In current military parlance, a strike fighter is a multirole combat aircraft designed to operate both as an attack aircraft and as an air superiority fighter. As a category, it is distinct from fighter-bombers, and is closely related to the concept of interdictor aircraft, although it puts more emphasis on aerial combat capabilities. Examples of notable contemporary strike fighters are the American McDonnell Douglas F-15E Strike Eagle, Boeing F/A-18E/F Super Hornet and Lockheed Martin F-35 Lightning II, the Russian Sukhoi Su-34, and the Chinese Shenyang J-16. History Beginning in the 1940s, the term "strike fighter" was occasionally used in navies to refer to fighter aircraft capable of performing air-to-surface strikes, such as the Westland Wyvern, Blackburn Firebrand and Blackburn Firecrest. The term "light weight tactical strike fighter (LWTSF)" was used to describe the aircraft to meet the December 1953 NATO specification NBMR-1. Amongst the designs submitted to the competition were the Aerfer Sagittario 2, Breguet Br.1001 Taon, Dassault Étendard VI, Fiat G.91 and Sud-Est Baroudeur. The term entered normal use in the United States Navy by the end of the 1970s, becoming the official description of the new McDonnell Douglas F/A-18 Hornet. In 1983, the U.S. Navy even renamed each existing Fighter Attack Squadron to Strike Fighter Squadron to emphasize the air-to-surface mission (as the "Fighter Attack" designation was confused with the "Fighter" designation, which flew pure air-to-air missions). This name quickly spread to non-maritime use. When the F-15E Strike Eagle came into service, it was originally called a "dual role fighter", but it instead quickly became known as a "strike fighter". Joint Strike Fighter In 1995, the U.S. military's Joint Advanced Strike Technology program changed its name to the Joint Strike Fighter program. The project consequently resulted in the development of the F-35 Lightning II family of fifth generation multirole fighters to perform ground attack, reconnaissance, and air defense missions with stealth capability. Modern strike fighters Boeing F-15EX Eagle II McDonnell Douglas F-15E Strike Eagle Lockheed Martin F-35 Lightning II Sukhoi Su-34 Shenyang J-16
Technology
Military aviation
null
164924
https://en.wikipedia.org/wiki/Norovirus
Norovirus
Norovirus, also known as Norwalk virus and sometimes referred to as the winter vomiting disease, is the most common cause of gastroenteritis. Infection is characterized by non-bloody diarrhea, vomiting, and stomach pain. Fever or headaches may also occur. Symptoms usually develop 12 to 48 hours after being exposed, and recovery typically occurs within one to three days. Complications are uncommon, but may include dehydration, especially in the young, the old, and those with other health problems. The virus is usually spread by the fecal–oral route. This may be through contaminated food or water or person-to-person contact. It may also spread via contaminated surfaces or through air from the vomit of an infected person. Risk factors include unsanitary food preparation and sharing close quarters. Diagnosis is generally based on symptoms. Confirmatory testing is not usually available but may be performed by public health agencies during outbreaks. Prevention involves proper hand washing and disinfection of contaminated surfaces. There is no vaccine or specific treatment for norovirus. Management involves supportive care such as drinking sufficient fluids or intravenous fluids. Oral rehydration solutions are the preferred fluids to drink, although other drinks without caffeine or alcohol can help. Alcohol-based hand sanitizers are not effective against the norovirus, according to the NHS information page on the subject; this is due to norovirus being a non-enveloped virus. Norovirus results in about 685 million cases of disease and 200,000 deaths globally a year. It is common both in the developed and developing world. Those under the age of five are most often affected, and in this group it results in about 50,000 deaths in the developing world. Norovirus infections occur more commonly during winter months. It often occurs in outbreaks, especially among those living in close quarters. In the United States, it is the cause of about half of all foodborne disease outbreaks. The virus is named after the city of Norwalk, Ohio, US, where an outbreak occurred in 1968. Signs and symptoms Norovirus infection is characterized by nausea, vomiting, watery diarrhea, abdominal pain, and in some cases, loss of taste. A person usually develops symptoms of gastroenteritis 12 to 48 hours after being exposed to norovirus. General lethargy, weakness, muscle aches, headaches, and low-grade fevers may occur. The disease is usually self-limiting, and severe illness is rare. Although having norovirus can be unpleasant, it is not usually dangerous, and most who contract it make a full recovery within two to three days. Norovirus can establish a long-term infection in people who are immunocompromised, such as those with common variable immunodeficiency or with a suppressed immune system after organ transplantation. These infections can be with or without symptoms. In severe cases, persistent infections can lead to norovirus‐associated enteropathy, intestinal villous atrophy, and malabsorption. Virology Transmission Noroviruses are transmitted directly from person to person (62–84% of all reported outbreaks) and indirectly via contaminated water and food. Transmission can be aerosolized when those stricken with the illness vomit or by a toilet flush when vomit or diarrhea is present; infection can follow eating food or breathing air near an episode of vomiting, even if cleaned up. The viruses continue to be shed after symptoms have subsided, and shedding can still be detected many weeks after infection. 
Vomiting, in particular, transmits infection effectively and appears to allow airborne transmission. In one incident, a person who vomited spread the infection across a restaurant, suggesting that many unexplained cases of food poisoning may have their source in vomit. In December 1998, 126 people were dining at six tables; one person vomited onto the floor. Staff quickly cleaned up, and people continued eating. Three days later others started falling ill; 52 people reported a range of symptoms, from fever and nausea to vomiting and diarrhea. The cause was not immediately identified. Researchers plotted the seating arrangement: more than 90% of the people at the same table as the sick person later reported becoming ill. There was a direct correlation between the risk of infection of people at other tables and how close they were to the sick person. More than 70% of the diners at an adjacent table fell ill; at a table on the other side of the restaurant, the infection rate was still 25%. The outbreak was attributed to a Norwalk-like virus (norovirus). Other cases of transmission by vomit were later identified. In one outbreak at an international scout jamboree in the Netherlands, each person with gastroenteritis infected an average of 14 people before increased hygiene measures were put in place. Even after these new measures were enacted, an ill person still infected an average of 2.1 other people. A US Centers for Disease Control and Prevention (CDC) study of 11 outbreaks in New York State lists the suspected mode of transmission as person-to-person in seven outbreaks, foodborne in two, waterborne in one, and one unknown. The source of waterborne outbreaks may include water from municipal supplies, wells, recreational lakes, swimming pools, and ice machines. Shellfish and salad ingredients are the foods most often implicated in norovirus outbreaks. Ingestion of shellfish that has not been sufficiently heated poses a high risk for norovirus infection. Foods other than shellfish may be contaminated by infected food handlers. Many norovirus outbreaks have been traced to food that was handled by only one infected person. Between March and August 2017, in Quebec, Canada, there was an outbreak of norovirus that sickened more than 700 people. According to an investigation by the Canadian Food Inspection Agency (CFIA), the culprit was frozen raspberries imported from Harbin Gaotai Food Co Ltd, a Chinese supplier, and Canadian authorities then issued a recall of raspberry products from Harbin Gaotai. According to the CDC, there was a surge in norovirus cases on thirteen cruise ships in 2023, marking the highest number of outbreaks since 2012. 
Noroviruses from genogroup II, genotype 4 (abbreviated as GII.4) account for the majority of adult outbreaks of gastroenteritis and often sweep across the globe. Recent examples include the US95/96-US strain, associated with global outbreaks in the mid- to late-1990s; Farmington Hills virus, associated with outbreaks in Europe and the United States in 2002 and in 2004; and Hunter virus, which was associated with outbreaks in Europe, Japan, and Australasia. In 2006, there was another large increase in NoV infection around the globe. Reports have shown a link between the expression of human histo-blood group antigens (HBGAs) and the susceptibility to norovirus infection. Studies have suggested the capsid of noroviruses may have evolved from selective pressure of human HBGAs. HBGAs are not, however, the receptor or facilitator of norovirus infection. Co-factors such as bile salts may facilitate the infection, making it more intense when introduced during or after the initial infection of the host tissue. Bile salts are produced by the liver in response to eating fatty foods, and they help with the absorption of consumed lipids. It is not yet clear at what specific point in the norovirus replication cycle bile salts facilitate infection: penetration, uncoating, or maintaining capsid stability. The protein MDA-5 may be the primary immune sensor that detects the presence of noroviruses in the body. Some people have common variations of the MDA-5 gene that could make them more susceptible to norovirus infection. Structure Viruses in Norovirus are non-enveloped, with icosahedral geometries. Capsid diameters vary widely, from 23 to 40 nm. The larger capsids (38–40 nm) exhibit T=3 symmetry and are composed of 180 VP1 proteins. Small capsids (23 nm) show T=1 symmetry, and are composed of 60 VP1 proteins. The virus particles demonstrate an amorphous surface structure when visualized using electron microscopy. Genome Noroviruses contain a linear, non-segmented, positive-sense RNA genome of approximately 7.5 kilobases, encoding a large polyprotein which is cleaved into six smaller non-structural proteins (NS1/2 to NS7) by the viral 3C-like protease (NS6), a major structural protein (VP1) of about 58–60 kDa and a minor capsid protein (VP2). The most variable region of the viral capsid is the P2 domain, which contains antigen-presenting sites and carbohydrate-receptor binding regions. Evolution Groups 1, 2, 3, and 4 last shared a common ancestor in AD 867. The group 2 and group 4 viruses last shared a common ancestor in approximately AD 1443 (95% highest posterior density AD 1336–1542). Several estimates of the evolution rate have been made, varying from 2.03 × 10−3 to 8.98 × 10−3 substitutions per site per year. The estimated mutation rate (1.21 × 10−2 to 1.41 × 10−2 substitutions per site per year) in this virus is high even compared with other RNA viruses. In addition, a recombination hotspot exists at the ORF1-ORF2 (VP1) junction. Replication cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by attachment to host receptors, which mediates endocytosis. Positive-stranded RNA virus transcription is the method of replication. Translation takes place by leaky scanning and RNA termination-reinitiation. Humans and other mammals serve as the natural hosts. Transmission routes are fecal–oral and via contamination. Pathophysiology When a person becomes infected with norovirus, the virus replicates within the small intestine. 
The principal symptom is acute gastroenteritis, characterized by nausea, forceful vomiting, watery diarrhea, and abdominal pain, which develops 12 to 48 hours after exposure and lasts for 24–72 hours. Sometimes there is loss of taste, general lethargy, weakness, muscle aches, headache, cough, and/or low-grade fever. The disease is usually self-limiting. Severe illness is rare; although people are frequently treated at the emergency ward, they are rarely admitted to the hospital. The number of deaths from norovirus in the United States is estimated to be around 570–800 each year, with most of these occurring in the very young, the elderly, and persons with weakened immune systems. Symptoms may become life-threatening in these groups if dehydration or electrolyte imbalance is ignored or left untreated. Diagnosis Specific diagnosis of norovirus is routinely made by polymerase chain reaction (PCR) assays or quantitative PCR assays, which give results within a few hours. These assays are very sensitive and can detect as few as 10 virus particles. Tests such as ELISA that use antibodies against a mixture of norovirus strains are available commercially, but lack specificity and sensitivity. Prevention After infection, immunity to the same strain of the virus – the genotype – protects against reinfection for six months to two years. This immunity does not fully protect against infection with the other diverse genotypes of the virus. In Canada, norovirus is a notifiable disease. In both the US and the UK it is not notifiable. Hand washing and disinfectants Hand washing with soap and water is an effective method for reducing the transmission of norovirus pathogens. Alcohol rubs (≥62% isopropyl alcohol) may be used as an adjunct, but are less effective than hand-washing, as norovirus lacks a lipid viral envelope. Surfaces where norovirus particles may be present can be sanitised with a solution of 1.5% to 7.5% household bleach in water, or other disinfectants effective against norovirus. Health care facilities In healthcare environments, the prevention of nosocomial infections involves routine and terminal cleaning. Nonflammable alcohol vapor in CO2 systems is used in health care environments where medical electronics would be adversely affected by aerosolized chlorine or other caustic compounds. In 2011, the CDC published a clinical practice guideline addressing strategies for the prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. Based on a systematic review of published scientific studies, the guideline presents 51 specific evidence-based recommendations, which were organized into 12 categories: 1) patient cohorting and isolation precautions, 2) hand hygiene, 3) patient transfer and ward closure, 4) food handlers in healthcare, 5) diagnostics, 6) personal protective equipment, 7) environmental cleaning, 8) staff leave and policy, 9) visitors, 10) education, 11) active case-finding, and 12) communication and notification. The guideline also identifies eight high-priority recommendations and suggests several areas in need of future research. Vaccine trials LigoCyte announced in 2007 that it was working on a vaccine and had started phase 1 trials. The company has since been taken over by Takeda Pharmaceutical Company. A bivalent (NoV GI.1/GII.4) intramuscular vaccine has since completed phase 1 trials. In 2020 the phase 2b trials were finished. 
The vaccine relies on using a virus-like particle that is made of the norovirus capsid proteins in order to mimic the external structure of the virus. Since there is no RNA in this particle, it is incapable of reproducing and cannot cause an infection. Persistence The norovirus can survive for long periods outside a human host depending on the surface and temperature conditions: it can survive for weeks on hard and soft surfaces, and it can survive for months, maybe even years in contaminated still water. A 2006 study found the virus remained on surfaces used for food preparation seven days after contamination. Detection in food Routine protocols to detect norovirus in clams and oysters by reverse transcription polymerase chain reaction are being employed by governmental laboratories such as the Food and Drug Administration (FDA) in the US. Treatment There is no specific medicine to treat people with norovirus illness. Norovirus infection cannot be treated with antibiotics because it is a virus. Treatments aim to avoid complications by measures such as the management of dehydration caused by fluid loss in vomiting and diarrhea, and to mitigate symptoms using antiemetics and antidiarrheals. Epidemiology Norovirus causes about 18% of all cases of acute gastroenteritis worldwide. It is relatively common in developed countries and in low-mortality developing countries (20% and 19% respectively) compared to high-mortality developing countries (14%). Proportionately it causes more illness in people in the community or in hospital outpatients (24% and 20% respectively) as compared with hospital inpatients (17%) in whom other causes are more common. Age and emergence of new norovirus strains do not appear to affect the proportion of gastroenteritis attributable to norovirus. In the United States, the estimated annual number of norovirus cases was 21 million, representing a rate of 6,270 cases per 100,000 individuals. Norovirus is a common cause of epidemics of gastroenteritis on cruise ships. The CDC, through its Vessel Sanitation Program, records and investigates outbreaks of gastrointestinal illness – mostly caused by norovirus – on cruise ships with both a US and foreign itinerary; there were 12 in 2015, and 10 from 1 January to 9 May 2016. An outbreak may affect over 25% of passengers, and a smaller proportion of crew members. Human genetics Epidemiological studies have shown that individuals with different ABO(H) (histo-blood group) phenotypes are infected with NoV strains in a genotype-specific manner. GII.4 includes global epidemic strains and binds to more histo-blood group antigens than other genogroups. FUT2 fucosyltransferase transfers a fucose sugar to the end of the ABO(H) precursor in gastrointestinal cells and saliva glands. The ABH-antigen produced is thought to act as a receptor for human norovirus: A non-functional fucosyltransferase FUT2 provides high protection from the most common norovirus strain, GII.4. Homozygous carriers of any nonsense mutation in the FUT2 gene are called non-secretors, as no ABH-antigen is produced. Approximately 20% of Caucasians are non-secretors due to G428A and C571T nonsense mutations in FUT2 and therefore have strong – although not absolute – protection from the norovirus GII.4. Non-secretors can still produce ABH antigens in erythrocytes, as the precursor is formed by FUT1. Some norovirus genotypes (GI.3) can infect non-secretors. 
History The norovirus was originally named the "Norwalk agent" after Norwalk, Ohio, in the United States, where an outbreak of acute gastroenteritis occurred among children at Bronson Elementary School in November 1968. In 1972, electron microscopy on stored human stool samples identified a virus, which was given the name "Norwalk virus". Numerous outbreaks with similar symptoms have been reported since. The cloning and sequencing of the Norwalk virus genome showed that these viruses have a genomic organization consistent with viruses belonging to the family Caliciviridae. The name "norovirus" (Norovirus for the genus) was approved by the International Committee on Taxonomy of Viruses (ICTV) in 2002. In 2011, however, a press release and a newsletter were published by ICTV, which strongly encouraged the media, national health authorities, and the scientific community to use the virus name Norwalk virus, rather than the genus name Norovirus when referring to outbreaks of the disease. This was also a public response by ICTV to the request from an individual in Japan to rename the Norovirus genus because of the possibility of negative associations for people in Japan and elsewhere who have the family name "Noro". Before this position of ICTV was made public, ICTV consulted widely with members of the Caliciviridae Study Group and carefully discussed the case. In addition to "Norwalk agent" and "Norwalk virus", the virus has also been called "Norwalk-like virus", "small, round-structured viruses" (SRSVs), Spencer flu, and "Snow Mountain virus". Common names of the illness caused by noroviruses still in use include "Roskilde illness", "winter vomiting disease", "winter vomiting bug", "viral gastroenteritis", and "acute nonbacterial gastroenteritis".
Biology and health sciences
Specific viruses
Health
164933
https://en.wikipedia.org/wiki/Airbus%20A320%20family
Airbus A320 family
The Airbus A320 family is a series of narrow-body airliners developed and produced by Airbus. The A320 was launched in March 1984, first flew on 22 February 1987, and was introduced in April 1988 by Air France. The first member of the family was followed by the stretched A321 (first delivered in January 1994), the shorter A319 (April 1996), and the even shorter A318 (July 2003). Final assembly takes place in Toulouse in France; Hamburg in Germany; Tianjin in China since 2009; and Mobile, Alabama in the United States since April 2016. The twinjet has a six-abreast economy cross-section and came with either CFM56 or IAE V2500 turbofan engines, except the A318, which was powered by the CFM56 or PW6000. The family pioneered the use of digital fly-by-wire and side-stick flight controls in airliners. Variants offer maximum take-off weights from , to cover a range. The 31.4 m (103 ft) long A318 typically accommodates 107 to 132 passengers. The 124–156 seat A319 is 33.8 m (111 ft) long. The A320 is 37.6 m (123 ft) long and can accommodate 150 to 186 passengers. The 44.5 m (146 ft) A321 offers 185 to 230 seats. The Airbus Corporate Jets are modified business jet versions of the standard commercial variants. In December 2010, Airbus announced the re-engined A320neo (new engine option), which entered service with Lufthansa in January 2016. With more efficient turbofans and improvements including sharklets, it offers up to 15% better fuel economy. The previous A320 generation is now called A320ceo (current engine option). American Airlines is the largest A320 operator with 482 aircraft in its fleet, while IndiGo is the largest customer with 930 aircraft on order. In October 2019, the A320 family surpassed the Boeing 737 to become the highest-selling airliner. In total, 19,075 A320 family aircraft had been ordered and 11,865 delivered, of which 10,947 aircraft were in service with more than 350 operators. The global A320 fleet had completed more than 176 million flights over 328 million block hours since its entry into service. The A320ceo initially competed with the 737 Classic and the MD-80, then their successors, the 737 Next Generation (737NG) and the MD-90 respectively, while the 737 MAX is Boeing's response to the A320neo. Development Origins When Airbus designed the A300 during the late 1960s and early 1970s, it envisaged a broad family of airliners with which to compete against Boeing and Douglas (later McDonnell Douglas), two established US aerospace manufacturers. From the moment of formation, Airbus had begun studies into derivatives of the Airbus A300B in support of this long-term goal. Prior to the service introduction of the first Airbus airliners, engineers within Airbus had identified nine possible variations of the A300 known as A300B1 to B9. A 10th variation, conceived in 1973, later the first to be constructed, was designated the A300B10. It was a smaller aircraft which would be developed into the long-range Airbus A310. Airbus then focused its efforts on the single-aisle market, which was dominated by the 737 and McDonnell Douglas DC-9. Plans from a number of European aircraft manufacturers called for a successor to the relatively successful BAC One-Eleven, and to replace the 737-200 and DC-9. Germany's MBB (Messerschmitt-Bölkow-Blohm), British Aircraft Corporation, Sweden's Saab and Spain's CASA worked on the EUROPLANE, a 180- to 200-seat aircraft. It was abandoned after intruding on A310 specifications. VFW-Fokker, Dornier and Hawker Siddeley worked on a number of 150-seat designs. 
The design within the JET study that was carried forward was the JET2 (163 passengers), which then became the Airbus S.A1/2/3 series (Single Aisle), before settling on the A320 name for its launch in 1984. Previously, Hawker Siddeley had produced a design called the HS.134 "Airbus" in 1965, an evolution of the HS.121 (formerly DH.121) Trident, which shared much of the general arrangement of the later JET3 study design. The name "Airbus" at the time referred to a BEA requirement, rather than to the later international programme. Design effort In June 1977 a new Joint European Transport (JET) programme was set up, established by British Aerospace (BAe), Aerospatiale, Dornier and Fokker. It was based at the then BAe (formerly Vickers) site in Weybridge, Surrey, UK. Although the members were all of Airbus' partners, they regarded the project as a separate collaboration from Airbus. This project was considered the forerunner of Airbus A320, encompassing the 130- to 188-seat market, powered by two CFM56s. It would have a cruise speed of Mach 0.84 (faster than the Boeing 737). The programme was later transferred to Airbus, leading up to the creation of the Single-Aisle (SA) studies in 1980, led by former leader of the JET programme, Derek Brown. The group looked at three different variants, covering the 125- to 180-seat market, called SA1, SA2 and SA3. Although unaware at the time, the consortium was producing the blueprints for the A319, A320 and A321, respectively. The single-aisle programme created divisions within Airbus about whether to design a shorter-range twinjet rather than a longer-range quadjet wanted by the West Germans, particularly Lufthansa. However, works proceeded, and the German carrier would eventually order the twinjet. In February 1981 the project was re-designated A320, with efforts focused on the blueprint formerly designated SA2. During the year, Airbus worked with Delta Air Lines on a 150-seat aircraft envisioned and required by the airline. The A320 would carry 150 passengers over using fuel from wing fuel tanks only. The -200 had the centre tank activated, increasing fuel capacity from . They would measure , respectively. Airbus considered a fuselage diameter of "the Boeing 707 and 727, or do something better" and settled on a wider cross-section with a internal width, compared to Boeing's . Although heavier, this allowed the A320 to compete more effectively with the 737. The A320 wing went through several design stages, eventually measuring . National shares The UK, France and West Germany wanted responsibility over final assembly and its associated work, known as "work-share arguments". The Germans requested an increased work-share of 40%, while the British wanted the major responsibilities to be swapped around to give partners production and research and development experience. In the end, British work-share was increased from that of the two previous Airbuses. France was willing to commit to launch aid, or subsidies, while the Germans were more cautious. The UK government was unwilling to provide funding for the tooling, requested by BAe and estimated at £250 million; it was postponed for three years. On 1 March 1984, the British government and BAe agreed that £50 million would be paid, whether the A320 flew or not, while the rest would be paid as a levy on each aircraft sold. In 1984, the program cost was then estimated at £2 billion ($2.8 billion) by Flight International, equivalent to £ billion today. Launch The programme was launched on 2 March 1984. 
At the time, Airbus had 96 orders. Air France was its first customer to sign a "letter of intent" for 25 A320s and options for 25 more at the 1981 Paris Air Show. In October 1983, British Caledonian placed seven firm orders, bringing total orders to more than 80. Cyprus Airways became the first customer to place an order for V2500-powered A320s in November 1984, followed by Pan Am with 16 firm orders and 34 options in January 1985, and then Inex Adria. One of the most significant orders occurred when Northwest Airlines placed an order for 100 A320s in October 1986, powered by CFM56 engines, later confirmed at the 1990 Farnborough Airshow. During A320 development, Airbus considered propfan technology, which was backed by Lufthansa. At the time unproven, the technology essentially consisted of a fan placed outside the engine nacelle, offering turbofan speeds and turboprop economics; ultimately, Airbus stuck with turbofans. Power on the A320 was to be supplied by two CFM56-5-A1s rated at . It was the only engine available until the arrival of the IAE V2500, offered by International Aero Engines, a group composed of Rolls-Royce plc, Pratt & Whitney, Japanese Aero Engine Corporation, Fiat and MTU. The first V2500 variant, the V2500-A1, has a thrust output of , hence the name. It is 4% more efficient than the CFM56, with cruise thrust-specific fuel consumption for the -A5 at for the CFM56-5A1. Entry into service In the presence of then-French Prime Minister Jacques Chirac and the Prince and Princess of Wales, the first A320 was rolled out of the final assembly line at Toulouse on 14 February 1987 and made its maiden flight on 22 February in 3 hours and 23 minutes. The flight test programme took 1,200 hours over 530 flights. European Joint Aviation Authorities (JAA) certification was awarded on 26 February 1988. The first A320 was delivered to Air France on 28 March, and began commercial service on 8 April with a flight between Paris and Berlin via Düsseldorf. In 1988, the clean-sheet aircraft program cost was 5.486 billion French francs. Stretching the A320: A321 The first derivative of the A320 was the Airbus A321, also known as the Stretched A320, A320-500 and A325. Its launch came on 24 November 1988 after commitments for 183 aircraft from 10 customers were secured. The aircraft was to be a minimally changed derivative, apart from minor wing modifications and the fuselage stretch itself. The wing would incorporate double-slotted flaps and minor trailing edge modifications, increasing wing area from to . The fuselage was lengthened by four plugs (two ahead and two behind the wings), making the A321 longer than the A320 overall. The length increase required enlarged overwing exits, which were repositioned in front of and behind the wings. The centre fuselage and undercarriage were reinforced to accommodate an increase in maximum takeoff weight of , for a total of . Final assembly for the A321 would be, as a first for any Airbus, carried out in Germany (then West Germany). This came after a dispute between the French, who claimed the move would incur $150 million (€135 million) in unnecessary expenditures associated with the new plant, and the Germans, who argued that it would be more productive for Airbus in the long run. The second production line was located at Hamburg, which would also subsequently produce the smaller Airbus A319 and A318. For the first time, Airbus entered the bond market, through which it raised $480 million (€475 million) to finance development costs. 
An additional $180 million (€175 million) was borrowed from the European Investment Bank and private investors. The maiden flight of the Airbus A321 came on 11 March 1993, when the prototype, registration F-WWIA, flew with IAE V2500 engines; the second prototype, equipped with CFM56-5B turbofans, flew in May. Lufthansa and Alitalia were the first to order the stretched Airbuses, with 20 and 40 aircraft, respectively. The first of Lufthansa's V2500-A5-powered A321s arrived on 27 January 1994, while Alitalia received its first CFM56-5B-powered aircraft on 22 March. Shrinking the A320: A319 The A319 was the following derivative of the baseline A320. The design was a "shrink", with its origins in the 130- to 140-seat SA1, part of the Single-Aisle studies, which had been shelved as the consortium focused on its bigger siblings. After healthy sales of the A320/A321, Airbus focused once more on what was then known as the A320M-7, meaning A320 minus seven fuselage frames. It would provide direct competition for the 737-300/-700. The shrink was achieved through the removal of four fuselage frames fore and three aft of the wing, cutting the overall length by . Consequently, the number of overwing exits was reduced from four to two. The bulk-cargo door was replaced by an aft container door, which can take in reduced height LD3-45 containers. Minor software changes were made to accommodate the different handling characteristics; otherwise the aircraft was largely unchanged. Power is provided by the CFM56-5A, CFM56-5B, or V2500-A5, derated to , with option for thrust. Airbus began offering the new model from 22 May 1992, with the actual launch of the $275 million (€250 million) programme occurring on 10 June 1993; the A319's first customer was ILFC, which signed for six aircraft. On 23 March 1995, the first A319 underwent final assembly at Airbus' German plant in Hamburg, where A321s were also assembled. It was rolled out on 24 August 1995, with the maiden flight taking place the following day. The certification programme took 350 airborne hours involving two aircraft. Certification for the CFM56-5B6/2-equipped variant was granted in April 1996, and qualification for the V2524-A5 started the following month. Delivery of the first A319, to Swissair, occurred on 25 April 1996; it entered service by month's end. In January 1997, an A319 broke a record during a delivery flight by flying the great circle route to Winnipeg, Manitoba from Hamburg in 9 hours and 5 minutes. The A319 has proven popular with low-cost airlines such as EasyJet, which purchased 172 of them. Second shrink: A318 The A318 was born out of mid-1990 studies between Aviation Industry Corporation of China (AVIC), Singapore Technologies Aerospace, Alenia and Airbus on a 95- to 125-seat aircraft project. The programme was called the AE31X, and covered the 95-seat AE316 and 115- to 125-seat AE317. The former would have had an overall length of , while the AE317 was longer by , at . The engines were to be two Rolls-Royce BR715s, CFM56-9s, or the Pratt & Whitney PW6000; with the MTOW of for the smaller version and for the AE317, the thrust requirement were and , respectively. Range was settled at and for the high gross weights of both variants. Both share a wingspan of and a flight deck similar to that of the A320 family. Costing $2 billion (€1.85 billion) to develop, aircraft production was to take place in China. Simultaneously, Airbus was developing the Airbus A318. 
In early 1998, Airbus revealed that it was designing a 100-seat aircraft based on the A320. The AE31X project was terminated by September 1998, and Airbus officially announced the A318 at that year's Farnborough Airshow. The aircraft was the smallest in Airbus's product range, and was developed coincidentally at the same time as the largest commercial aircraft in history, the Airbus A380. First called A319M5 in as early as March 1995, it was shorter by ahead of the wing and behind. These cuts reduced passenger capacity from 124 on the A319 to 107 passengers in a two-class layout. Range was , or with upcoming Sharklets. The 107-seater was launched on 26 April 1999 with the options and orders count at 109 aircraft. After three years of design, the maiden flight took place at Hamburg on 15 January 2002. Tests on the lead engine, the PW6000, revealed worse-than-expected fuel consumption. Consequently, Pratt & Whitney abandoned the five-stage high-pressure compressor (HPC) for the MTU-designed six-stage HPC. The 129 order book for the A318 shrunk to 80 largely because of switches to other A320 family members. After 17 months of flight certification, during which 850 hours and 350 flights were accumulated, JAA certification was obtained for the CFM56-powered variant on 23 May 2003. On 22 July 2003, first delivery for launch customer Frontier Airlines occurred, entering service before the end of the month. Production The Toulouse Blagnac final assembly line builds A320s, whereas the Hamburg Finkenwerder final assembly line builds A318s, A319s, and A321s. The Airbus factory in Tianjin, China assembles A319s, A320s, and A321s; A320s and A321s are also assembled at the Airbus Americas factory in Mobile, Alabama. Airbus produced a total of 42 A320s per month in 2015, and expected to increase to 50 per month in 2017. Production of parts takes place in a large number of countries around the world. For example, the centre fuselage is made in Hamburg, Germany; the horizontal stabiliser is produced in Getafe, Spain; and the rudder is produced in Harbin, China. As Airbus targets a 60 monthly global production rate by mid-2019, the Tianjin line delivered 51 in 2016 and it could assemble six per month from four as it starts producing A320neos in 2017; 147 Airbus were delivered in 2016 in China, 20% of its production, mostly A320-family, a 47% market share as the country should become the world's largest market ahead of the US before 2027. In June 2018, along a larger and modernised delivery centre, Airbus inaugurated its fourth Hamburg production line, with two seven-axis robots to drill 80% of fuselage upper side holes, autonomous mobile tooling platforms and following Design Thinking principles. By January 2019, Mobile was outputting 4.5 A320s per month, raising to five by the end of the year. In September 2019, Airbus reached a milestone with the delivery of the 9000th A320-family aircraft, to Easyjet. In October 2019, Airbus inaugurated a highly automated fuselage structure assembly line for A320 Family aircraft in Hamburg, showcasing an evolution in Airbus' industrial production system. Production rates continue to rise, and Airbus aims to reach a production rate of 63 aircraft per month by 2021, which would result in the 10,000th delivery occurring early that year. Due to the impact of the COVID-19 pandemic on aviation, demand for new jets was reduced in 2020 and Airbus cut its monthly production from 60 to 40 A320s. 
In October 2020, the 500th A320 built in Tianjin, an A320neo, was delivered to China Southern, twelve years after the final assembly line start in 2008. A320 Enhanced In 2006, Airbus started the A320 Enhanced (A320E) programme as a series of improvements targeting a 4–5% efficiency gain, with large winglets (2%), aerodynamic refinements (1%), weight savings and a new aircraft cabin. Engine improvements that reduced fuel consumption by 1% were made to the A320 in 2007 with the CFM56 Tech Insertion and in 2008 with the V2500Select (One). Sharklets In 2006, Airbus tested three styles of winglets intended to counteract the wing's lift-induced drag and wingtip vortices more effectively than the previous wingtip fence. The first design type to be tested was developed by Airbus and based on work done by the programme. The second type of winglet incorporated a more blended design and was designed by Winglet Technology, a company based in Wichita, Kansas, USA. Two aircraft were used in the flight test evaluation campaign – the prototype A320, which had been retained by Airbus for testing, and a new build aircraft which was fitted with both types of winglets before it was delivered to JetBlue. Despite the anticipated efficiency gains and development work, Airbus announced that those winglets would not be offered to customers, claiming that the weight of the modifications required negated any aerodynamic benefits. On 17 December 2008, Airbus announced it was to begin flight testing an existing blended winglet design developed by Aviation Partners Inc. as part of an A320 modernisation programme using the A320 prototype. Airbus launched the sharklet blended winglets during the November 2009 Dubai Airshow. Installation adds but offers a 3.5% fuel burn reduction on flights over , saving approximately US$220,000 and 700 t of CO2 per aircraft per year. The tall wingtip devices are manufactured by Korean Air Aerospace Division. In December 2011, Airbus filed suit in the western district of Texas over Aviation Partners' claims of infringement of its patents on winglet design and construction which were granted in 1993. Airbus' lawsuit seeks to reject responsibility to pay royalties to Aviation Partners for using its designs, despite work performed together with both parties to develop advanced winglets for the Airbus A320neo. The first sharklet-equipped Airbus A320 was delivered to Indonesia AirAsia on 21 December 2012, offering a payload and range increases over the original aircraft specifications. Cabin In 2007, Airbus introduced a new enhanced, quieter cabin with better luggage storage and a more modern look and feel, and a new galley that reduced weight, increased revenue space and improved ergonomics and design for food hygiene and recycling. It offered a new air purifier with filters and a catalytic converter, removing unpleasant smells from the air before it is pumped into the cabin, as well as LEDs for mood lighting and a new passenger service unit (PSU). Offering 10% more overhead bin volume, more shoulder room, a weight reduction, a new intercom and in-flight entertainment system, noise reduction and slimmer PSU, the enhanced cabin can be retrofitted. The flight crew controls the cabin through touchscreen displays. New Engine Option The A320neo (neo for new engine option) is a development launched on 1 December 2010, making its first flight on 25 September 2014 and introduced by Lufthansa on 25 January 2016. 
Re-engined with CFM International LEAP-1A or Pratt & Whitney PW1000G engines and with large sharklets, it was designed to be 15% more fuel efficient. Its three variants are based on the previous A319, A320 and A321. Airbus received 6,031 orders by March 2018 and delivered 318 by May 2018. The original family was renamed A320ceo, for current engine option. As of July 2024, IndiGo has 173 Airbus A320neos under service, making it the largest operator of this type of aircraft. Replacement airliner In 2006, Airbus was studying a future replacement for the A320 series, tentatively dubbed as NSR or "New Short-Range aircraft". The follow-on aircraft to replace the A320 was to be named A30X. In 2007, Airbus North America President Barry Eccleston stated that the earliest the aircraft could have been available was 2017. In January 2010, John Leahy, Airbus's chief operating officer-customers, stated that an all-new single-aisle aircraft was unlikely to be constructed before 2024 or 2025. Design The Airbus A320 family are narrow-body (single-aisle) aircraft with a retractable tricycle landing gear and powered by two wing pylon-mounted turbofan engines. After the oil price rises of the 1970s, Airbus needed to minimise the trip fuel costs of the A320. To that end, it adopted composite primary structures for the empennage with a conventional tail configuration, centre-of-gravity control using fuel, a glass cockpit (EFIS) with side-stick controllers and a two-crew flight deck. Airbus claimed the 737-300 burns 35% more fuel and has a 16% higher operating cost per seat than the V2500-powered A320. A 150-seat A320 burns of jet fuel over (between Los Angeles and New York City), or per seat with a 0.8 kg/L fuel. Its wing is long and thin, offering better aerodynamic efficiency because of the higher aspect ratio than the competing 737 and MD-80. Airframe The Airbus A320 family are low-wing cantilever monoplanes with a conventional empennage with a single vertical stabiliser and rudder. Its wing sweep is 25 degrees. Compared to other airliners of the same class, the A320 features a wider single-aisle cabin of outside diameter, compared to the of the Boeing 737 or 757, and larger overhead bins. Its cargo hold can accommodate unit load device containers. The A320 airframe includes composite materials and aluminium alloys to save weight and reduce the total number of parts to decrease the maintenance costs. Its tail assembly is made almost entirely of such composites by CASA, which also builds the elevators, main landing gear doors, and rear fuselage parts. Flight deck The A320 flight deck features a full glass cockpit, rather than the hybrid versions found in previous airliners. It is also equipped with an Electronic Flight Instrument System (EFIS) with side-stick controllers. The A320 has an Electronic Centralised Aircraft Monitor (ECAM) to give the flight crew information about all of the systems on the aircraft. The only analogue instruments were the radio-magnetic indicator and brake pressure indicator. Since 2003, the A320 has featured liquid crystal display (LCD) units on the flight deck instead of the original cathode-ray tube (CRT) displays. These include both main displays and the backup artificial horizon, which also previously had an analogue display. Airbus offers an avionics upgrade for older A320 aircraft, the In-Service Enhancement Package, to keep them updated. Digital head-up displays are also available. 
The A320 retained the dark cockpit concept (in which an indicator is off when its system is running, so that a lit indicator draws attention to a malfunction) from the A310, the first widebody designed to be operated without a flight engineer; the approach was influenced by Bernard Ziegler, son of the first Airbus CEO Henri Ziegler. 
Fly-by-wire
The A320 is the world's first airliner with a digital fly-by-wire (FBW) flight control system: input commands made through the side-stick are interpreted by flight control computers and transmitted to the flight control surfaces within the limits of flight envelope protection. In the 1980s, the computer-controlled dynamic flight control system of the Dassault Mirage 2000 fighter cross-fertilised the Airbus team, which tested FBW on an A300. At its introduction, fly-by-wire with flight envelope protection was a new experience for many pilots. All subsequent Airbus aircraft have a similar human/machine interface and systems control philosophy, to facilitate cross-type qualification with minimal training. For Roger Béteille, then Airbus president, introducing fly-by-wire with flight envelope protection was one of the most difficult decisions he had ever made, explaining: "Either we were going to be first with new technologies or we could not expect to be in the market." Early A320s used the Intel 80186 and Motorola 68010 processors. In 1988, the flight management computer contained six Intel 80286 CPUs, running in three logical pairs, with 2.5 megabytes of memory. 
Engines
The suppliers providing turbofan engines for the A320ceo family were CFM International with the CFM56, International Aero Engines with its V2500, and Pratt & Whitney, whose PW6000 was available only for the A318; the A320neo family is powered by CFM International LEAP-1A or Pratt & Whitney PW1000G engines. The engines on the A320 family tend to make a distinct two-tone drone when flying low and when taxiing. 
Operational history
The Joint Aviation Authorities (JAA) issued the type certificate for the A320 on 26 February 1988. After entering service on 18 April 1988 with Air France, Airbus expanded the A320 family rapidly: it launched the 185-seat A321 in 1989 and first delivered it in 1994, launched the 124-seat A319 in 1993 and delivered it in 1996, and launched the 107-seat A318 in 1999 with first deliveries in 2003. The global A320 fleet had a 99.7 percent operational reliability rate over the preceding 12 months and had completed more than 176 million flights over 328 million block hours since entry into service. 
Competition
The A320 family was developed to compete with the Boeing 737 Classics (-300/-400/-500) and the McDonnell Douglas MD-80/90 series, and has since faced challenges from the Boeing 737 Next Generation (-600/-700/-800/-900) and the 717 during its two decades in service. As of 2010, the A320 family also faced competition from Embraer's E-195 (against the A318) and the CSeries being developed by Bombardier (against the A318/A319). Airbus has delivered 8,605 A320 family aircraft since their certification and first delivery in early 1988, with another 6,056 on firm order (as of 31 December 2018). In comparison, Boeing has shipped 10,444 737-series aircraft since late 1967, including 8,918 since March 1988, and has a further 4,763 on firm order (as of 31 December 2018). By September 2018, there were 7,251 A320ceo family aircraft in service versus 6,757 737NGs, while Airbus expected to deliver 3,174 A320neos compared with 2,999 Boeing 737 MAXs through 2022. 
The A320 sold well to low-cost start-ups, and offering a choice of engines could make it more attractive to airlines and lessors than the single-sourced 737, although the 737's sole-source CFM engines are extremely reliable. The six-month head start of the A320neo allowed Airbus to rack up 1,000 orders before Boeing announced the MAX. The A321 has outsold the 737-900 three to one, and the A321neo is again dominating the 737-9 MAX, which is to be joined by the 737-10 MAX. 
Maintenance
A checks are performed every 750 flight hours, and structural inspections take place at six- and 12-year intervals. 
Variants
The baseline A320 has given rise to a family of aircraft which share a common design but with passenger capacities ranging from 100 on the A318 to 220 on the A321. They compete with the 737, 757, and 717. Because the four A320 variants share the same flight deck, all have the same pilot type rating. Today all variants are available as corporate jets. An A319 variant known as the A319LR was also developed, and military versions such as the A319 MPA also exist. American Airlines is the largest airline operator of the A320 family, with 392 aircraft in service as of 30 September 2017. Technically, the name "A320" refers only to the original mid-sized aircraft, but it is often used informally to indicate any member of the A318/A319/A320/A321 family. All variants have had 180-minute ETOPS (Extended-range Twin-engine Operational Performance Standards) certification since 2004 (EASA) and 2006 (FAA). 
A318
The Airbus A318 is the smallest member of the Airbus A320 family. The A318 carries up to 132 passengers and has a maximum range of . The aircraft entered service in July 2003 with Frontier Airlines, and shares a common type rating with all other Airbus A320 family variants, allowing existing A320 family pilots to fly the aircraft without the need for further training. It is the largest commercial aircraft certified by the European Aviation Safety Agency for steep approach operations, allowing flights at airports such as London City Airport. Relative to other Airbus A320 family variants, the A318 has sold in only small numbers, with total orders for only 80 aircraft placed . In 2018, the A318 list price was US$77.4 million. 
A319
The A319 is shorter than the A320. Also known as the A320M-7, it is a shortened, minimum-change version of the A320, with four frames removed forward of the wing and three frames removed aft of it. With a fuel capacity similar to the A320-200's and fewer passengers, its range with 124 passengers in a two-class configuration extends to , or with the "Sharklets". The engine options available on the A319 are the IAE V2500 and the CFM56; although identical to those of the A320, these engines are derated because of the A319's lower MTOW. The A319 was developed at the request of Steven F. Udvar-Házy, the former president and CEO of ILFC. The A319's launch customer was, in fact, ILFC, which had placed an order for six A319s by 1993. Anticipating further orders by Swissair and Alitalia, Airbus decided to launch the programme on 10 June 1993. Final assembly of the first A319 began on 23 March 1995, and the type entered service with Swissair in April 1996. The direct Boeing competitor is the 737-700. A total of 1,460 of the A319ceo model had been delivered, with 24 remaining on order, as of 30 September 2017. A new A319 cost $35 million in 1998; its value had halved by 2009 and reached scrap levels by 2019. In 2018, the A319 list price was US$92.3 million. 
ACJ319
The A319CJ (rebranded the ACJ319) is the corporate jet version of the A319. It incorporates removable extra fuel tanks (up to six additional centre tanks) installed in the cargo compartment, and an increased service ceiling of . Range with an eight-passenger payload and auxiliary fuel tanks (ACTs) is up to . Upon resale, the aircraft can be reconfigured as a standard A319 by removing its extra tanks and corporate cabin outfit, thus maximising its resale value. It was formerly also known as the ACJ, or Airbus Corporate Jet, and since 2014 it has carried the marketing designation ACJ319. The aircraft seats up to 39 passengers, but may be outfitted by customers in any configuration. Tyrolean Jet Services Mfg. GmbH & CO KG, MJET and Reliance Industries are among its users. The A319CJ competes with other ultralarge-cabin corporate jets such as the Boeing 737-700-based Boeing Business Jet (BBJ) and the Embraer Lineage 1000, as well as with the large-cabin, ultralong-range Gulfstream G650, Gulfstream G550 and Bombardier Global 6000. It is powered by the same engine types as the A320. The A319CJ was used by the Escadron de Transport, d'Entraînement et de Calibration, which is in charge of transportation for France's officials, and also by the Flugbereitschaft of the German Air Force for transportation of Germany's officials. An ACJ serves as a presidential or official aircraft of Armenia, Azerbaijan, Brazil, Bulgaria, Czech Republic, Germany, Italy, Malaysia, Slovakia, Thailand, Turkey, Ukraine, and Venezuela. 
A320
The A320 series has two variants, the A320-100 and A320-200. Only 21 A320-100s were produced. These aircraft, the first to be manufactured, were delivered to Air Inter (later acquired by Air France) and to British Airways, the latter as a result of an order from British Caledonian placed before its acquisition. The primary differences of the -200 from the -100 were wingtip fences and increased fuel capacity, providing greater range. Powered by two CFM56-5s or IAE V2500s with thrust ratings of , the A320's typical range with 150 passengers is . A total of 4,512 of the A320ceo model had been delivered, with 220 remaining on order, as of 30 September 2017. The closest Boeing competitor is the 737-800. In 1988, the value of a new A320 was $30 million; it reached $40 million by the end of the 1990s, a roughly 30% increase that was below inflation, dipped to $37 million after 2001, peaked at $47 million in 2008, and stabilised at $40–42 million until the transition to the A320neo. In 2018, its list price was US$101.0 million. 
A321
As the A320 was beginning operations in 1988, the A321 was launched as its first derivative that same year. The A321 fuselage is stretched by , with a front plug immediately forward of the wing and a rear plug . The A321-100 maximum takeoff weight is increased by to . To maintain performance, double-slotted flaps were fitted and the wing area was increased by , to . The maiden flight of the first of two prototypes came on 11 March 1993, and the A321-100 entered service in January 1994 with Lufthansa. As the A321-100's range was less than the A320's, development of the heavier, longer-range A321-200 began in 1995. The greater range was achieved through higher-thrust engines (V2533-A5 or CFM56-5B3), minor structural strengthening, and an increase in fuel capacity with the installation of one or two optional tanks in the rear underfloor hold. Its fuel capacity was increased to and its maximum takeoff weight to . 
It first flew in December 1996 and entered service with Monarch Airlines in April 1997. The A321's closest Boeing competitors are the 737-900/900ER and the 757-200. In 2018, the A321 list price was US$118.3 million. A total of 1,784 units of the A321ceo model had been delivered, with seven remaining on order, as of 30 September 2023. 
Conversions
Civilian variants
Passenger-to-freighter (P2F)
A programme to convert A320 and A321 aircraft into freighters was set up by Airbus Freighter Conversion GmbH. Airframes were to be converted by Elbe Flugzeugwerke GmbH (EFW) in Dresden, Germany, and Zhukovsky, Russia. Launch customer AerCap signed a firm contract on 16 July 2008 to convert 30 of its passenger A320/A321s into A320/A321P2F (passenger-to-freighter) aircraft. However, on 3 June 2011, Airbus announced all partners would end the passenger-to-freighter programme, citing high demand for used airframes in passenger service. Finally, on 17 June 2015, ST Aerospace signed agreements with Airbus and EFW to launch a collaborative A320/A321 passenger-to-freighter (P2F) conversion programme. 
A321P2F
In August 2019, Qantas was announced as launch operator for the A321P2F converted freighter, for Australia Post, with up to three aircraft to be introduced in October 2020. The initial converted aircraft first flew on 22 January 2020, to be delivered to Vallair, and secured its EASA supplemental type certificate in February. It was to replace older converted Boeing 757s, offering 14 main-deck and 10 lower-deck cargo positions and carrying up to over . On 27 October 2020, the first A321P2F was delivered to launch operator Qantas Airways, with windows and exit doors removed and a large, hydraulically actuated main cargo door installed. Titan Airways received its first A321P2F in January 2021; it was converted at Singapore Seletar Airport, with two more A321P2Fs to follow. Airbus sees a market for 1,000 narrowbody conversions over the 2020–2040 period. 
A320P2F
After EFW began the first A320 conversion in March 2021, the A320P2F made its maiden three-hour flight on 8 December from Singapore. The aircraft had first been delivered in 2006, and its first cargo operator was to be Nairobi-based Astral Aviation from the second quarter of 2022, on lease from Middle Eastern lessor Vaayu Group. The A320P2F received its supplemental type certification at the end of March 2022. The A320P2F is suitable for express domestic as well as regional operations and can accommodate up to 27 metric tonnes over 1,900 nautical miles, offering space for 14 large containers/pallets on the main deck and 10 LD3-type containers on the lower deck. 
Military variants
DRDO AEW&CS (Airborne Early Warning and Control System)
In late 2020, the Indian Defence Ministry greenlit the modification by the Defence Research and Development Organisation of six Air India A320s into Netra Mk2 airborne early warning and control aircraft for Rs 10,500 crore (US$ billion). They were to complement two Indian-built Netra and three Israeli- and Russian-made Phalcon aircraft of the Indian Air Force. 
Operators
There are 10,947 A320 family aircraft in commercial service with over 375 operators. The five largest operators are American Airlines (482), China Eastern Airlines (386), IndiGo (362), easyJet (360) and China Southern Airlines (329). Aircraft in operation include 42 A318s, 1,277 A319s (1,251 ceo, 26 neo), 6,316 A320s (4,186 ceo, 2,130 neo) and 3,312 A321s (1,703 ceo, 1,609 neo). 
In addition, 918 A320ceo family aircraft, consisting of 38 A318s, 233 A319s, 566 A320s and 81 A321s, were out of service through retirement or write-off. Air France, British Airways, and Frontier Airlines are the only operators to have operated all four variants of the A320ceo family. Middle East Airlines received two milestone aircraft: the first was an A320ceo with manufacturer serial number (MSN) 5,000, on 20 January 2012; eight years later, on 9 October 2020, the airline received MSN 10,000, an A321neo, at the celebration of its 75th anniversary. In December 2022, over 10,000 A320 family aircraft were operated by more than 330 airlines, having completed more than 158 million flights, or 292 million hours in the air. 
Orders and deliveries
The A320ceo family was the fastest-selling airliner from 2005 to 2007. Its successor, the A320neo family, improved on this with 1,420 orders and commitments in less than a year in 2011. In November 2013, the A320 family reached 10,000 orders. In October 2019, the A320 family became the highest-selling airliner family with 15,193 orders, surpassing the Boeing 737's total of 15,136. In August 2021, the A320 family passed the 10,000-delivery mark, 33 years after its introduction, versus 50 years for the Boeing 737, which passed that mark in March 2018. On 16 December 2021, the last member of the A320ceo family, an A321ceo (MSN 10315), was delivered from the Airbus Mobile assembly line in Alabama to Delta Air Lines, registered N129DN. In July 2022, total orders for the A320neo family reached 8,502, exceeding the 8,120 total orders for the A320ceo family. In June 2023, total orders for the A321neo reached 5,163, surpassing the 4,763 total orders for the A320ceo and making it the most-ordered variant of the A320 family. In July 2023, total orders for the A321neo reached 5,259, surpassing the record 5,205 orders for the Boeing 737-800 and making it the most-ordered variant of any airliner in history. In December 2023, the A320neo family became the first airliner generation to reach 10,000 orders, with an order backlog of 7,000 units. A total of 11,865 A320 family aircraft have been delivered, with 6 A320ceos (2 A319s and 4 A320s from two defunct airlines) remaining in the backlog. In 2024, Airbus delivered 602 A320neo family aircraft, comprising 9 A319neos, 232 A320neos and 361 A321neos. The A320 family backlog remains above the 7,000 mark, with A321s accounting for 60%, and total orders have reached 19,075, while total orders for the competing Boeing 737 have increased slightly to 16,703 aircraft, of which 11,925 have been delivered. 
Data
Accidents and incidents
Across the entire A320 family, 180 major aviation accidents and incidents have occurred, including 38 hull loss accidents (the latest being LATAM Perú Flight 2213 on 18 November 2022), resulting in a total of fatalities. The A320 family has experienced 50 incidents in which several flight displays were lost. As of 2015, the Airbus A320 family had experienced 0.12 fatal hull loss accidents and 0.26 total hull loss accidents for every million takeoffs. As of 2023, the Airbus A320 family had experienced 0.095 fatal hull loss accidents (0.08 for the A320ceo and 0.11 for the A320neo) and 0.14 total hull loss accidents (0.17 for the A320ceo and 0.11 for the A320neo) for every million takeoffs. 
Aircraft on display
Specifications
Aircraft type designations
Food coloring
Food coloring, color additive or colorant is any dye, pigment, or substance that imparts color when it is added to food or beverages. Colorants can be supplied as liquids, powders, gels, or pastes. Food coloring is commonly used in commercial products and in domestic cooking. Food colorants are also used in various non-food applications, including cosmetics, pharmaceuticals, home craft projects, and medical devices. Some colorings may be natural, such as with carotenoids and anthocyanins extracted from plants or cochineal from insects, or may be synthesized, such as tartrazine yellow. In the manufacturing of foods, beverages and cosmetics, the safety of colorants is under constant scientific review and certification by national regulatory agencies, such as the European Food Safety Authority (EFSA) and US Food and Drug Administration (FDA), and by international reviewers, such as the Joint FAO/WHO Expert Committee on Food Additives. Purpose of food coloring People associate certain colors with certain flavors, and the color of food can influence the perceived flavor in anything from candy to wine. Sometimes, the aim is to simulate a color that is perceived by the consumer as natural, such as adding red coloring to glacé cherries (which would otherwise be beige), but sometimes it is for effect, like the green ketchup that Heinz launched in 2000. Color additives are used in foods for many reasons including: To make food more attractive, appealing, appetizing, and informative Offsetting color loss over time due to exposure to light, air, temperature extremes, moisture and storage conditions Correcting natural variations in color Enhancing colors that occur naturally Providing color to colorless and "fun" foods Allowing products to be identified at sight, like candy flavors or medicine dosages Natural food dyes History The addition of colorants to foods is thought to have occurred in Egyptian cities as early as 1500 BC, when candy makers added natural extracts and wine to improve the products' appearance. During the Middle Ages, the economy in the European countries was based on agriculture, and the peasants were accustomed to producing their own food locally or trading within the village communities. Under feudalism, aesthetic aspects were not considered, at least not by the vast majority of the generally very poor population. This situation changed with urbanization at the beginning of the Modern Age, when trade emerged—especially the import of precious spices and colors. One of the first food laws, created in Augsburg, Germany, in 1531, concerned spices or colorants and required saffron counterfeiters to be burned to death. Natural colorants Carotenoids (E160, E161, E164), chlorophyllin (E140, E141), anthocyanins (E163), and betanin (E162) comprise four main categories of plant pigments grown to color food products. Other colorants or specialized derivatives of these core groups include: Annatto (E160b), a reddish-orange dye made from the seed of the achiote Caramel coloring (E150a-d), made from caramelized sugar Carmine (E120), a red dye derived from the cochineal insect, Dactylopius coccus Elderberry juice (E163) Lycopene (E160d) Paprika (E160c) Turmeric/curcumin (E100) Blue colors are rare. The pigment genipin, present in the fruit of Gardenia jasminoides, can be treated with amino acids to produce the blue pigment gardenia blue, which is approved for use in Japan, but not the EU or the US. 
To ensure reproducibility, the colored components of these substances are often provided in highly purified form. For stability and convenience, they can be formulated in suitable carrier materials (solid and liquids). Hexane, acetone, and other solvents break down cell walls in the fruit and vegetables and allow for maximum extraction of the coloring. Traces of these may still remain in the finished colorant, but they do not need to be declared on the product label. These solvents are known as carry-over ingredients. Chemical structures of representative natural colorants Artificial food colorants History With the onset of the industrial revolution, people became dependent on foods produced by others. These new urban dwellers demanded food at low cost. Analytical chemistry was still primitive and regulations few. The adulteration of foods flourished. Heavy metal and other inorganic element-containing compounds turned out to be cheap and suitable to "restore" the color of watered-down milk and other foodstuffs, some more lurid examples being: Red lead (Pb3O4) and vermillion (HgS) were routinely used to color cheese and confectionery. Copper arsenite (CuHAsO3) was used to recolor used tea leaves for resale. It also caused two deaths when used to color a dessert in 1860. Sellers at the time offered more than 80 artificial coloring agents, some invented for dyeing textiles, not foods. Many color additives had never been tested for toxicity or other adverse effects. Historical records show that injuries, even deaths, resulted from tainted colorants. In 1851, about 200 people were poisoned in England, 17 of them fatally, directly as a result of eating adulterated lozenges. In 1856, mauveine, the first synthetic color, was developed by Sir William Henry Perkin and by the turn of the century, unmonitored color additives had spread through Europe and the United States in all sorts of popular foods, including ketchup, mustard, jellies, and wine. Originally, these were dubbed 'coal-tar' colors because the starting materials were obtained from bituminous coal. Synthetic dyes are often less costly and technically superior to natural dyes. Chemical structures of representative artificial colorants Regulation History: 19th and 20th centuries Concerns over food safety led to numerous regulations throughout the world. German food regulations released in 1882 stipulated the exclusion of dangerous "minerals" such as arsenic, copper, chromium, lead, mercury, and zinc, which were frequently used as ingredients in colorants. In contrast to today's regulatory guidelines, these first laws followed the principle of a negative listing (substances not allowed for use); they were already driven by the main principles of today's food regulations all over the world, since all of these regulations follow the same goal: the protection of consumers from toxic substances and from fraud. In the United States, the Pure Food and Drug Act of 1906 reduced the permitted list of synthetic colors from 700 down to seven. The seven dyes initially approved were Ponceau 3R (FD&C Red No. 1), amaranth (FD&C Red No. 2), erythrosine (FD&C Red No. 3), indigotine (FD&C Blue No. 2), light green SF (FD&C Green No. 2), naphthol yellow 1 (FD&C Yellow No. 1), and orange 1 (FD&C Orange No. 1). Even with updated food laws, adulteration continued for many years. In the 20th century, improved chemical analysis and testing led to the replacement of the negative lists by positive listings. 
Positive lists consist of substances allowed to be used for the production and improvement of foods. Most prevailing legislation is based on positive listing. Positive listing implies that substances meant for human consumption have been tested for their safety, and that they have to meet specified purity criteria prior to their approval by the corresponding authorities. In 1962, the first EU directive (62/2645/EEC) approved 36 colorants, of which 20 were naturally derived and 16 were synthetic. This directive did not list which food products the colorants could or could not be used in. At that time, each member state could designate where certain colors could and could not be used. In Germany, for example, quinoline yellow was allowed in puddings and desserts, but tartrazine was not; the reverse was true in France. This was updated in 1989 with Directive 89/107/EEC, which concerned food additives authorized for use in foodstuffs. 
Status as of 2024
Naturally derived colors, most of which have been used traditionally for centuries, are exempt from certification by several regulatory bodies throughout the world, such as the FDA. Included in the exempt category are colors or pigments from vegetables, minerals, or animals, such as annatto extract (yellow), beets (purple), beta-carotene (yellow to orange), and grape skin extract (purple). Synthetic food colorings are typically less expensive to manufacture, but require closer scientific scrutiny for safety and are certified for use in food manufacturing in the United States, United Kingdom, and European Union. 
Global market
The global market for food coloring is anticipated to grow from $4.6 billion in 2023 to $6 billion by 2028. This expansion is primarily driven by increasing consumer demand for visually appealing food products. Home chefs, particularly those active on social media, seek vibrant colors to enhance the aesthetic appeal of their homemade snacks and treats. Additionally, large food brands incorporate vivid colors into their products to stand out in a competitive market. While the demand for food coloring is rising, there are growing concerns about its potential health implications. Some jurisdictions, such as California, have implemented regulations restricting certain artificial dyes due to concerns about their impact on children's behavior. 
National regulations
Canada
Canada has published food and drug regulations covering food colorants. Food in Canada cannot be sold with more than: 
100 ppm of fast green FCF or brilliant blue FCF, or any combination of the two 
300 ppm of allura red, amaranth, erythrosine, indigotine, sunset yellow FCF or tartrazine, combined with fast green FCF or brilliant blue FCF 
150 ppm of ponceau SX 
European Union
In the European Union, E numbers are used for all additives, both synthetic and natural, that are approved in food applications. E numbers beginning with 1, such as E100 (turmeric) or E161b (lutein), are allocated to colorants. The safety of food colors and other food additives in the EU is evaluated by the European Food Safety Authority (EFSA). Color Directive 94/36/EC, enacted by the European Commission in 1994, outlines permitted natural and artificial colors with their approved applications and limits in different foodstuffs. It is binding on all member countries of the EU; any changes have to be implemented into national laws by a specified deadline. In non-EU member states, food additives are regulated by national authorities, which usually, but not always, try to harmonize with EU regulations. 
Most other countries have their own regulations and list of food colors which can be used in various applications, including maximum daily intake limits. Permitted synthetic colorants in the EU include E numbers 102–143 which cover the range of artificial colors. The EU maintains a list of currently allowed additives. Some artificial dyes approved for food use in the EU include: E104: Quinoline yellow E122: Carmoisine E124: Ponceau 4R E131: Patent blue V E142: Green S The three synthetic colors Orange B, Citrus Red No. 2 and FD&C Green No. 3 are not permitted in the EU, and neither is the natural toasted partially defatted cooked cottonseed flour. India The Food Safety and Standard Act, 2006 in India generally permits eight artificial colorings in food: United States The FDA permitted colors are classified as subject to certification or exempt from certification in Code of Federal Regulations – Title 21 Part 73 & 74, both of which are subject to rigorous safety standards prior to their approval and listing for use in foods. In the United States, FD&C numbers (which indicate that the FDA has approved the colorant for use in foods, drugs and cosmetics) are given to approved synthetic food dyes that do not exist in nature. Permitted synthetic colorants include the following seven artificial colorings (the most common in bold). The lakes of these colorings are also permitted except the lake of Red No. 3. FD&C Blue No. 1 – Brilliant blue FCF, E133 (blue shade) FD&C Blue No. 2 – Indigotine, E132 (indigo shade) FD&C Green No. 3 – Fast green FCF, E143 (turquoise shade) FD&C Red No. 3 – Erythrosine, E127 (pink shade, commonly used in glacé cherries) FD&C Red No. 40 – Allura red AC, E129 (red shade) FD&C Yellow No. 5 – Tartrazine, E102 (yellow shade) FD&C Yellow No. 6 – Sunset yellow FCF, E110 (orange shade) Two dyes are allowed by the FDA for limited applications: Citrus red 2 (orange shade) – allowed only to color orange peels. Orange B (red shade) – allowed only for use in hot dog and sausage casings (not produced after 1978, but not delisted) Many dyes have been delisted for a variety of reasons, ranging from poor coloring properties to regulatory restrictions. Some of these delisted food colorants are: FD&C Red No. 2 – Amaranth, E123 FD&C Red No. 4 – Scarlet GN, E125 FD&C Red No. 32 was used to color Florida oranges. FD&C Orange Number 1 was one of the first water-soluble dyes to be commercialized, and one of seven original food dyes allowed under the Pure Food and Drug Act of June 30, 1906. FD&C Orange No. 2 was used to color Florida oranges. FD&C Yellow No. 1, 2, 3, and 4 FD&C Violet No. 1 Global harmonization Since the beginning of the 1960s, the Joint FAO/WHO Expert Committee on Food Additives has promoted the development of international standards for food additives, not only by its toxicological assessments, which are continuously published by the World Health Organization in a "Technical Report Series", but furthermore by elaborating appropriate purity criteria, which are laid down in the two volumes of the "Compendium of Food Additive Specifications" and their supplements. These specifications are not legally binding but very often serve as a guiding principle, especially in countries where no scientific expert committees have been established. 
To further regulate the use of these evaluated additives, in 1962 the WHO and FAO created an international commission, the Codex Alimentarius, which is composed of authorities, food industry associations and consumer groups from all over the world. Within the Codex organization, the Codex Committee for Food Additives and Contaminants is responsible for working out recommendations for the application of food additives: the General Standard for Food Additives. In the light of the World Trade Organizations General Agreement on Tariffs and Trade (GATT), the Codex Standard, although not legally binding, influences food color regulations all over the world. Safety evaluation A 2023 update by the FDA on food colorants required safety assurances by manufacturers and restrictions on the types of foods in which colorants are used, their maximum amounts and labeling, batch certification, and the amount needed to attain the desired food coloring. Scientific consensus regards that food color additives are safe under the restrictions for use, and that most children have no adverse effects when consuming foods with color ingredients; some individual studies, however, indicate that certain children may have allergic sensitivities to colorants. In October 2023, the state of California banned the colorant, Red 3, in food products starting in 2027. In the 20th century, widespread public belief that artificial food coloring causes ADHD-like hyperactivity in children originated from Benjamin Feingold, a pediatric allergist from California, who proposed in 1973 that salicylates, artificial colors, and artificial flavors cause hyperactivity in children. However, there is no clinical evidence to support broad claims that food coloring causes food intolerance and ADHD-like behavior in children. It is possible that certain food colorings may act as a trigger in those who are genetically predisposed. Concerns were expressed again in 2011 that food colorings may cause ADHD-like behavior in children; a 2015 literature review found the evidence inconclusive. The UK Food Standards Agency examined the effects of tartrazine, allura red, ponceau 4R, quinoline yellow, sunset yellow and carmoisine on children. These colorants are found in beverages. The study found "a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity" in the children; the advisory committee to the FSA that evaluated the study also determined that because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended. After continuous review as of 2024, the FSA stated that the above artificial food colors may induce hyperactivity in some children. Food and drink products containing any of the six designated colors must warn consumers on the package labels, stating May have an adverse effect on activity and attention in children. The European regulatory community, with an emphasis on the precautionary principle, required labeling and temporarily reduced the acceptable daily intake for the food colorings; the UK FSA called for voluntary withdrawal of the colorings by food manufacturers. However, in 2009, the European Food Safety Authority re-evaluated the data at hand and determined that "the available scientific evidence does not substantiate a link between the color additives and behavioral effects" for any of the dyes. 
Titanium dioxide
In 2021, EFSA updated its safety assessment of titanium dioxide (E 171), concluding that it could no longer be considered safe as a food additive. As of 2024, the FDA was evaluating a petition to exclude titanium dioxide from use in foods, beverages or cosmetics in the United States.
Hydrography
Hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their change over time, for the primary purpose of safety of navigation and in support of all other marine activities, including economic development, security and defense, scientific research, and environmental protection. History The origins of hydrography lay in the making of charts to aid navigation, by individual mariners as they navigated into new waters. These were usually the private property, even closely held secrets, of individuals who used them for commercial or military advantage. As transoceanic trade and exploration increased, hydrographic surveys started to be carried out as an exercise in their own right, and the commissioning of surveys was increasingly done by governments and special hydrographic offices. National organizations, particularly navies, realized that the collection, systematization and distribution of this knowledge gave it great organizational and military advantages. Thus were born dedicated national hydrographic organizations for the collection, organization, publication and distribution of hydrography incorporated into charts and sailing directions. Prior to the establishment of the United Kingdom Hydrographic Office, Royal Navy captains were responsible for the provision of their own charts. In practice this meant that ships often sailed with inadequate information for safe navigation, and that when new areas were surveyed, the data rarely reached all those who needed it. The Admiralty appointed Alexander Dalrymple as Hydrographer in 1795, with a remit to gather and distribute charts to HM Ships. Within a year existing charts from the previous two centuries had been collated, and the first catalog published. The first chart produced under the direction of the Admiralty, was a chart of Quiberon Bay in Brittany, and it appeared in 1800. Under Captain Thomas Hurd the department received its first professional guidelines, and the first catalogs were published and made available to the public and to other nations as well. In 1829, Rear-Admiral Sir Francis Beaufort, as Hydrographer, developed the eponymous Scale, and introduced the first official tide tables in 1833 and the first "Notices to Mariners" in 1834. The Hydrographic Office underwent steady expansion throughout the 19th century; by 1855, the Chart Catalogue listed 1,981 charts giving a definitive coverage over the entire world, and produced over 130,000 charts annually, of which about half were sold. The word hydrography comes from the Ancient Greek ὕδωρ (hydor), "water" and γράφω (graphō), "to write". Overview Large-scale hydrography is usually undertaken by national or international organizations which sponsor data collection through precise surveys and publish charts and descriptive material for navigational purposes. The science of oceanography is, in part, an outgrowth of classical hydrography. In many respects the data are interchangeable, but marine hydrographic data will be particularly directed toward marine navigation and safety of that navigation. Marine resource exploration and exploitation is a significant application of hydrography, principally focused on the search for hydrocarbons. Hydrographical measurements include the tidal, current and wave information of physical oceanography. 
They include bottom measurements, with particular emphasis on those marine geographical features that pose a hazard to navigation, such as rocks, shoals, reefs and other features that obstruct ship passage. Bottom measurements also include collecting information on the nature of the bottom as it pertains to effective anchoring. Unlike oceanography, hydrography includes shore features, natural and manmade, that aid in navigation. A hydrographic survey may therefore include the accurate positions and representations of hills, mountains and even lights and towers that help fix a ship's position, as well as the physical aspects of the sea and seabed. Hydrography, mostly for reasons of safety, adopted a number of conventions that have affected the portrayal of data on nautical charts. For example, hydrographic charts are designed to portray what is safe for navigation, and therefore usually tend to maintain least depths and occasionally de-emphasize the actual submarine topography that would be portrayed on bathymetric charts. The former are the mariner's tools for avoiding accident; the latter are the best representations of the actual seabed, as in a topographic map, for scientific and other purposes. Trends in hydrographic practice since c. 2003–2005 have led to a narrowing of this difference, with many more hydrographic offices maintaining "best observed" databases and then making navigationally "safe" products as required. This has been coupled with a preference for multi-use surveys, so that the same data collected for nautical charting purposes can also be used for bathymetric portrayal. Even though hydrographic survey data may, in places, be collected in sufficient detail to portray the bottom topography, hydrographic charts show only depth information relevant to safe navigation and should not be considered a product that accurately portrays the actual shape of the bottom. The soundings chosen from the raw source depth data for placement on the nautical chart are selected for safe navigation and are biased toward showing predominantly the shallowest depths relevant to safe navigation. For instance, if there is a deep area that cannot be reached because it is surrounded by shallow water, the deep area may not be shown. The color-filled areas that show different ranges of shallow water are not the equivalent of contours on a topographic map, since they are often drawn seaward of the actual shallowest depth portrayed. A bathymetric chart, by contrast, does show the marine topography accurately. Details of the above limitations can be found in Part 1 of Bowditch's American Practical Navigator. Another issue affecting safe navigation is the sparse coverage of detailed depth data from high-resolution sonar systems. In more remote areas, the only available depth information has been collected with lead lines. This collection method drops a weighted line to the bottom at intervals and records the depth, often from a rowboat or sailboat. There are no data between soundings or between sounding lines to guarantee that there is not a hazard, such as a wreck or a coral head, waiting there to endanger a passing vessel. Often, the positioning of the collecting boat did not match the accuracy of today's GPS navigation. The hydrographic chart will use the best data available and will caveat its nature in a caution note or in the legend of the chart. 
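The shoal-biased selection of chart soundings described above can be illustrated with a minimal sketch; the grid size and data layout are assumptions for illustration, not an actual charting procedure. For each cell of a grid, only the shallowest observed depth is kept, so the chart never implies more water than was actually measured.

```python
def select_chart_soundings(soundings, cell_size=0.01):
    """Shoal-biased selection: keep the shallowest depth observed in each
    grid cell, so the chart errs on the side of less water (safer for
    navigation). `soundings` is a list of (lat, lon, depth_m) tuples and
    `cell_size` is the cell width in degrees. Illustrative only."""
    shallowest = {}
    for lat, lon, depth in soundings:
        cell = (round(lat / cell_size), round(lon / cell_size))
        if cell not in shallowest or depth < shallowest[cell][2]:
            shallowest[cell] = (lat, lon, depth)
    return list(shallowest.values())

# Example: three soundings falling in the same cell keep only the shallowest.
print(select_chart_soundings([(54.001, -1.002, 12.4),
                              (54.002, -1.003, 9.8),
                              (54.003, -1.001, 15.0)]))
```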
A hydrographic survey is quite different from a bathymetric survey in some important respects, particularly in a bias toward least depths, owing to the safety requirements of the former and the geomorphologic descriptive requirements of the latter. Historically, this could include echosoundings being conducted under settings biased toward least depths, but in modern practice hydrographic surveys typically attempt to measure the observed depths as accurately as possible, with adjustments for navigational safety applied after the fact. Hydrography of streams includes information on the stream bed, flows, water quality and the surrounding land. Basin or interior hydrography pays special attention to rivers and potable water, although when the data are collected for scientific use rather than for ship navigation, the work is more commonly called hydrometry or hydrology. Hydrography of rivers and streams is also an integral part of water management. Most reservoirs in the United States use dedicated stream gauging and rating tables to determine inflows into the reservoir and outflows to irrigation districts, water municipalities and other users of captured water. River and stream hydrographers use handheld and bank-mounted devices to capture the sectional flow rate of moving water through a cross-section, or the current. 
Equipment
Uncrewed surface vessels (USVs) are commonly used for hydrographic surveys; they are often equipped with some form of sonar. Single-beam echosounders, multibeam echosounders, and side scan sonars are all frequently used in hydrographic applications. The knowledge gained from these surveys aids in disaster planning, port and harbor maintenance, and various other coastal planning activities. 
Organizations
Hydrographic services in most countries are carried out by specialized hydrographic offices. The international coordination of hydrographic efforts lies with the International Hydrographic Organization. The United Kingdom Hydrographic Office is one of the oldest, supplying a wide range of charts covering the globe to other countries, allied military organizations and the public. In the United States, the hydrographic charting function has been carried out since 1807 by the Office of Coast Survey of the National Oceanic and Atmospheric Administration within the U.S. Department of Commerce and by the U.S. Army Corps of Engineers.
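As a rough illustration of how the echosounders mentioned under "Equipment" above derive depth, a single-beam instrument measures the two-way travel time of an acoustic pulse and multiplies half of it by the speed of sound in water, roughly 1,500 m/s in seawater. The sketch below applies that textbook relation with made-up numbers; real surveys substitute a measured sound-velocity profile.

```python
def depth_from_echo(two_way_time_s, sound_speed_m_s=1500.0):
    """Depth from a single-beam echosounder ping: the pulse travels to the
    seabed and back, so depth is half the travel time times the speed of
    sound in water (about 1500 m/s in seawater; illustrative value only)."""
    return sound_speed_m_s * two_way_time_s / 2.0

# A ping returning after 0.08 s implies roughly 60 m of water.
print(depth_from_echo(0.08))  # 60.0
```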
Metal detector
A metal detector is an instrument that detects the nearby presence of metal. Metal detectors are useful for finding metal objects on the surface, underground, and under water. A metal detector consists of a control box, an adjustable shaft, and a variable-shaped pickup coil. When the coil nears metal, the control box signals its presence with a tone, light, or needle movement. Signal intensity typically increases with proximity. A common type are stationary "walk through" metal detectors used at access points in prisons, courthouses, airports and psychiatric hospitals to detect concealed metal weapons on a person's body. The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced (inductive sensor) in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected. The first industrial metal detectors came out in the 1960s. They were used for finding minerals among other things. Metal detectors help find land mines. They also detect weapons like knives and guns, which is important for airport security. People even use them to search for buried objects, like in archaeology and treasure hunting. Metal detectors are also used to detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in concrete and pipes and wires buried in walls and floors. History and development In 1841, Professor Heinrich Wilhelm Dove published an invention he called the "differential inductor". It was a 4-coil induction balance, with 2 glass tubes each having 2 well-insulated copper wire solenoids wound around them. Charged Leyden jars (high-voltage capacitors) were discharged through the 2 primary coils; this current surge induced a voltage in the secondary coils. When the secondary coils were wired in opposition, the induced voltages cancelled as confirmed by the Professor holding the ends of the secondary coils. When a piece of metal was placed inside one glass tube the Professor received a shock. This then was the first magnetic induction metal detector, and the first pulse induction metal detector. In late 1878 and early 1879, Professor (of music) David Edward Hughes published his experiments with the 4-coil induction balance. He used his own recent invention the microphone and a ticking clock to generate regular pulses and a telephone receiver as detector. To measure the strength of the signals he invented a coaxial 3-coil induction balance which he called the "electric sonometer". Hughes did much to popularize the induction balance, quickly leading to practical devices that could identify counterfeit coins. In 1880 Mr. J. Munro, C.E. suggested the use of the 4-coil induction balance for metal prospecting. Hughes's coaxial 3-coil induction balance would also see use in metal detecting. In July 1881, Alexander Graham Bell initially used a 4-coil induction balance to attempt to locate a bullet lodged in the chest of American President James Garfield. After much experimenting the best bullet detection range he achieved was only 2 inches (5 centimeters). He then used his own earlier discovery, the partially overlapping 2-coil induction balance, and the detection range increased to 5 inches (12 centimeters). 
But the attempt was still unsuccessful because the metal coil spring bed Garfield was lying on confused the detector. Bell's 2-coil induction balance would go on to evolve into the popular double D coil. On December 16, 1881, Captain Charles Ambrose McEvoy applied for British Patent No. 5518, Apparatus for Searching for Submerged Torpedoes, &c., which was granted Jun 16 1882. His US269439 patent application of Jul 12 1882 was granted Dec 19 1882. It was a 4-coil induction balance for detecting submerged metallic torpedoes and iron ships and the like. Given the development time involved this may have been the earliest known device specifically constructed as a metal detector using magnetic induction. In 1892, George M. Hopkins described an orthogonal 2-coil induction balance for metal detecting. In 1915, Professor Camille Gutton developed a 4-coil induction balance to detect unexploded shells in farmland of former battlefields in France. Unusually both coil pairs were used for detection. The 1919 photo at the right is a later version of Gutton's detector. Modern developments The modern development of the metal detector began in the 1920s. Gerhard Fischer had developed a system of radio direction-finding, which was to be used for accurate navigation. The system worked extremely well, but Fischer noticed there were anomalies in areas where the terrain contained ore-bearing rocks. He reasoned that if a radio beam could be distorted by metal, then it should be possible to design a machine which would detect metal using a search coil resonating at a radio frequency. In 1925 he applied for, and was granted, the first patent for an electronic metal detector. Although Gerhard Fischer was the first person granted a patent for an electronic metal detector, the first to apply was Shirl Herr, a businessman from Crawfordsville, Indiana. His application for a hand-held Hidden-Metal Detector was filed in February 1924, but not patented until July 1928. Herr assisted Italian leader Benito Mussolini in recovering items remaining from the Emperor Caligula's galleys at the bottom of Lake Nemi, Italy, in August 1929. Herr's invention was used by Admiral Richard Byrd's Second Antarctic Expedition in 1933, when it was used to locate objects left behind by earlier explorers. It was effective up to a depth of eight feet. However, it was one Lieutenant Józef Stanisław Kosacki, a Polish officer attached to a unit stationed in St Andrews, Fife, Scotland, during the early years of World War II, who refined the design into a practical Polish mine detector. These units were still quite heavy, as they ran on vacuum tubes, and needed separate battery packs. The design invented by Kosacki was used extensively during the Second Battle of El Alamein when 500 units were shipped to Field Marshal Montgomery to clear the minefields of the retreating Germans, and later used during the Allied invasion of Sicily, the Allied invasion of Italy and the Invasion of Normandy. As the creation and refinement of the device was a wartime military research operation, the knowledge that Kosacki created the first practical metal detector was kept secret for over 50 years. Beat frequency induction Many manufacturers of these new devices brought their own ideas to the market. White's Electronics of Oregon began in the 1950s by building a machine called the Oremaster Geiger Counter. Another leader in detector technology was Charles Garrett, who pioneered the BFO (beat frequency oscillator) machine. 
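A minimal sketch of the beat-frequency-oscillator idea just mentioned, using assumed, simplified component values rather than any real detector's design: the search coil forms part of one oscillator's LC tank, so nearby metal shifts its inductance slightly, and the audible "beat" is the difference between that oscillator's frequency and a fixed reference oscillator.

```python
import math

def lc_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a fixed reference oscillator and a search
# oscillator whose coil inductance drops slightly near conductive metal.
C = 10e-9                                        # 10 nF tank capacitor (assumed)
reference_hz = lc_frequency(330e-6, C)           # reference coil
search_no_target = lc_frequency(330e-6, C)       # search coil, no metal
search_with_target = lc_frequency(327e-6, C)     # inductance shifted by a target

# The detector mixes the two oscillators; the audible beat is the difference.
print(round(abs(search_no_target - reference_hz)))    # ~0 Hz: silence
print(round(abs(search_with_target - reference_hz)))  # a few hundred Hz: tone
```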
With the invention and development of the transistor in the 1950s and 1960s, metal detector manufacturers and designers made smaller, lighter machines with improved circuitry, running on small battery packs. Companies sprang up all over the United States and Britain to supply the growing demand. Beat frequency induction requires movement of the detector coil, much as swinging a conductor near a magnet induces an electric current. 
Refinements
Modern top models are fully computerized, using integrated circuit technology to allow the user to set sensitivity, discrimination, track speed, threshold volume, notch filters, etc., and hold these parameters in memory for future use. Compared to just a decade ago, detectors are lighter, deeper-seeking, use less battery power, and discriminate better. State-of-the-art metal detectors have further incorporated extensive wireless technology for the earphones and can connect to Wi-Fi networks and Bluetooth devices. Some also use built-in GPS technology to keep track of the search location and the locations of items found, and some connect to smartphone applications to extend their functionality further. 
Discriminators
The biggest technical change in detectors was the development of a tunable induction system. This system involves two coils that are electromagnetically tuned: one coil acts as an RF transmitter, the other as a receiver; in some cases these can be tuned to between 3 and 100 kHz. When metal is in their vicinity, a signal is detected owing to eddy currents induced in the metal. What allowed detectors to discriminate between metals was the fact that every metal has a different phase response when exposed to alternating current. Longer waves (lower frequencies) penetrate the ground more deeply and favour high-conductivity targets such as silver and copper, whereas shorter waves (higher frequencies) penetrate less deeply but favour low-conductivity targets such as iron. Unfortunately, higher frequencies are also more sensitive to interference from ground mineralization. This selectivity, or discrimination, allowed detectors to be developed that could selectively detect desirable metals while ignoring undesirable ones. Even with discriminators, it was still a challenge to avoid undesirable metals, because some of them have phase responses similar to desirable ones (e.g. tinfoil and gold), particularly in alloy form. Thus, improperly tuning out certain metals increased the risk of passing over a valuable find. Another disadvantage of discriminators was that they reduced the sensitivity of the machines. 
New coil designs
Coil designers also tried out innovative designs. The original induction balance coil system consisted of two identical coils placed on top of one another. Compass Electronics produced a new design: two coils in a D shape, mounted back-to-back to form a circle. The system was widely used in the 1970s, and both the concentric and the double-D type (or widescan, as it became known) had their fans. Another development was the invention of detectors which could cancel out the effect of mineralization in the ground. This gave greater depth, but only in a non-discriminating mode. It worked best at lower frequencies than those used before, and frequencies of 3 to 20 kHz were found to produce the best results. Many detectors in the 1970s had a switch which enabled the user to switch between the discriminate mode and the non-discriminate mode. Later developments switched electronically between both modes. 
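The phase-based discrimination described under "Discriminators" above can be caricatured in a few lines. The phase windows and class labels below are invented for illustration and are not taken from any real detector; the point is only that the receive signal's phase shift relative to the transmit signal is compared against ranges associated with different target classes.

```python
def classify_target(phase_shift_deg):
    """Toy discriminator: bucket the measured phase shift between the
    transmitted and received signals into a target class. The thresholds
    are illustrative assumptions; real detectors calibrate them per
    frequency and ground conditions, and some targets (e.g. tinfoil and
    gold) overlap whatever the thresholds."""
    if phase_shift_deg < 10:
        return "ground / mineralization (rejected)"
    if phase_shift_deg < 40:
        return "low-conductivity target (e.g. iron, foil)"
    return "high-conductivity target (e.g. silver, copper)"

for reading_deg in (4, 25, 70):
    print(reading_deg, "->", classify_target(reading_deg))
```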
The development of the induction balance detector would ultimately result in the motion detector, which constantly checked and balanced the background mineralization. Pulse induction At the same time, developers were looking at using a different technique in metal detection called pulse induction. Unlike the beat frequency oscillator or the induction balance machines, which both used a uniform alternating current at a low frequency, the pulse induction (PI) machine simply magnetized the ground with a relatively powerful, momentary current through a search coil. In the absence of metal, the field decayed at a uniform rate, and the time it took to fall to zero volts could be accurately measured. However, if metal was present when the machine fired, a small eddy current would be induced in the metal, and the time for sensed current decay would be increased. These time differences were minute, but the improvement in electronics made it possible to measure them accurately and identify the presence of metal at a reasonable distance. These new machines had one major advantage: they were mostly impervious to the effects of mineralization, and rings and other jewelry could now be located even under highly mineralized black sand. The addition of computer control and digital signal processing has further improved pulse induction sensors. One particular advantage of a pulse induction detector is its ability to ignore the minerals contained within heavily mineralized soil; in some cases the heavy mineral content may even help the PI detector function better. Where a VLF detector is affected negatively by soil mineralization, a PI unit is not. Uses Large portable metal detectors are used by archaeologists and treasure hunters to locate metallic items, such as jewelry, coins, clothes buttons and other accessories, bullets, and other various artifacts buried beneath the surface. Archaeology Metal detectors are widely used in archaeology; the first recorded use was by military historian Don Rickey, who used one in 1958 to detect the firing lines at Little Big Horn. However, archaeologists oppose the use of metal detectors by "artifact seekers" or "site looters" whose activities disrupt archaeological sites. The problem with the use of metal detectors on archaeological sites, or by hobbyists who find objects of archaeological interest, is that the context in which the object was found is lost and no detailed survey of its surroundings is made. Outside of known sites, the significance of objects may not be apparent to a metal detector hobbyist. England and Wales In England and Wales, metal detecting is legal provided that the landowner has granted permission and that the area is not a Scheduled Ancient Monument, a site of special scientific interest (SSSI), or covered by elements of the Countryside Stewardship Scheme. The Treasure Act 1996 governs whether or not items that have been discovered are defined as treasure. Finders of items that the Act defines as treasure must report their finds to the local coroner. If they discover items which are not defined as treasure but that are of cultural or historical interest, finders can voluntarily report them to the Portable Antiquities Scheme and the UK Detector Finds Database. France The sale of metal detectors is allowed in France. 
The first use of metal detectors in France that led to archaeological discoveries occurred in 1958, when people in Graincourt-lès-Havrincourt, searching for copper from World War I shells with a military mine detector, found a Roman silver treasure. The French law on metal detecting is ambiguous because it refers only to the objective pursued by the user of a metal detector. The first law to regulate the use of metal detectors was Law No. 89–900 of 18 December 1989. Its provisions were taken up unchanged in Article L. 542–1 of the Heritage Code, which states that "no person may use equipment for the detection of metal objects for the purpose of searching for monuments and objects of interest to prehistory, history, art or archaeology without having first obtained an administrative authorization issued on the basis of the applicant's qualifications and the nature and method of the research." Outside the search for archaeological objects, using a metal detector requires no specific authorization other than that of the owner of the land. Asked about Law No. 89–900 of 18 December 1989 by a member of parliament, Jack Lang, Minister of Culture at the time, replied by letter as follows: "The new law does not prohibit the use of metal detectors but only regulates their use. If the purpose of such use is the search for archaeological remains, prior authorization is required from my services. Apart from this case, the law only requires that an accidental discovery of archaeological remains be reported to the appropriate authorities." Jack Lang's entire letter was published in 1990 in a French metal detection magazine, and later, to make it visible on the internet, it was scanned with the permission of the magazine's author and posted on a French metal detection website. Northern Ireland In Northern Ireland, it is an offence to be in possession of a metal detector on a scheduled or a State Care site without a licence from the Department for Communities. It is also illegal to remove an archaeological object found with a detector from such a site without written consent. Republic of Ireland In the Republic of Ireland, laws against metal detecting are very strict: it is illegal to use a detection device to search for archaeological objects anywhere within the State or its territorial seas without the prior written consent of the Minister for Culture, Heritage and the Gaeltacht, and it is illegal to promote the sale or use of detection devices for the purposes of searching for archaeological objects. Scotland Under the Scots law principle of bona vacantia, the Crown has claim over any object of any material value where the original owner cannot be traced. There is also no 300-year limit on Scottish finds. Any artifact found, whether by metal detector survey or from an archaeological excavation, must be reported to the Crown through the Treasure Trove Advisory Panel at the National Museums of Scotland. The panel then determines what will happen to the artifacts. Reporting is not voluntary, and failure to report the discovery of historic artifacts is a criminal offence in Scotland. United States The sale of metal detectors is allowed in the United States. People can use metal detectors in public places (parks, beaches, etc.) and on private property with the permission of the owner of the site. In the United States, cooperation between archeologists hunting for the location of colonial-era Native American villages and hobbyists has been productive. 
As a hobby There are various types of hobby activities involving metal detectors: Coin shooting is specifically targeting coins. Some coin shooters conduct historical research to locate sites with potential to give up historical and collectible coins. Prospecting is looking for valuable metals like gold, silver, and copper in their natural forms, such as nuggets or flakes. Metal detectors are also used to search for discarded or lost valuable man-made objects such as jewelry, mobile phones, cameras and other devices. Some metal detectors are waterproof, to allow the user to search for submerged objects in areas of shallow water. General metal detecting is very similar to coin shooting except the user is after any type of historical artifact. Detectorists may be dedicated to preserving historical artifacts, and often have considerable expertise. Coins, bullets, buttons, axe heads, and buckles are just a few of the items that are commonly found by relic hunters; in general the potential is far greater in Europe and Asia than in many other parts of the world. More valuable finds in Britain alone include the Staffordshire Hoard of Anglo-Saxon gold, sold for £3,285,000, the gold Celtic Newark Torc, the Ringlemere Cup, West Bagborough Hoard, Milton Keynes Hoard, Roman Crosby Garrett Helmet, Stirling Hoard, Collette Hoard and thousands of smaller finds. Beach combing is hunting for lost coins or jewelry on a beach. Beach hunting can be as simple or as complicated as one wishes to make it. Many dedicated beach hunters also familiarize themselves with tide movements and beach erosion. Metal detecting clubs exist for hobbyists to learn from others, show off finds from their hunts and to learn more about the hobby. Hobbyists often use their own metal detecting lingo when discussing the hobby with others. Politics and conflicts in the metal detecting hobby in the United States The metal detecting community and professional archaeologists have different ideas related to the recovery and preservation of historic finds and locations. Archaeologists claim that detector hobbyists take an artifact-centric approach, removing objects from their context and causing a permanent loss of historical information. Archaeological looting of places like Slack Farm in 1987 and Petersburg National Battlefield serves as evidence against allowing unsupervised metal detecting in historic locations. Security screening In 1926, two scientists in Leipzig, Germany, installed a walk-through enclosure at a factory to ensure that employees were not exiting with prohibited metallic items. A series of aircraft hijackings led the United States in 1972 to adopt metal detector technology to screen airline passengers, initially using magnetometers that were originally designed for logging operations to detect spikes in trees. The Finnish company Outokumpu adapted mining metal detectors in the 1970s, still housed in a large cylindrical pipe, to make a commercial walk-through security detector. The development of these systems continued in a spin-off company, and systems branded as Metor Metal Detectors evolved in the form of the rectangular gantry now standard in airports. In common with developments in other uses of metal detectors, both alternating current and pulse systems are used, and the design of the coils and the electronics has moved forward to improve the discrimination of these systems. 
In 1995 systems such as the Metor 200 appeared with the ability to indicate the approximate height of the metal object above the ground, enabling security personnel to more rapidly locate the source of the signal. Smaller hand held metal detectors are also used to locate a metal object on a person more precisely. Industrial metal detectors Contamination of food by metal shards from broken processing machinery during the manufacturing process is a major safety issue in the food industry. Most food processing equipment is made of stainless steel, and other components made of plastic or elastomers can be manufactured with embedded metallic particles, allowing them to be detected as well. Metal detectors for this purpose are widely used and integrated into the production line. Current practice at garment or apparel industry plants is to apply metal detecting after the garments are completely sewn and before garments are packed to check whether there is any metal contamination (needle, broken needle, etc.) in the garments. This needs to be done for safety reasons. The industrial metal detector was developed by Bruce Kerr and David Hiscock in 1947. The founding company Goring Kerr pioneered the use and development of the first industrial metal detector. Mars Incorporated was one of the first customers of Goring Kerr using their Metlokate metal detector to inspect Mars bars. The basic principle of operation for the common industrial metal detector is based on a 3-coil design. This design utilizes an AM (amplitude modulated) transmitting coil and two receiving coils one on either side of the transmitter. The design and physical configuration of the receiving coils are instrumental in the ability to detect very small metal contaminates of 1 mm or smaller. Today modern metal detectors continue to utilize this configuration for the detection of tramp metal. The coil configuration is such that it creates an opening whereby the product (food, plastics, pharmaceuticals, etc.) passes through the coils. This opening or aperture allows the product to enter and exit through the three-coil system, producing an equal but mirrored signal on the two receiving coils. The resulting signals are summed together effectively nullifying each other. Fortress Technology innovated a new feature, that allows the coil structure of their BSH Model to ignore the effects of vibration, even when inspecting conductive products. When a metal contaminant is introduced into the product an unequal disturbance is created. That creates a very small electronic signal. After suitable amplification a mechanical device mounted to the conveyor system is signaled to remove the contaminated product from the production line. This process is completely automated and allows manufacturing to operate uninterrupted. Civil engineering In civil engineering, special metal detectors (cover meters) are used to locate reinforcement bars inside walls. The most common type of metal detector is a hand-held metal detector or coil-based detectors using oval-shaped disks with built-in copper coils. The search coil works as sensing probe and must be moved over the ground to detect potential metal targets buried underground. When the search coil detects metal objects, the device gives an audible signal via speaker or earphone. In most units, the feedback is an analog or digital indicator. 
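A minimal sketch of the balanced coil arrangement described above for industrial detectors, assuming idealized signal values: the two receiver coils produce equal but mirrored signals, so their sum is close to zero until a contaminant passing through the aperture disturbs one side more than the other, at which point the reject mechanism is triggered. The numbers and the threshold are illustrative assumptions, not values from any real production line.

```python
def balanced_residual(receiver_a: float, receiver_b: float) -> float:
    """The receiver coils carry equal but mirrored signals; summing them nulls the transmitted carrier."""
    return receiver_a + receiver_b

def should_reject(residual: float, threshold: float = 0.05) -> bool:
    """Signal the reject mechanism when the imbalance exceeds a threshold (threshold value assumed)."""
    return abs(residual) > threshold

# Clean product: the mirrored signals cancel almost exactly, so nothing is flagged.
print(should_reject(balanced_residual(+1.000, -1.000)))   # False

# A metal contaminant passing one receiver coil creates an unequal disturbance.
print(should_reject(balanced_residual(+1.080, -1.000)))   # True -> divert the contaminated product
```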
The metal detectors were first invented and manufactured commercially in the United States by Fisher Labs in the 1930s; other companies like Garrett established and developed the metal detectors in terms of technology and features in the following decades. Military The first metal detector proved inductance changes to be a practical metal detection technique, and it served as the prototype for all subsequent metal detectors. Initially these machines were huge and complex. After Lee de Forest invented the triode in 1907 metal detectors used vacuum tubes to operate and became more sensitive but still quite cumbersome. One of the early common uses of the first metal detectors, for example, was the detection of landmines and unexploded bombs in a number of European countries following the First and Second World Wars. Uses and benefits Metal detectors can be used for several military uses, including: Exposing mines planted during the war or after the end of the war Detecting dangerous explosives and cluster bombs dangerous to people's lives Hand-held metal detectors can be used to search people for weapons and explosives War mine detection Demining, also known as mine removal, is the method of clearing a field of landmines. The aim of military operations is to clear a path through a minefield as quickly as possible, which is mostly accomplished using equipment like mine plows and blast waves. Humanitarian demining aims to clear all landmines to a certain depth and make the land secure for human use. Landmine detection techniques have been studied in various forms. Detection of mines can be done by a specially designed metal detector tuned to detect mines and bombs. Electromagnetic technologies have been used in conjunction with ground-penetrating radar. Specially trained dogs are often used to focus the search and confirm that an area has been cleared, mines are often cleared using mechanical equipment such as flails and excavators. First idea The first metal detector was likely the simple electric conduction metal detector ca. 1830. Electric conduction was also used to locate metal ore bodies by measuring the conductivity between metal rods driven into the ground. In 1862, Italian General Giuseppe Garibaldi was wounded in the foot. It was difficult to distinguish between bullet, bone, and cartilage. So Professor Favre of Marseilles quickly built a simple probe that was inserted into the track of the bullet. It had two sharp points connected to a battery and a bell. Contact with metal completed the circuit and rang the bell. In 1867, Mr. Sylvan de Wilde had a similar detector and an extractor also wired to a bell. In 1870, Gustave Trouvé, a French electrical engineer also had a similar device however his buzzer made a different sound for lead and iron. The electric bullet locators were in use until the advent of X-rays. Technology development Gerhard Fischer Gerhard Fischer developed a portable metal detector in 1925. His model was first marketed commercially in 1931; he was responsible for the first large-scale hand-held metal detector development. Gerhard Fisher studied electronics at the University of Dresden before emigrating to the United States. When working as a research engineer in Los Angeles, he came up with the concept of a portable metal detector while working with aircraft radio detection finders. Fisher shared the concept with Albert Einstein, who foresaw the widespread use of hand-held metal detectors. 
Fisher, the founder of Fisher Research Laboratory, was contracted by the Federal Telegraph Company and Western Air Express to establish airborne direction finding equipment in the late 1920s. He received some of the first patents in the area of radio-based airborne direction finding. He came across some unusual errors in the course of his work; once he figured out what was wrong, he had the foresight to apply the solution to a totally unrelated area, metal and mineral detection. Fisher received the patent for the first portable electronic metal detector in 1925. In 1931, he marketed his first Fisher device to the general public and established the Fisher Labs company, which went on to manufacture, develop, and sell hand-held metal detectors commercially. Charles Garrett Despite the fact that Fisher was the first to receive a patent for an electronic metal detector, he was only one of many who improved and mastered the device. Charles Garrett, the founder of Garrett Metal Detectors, was another key figure in the creation of today's metal detectors. Garrett, an electrical engineer by profession, began metal detecting as a pastime in the early 1960s. He tried a number of machines on the market but couldn't find one that could do what he needed. As a result, he started developing his own metal detector. He was able to develop a system that removed oscillator drift, as well as many special search coils that he patented, both of which effectively revolutionized metal detector design at the time. To present day In the 1960s, the first industrial metal detectors were produced, and they were widely used for mineral prospecting and other industrial purposes. De-mining (the detection of landmines), the detection of weapons such as knives and guns (particularly in airport security), geophysical prospecting, archaeology, and treasure hunting are just some of the applications. Metal detectors are also used to detect foreign bodies in food, as well as steel reinforcement bars in concrete and pipes. The building industry uses them to find wires buried in walls or floors. Discriminators and circuits The development of transistors, discriminators, modern search coil designs, and wireless technology significantly impacted the design of metal detectors as we know them today: lightweight, compact, easy-to-use, and deep-seeking systems. The invention of a tunable induction device was the most significant technological advancement in detectors. Two electro-magnetically tuned coils were used in this method. One coil serves as an RF transmitter, while the other serves as a receiver; in some situations, these coils may be tuned to frequencies ranging from 3 to 100 kHz. Due to eddy currents induced in the metal, a signal is detected when metal is present. The fact that every metal has a different phase response when exposed to alternating current allowed detectors to differentiate between metals. Longer waves (low frequency) penetrate the ground deeper and select for high conductivity targets like silver and copper, while shorter waves (higher frequency) select for low conductivity targets like iron. Unfortunately, ground mineralization interference affects high frequency as well. This selectivity or discrimination allowed the development of detectors that can selectively detect desirable metals. Even with discriminators, avoiding undesirable metals was difficult because some of them have similar phase responses (for example, tinfoil and gold), particularly in alloy form. 
As a result, tuning out those metals incorrectly increased the chance of missing a valuable discovery. Discriminators also had the downside of lowering the sensitivity of the devices.
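As a rough illustration of the phase-based discrimination described above, the sketch below resolves the received signal into in-phase and quadrature components relative to the transmitted signal and classifies the target by the resulting phase angle. The signal model, the window boundaries, and the labels attached to them are illustrative assumptions, not calibration data from any actual detector.

```python
import math

# Illustrative discrimination windows in degrees; real detectors are calibrated per frequency and coil.
PHASE_WINDOWS = [
    ("low-phase target, commonly rejected (e.g. iron, foil)", 0.0, 30.0),
    ("mid-phase target (e.g. pull tabs, small alloys)", 30.0, 60.0),
    ("high-phase target (e.g. silver or copper coins)", 60.0, 90.0),
]

def classify_target(in_phase: float, quadrature: float) -> str:
    """Classify a target from the phase of the received signal relative to the transmitted signal."""
    phase_deg = math.degrees(math.atan2(quadrature, in_phase))
    for label, lo, hi in PHASE_WINDOWS:
        if lo <= phase_deg < hi:
            return f"{label}, phase {phase_deg:.1f} deg"
    return f"outside the discrimination range, phase {phase_deg:.1f} deg"

# Example readings in arbitrary units (assumed values):
print(classify_target(in_phase=0.9, quadrature=0.2))   # small phase angle -> likely rejected
print(classify_target(in_phase=0.3, quadrature=0.8))   # large phase angle -> likely accepted
```

A discriminator control effectively lets the user choose which of these windows produce an audible response, which is also why targets with overlapping phase responses, such as tinfoil and gold, remain hard to separate.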
Technology
Measuring instruments
null
165094
https://en.wikipedia.org/wiki/Runway
Runway
In aviation, a runway is an elongated, rectangular surface designed for the landing and takeoff of an aircraft. Runways may be a human-made surface (often asphalt, concrete, or a mixture of both) or a natural surface (grass, dirt, gravel, ice, sand or salt). Runways, taxiways and ramps, are sometimes referred to as "tarmac", though very few runways are built using tarmac. Takeoff and landing areas defined on the surface of water for seaplanes are generally referred to as waterways. Runway lengths are now commonly given in meters worldwide, except in North America where feet are commonly used. History In 1916, in a World War I war effort context, the first concrete-paved runway was built in Clermont-Ferrand in France, allowing local company Michelin to manufacture Bréguet Aviation military aircraft. In January 1919, aviation pioneer Orville Wright underlined the need for "distinctly marked and carefully prepared landing places, [but] the preparing of the surface of reasonably flat ground [is] an expensive undertaking [and] there would also be a continuous expense for the upkeep." Headings For fixed-wing aircraft, it is advantageous to perform takeoffs and landings into the wind to reduce takeoff or landing roll and reduce the ground speed needed to attain flying speed. Larger airports usually have several runways in different directions, so that one can be selected that is most nearly aligned with the wind. Airports with one runway are often constructed to be aligned with the prevailing wind. Compiling a wind rose is one of the preliminary steps taken in constructing airport runways. Wind direction is given as the direction the wind is coming from: a plane taking off from runway 09 faces east, into an "east wind" blowing from 090°. Originally in the 1920s and 1930s, airports and air bases (particularly in the United Kingdom) were built in a triangle-like pattern of three runways at 60° angles to each other. The reason was that aviation was only starting, and although it was known that wind affected the runway distance required, not much was known about wind behaviour. As a result, three runways in a triangle-like pattern were built, and the runway with the heaviest traffic would eventually expand into the airport's main runway, while the other two runways would be either abandoned or converted into taxiways. Naming Runways are named by a number between 01 and 36, which is generally the magnetic azimuth of the runway's heading in decadegrees. This heading differs from true north by the local magnetic declination. A runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). When taking off from or landing on runway 09, a plane is heading around 90° (east). A runway can normally be used in both directions, and is named for each direction separately: e.g., "runway 15" in one direction is "runway 33" when used in the other. The two numbers differ by 18 (= 180°). For clarity in radio communications, each digit in the runway name is pronounced individually: runway one-five, runway three-three, etc. (instead of "fifteen" or "thirty-three"). A leading zero, for example in "runway zero-six" or "runway zero-one-left", is included for all ICAO and some U.S. military airports (such as Edwards Air Force Base). However, most U.S. civil aviation airports drop the leading zero as required by FAA regulation. This also includes some military airfields such as Cairns Army Airfield. 
This American anomaly may lead to inconsistencies in conversations between American pilots and controllers in other countries. It is very common in a country such as Canada for a controller to clear an incoming American aircraft to, for example, runway 04, and the pilot read back the clearance as runway 4. In flight simulation programs those of American origin might apply U.S. usage to airports around the world. For example, runway 05 at Halifax will appear on the program as the single digit 5 rather than 05. Military airbases may include smaller paved runways known as "assault strips" for practice and training next to larger primary runways. These strips eschew the standard numerical naming convention and instead employ the runway's full three digit heading; examples include Dobbins Air Reserve Base's Runway 110/290 and Duke Field's Runway 180/360. Runways with non-hard surfaces, such as small turf airfields and waterways for seaplanes, may use the standard numerical scheme or may use traditional compass point naming, examples include Ketchikan Harbor Seaplane Base's Waterway E/W. Airports with unpredictable or chaotic water currents, such as Santa Catalina Island's Pebbly Beach Seaplane Base, may designate their landing area as Waterway ALL/WAY to denote the lack of designated landing direction. Letter suffix If there is more than one runway pointing in the same direction (parallel runways), each runway is identified by appending left (L), center (C) and right (R) to the end of the runway number to identify its position (when facing its direction)—for example, runways one-five-left (15L), one-five-center (15C), and one-five-right (15R). Runway zero-three-left (03L) becomes runway two-one-right (21R) when used in the opposite direction (derived from adding 18 to the original number for the 180° difference when approaching from the opposite direction). In some countries, regulations mandate that where parallel runways are too close to each other, only one may be used at a time under certain conditions (usually adverse weather). At large airports with four or more parallel runways (for example, at Chicago O'Hare, Los Angeles, Detroit Metropolitan Wayne County, Hartsfield-Jackson Atlanta, Denver, Dallas–Fort Worth and Orlando), some runway identifiers are shifted by 1 to avoid the ambiguity that would result with more than three parallel runways. For example, in Los Angeles, this system results in runways 6L, 6R, 7L, and 7R, even though all four runways are actually parallel at approximately 69°. At Dallas/Fort Worth International Airport, there are five parallel runways, named 17L, 17C, 17R, 18L, and 18R, all oriented at a heading of 175.4°. Occasionally, an airport with only three parallel runways may use different runway identifiers, such as when a third parallel runway was opened at Phoenix Sky Harbor International Airport in 2000 to the south of existing 8R/26L—rather than confusingly becoming the "new" 8R/26L it was instead designated 7R/25L, with the former 8R/26L becoming 7L/25R and 8L/26R becoming 8/26. Suffixes may also be used to denote special use runways. Airports that have seaplane waterways may choose to denote the waterway on charts with the suffix W; such as Daniel K. Inouye International Airport in Honolulu and Lake Hood Seaplane Base in Anchorage. 
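The naming convention described above can be condensed into a short sketch (a hypothetical helper, not part of any aviation library): round the magnetic heading to the nearest ten degrees, treat north as 36 rather than 00, and obtain the reciprocal end by adding or subtracting 18. The optional us_style flag mimics the FAA practice of dropping the leading zero.

```python
def runway_designators(magnetic_heading_deg: float, us_style: bool = False) -> tuple[str, str]:
    """Return the designators for both ends of a runway, given the magnetic heading of one end."""
    number = round(magnetic_heading_deg / 10) % 36
    if number == 0:
        number = 36                               # due north is runway 36, not 00
    reciprocal = number - 18 if number > 18 else number + 18
    fmt = (lambda n: str(n)) if us_style else (lambda n: f"{n:02d}")   # ICAO style keeps the leading zero
    return fmt(number), fmt(reciprocal)

print(runway_designators(233))                    # ('23', '05') - heading 233 degrees rounds to runway 23
print(runway_designators(224))                    # ('22', '04') - heading 224 degrees rounds to runway 22
print(runway_designators(88, us_style=True))      # ('9', '27')  - U.S. civil style drops the leading zero
```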
Small airports that host various forms of air traffic may employ additional suffixes to denote special runway types based on the type of aircraft expected to use them, including STOL aircraft (S), gliders (G), rotorcraft (H), and ultralights (U). Runways that are numbered relative to true north rather than magnetic north will use the suffix T; this is advantageous for certain airfields in the far north such as Thule Air Base (08T/26T). Renumbering Runway designations may change over time because Earth's magnetic lines slowly drift on the surface and the magnetic direction changes. Depending on the airport location and how much drift occurs, it may be necessary to change the runway designation. As runways are designated with headings rounded to the nearest 10°, this affects some runways sooner than others. For example, if the magnetic heading of a runway is 233°, it is designated Runway 23. If the magnetic heading changes downwards by 5 degrees to 228°, the runway remains Runway 23. If on the other hand the original magnetic heading was 226° (Runway 23), and the heading decreased by only 2 degrees to 224°, the runway becomes Runway 22. Because magnetic drift itself is slow, runway designation changes are uncommon, and not welcomed, as they require an accompanying change in aeronautical charts and descriptive documents. When a runway designation does change, especially at major airports, it is often done at night, because taxiway signs need to be changed and the numbers at each end of the runway need to be repainted to the new runway designators. In July 2009 for example, London Stansted Airport in the United Kingdom changed its runway designations from 05/23 to 04/22 during the night. Declared distances Runway dimensions vary from as small as long and wide in smaller general aviation airports, to long and wide at large international airports built to accommodate the largest jets, to the huge lake bed runway 17/35 at Edwards Air Force Base in California – developed as a landing site for the Space Shuttle. Takeoff and landing distances available are given using one of the following terms: Takeoff Run Available (TORA) – The length of runway declared available and suitable for the ground run of an airplane taking off. Takeoff Distance Available (TODA) – The length of the takeoff run available plus the length of the clearway, if clearway is provided. (The clearway length allowed must lie within the aerodrome or airport boundary. According to the Federal Aviation Regulations and Joint Aviation Requirements (JAR) TODA is the lesser of TORA plus clearway or 1.5 times TORA). Accelerate-Stop Distance Available (ASDA)– The length of the takeoff run available plus the length of the stopway, if stopway is provided. Landing Distance Available (LDA) – The length of runway that is declared available and suitable for the ground run of an airplane landing. Emergency Distance Available (EMDA) – LDA (or TORA) plus a stopway. Sections There are standards for runway markings. The runway thresholds are markings across the runway that denote the beginning and end of the designated space for landing and takeoff under non-emergency conditions. The runway safety area is the cleared, smoothed and graded area around the paved runway. It is kept free from any obstacles that might impede flight or ground roll of aircraft. 
The runway is the surface from threshold to threshold (including displaced thresholds), which typically features threshold markings, numbers, and centerlines, but excludes blast pads and stopways at both ends. Blast pads are often constructed just before the start of a runway where jet blast produced by large planes during the takeoff roll could otherwise erode the ground and eventually damage the runway. Stopways, also known as overrun areas, are also constructed at the end of runways as emergency space to stop planes that overrun the runway on landing or a rejected takeoff. Blast pads and stopways look similar, and are both marked with yellow chevrons; stopways may optionally be surrounded by red runway lights. The differences are that stopways can support the full weight of an aircraft and are designated for use in an aborted takeoff, while blast pads are often not as strong as the main paved surface of the runway and are not to be used for taxiing, landing, or aborted takeoffs. An engineered materials arrestor system (EMAS) may also be present, which may overlap with the end of the blast pad or stopway and is painted similarly (although an EMAS does not count as part of a stopway). Displaced thresholds may be used for taxiing, takeoff, and landing rollout, but not for touchdown. A displaced threshold often exists because of obstacles just before the runway, runway strength, or noise restrictions making the beginning section of runway unsuitable for landings. It is marked with white paint arrows that lead up to the beginning of the landing portion of the runway. As with blast pads, landings on displaced thresholds are not permitted aside from emergency use or exigent circumstance. Relocated thresholds are similar to displaced thresholds. They are used to mark a portion of the runway temporarily closed due to construction or runway maintenance. This closed portion of the runway is not available for use by aircraft for takeoff or landing, but it is available for taxi. While methods for identifying the relocated threshold vary, a common way for the relocated threshold to be marked is a ten-foot-wide white bar across the width of the runway. Clearway is an area beyond the paved runway, aligned with the runway centerline and under the control of the airport authorities. This area is not less than 500 ft and there are no protruding obstacles except for threshold lights provided they are not higher than 26 inches. There is a limit on the upslope of the clearway of 1.25%. The length of the clearway may be included in the length of the takeoff distance available. For example, if a paved runway is long and there are of clearway beyond the end of the runway, the takeoff distance available is long. When the runway is to be used for takeoff of a large airplane, the maximum permissible takeoff weight of the airplane can be based on the takeoff distance available, including clearway. Clearway allows large airplanes to take off at a heavier weight than would be allowed if only the length of the paved runway is taken into account. Markings There are runway markings and signs on most large runways. Larger runways have a distance remaining sign (black box with white numbers). This sign uses a single number to indicate the remaining distance of the runway in thousands of feet. For example, a 7 will indicate remaining. The runway threshold is marked by a line of green lights. 
There are three types of runways: Visual runways are used at small airstrips and are usually just a strip of grass, gravel, ice, asphalt, or concrete. Although there are usually no markings on a visual runway, they may have threshold markings, designators, and centerlines. Additionally, they do not provide an instrument-based landing procedure; pilots must be able to see the runway to use it. Also, radio communication may not be available and pilots must be self-reliant. Non-precision instrument runways are often used at small- to medium-size airports. These runways, depending on the surface, may be marked with threshold markings, designators, centerlines, and sometimes an aiming point mark installed at a set distance from the threshold. While centerlines provide horizontal position guidance, aiming point markers provide vertical position guidance to planes on visual approach. Precision instrument runways, which are found at medium- and large-size airports, consist of a blast pad/stopway (optional, for airports handling jets), threshold, designator, centerline, aiming point, and touchdown zone marks at set distances. Precision runways provide both horizontal and vertical guidance for instrument approaches. Waterways may be unmarked or marked with buoys that follow maritime notation instead. For runways and taxiways that are permanently closed, the lighting circuits are disconnected. The runway threshold, runway designation, and touchdown markings are obliterated and yellow "Xs" are placed at each end of the runway and at intervals. National variants In Australia, Canada, the United Kingdom, as well as some other countries or territories (Hong Kong and Macau), all 3-stripe and 2-stripe touchdown zones for precision runways are replaced with one-stripe touchdown zones. In some South American countries like Colombia, Ecuador and Peru, one 3-stripe is added and a 2-stripe is replaced with the aiming point. Some European countries replace the aiming point with a 3-stripe touchdown zone. Runways in Norway have yellow markings instead of the usual white ones. This also occurs in some airports in Japan, Sweden, and Finland. The yellow markings are used to ensure better contrast against snow. Runways may have different types of equipment on each end. To reduce costs, many airports do not install precision guidance equipment on both ends. Runways with one precision end and any other type of end can install the full set of touchdown zones, even if some are past the midpoint. Runways with precision markings on both ends omit touchdown zones within a set distance of the midpoint, to avoid ambiguity over the end with which the zone is associated. 
Runway edge lights – white elevated lights that run the length of the runway on either side. On precision instrument runways, the edge-lighting becomes amber in the last of the runway, or last third of the runway, whichever is less. Taxiways are differentiated by being bordered by blue lights, or by having green center lights, depending on the width of the taxiway, and the complexity of the taxi pattern. Runway centerline lighting system (RCLS) – lights embedded into the surface of the runway at intervals along the runway centerline on some precision instrument runways. White except the last : alternate white and red for next and red for last . Touchdown zone lights (TDZL) – rows of white light bars (with three in each row) at intervals on either side of the centerline for . Taxiway centerline lead-off lights – installed along lead-off markings, alternate green and yellow lights embedded into the runway pavement. It starts with green light at about the runway centerline to the position of first centerline light beyond the Hold-Short markings on the taxiway. Taxiway centerline lead-on lights – installed the same way as taxiway centerline lead-off Lights, but directing airplane traffic in the opposite direction. Land and hold short lights – a row of white pulsating lights installed across the runway to indicate hold short position on some runways that are facilitating land and hold short operations (LAHSO). Approach lighting system (ALS) – a lighting system installed on the approach end of an airport runway and consists of a series of lightbars, strobe lights, or a combination of the two that extends outward from the runway end. According to Transport Canada's regulations, the runway-edge lighting must be visible for at least . Additionally, a new system of advisory lighting, runway status lights, is currently being tested in the United States. The edge lights must be arranged such that: the minimum distance between lines is , and maximum is the maximum distance between lights within each line is the minimum length of parallel lines is the minimum number of lights in the line is 8. Control of lighting system Typically the lights are controlled by a control tower, a flight service station or another designated authority. Some airports/airfields (particularly uncontrolled ones) are equipped with pilot-controlled lighting, so that pilots can temporarily turn on the lights when the relevant authority is not available. This avoids the need for automatic systems or staff to turn the lights on at night or in other low visibility situations. This also avoids the cost of having the lighting system on for extended periods. Smaller airports may not have lighted runways or runway markings. Particularly at private airfields for light planes, there may be nothing more than a windsock beside a landing strip. Safety Types of runway safety incidents include: Runway excursion – an incident involving only a single aircraft, where it makes an inappropriate exit from the runway (e.g. Thai Airways Flight 679). Runway overrun (also known as an overshoot) – a type of excursion where the aircraft is unable to stop before the end of the runway (e.g. Air France Flight 358, TAM Airlines Flight 3054, Air India Express Flight 812). Runway incursion – an incident involving incorrect presence of a vehicle, person or another aircraft on the runway (e.g. Aeroflot Flight 3352, Scandinavian Airlines Flight 686). Runway confusion – an aircraft makes use of the wrong runway for landing or takeoff (e.g. 
Singapore Airlines Flight 006, Western Airlines Flight 2605). Runway undershoot – an aircraft that lands short of the runway (e.g. British Airways Flight 38, Asiana Airlines Flight 214). Surface The choice of material used to construct the runway depends on the use and the local ground conditions. For a major airport, where the ground conditions permit, the most satisfactory type of pavement for long-term minimum maintenance is concrete. Although certain airports have used reinforcement in concrete pavements, this is generally found to be unnecessary, with the exception of expansion joints across the runway where a dowel assembly, which permits relative movement of the concrete slabs, is placed in the concrete. Where it can be anticipated that major settlements of the runway will occur over the years because of unstable ground conditions, it is preferable to install asphalt concrete surface, as it is easier to patch on a periodic basis. Fields with very low traffic of light planes may use a sod surface. Some runways make use of salt flats. For pavement designs, borings are taken to determine the subgrade condition, and based on the relative bearing capacity of the subgrade, the specifications are established. For heavy-duty commercial aircraft, the pavement thickness, no matter what the top surface, varies from , including subgrade. Airport pavements have been designed by two methods. The first, Westergaard, is based on the assumption that the pavement is an elastic plate supported on a heavy fluid base with a uniform reaction coefficient known as the K value. Experience has shown that the K values on which the formula was developed are not applicable for newer aircraft with very large footprint pressures. The second method is called the California bearing ratio and was developed in the late 1940s. It is an extrapolation of the original test results, which are not applicable to modern aircraft pavements or to modern aircraft landing gear. Some designs were made by a mixture of these two design theories. A more recent method is an analytical system based on the introduction of vehicle response as an important design parameter. Essentially it takes into account all factors, including the traffic conditions, service life, materials used in the construction, and, especially important, the dynamic response of the vehicles using the landing area. Because airport pavement construction is so expensive, manufacturers aim to minimize aircraft stresses on the pavement. Manufacturers of the larger planes design landing gear so that the weight of the plane is supported on larger and more numerous tires. Attention is also paid to the characteristics of the landing gear itself, so that adverse effects on the pavement are minimized. Sometimes it is possible to reinforce a pavement for higher loading by applying an overlay of asphaltic concrete or portland cement concrete that is bonded to the original slab. Post-tensioning concrete has been developed for the runway surface. This permits the use of thinner pavements and should result in longer concrete pavement life. Because of the susceptibility of thinner pavements to frost heave, this process is generally applicable only where there is no appreciable frost action. Pavement surface Runway pavement surface is prepared and maintained to maximize friction for wheel braking. 
To minimize hydroplaning following heavy rain, the pavement surface is usually grooved so that the surface water film flows into the grooves and the peaks between grooves will still be in contact with the aircraft tyres. To maintain the macrotexturing built into the runway by the grooves, maintenance crews engage in airfield rubber removal or hydrocleaning in order to meet required FAA, or other aviation authority friction levels. Pavement subsurface drainage and underdrains Subsurface underdrains help provide extended life and excellent and reliable pavement performance. At the Hartsfield Atlanta, GA airport the underdrains usually consist of trenches wide and deep from the top of the pavement. A perforated plastic tube ( in diameter) is placed at the bottom of the ditch. The ditches are filled with gravel size crushed stone. Excessive moisture under a concrete pavement can cause pumping, cracking, and joint failure. Surface type codes In aviation charts, the surface type is usually abbreviated to a three-letter code. The most common hard surface types are asphalt and concrete. The most common soft surface types are grass and gravel. Length A runway of at least in length is usually adequate for aircraft weights below approximately . Larger aircraft including widebodies will usually require at least at sea level. International widebody flights, which carry substantial amounts of fuel and are therefore heavier, may also have landing requirements of or more and takeoff requirements of . The Boeing 747 is considered to have the longest takeoff distance of the more common aircraft types and has set the standard for runway lengths of larger international airports. At sea level, can be considered an adequate length to land virtually any aircraft. For example, at O'Hare International Airport, when landing simultaneously on 4L/22R and 10/28 or parallel 9R/27L, it is routine for arrivals from East Asia, which would normally be vectored for 4L/22R () or 9R/27L () to request 28R (). It is always accommodated, although occasionally with a delay. Another example is that the Luleå Airport in Sweden was extended to to allow any fully loaded freight aircraft to take off. These distances are also influenced by the runway grade (slope) such that, for example, each 1 percent of runway down slope increases the landing distance by 10 percent. An aircraft taking off at a higher altitude must do so at reduced weight due to decreased density of air at higher altitudes, which reduces engine power and wing lift. An aircraft must also take off at a reduced weight in hotter or more humid conditions (see density altitude). Most commercial aircraft carry manufacturer's tables showing the adjustments required for a given temperature. In India, recommendations of International Civil Aviation Organization (ICAO) are now followed more often. For landing, only altitude correction is done for runway length whereas for take-off, all types of correction are taken into consideration.
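As a worked illustration of the slope rule of thumb quoted above, where each 1 percent of runway downslope increases the landing distance by about 10 percent, using made-up figures:

```python
def landing_distance_with_slope(base_distance_m: float, downslope_percent: float) -> float:
    """Apply the rule of thumb: each 1% of runway downslope adds roughly 10% to the landing distance."""
    return base_distance_m * (1.0 + 0.10 * downslope_percent)

# Made-up example: a 1,500 m baseline landing distance on a runway with a 1.5% downslope.
print(landing_distance_with_slope(1500, 1.5))   # 1725.0 -> about 1,725 m required
```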
Technology
Concepts of aviation
null
165146
https://en.wikipedia.org/wiki/Inductance
Inductance
Inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The electric current produces a magnetic field around the conductor. The magnetic field strength depends on the magnitude of the electric current, and therefore follows any changes in the magnitude of the current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF. Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality constant that depends on the geometry of circuit conductors (e.g., cross-section area and length) and the magnetic permeability of the conductor and nearby materials. An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire. The term inductance was coined by Oliver Heaviside in May 1884, as a convenient way to refer to "coefficient of self-induction". It is customary to use the symbol for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry (H), which is the amount of inductance that causes a voltage of one volt, when the current is changing at a rate of one ampere per second. The unit is named for Joseph Henry, who discovered inductance independently of Faraday. History The history of electromagnetic induction, a facet of electromagnetism, began with observations of the ancients: electric charge or static electricity (rubbing silk on amber), electric current (lightning), and magnetic attraction (lodestone). Understanding the unity of these forces of nature, and the scientific theory of electromagnetism was initiated and achieved during the 19th century. Electromagnetic induction was first described by Michael Faraday in 1831. In Faraday's experiment, he wrapped two wires around opposite sides of an iron ring. He expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Using a galvanometer, he observed a transient current flow in the second coil of wire each time that a battery was connected or disconnected from the first coil. This current was induced by the change in magnetic flux that occurred when the battery was connected and disconnected. Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk"). Source of inductance A current flowing through a conductor generates a magnetic field around the conductor, which is described by Ampere's circuital law. The total magnetic flux through a circuit is equal to the product of the perpendicular component of the magnetic flux density and the area of the surface spanning the current path. If the current varies, the magnetic flux through the circuit changes. 
By Faraday's law of induction, any change in flux through a circuit induces an electromotive force (EMF, in the circuit, proportional to the rate of change of flux The negative sign in the equation indicates that the induced voltage is in a direction which opposes the change in current that created it; this is called Lenz's law. The potential is therefore called a back EMF. If the current is increasing, the voltage is positive at the end of the conductor through which the current enters and negative at the end through which it leaves, tending to reduce the current. If the current is decreasing, the voltage is positive at the end through which the current leaves the conductor, tending to maintain the current. Self-inductance, usually just called inductance, is the ratio between the induced voltage and the rate of change of the current Thus, inductance is a property of a conductor or circuit, due to its magnetic field, which tends to oppose changes in current through the circuit. The unit of inductance in the SI system is the henry (H), named after Joseph Henry, which is the amount of inductance that generates a voltage of one volt when the current is changing at a rate of one ampere per second. All conductors have some inductance, which may have either desirable or detrimental effects in practical electrical devices. The inductance of a circuit depends on the geometry of the current path, and on the magnetic permeability of nearby materials; ferromagnetic materials with a higher permeability like iron near a conductor tend to increase the magnetic field and inductance. Any alteration to a circuit which increases the flux (total magnetic field) through the circuit produced by a given current increases the inductance, because inductance is also equal to the ratio of magnetic flux to current An inductor is an electrical component consisting of a conductor shaped to increase the magnetic flux, to add inductance to a circuit. Typically it consists of a wire wound into a coil or helix. A coiled wire has a higher inductance than a straight wire of the same length, because the magnetic field lines pass through the circuit multiple times, it has multiple flux linkages. The inductance is proportional to the square of the number of turns in the coil, assuming full flux linkage. The inductance of a coil can be increased by placing a magnetic core of ferromagnetic material in the hole in the center. The magnetic field of the coil magnetizes the material of the core, aligning its magnetic domains, and the magnetic field of the core adds to that of the coil, increasing the flux through the coil. This is called a ferromagnetic core inductor. A magnetic core can increase the inductance of a coil by thousands of times. If multiple electric circuits are located close to each other, the magnetic field of one can pass through the other; in this case the circuits are said to be inductively coupled. Due to Faraday's law of induction, a change in current in one circuit can cause a change in magnetic flux in another circuit and thus induce a voltage in another circuit. The concept of inductance can be generalized in this case by defining the mutual inductance of circuit and circuit as the ratio of voltage induced in circuit to the rate of change of current in circuit This is the principle behind a transformer. 
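In symbols, the relations described in this section take the following standard forms, writing \( \mathcal{E} \) for the induced EMF, \( \Phi_B \) for the magnetic flux (understood as the total flux linkage of the circuit), \( i \) for the current, \( L \) for the self-inductance, and \( M_{21} \) for the mutual inductance of circuit 1 acting on circuit 2:

\[
\mathcal{E} = -\frac{d\Phi_B}{dt}, \qquad
v(t) = L\,\frac{di}{dt}, \qquad
L = \frac{\Phi_B}{i}, \qquad
M_{21} = \frac{v_2(t)}{di_1/dt}.
\]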
The property describing the effect of one conductor on itself is more precisely called self-inductance, and the properties describing the effects of one conductor with changing current on nearby conductors is called mutual inductance. Self-inductance and magnetic energy If the current through a conductor with inductance is increasing, a voltage is induced across the conductor with a polarity that opposes the current—in addition to any voltage drop caused by the conductor's resistance. The charges flowing through the circuit lose potential energy. The energy from the external circuit required to overcome this "potential hill" is stored in the increased magnetic field around the conductor. Therefore, an inductor stores energy in its magnetic field. At any given time the power flowing into the magnetic field, which is equal to the rate of change of the stored energy is the product of the current and voltage across the conductor From (1) above When there is no current, there is no magnetic field and the stored energy is zero. Neglecting resistive losses, the energy (measured in joules, in SI) stored by an inductance with a current through it is equal to the amount of work required to establish the current through the inductance from zero, and therefore the magnetic field. This is given by: If the inductance is constant over the current range, the stored energy is Inductance is therefore also proportional to the energy stored in the magnetic field for a given current. This energy is stored as long as the current remains constant. If the current decreases, the magnetic field decreases, inducing a voltage in the conductor in the opposite direction, negative at the end through which current enters and positive at the end through which it leaves. This returns stored magnetic energy to the external circuit. If ferromagnetic materials are located near the conductor, such as in an inductor with a magnetic core, the constant inductance equation above is only valid for linear regions of the magnetic flux, at currents below the level at which the ferromagnetic material saturates, where the inductance is approximately constant. If the magnetic field in the inductor approaches the level at which the core saturates, the inductance begins to change with current, and the integral equation must be used. Inductive reactance When a sinusoidal alternating current (AC) is passing through a linear inductance, the induced back- is also sinusoidal. If the current through the inductance is , from (1) above the voltage across it is where is the amplitude (peak value) of the sinusoidal current in amperes, is the angular frequency of the alternating current, with being its frequency in hertz, and is the inductance. Thus the amplitude (peak value) of the voltage across the inductance is Inductive reactance is the opposition of an inductor to an alternating current. It is defined analogously to electrical resistance in a resistor, as the ratio of the amplitude (peak value) of the alternating voltage to current in the component Reactance has units of ohms. It can be seen that inductive reactance of an inductor increases proportionally with frequency so an inductor conducts less current for a given applied AC voltage as the frequency increases. Because the induced voltage is greatest when the current is increasing, the voltage and current waveforms are out of phase; the voltage peaks occur earlier in each cycle than the current peaks. 
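The stored-energy and reactance relations referred to above take the following standard forms, assuming a constant inductance \( L \), a steady current \( I \) for the energy expression, and a sinusoidal current \( i(t) = I_p \sin(\omega t) \) of amplitude \( I_p \) and angular frequency \( \omega = 2\pi f \) for the reactance:

\[
E = \int_0^{I} L\, i \, di = \tfrac{1}{2} L I^{2}, \qquad
v(t) = L\,\frac{di}{dt} = \omega L I_p \cos(\omega t), \qquad
X_L = \frac{V_p}{I_p} = \omega L = 2\pi f L,
\]

where \( V_p = \omega L I_p \) is the peak voltage across the inductance.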
The phase difference between the current and the induced voltage is π/2 radians or 90 degrees, showing that in an ideal inductor the current lags the voltage by 90°.

Calculating inductance

In the most general case, inductance can be calculated from Maxwell's equations. Many important cases can be solved using simplifications. Where high frequency currents are considered, with skin effect, the surface current densities and magnetic field may be obtained by solving the Laplace equation. Where the conductors are thin wires, self-inductance still depends on the wire radius and the distribution of the current in the wire. This current distribution is approximately constant (on the surface or in the volume of the wire) for a wire radius much smaller than other length scales.

Inductance of a straight single wire

As a practical matter, longer wires have more inductance, and thicker wires have less, analogous to their electrical resistance (although the relationships are not linear, and are different in kind from the relationships that length and diameter bear to resistance). Separating the wire from the other parts of the circuit introduces some unavoidable error in any formulas' results. These inductances are often referred to as "partial inductances", in part to encourage consideration of the other contributions to whole-circuit inductance which are omitted.

Practical formulas

For derivation of the formulas below, see Rosa (1908). The total low frequency inductance (interior plus exterior) of a straight wire is L_DC = 200 l [ln(2l/r) − 0.75] nH, where L_DC is the "low-frequency" or DC inductance in nanohenries (nH or 10−9 H), l is the length of the wire in meters, and r is the radius of the wire in meters (hence a very small decimal number); the constant 200 nH/m is the permeability of free space, commonly called μ0, divided by 2π. In the absence of magnetically reactive insulation, the value 200 is exact when using the classical definition of μ0 = 4π × 10−7 H/m, and correct to 7 decimal places when using the 2019-redefined SI value of μ0. The constant 0.75 is just one parameter value among several; different frequency ranges, different shapes, or extremely long wire lengths require a slightly different constant (see below). This result is based on the assumption that the radius r is much less than the length l, which is the common case for wires and rods. Disks or thick cylinders have slightly different formulas. For sufficiently high frequencies skin effects cause the interior currents to vanish, leaving only the currents on the surface of the conductor; the inductance for alternating current, L_AC, is then given by a very similar formula, L_AC = 200 l [ln(2l/r) − 1] nH, where the variables l and r are the same as above; note the changed constant term, now 1, from 0.75 above. In an example from everyday experience, just one of the conductors of an ordinary lamp cord, made of 18 AWG wire, would only have an inductance of a few microhenries if stretched out straight.

Mutual inductance of two parallel straight wires

There are two cases to consider: current travels in the same direction in each wire, and current travels in opposing directions in the wires. Currents in the wires need not be equal, though they often are, as in the case of a complete circuit, where one wire is the source and the other the return.

Mutual inductance of two wire loops

This is the generalized case of the paradigmatic two-loop cylindrical coil carrying a uniform low frequency current; the loops are independent closed circuits that can have different lengths, any orientation in space, and carry different currents.
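Before turning to the error terms of the two-loop case, here is a numeric sketch of the straight-wire formulas from the "Practical formulas" subsection above, with the constant 0.75 for direct current and 1 for the high-frequency (full skin effect) limit. The wire dimensions are hypothetical.

    import math

    # Low-frequency and high-frequency inductance of a straight round wire,
    # per the formulas quoted above; dimensions are assumed for illustration.
    l = 1.0        # wire length in metres (assumed)
    r = 0.5e-3     # wire radius in metres (assumed, roughly 18 AWG)

    L_dc_nH = 200.0 * l * (math.log(2 * l / r) - 0.75)  # about 1510 nH
    L_hf_nH = 200.0 * l * (math.log(2 * l / r) - 1.0)   # about 1460 nH
    print(L_dc_nH, L_hf_nH)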
Nonetheless, the error terms, which are not included in the integral are only small if the geometries of the loops are mostly smooth and convex: They must not have too many kinks, sharp corners, coils, crossovers, parallel segments, concave cavities, or other topologically "close" deformations. A necessary predicate for the reduction of the 3-dimensional manifold integration formula to a double curve integral is that the current paths be filamentary circuits, i.e. thin wires where the radius of the wire is negligible compared to its length. The mutual inductance by a filamentary circuit on a filamentary circuit is given by the double integral Neumann formula where and are the curves followed by the wires. is the permeability of free space () is a small increment of the wire in circuit is the position of in space is a small increment of the wire in circuit is the position of in space. Derivation where is the current through the th wire, this current creates the magnetic flux through the th surface is the magnetic flux through the ith surface due to the electrical circuit outlined by where Stokes' theorem has been used for the 3rd equality step. For the last equality step, we used the retarded potential expression for and we ignore the effect of the retarded time (assuming the geometry of the circuits is small enough compared to the wavelength of the current they carry). It is actually an approximation step, and is valid only for local circuits made of thin wires. Self-inductance of a wire loop Formally, the self-inductance of a wire loop would be given by the above equation with However, here becomes infinite, leading to a logarithmically divergent integral. This necessitates taking the finite wire radius and the distribution of the current in the wire into account. There remains the contribution from the integral over all points and a correction term, where and are distances along the curves and respectively is the radius of the wire is the length of the wire is a constant that depends on the distribution of the current in the wire: when the current flows on the surface of the wire (total skin effect), when the current is evenly over the cross-section of the wire. is an error term whose size depends on the curve of the loop: when the loop has sharp corners, and when it is a smooth curve. Both are small when the wire is long compared to its radius. Inductance of a solenoid A solenoid is a long, thin coil; i.e., a coil whose length is much greater than its diameter. Under these conditions, and without any magnetic material used, the magnetic flux density within the coil is practically constant and is given by where is the magnetic constant, the number of turns, the current and the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density by the cross-section area When this is combined with the definition of inductance it follows that the inductance of a solenoid is given by: Therefore, for air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current. Inductance of a coaxial cable Let the inner conductor have radius and permeability let the dielectric between the inner and outer conductor have permeability and let the outer conductor have inner radius outer radius and permeability However, for a typical coaxial line application, we are interested in passing (non-DC) signals at frequencies for which the resistive skin effect cannot be neglected. 
In most cases, the inner and outer conductor terms are negligible, in which case one may approximate Inductance of multilayer coils Most practical air-core inductors are multilayer cylindrical coils with square cross-sections to minimize average distance between turns (circular cross -sections would be better but harder to form). Magnetic cores Many inductors include a magnetic core at the center of or partly surrounding the winding. Over a large enough range these exhibit a nonlinear permeability with effects such as magnetic saturation. Saturation makes the resulting inductance a function of the applied current. The secant or large-signal inductance is used in flux calculations. It is defined as: The differential or small-signal inductance, on the other hand, is used in calculating voltage. It is defined as: The circuit voltage for a nonlinear inductor is obtained via the differential inductance as shown by Faraday's Law and the chain rule of calculus. Similar definitions may be derived for nonlinear mutual inductance. Mutual inductance Mutual inductance is defined as the ratio between the EMF induced in one loop or coil by the rate of change of current in another loop or coil. Mutual inductance is given the symbol . Derivation of mutual inductance The inductance equations above are a consequence of Maxwell's equations. For the important case of electrical circuits consisting of thin wires, the derivation is straightforward. In a system of wire loops, each with one or several wire turns, the flux linkage of loop is given by Here denotes the number of turns in loop is the magnetic flux through loop and are some constants described below. This equation follows from Ampere's law: magnetic fields and fluxes are linear functions of the currents. By Faraday's law of induction, we have where denotes the voltage induced in circuit This agrees with the definition of inductance above if the coefficients are identified with the coefficients of inductance. Because the total currents contribute to it also follows that is proportional to the product of turns Mutual inductance and magnetic field energy Multiplying the equation for vm above with imdt and summing over m gives the energy transferred to the system in the time interval dt, This must agree with the change of the magnetic field energy, W, caused by the currents. The integrability condition requires Lm,n = Ln,m. The inductance matrix, Lm,n, thus is symmetric. The integral of the energy transfer is the magnetic field energy as a function of the currents, This equation also is a direct consequence of the linearity of Maxwell's equations. It is helpful to associate changing electric currents with a build-up or decrease of magnetic field energy. The corresponding energy transfer requires or generates a voltage. A mechanical analogy in the K = 1 case with magnetic field energy (1/2)Li2 is a body with mass M, velocity u and kinetic energy (1/2)Mu2. The rate of change of velocity (current) multiplied with mass (inductance) requires or generates a force (an electrical voltage). Mutual inductance occurs when the change in current in one inductor induces a voltage in another nearby inductor. It is important as the mechanism by which transformers work, but it can also cause unwanted coupling between conductors in a circuit. The mutual inductance, is also a measure of the coupling between two inductors. 
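The degree of coupling between two inductors is commonly quantified by the coupling coefficient k = M / sqrt(L1 L2), discussed further below. The following sketch evaluates k and the magnitude of the voltage induced in a second coil for an assumed rate of change of current in the first; all component values are hypothetical.

    import math

    L1 = 50e-3     # self-inductance of coil 1: 50 mH (assumed)
    L2 = 200e-3    # self-inductance of coil 2: 200 mH (assumed)
    M  = 60e-3     # mutual inductance: 60 mH (assumed)

    k = M / math.sqrt(L1 * L2)   # coupling coefficient -> 0.6
    di1_dt = 100.0               # rate of change of current in coil 1: 100 A/s (assumed)
    v2 = M * di1_dt              # magnitude of voltage induced in coil 2 -> 6.0 V
    print(k, v2)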
The mutual inductance by circuit on circuit is given by the double integral Neumann formula, see calculation techniques The mutual inductance also has the relationship: where Once the mutual inductance is determined, it can be used to predict the behavior of a circuit: where The minus sign arises because of the sense the current has been defined in the diagram. With both currents defined going into the dots the sign of will be positive (the equation would read with a plus sign instead). Coupling coefficient The coupling coefficient is the ratio of the open-circuit actual voltage ratio to the ratio that would be obtained if all the flux coupled from one magnetic circuit to the other. The coupling coefficient is related to mutual inductance and self inductances in the following way. From the two simultaneous equations expressed in the two-port matrix the open-circuit voltage ratio is found to be: where while the ratio if all the flux is coupled is the ratio of the turns, hence the ratio of the square root of the inductances thus, where The coupling coefficient is a convenient way to specify the relationship between a certain orientation of inductors with arbitrary inductance. Most authors define the range as but some define it as Allowing negative values of captures phase inversions of the coil connections and the direction of the windings. Matrix representation Mutually coupled inductors can be described by any of the two-port network parameter matrix representations. The most direct are the z parameters, which are given by The y parameters are given by Where is the complex frequency variable, and are the inductances of the primary and secondary coil, respectively, and is the mutual inductance between the coils. Multiple Coupled Inductors Mutual inductance may be applied to multiple inductors simultaneously. The matrix representations for multiple mutually coupled inductors are given by Equivalent circuits T-circuit Mutually coupled inductors can equivalently be represented by a T-circuit of inductors as shown. If the coupling is strong and the inductors are of unequal values then the series inductor on the step-down side may take on a negative value. This can be analyzed as a two port network. With the output terminated with some arbitrary impedance the voltage gain is given by, where is the coupling constant and is the complex frequency variable, as above. For tightly coupled inductors where this reduces to which is independent of the load impedance. If the inductors are wound on the same core and with the same geometry, then this expression is equal to the turns ratio of the two inductors because inductance is proportional to the square of turns ratio. The input impedance of the network is given by, For this reduces to Thus, current gain is independent of load unless the further condition is met, in which case, and π-circuit Alternatively, two coupled inductors can be modelled using a π equivalent circuit with optional ideal transformers at each port. While the circuit is more complicated than a T-circuit, it can be generalized to circuits consisting of more than two coupled inductors. Equivalent circuit elements have physical meaning, modelling respectively magnetic reluctances of coupling paths and magnetic reluctances of leakage paths. For example, electric currents flowing through these elements correspond to coupling and leakage magnetic fluxes. Ideal transformers normalize all self-inductances to 1 Henry to simplify mathematical formulas. 
Equivalent circuit element values can be calculated from the coupling coefficients, with the coupling coefficient matrix and its cofactors defined accordingly. For two coupled inductors these expressions simplify considerably, and analogous (though lengthier) expressions hold for three coupled inductors.

Resonant transformer

When a capacitor is connected across one winding of a transformer, making the winding a tuned circuit (resonant circuit), it is called a single-tuned transformer. When a capacitor is connected across each winding, it is called a double-tuned transformer. These resonant transformers can store oscillating electrical energy similar to a resonant circuit and thus function as a bandpass filter, allowing frequencies near their resonant frequency to pass from the primary to secondary winding, but blocking other frequencies. The amount of mutual inductance between the two windings, together with the Q factor of the circuit, determines the shape of the frequency response curve. The advantage of the double-tuned transformer is that it can have a wider bandwidth than a simple tuned circuit. The coupling of double-tuned circuits is described as loose-, critical-, or over-coupled, depending on the value of the coupling coefficient. When two tuned circuits are loosely coupled through mutual inductance, the bandwidth is narrow. As the amount of mutual inductance increases, the bandwidth continues to grow. When the mutual inductance is increased beyond the critical coupling, the peak in the frequency response curve splits into two peaks, and as the coupling is increased the two peaks move further apart. This is known as overcoupling. Strongly coupled self-resonant coils can be used for wireless power transfer between devices at mid-range distances (up to two metres). Strong coupling is required for a high percentage of power transferred, which results in peak splitting of the frequency response.

Ideal transformers

When the coupling coefficient is 1, the inductors are referred to as being closely coupled. If in addition the self-inductances go to infinity, the pair becomes an ideal transformer. In this case the voltages, currents, and numbers of turns can be related in the following way: the secondary voltage equals the primary voltage multiplied by the turns ratio (secondary turns over primary turns); conversely, the secondary current equals the primary current divided by the same turns ratio. The power through one inductor is the same as the power through the other. These equations neglect any forcing by current sources or voltage sources.

Self-inductance of thin wire shapes

The table below lists formulas for the self-inductance of various simple shapes made of thin cylindrical conductors (wires). In general these are only accurate if the wire radius is much smaller than the dimensions of the shape, and if no ferromagnetic materials are nearby (no magnetic core). The formulas involve a constant, an approximately constant value between 0 and 1 that depends on the distribution of the current in the wire: it takes one limiting value when the current flows only on the surface of the wire (complete skin effect), and the other when the current is evenly spread over the cross-section of the wire (direct current). For round wires, Rosa (1908) gives a formula equivalent to the one above. The final term in each formula represents small term(s) that have been dropped from the formula to make it simpler; read that term as "plus small corrections that vary on the order of the indicated quantity" (see big O notation).
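To make the ideal-transformer relations above concrete, here is a short numeric sketch using an assumed turns ratio and primary drive; the values are hypothetical, and the check at the end confirms that primary and secondary power are equal in the ideal case.

    N_p, N_s = 100, 400    # primary and secondary turns (assumed)
    V_p = 10.0             # primary voltage amplitude: 10 V (assumed)
    I_p = 2.0              # primary current amplitude: 2 A (assumed)

    V_s = V_p * N_s / N_p  # secondary voltage -> 40.0 V
    I_s = I_p * N_p / N_s  # secondary current -> 0.5 A
    assert abs(V_p * I_p - V_s * I_s) < 1e-9   # power is conserved in the ideal case
    print(V_s, I_s)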
Physical sciences
Electrical circuits
null
165252
https://en.wikipedia.org/wiki/Beech
Beech
Beech (Fagus) is a genus of deciduous trees in the family Fagaceae, native to subtropical (accessory forest element) and temperate (as dominant element of mesophytic forests) Eurasia and North America. There are 14 accepted species in two distinct subgenera, Englerianae and Fagus. The subgenus Englerianae is found only in East Asia, distinctive for its low branches, often made up of several major trunks with yellowish bark. The better known species of subgenus Fagus are native to Europe, western and eastern Asia and eastern North America. They are high-branching trees with tall, stout trunks and smooth silver-grey bark. The European beech Fagus sylvatica is the most commonly cultivated species, yielding a utility timber used for furniture construction, flooring and engineering purposes, in plywood, and household items. The timber can be used to build homes. Beechwood makes excellent firewood. Slats of washed beech wood are spread around the bottom of fermentation tanks for Budweiser beer. Beech logs are burned to dry the malt used in some German smoked beers. Beech is also used to smoke Westphalian ham, andouille sausage, and some cheeses. Description Beeches are monoecious, bearing both male and female flowers on the same plant. The small flowers are unisexual, the female flowers borne in pairs, the male flowers wind-pollinating catkins. They are produced in spring shortly after the new leaves appear. The fruit of the beech tree, known as beechnuts or mast, is found in small burrs that drop from the tree in autumn. They are small, roughly triangular, and edible, with a bitter, astringent, or mild and nut-like taste. The European beech (Fagus sylvatica) is the most commonly cultivated, although few important differences are seen between species aside from detail elements such as leaf shape. The leaves of beech trees are entire or sparsely toothed, from long and broad. The bark is smooth and light gray. The fruit is a small, sharply three-angled nut long, borne singly or in pairs in soft-spined husks long, known as cupules. The husk can have a variety of spine- to scale-like appendages, the character of which is, in addition to leaf shape, one of the primary ways beeches are differentiated. The nuts are called beechnuts or beech mast and have a bitter taste (though not nearly as bitter as acorns) and a high tannin content. Taxonomy and systematics The most recent classification system of the genus recognizes 14 species in two distinct subgenera, subgenus Englerianae and Fagus. Beech species can be diagnosed by phenotypical and/or genotypical traits. Species of subgenus Engleriana are found only in East Asia, and are notably distinct from species of subgenus Fagus in that these beeches are low-branching trees, often made up of several major trunks with yellowish bark and a substantially different nucleome (nuclear DNA), especially in noncoding, highly variable gene regions such as the spacers of the nuclear-encoded ribosomal RNA genes (ribosomal DNA). Further differentiating characteristics include the whitish bloom on the underside of the leaves, the visible tertiary leaf veins, and a long, smooth cupule-peduncle. Originally proposed but not formalized by botanist Chung-Fu Shen in 1992, this group comprised two Japanese species, F. japonica and F. okamotoi, and one Chinese species, F. engleriana. While the status of F. okamotoi remains uncertain, the most recent systematic treatment based on morphological and genetic data confirmed a third species, F. 
multinervis, endemic to Ulleungdo, a South Korean island in the Sea of Japan. The beeches of Ulleungdo have been traditionally treated as a subspecies of F. engleriana, to which they are phenotypically identical, or as a variety of F. japonica. They differ from their siblings by their unique nuclear and plastid genotypes. The better known subgenus Fagus beeches are high-branching with tall, stout trunks and smooth silver-gray bark. This group includes five extant species in continental and insular East Asia (F. crenata, F. longipetiolata, F. lucida, and the cryptic sister species F. hayatae and F. pashanica), two pseudo-cryptic species in eastern North America (F. grandifolia, F. mexicana), and a species complex of at least four species (F. caspica, F. hohenackeriana, F. orientalis, F. sylvatica) in Western Eurasia. Their genetics are highly complex and include both species-unique alleles as well as alleles and ribosomal DNA spacers that are shared between two or more species. The western Eurasian species are characterized by morphological and genetic gradients. Research suggests that the first representatives of the modern-day genus were already present in the Paleocene of Arctic North America (western Greenland) and quickly radiated across the high latitudes of the Northern Hemisphere, with a first diversity peak in the Miocene of northeastern Asia. The contemporary species are the product of past, repeated reticulate evolutionary processes (outbreeding, introgression, hybridization). As far as studied, heterozygosity and intragenomic variation are common in beech species, and their chloroplast genomes are nonspecific with the exception of the Western Eurasian and North American species. Fagus is the first diverging lineage in the evolution of the Fagaceae family, which also includes oaks and chestnuts. The oldest fossils that can be assigned to the beech lineage are 81–82-million-year-old pollen grains from the Late Cretaceous of Wyoming, United States. The southern beeches (genus Nothofagus), historically thought to be closely related to beeches, are treated as members of a separate family, the Nothofagaceae (which remains a member of the order Fagales). They are found throughout the Southern Hemisphere in Australia, New Zealand, New Guinea, New Caledonia, as well as Argentina and Chile (principally Patagonia and Tierra del Fuego).

Species

Species treated in Denk et al. (2024) and listed in Plants of the World Online (POWO):

Natural and potential hybrids

Phylogeny

A cladogram of 11 beech species is shown below.

Fossil species

Numerous species have been named globally from the fossil record spanning from the Cretaceous to the Pleistocene.
†Fagus aburatoensis †Fagus alnitifolia †Fagus altaensis †Fagus ambigua †Fagus angusta †Fagus antipofii †Fagus aperta †Fagus arduinorum †Fagus aspera †Fagus aspera (jr homonym) †Fagus atlantica †Fagus attenuata †Fagus aurelianii †Fagus australis †Fagus betulifolia †Fagus bonnevillensis †Fagus castaneifolia †Fagus celastrifolia †Fagus ceretana †Fagus chamaephegos †Fagus chankaica †Fagus chiericii †Fagus chinensis †Fagus coalita †Fagus cordifolia †Fagus cretacea †Fagus decurrens †Fagus dentata †Fagus deucalionis †Fagus dubia †Fagus dubia (jr homonym) †Fagus echinata †Fagus eocenica †Fagus etheridgei †Fagus ettingshausenii †Fagus europaea †Fagus evenensis †Fagus faujasii †Fagus feroniae †Fagus florinii †Fagus forumlivii †Fagus friedrichii †Fagus gortanii †Fagus grandifoliiformis †Fagus gussonii †Fagus haidingeri †Fagus herthae †Fagus hitchcockii †Fagus hondoensis †Fagus hookeri †Fagus horrida †Fagus humata †Fagus idahoensis †Fagus inaequalis †Fagus incerta †Fagus integrifolia †Fagus intermedia †Fagus irvajamensis †Fagus japoniciformis †Fagus japonicoides †Fagus jobanensis †Fagus jonesii †Fagus juliae †Fagus kitamiensis †Fagus koraica †Fagus kraeuselii †Fagus kuprianoviae †Fagus lancifolia (nomen nudum) †Fagus langevinii †Fagus laptoneura †Fagus latissima †Fagus leptoneuron †Fagus macrophylla †Fagus maorica †Fagus marsillii †Fagus menzelii †Fagus microcarpa †Fagus miocenica †Fagus napanensis †Fagus nelsonica †Fagus oblonga †Fagus oblonga †Fagus obscura †Fagus olejnikovii †Fagus orbiculatum †Fagus orientaliformis †Fagus orientalis var fossilis †Fagus orientalis var palibinii †Fagus pacifica †Fagus palaeococcus †Fagus palaeocrenata †Fagus palaeograndifolia †Fagus palaeojaponica †Fagus pittmanii †Fagus pliocaenica (jr homonym) †Fagus pliocenica †Fagus polycladus †Fagus praelucida †Fagus praeninnisiana †Fagus praeulmifolia †Fagus prisca †Fagus pristina †Fagus producta †Fagus protojaponica †Fagus protolongipetiolata †Fagus protonucifera †Fagus pseudoferruginea †Fagus pygmaea †Fagus pyrrhae †Fagus salnikovii †Fagus sanctieugeniensis †Fagus saxonica †Fagus schofieldii †Fagus septembris †Fagus shagiana †Fagus stuxbergii †Fagus subferruginea †Fagus succinea †Fagus sylvatica var diluviana †Fagus sylvatica var pliocenica †Fagus tenella †Fagus uemurae †Fagus uotanii †Fagus vivianii †Fagus washoensis Fossil species formerly placed in Fagus include: †Alnus paucinervis †Castanea abnormalis †Fagopsis longifolia †Fagopsis undulata †Fagoxylon grandiporosum †Fagus-pollenites parvifossilis †Juglans ginannii (new name for F. ginannii) †Nothofagaphyllites novae-zealandiae †Nothofagus benthamii †Nothofagus dicksonii †Nothofagus lendenfeldii †Nothofagus luehmannii †Nothofagus magelhaenica †Nothofagus maidenii †Nothofagus muelleri †Nothofagus ninnisiana †Nothofagus risdoniana †Nothofagus ulmifolia †Nothofagus wilkinsonii †Trigonobalanus minima Etymology The name of the tree in Latin, fagus (from whence the generic epithet), is cognate with English "beech" and of Indo-European origin, and played an important role in early debates on the geographical origins of the Indo-European people, the beech argument. Greek φηγός (figós) is from the same root, but the word was transferred to the oak tree (e.g. Iliad 16.767) as a result of the absence of beech trees in southern Greece. Distribution and habitat Britain and Ireland Fagus sylvatica was a late entrant to Great Britain after the last glaciation, and may have been restricted to basic soils in the south of England. 
Some suggest that it was introduced by Neolithic tribes who planted the trees for their edible nuts. The beech is classified as a native in the south of England and as a non-native in the north where it is often removed from 'native' woods. Large areas of the Chilterns are covered with beech woods, which are habitat to the common bluebell and other flora. The Cwm Clydach National Nature Reserve in southeast Wales was designated for its beech woodlands, which are believed to be on the western edge of their natural range in this steep limestone gorge. Beech is not native to Ireland; however, it was widely planted in the 18th century and can become a problem shading out the native woodland understory. Beech is widely planted for hedging and in deciduous woodlands, and mature, regenerating stands occur throughout mainland Britain at elevations below about . The tallest and longest hedge in the world (according to Guinness World Records) is the Meikleour Beech Hedge in Meikleour, Perth and Kinross, Scotland. Continental Europe Fagus sylvatica is one of the most common hardwood trees in north-central Europe, in France constituting alone about 15% of all nonconifers. The Balkans are also home to the lesser-known oriental beech (F. orientalis) and Crimean beech (F. taurica). As a naturally growing forest tree, beech marks the important border between the European deciduous forest zone and the northern pine forest zone. This border is important for wildlife and fauna. In Denmark and Scania at the southernmost peak of the Scandinavian peninsula, southwest of the natural spruce boundary, it is the most common forest tree. It grows naturally in Denmark and southern Norway and Sweden up to about 57–59°N. The most northern known naturally growing (not planted) beech trees are found in a small grove north of Bergen on the west coast of Norway. Near the city of Larvik is the largest naturally occurring beech forest in Norway, Bøkeskogen. Some research suggests that early agriculture patterns supported the spread of beech in continental Europe. Research has linked the establishment of beech stands in Scandinavia and Germany with cultivation and fire disturbance, i.e. early agricultural practices. Other areas which have a long history of cultivation, Bulgaria for example, do not exhibit this pattern, so how much human activity has influenced the spread of beech trees is as yet unclear. The primeval beech forests of the Carpathians are also an example of a singular, complete, and comprehensive forest dominated by a single tree species - the beech tree. Forest dynamics here were allowed to proceed without interruption or interference since the last ice age. Nowadays, they are amongst the last pure beech forests in Europe to document the undisturbed postglacial repopulation of the species, which also includes the unbroken existence of typical animals and plants. These virgin beech forests and similar forests across 12 countries in continental Europe were inscribed on the UNESCO World Heritage List in 2007. North America The American beech (Fagus grandifolia) occurs across much of the eastern United States and southeastern Canada, with a disjunct sister species in Mexico (F. mexicana). There are the only extant (surviving) Fagus species in the Western Hemisphere. Before the Pleistocene Ice Age, it is believed to have spanned the entire width of the continent from the Atlantic Ocean to the Pacific but now is confined to the east of the Great Plains. F. 
grandifolia tolerates hotter climates than European species but is not planted much as an ornamental due to slower growth and less resistance to urban pollution. It most commonly occurs as an overstory component in the northern part of its range with sugar maple, transitioning to other forest types further south such as beech-magnolia. American beech is rarely encountered in developed areas except as a remnant of a forest that was cut down for land development. The dead brown leaves of the American beech remain on the branches until well into the following spring, when the new buds finally push them off. Asia East Asia is home to eight species of Fagus, only one of which (F. crenata) is occasionally planted in Western countries. Smaller than F. sylvatica and F. grandifolia, this beech is one of the most common hardwoods in its native range. Ecology Beech grows on a wide range of soil types, acidic or basic, provided they are not waterlogged. The tree canopy casts dense shade and thickens the ground with leaf litter. In North America, they can form beech-maple climax forests by partnering with the sugar maple. The beech blight aphid (Grylloprociphilus imbricator) is a common pest of American beech trees. Beeches are also used as food plants by some species of Lepidoptera. Beech bark is extremely thin and scars easily. Since the beech tree has such delicate bark, carvings, such as lovers' initials and other forms of graffiti, remain because the tree is unable to heal itself. Diseases Beech bark disease is a fungal infection that attacks the American beech through damage caused by scale insects. Infection can lead to the death of the tree. Beech leaf disease is a disease that affects American beeches spread by the newly discovered nematode, Litylenchus crenatae mccannii. This disease was first discovered in Lake County, Ohio, in 2012 and has now spread to over 41 counties in Ohio, Pennsylvania, New York, and Ontario, Canada. As of 2024, the disease has become widespread in Connecticut, Massachusetts and Rhode Island, and in portions of coastal New Hampshire and coastal and central Maine. Cultivation The beech most commonly grown as an ornamental tree is the European beech (Fagus sylvatica), widely cultivated in North America as well as its native Europe. Many varieties are in cultivation, notably the weeping beech F. sylvatica 'Pendula', several varieties of copper or purple beech, the fern-leaved beech F. sylvatica 'Asplenifolia', and the tricolour beech F. sylvatica 'Roseomarginata'. The columnar Dawyck beech (F. sylvatica 'Dawyck') occurs in green, gold, and purple forms, named after Dawyck Botanic Garden in the Scottish Borders, one of the four garden sites of the Royal Botanic Garden Edinburgh. Uses Wood Beech wood is an excellent firewood, easily split and burning for many hours with bright but calm flames. Slats of beech wood are washed in caustic soda to leach out any flavour or aroma characteristics and are spread around the bottom of fermentation tanks for Budweiser beer. This provides a complex surface on which the yeast can settle, so that it does not pile up, preventing yeast autolysis which would contribute off-flavours to the beer. Beech logs are burned to dry the malt used in German smoked beers. Beech is also used to smoke Westphalian ham, traditional andouille (an offal sausage) from Normandy, and some cheeses. Some drums are made from beech, which has a tone between those of maple and birch, the two most popular drum woods. 
The textile modal is a kind of rayon often made wholly from reconstituted cellulose of pulped beech wood. The European species Fagus sylvatica yields a tough, utility timber. It weighs about 720 kg per cubic metre and is widely used for furniture construction, flooring, and engineering purposes, in plywood and household items, but rarely as a decorative wood. The timber can be used to build chalets, houses, and log cabins. Beech wood is used for the stocks of military rifles when traditionally preferred woods such as walnut are scarce or unavailable or as a lower-cost alternative. Food The edible fruit of the beech tree, known as beechnuts or mast, is found in small burrs that drop from the tree in autumn. They are small, roughly triangular, and edible, with a bitter, astringent, or in some cases, mild and nut-like taste. According to the Roman statesman Pliny the Elder in his work Natural History, beechnut was eaten by the people of Chios when the town was besieged, writing of the fruit: "that of the beech is the sweetest of all; so much so, that, according to Cornelius Alexander, the people of the city of Chios, when besieged, supported themselves wholly on mast". They can also be roasted and pulverized into an adequate coffee substitute. The leaves can be steeped in liquor to give a light green/yellow liqueur. Books In antiquity, the bark of the beech tree was used by Indo-European people for writing-related purposes, especially in a religious context. Beech wood tablets were a common writing material in Germanic societies before the development of paper. The Old English bōc has the primary sense of "beech" but also a secondary sense of "book", and it is from bōc that the modern word derives. In modern German, the word for "book" is Buch, with Buche meaning "beech tree". In modern Dutch, the word for "book" is boek, with beuk meaning "beech tree". In Swedish, these words are the same, bok meaning both "beech tree" and "book". There is a similar relationship in some Slavic languages. In Russian and Bulgarian, the word for beech is бук (buk), while that for "letter" (as in a letter of the alphabet) is буква (bukva), while Serbo-Croatian and Slovene use "bukva" to refer to the tree. Other The pigment bistre was made from beech wood soot. Beech litter raking as a replacement for straw in animal husbandry was an old non-timber practice in forest management that once occurred in parts of Switzerland in the 17th century. Beech has been listed as one of the 38 plants whose flowers are used to prepare Bach flower remedies.
Biology and health sciences
Fagales
null
165261
https://en.wikipedia.org/wiki/Platanus
Platanus
Platanus ( ) is a genus consisting of a small number of tree species native to the Northern Hemisphere. They are the sole living members of the family Platanaceae. All mature members of Platanus are tall, reaching in height. The type species of the genus is the Oriental plane Platanus orientalis. All except for P. kerrii are deciduous, and most are found in riparian or other wetland habitats in the wild, though proving drought-tolerant in cultivation. The hybrid London plane (Platanus × hispanica) has proved particularly tolerant of urban conditions, and has been widely planted in London and elsewhere across the temperate world. They are often known in English as planes or plane trees. A formerly used name that is now rare is plantain tree (not to be confused with other, unrelated, species with the name). Some North American species are called sycamores (especially Platanus occidentalis), although the term is also used for several unrelated species of trees. The genus name Platanus comes from Ancient Greek , which referred to Platanus orientalis. Botany The flowers are reduced and are borne in balls (globose heads); 3–7 hairy sepals may be fused at the base, and the petals are 3–7 and are spatulate. Male and female flowers are separate, but borne on the same plant (monoecious). The number of heads in one cluster (inflorescence) is indicative of the species (see table below). The male flower has 3–8 stamens; the female has a superior ovary with 3–7 carpels. Plane trees are wind-pollinated. Male flower-heads fall off after shedding their pollen. After being pollinated, the female flowers become achenes that form an aggregate ball. The fruit is a multiple of achenes (plant systematics, Simpson M. G., 2006). Typically, the core of the ball is 1 cm in diameter and is covered with a net of mesh 1 mm, which can be peeled off. The ball is 2.5–4 cm in diameter and contains several hundred achenes, each of which has a single seed and is conical, with the point attached downward to the net at the surface of the ball. There is also a tuft of many thin stiff yellow-green bristle fibers attached to the base of each achene. These bristles help in wind dispersion of the fruits as in the dandelion. The leaves are simple and alternate. In the subgenus Platanus they have a palmate outline. The base of the leaf stalk (petiole) is enlarged and completely wraps around the young stem bud in its axil. The axillary bud is exposed only after the leaf falls off. The mature bark peels off or exfoliates easily in irregularly shaped patches, producing a mottled, scaly appearance. On old trunks, bark may not flake off, but thickens and cracks instead. Phylogeny There are two subgenera, subgenus Castaneophyllum containing the anomalous P. kerrii, and subgenus Platanus, with all the others; recent studies in Mexico have increased the number of accepted species in this subgenus. Within subgenus Platanus, evidence from both chloroplast and nuclear gene sequences suggests that the P. racemosa species complex in Western North America (including P. racemosa, P. gentryi, P. wrightii) is more closely related to the Eurasian P. orientalis than it is to the other North American species (P. mexicana sensu lato, including up to four species: P. chiapaensis, P. lindeniana, P. [×] mexicana sensu stricto, P. oaxacana; P. occidentalis s.l. with two [sub]species: P. occidentalis, P. palmeri; P. rzedowskii). 
The two groups form genetically and morphologically distinct evolutionary lineages (sister clades), informally called the “ANA clade” (Atlantic North American lineage) and “PNA-E clade” (Pacific North American-European lineage). Both lineages have been affected by reticulate evolutionary processes in the past (ancient or recent hybridization and introgression): Platanus palmeri (= P. occidentalis var. palmeri) – forming the southwesternmost populations of P. occidentalis s.l. – carries nuclear intron sequences (second intron of the Leafy gene) of PNA-E origin. It lacks the plastid haplotype specific for the northeastern populations (P. occidentalis s.str.) The internal transcribed spacers of the nuclear-encoded rRNA genes of P. occidentalis s.l. and P. rzedowskii include ANA-specific variants with functional 5.8S rDNA as well as PNA-E-specific variants showing signs of pseudogeny. The latter are shared with P. gentryi, the PNA-E species closest to the ANA clade area and the northern/ interior populations of P. mexicana s.l. This indicates that already the common ancestor of P. rzedowskii and P. occidentalis s.l. had been in contact with a member of the PNA-E clade. Likewise, P. rzedowskii from Nuevo León is a genetic mosaic, and may have originated from earlier hybridization within the ANA clade, between southernmost P. occidentalis s.l. (P. palmeri) and P. mexicana s.l., or their ancestors. Today the ranges of P. occidentalis s.l. and P. mexicana s.l. are mutually exclusive. Platanus rzedowskii is geographically and morphologically intermediate between P. occidentalis s.l. and P. mexicana s.l. Morphological reinvestigation including the originally collected material revealed that the interior populations of P. mexicana (northern Querétaro and northern Hidalgo; P. mexicana var. interior according Nixon & Poole) mark the hybrid zone between P. rzedowskii and P. mexicana s.l. and the (former) contact zone to the species of the PNA-E clade (P. gentryi, P. wrightii). Since the holotype of P. mexicana is from this zone and shows the characteristical intermediate morphology, P. mexicana s.str. would represent a nothospecies: P. × mexicana. The remaining populations of P. mexicana s.l., P. lindeniana, show no sign of introgression from either P. rzedowskii, P. occidentalis-palmeri or the Western North American species (P. racemosa species aggregate), with the exception of one heterozygotic P. oaxacana population from northcentral Oaxaca. The genus Platanus exemplarily illustrates the concept of a Coral of Life, a species network. Its modern-day species are not only the product of evolutionary dichotomies (cladogenesis), the splitting of an ancestral lineage into two (Tree of Life metaphor) but also evolutionary anastomoses: hybridization and introgression. The fossil record of leaves and fruit identifiable to Platanus begins in the Paleocene. Despite the geographic separation between North America and Old World, species from these continents will cross readily resulting in fertile hybrids such as the London plane, which is an anthropogenic hybrid (cultivar) between the North American P. occidentalis sensu stricto (ANA clade) and the Mediterranean P. orientalis (PNA-E clade). Widely used as a park tree across the Northern Hemisphere, it frequently backcrosses with both its parents. 
Species The following are named species of Platanus; not all are accepted by all authorities: Diseases Planes are susceptible to plane anthracnose (Apiognomonia veneta), a fungal disease that can defoliate the trees in some years. The most severe infections are associated with cold, wet spring weather. P. occidentalis and the other American species are the most susceptible, with P. orientalis the most resistant. The hybrid London plane is intermediate in resistance. Ceratocystis platani, a wilt disease, has become a significant problem in recent years in much of Europe. The North American species are mostly resistant to the disease, with which they probably coevolved, while the Old World species are highly sensitive. Other diseases such as powdery mildew occur frequently, but are of lesser importance. Platanus species are used as food plants by the larvae of some Lepidoptera species including Phyllonorycter platani and Setaceous Hebrew Character. In the 21st century a disease, commonly known as Massaria disease, has attacked plane trees across Europe. It is caused by the fungus Splanchnonema platani, and causes large lesions on the upper sides of branches. Effects on humans There have been cases of "platanus cough", symptoms of shortness of breath, coughing, and irritated eyes, which may affect several people in a place, and have led to initial suspicion of an attack with an irritant gas. After one such mass attack which affected schoolchildren in classrooms with open windows densely surrounded by plane tees, children had to be admitted to hospital, where they were treated and recovered without ill effects. It was found that the symptoms were due to the fine star-shaped trichomes (hairs) on all parts of platanus trees, which are broken off by strong wind after a prolonged dry period. The dust created causes direct irritation and scratchiness in the eyes, throat, and nose, but not the runny nose and itching eyes and nose caused by an allergy. The school incident took place after a dry period, with a fairly high temperature of , and wind blowing at . Protection against platanus cough is provided by avoiding contact and wearing protective glasses and masks under weather conditions promoting release of trichomes. When cleaning in an urban environment, sweeping up fallen leaves and branches can release hairs; cleaning by suction is preferred. It is not recommended that trees in cities be felled, as they are beneficial; in particular the platanus trichomes act as biofilters for air pollutants. Where there are urban concentrations presenting a risk, seasonal spraying of trees with a solution of apple pectin can prevent the star hair from breaking off. Uses The principal use of these trees is as ornamental trees, especially in urban areas and by roadsides. The London plane is particularly popular for this purpose. The American plane is cultivated sometimes for timber and investigations have been made into its use as a biomass crop. The oriental plane is widely used as an ornamental tree, and also has a number of minor medicinal uses. Cultural history Most significant aspects of cultural history apply to Platanus orientalis in the Old World. The tree is an important part of the literary scenery of Plato's dialogue Phaedrus. Because of Plato, the tree also played an important role in the scenery of Cicero's De Oratore. The trees also provided the shade under which Aristotle and Plato's famed philosophical schools were held. 
Handel's opera Serse has a famous aria, "Ombra mai fu", which the title character sings in praise of his favorite plane tree. The plane tree has been a frequent motif featured in Classical Chinese poetry as an embodiment of sorrowful sentiments due to its autumnal shedding of leaves. The legendary Dry Tree first recorded by Marco Polo was possibly a platanus. According to the legend, it marked the site of the battle between Alexander the Great and Darius III. The German camouflage pattern Platanenmuster ("plane tree pattern"), designed in 1937–1942 by Johann Georg Otto Schick, was the first dotted camouflage pattern.
Biology and health sciences
Others
null
165320
https://en.wikipedia.org/wiki/Jodrell%20Bank%20Observatory
Jodrell Bank Observatory
Jodrell Bank Observatory ( ) in Cheshire, England hosts a number of radio telescopes as part of the Jodrell Bank Centre for Astrophysics at the University of Manchester. The observatory was established in 1945 by Bernard Lovell, a radio astronomer at the university, to investigate cosmic rays after his work on radar in the Second World War. It has since played an important role in the research of meteoroids, quasars, pulsars, masers, and gravitational lenses, and was heavily involved with the tracking of space probes at the start of the Space Age. The main telescope at the observatory is the Lovell Telescope. Its diameter of makes it the third largest steerable radio telescope in the world. There are three other active telescopes at the observatory; the Mark II and and 7 m diameter radio telescopes. Jodrell Bank Observatory is the base of the Multi-Element Radio Linked Interferometer Network (MERLIN), a National Facility run by the University of Manchester on behalf of the Science and Technology Facilities Council. The Jodrell Bank Visitor Centre and an arboretum are in Lower Withington, and the Lovell Telescope and the observatory near Goostrey and Holmes Chapel. The observatory is reached from the A535. The Crewe to Manchester Line passes by the site, and Goostrey station is a short distance away. In 2019, the observatory became a UNESCO World Heritage Site. Early years Jodrell Bank was first used for academic purposes in 1939 when the University of Manchester's Department of Botany purchased three fields from the Leighs. It is named from a nearby rise in the ground, Jodrell Bank, which was named after William Jauderell, an archer whose descendants lived at the mansion that is now Terra Nova School. The site was extended in 1952 by the purchase of a farm from George Massey on which the Lovell Telescope was built. The site was first used for astrophysics in 1945, when Bernard Lovell used some equipment left over from World War II, including a gun laying radar, to investigate cosmic rays. The equipment was a GL II radar system working at a wavelength of 4.2 m, provided by J. S. Hey. He intended to use the equipment in Manchester, but electrical interference from the trams on Oxford Road prevented him from doing so. He moved the equipment to Jodrell Bank, south of the city, on 10 December 1945. Lovell's main research was transient radio echoes, which he confirmed were from ionized meteor trails by October 1946. The first staff were Alf Dean and Frank Foden who observed meteors with the naked eye while Lovell observed the electromagnetic signal using equipment. The first time Lovell turned the radar on – 14 December 1945 – the Geminids meteor shower was at a maximum. Over the next few years, Lovell accumulated more ex-military radio hardware, including a portable cabin, known as a "Park Royal" in the military (see Park Royal Vehicles). The first permanent building was near to the cabin and was named after it. Searchlight telescope A searchlight was loaned to Jodrell Bank in 1946 by the army; a broadside array, was constructed on its mount by J. Clegg. It consisted of 7 elements of Yagi–Uda antennas. It was used for astronomical observations in October 1946. On 9 and 10 October 1946, the telescope observed ionisation in the atmosphere caused by meteors in the Giacobinids meteor shower. When the antenna was turned by 90 degrees at the maximum of the shower, the number of detections dropped to the background level, proving that the transient signals detected by radar were from meteors. 
The telescope was then used to determine the radiant points for meteors. This was possible as the echo rate is at a minimum at the radiant point, and a maximum at 90 degrees to it. The telescope and other receivers on the site studied the auroral streamers that were visible in early August 1947. Transit Telescope The Transit Telescope was a parabolic reflector zenith telescope built in 1947. At the time, it was the world's largest radio telescope. It consisted of a wire mesh suspended from a ring of scaffold poles, which focussed radio signals on a focal point above the ground. The telescope mainly looked directly upwards, but the direction of the beam could be changed by small amounts by tilting the mast to change the position of the focal point. The focal mast was changed from timber to steel before construction was complete. The telescope was replaced by the steerable Lovell Telescope, and the Mark II telescope was subsequently built at the same location. The telescope could map a ± 15-degree strip around the zenith at 72 and 160 MHz, with a resolution at 160 MHz of 1 degree. It discovered radio noise from the Great Nebula in Andromeda – the first definite detection of an extragalactic radio source – and the remnants of Tycho's Supernova in the radio frequency; at the time it had not been discovered by optical astronomy. Lovell Telescope The "Mark I" telescope, now known as the Lovell Telescope, was the world's largest steerable dish radio telescope, in diameter, when it was constructed in 1957; it is now the third largest, after the Green Bank telescope in West Virginia and the Effelsberg telescope in Germany. Part of the gun turret mechanisms from the First World War battleships and were reused in the telescope's motor system. The telescope became operational in mid-1957, in time for the launch of the Soviet Union's Sputnik 1, the world's first artificial satellite. The telescope was the only one able to track Sputnik's booster rocket by radar; first locating it just before midnight on 12 October 1957, eight days after its launch. In the following years, the telescope tracked various space probes. Between 11 March and 12 June 1960, it tracked the United States' NASA-launched Pioneer 5 probe. The telescope sent commands to the probe, including those to separate it from its carrier rocket and turn on its more powerful transmitter when the probe was eight million miles away. It received data from the probe, the only telescope in the world capable of doing so. In February 1966, Jodrell Bank was asked by the Soviet Union to track its unmanned Moon lander Luna 9 and recorded on its facsimile transmission of photographs from the Moon's surface. The photographs were sent to the British press and published before the Soviets made them public. In 1969, the Soviet Union's Luna 15 was also tracked. A recording of the moment when Jodrell Bank's scientists observed the mission was released on 3 July 2009. With the support of Sir Bernard Lovell, the telescope tracked Russian satellites. Satellite and space probe observations were shared with the US Department of Defense satellite tracking research and development activity at Project Space Track. 
Tracking space probes only took a fraction of the Lovell telescope's observing time, and the remainder was used for scientific observations including using radar to measure the distance to the Moon and to Venus; observations of astrophysical masers around star-forming regions and giant stars; observations of pulsars (including the discovery of millisecond pulsars and the first pulsar in a globular cluster); and observations of quasars and gravitational lenses (including the detection of the first gravitational lens and the first Einstein ring). The telescope has also been used for SETI observations. Mark II and III telescopes The Mark II telescope is an elliptical radio telescope, with a major axis of and a minor axis of . It was constructed in 1964. As well as operating as a standalone telescope, it has been used as an interferometer with the Lovell Telescope, and is now primarily used as part of the MERLIN project. The Mark III telescope, the same size as the Mark II, was constructed to be transportable but it was never moved from Wardle, near Nantwich, where it was used as part of MERLIN. It was built in 1966 and decommissioned in 1996. Mark IV, V and VA telescope proposals The Mark IV, V and VA telescope proposals were put forward in the 1960s through to the 1980s to build even larger radio telescopes. The Mark IV proposal was for a diameter standalone telescope, built as a national project. The Mark V proposal was for a moveable telescope. The concept of this proposal was for a telescope on a railway line adjoining Jodrell Bank, but concerns about future levels of interference meant that a site in Wales would have been preferable. Design proposals by Husband and Co and Freeman Fox, who had designed the Parkes Observatory telescope in Australia, were put forward. The Mark VA was similar to the Mark V but with a smaller dish of and a design using prestressed concrete, similar to the Mark II (the previous two designs more closely resembled the Lovell telescope). None of the proposed telescopes was constructed, although design studies were carried out and scale models were made, partly because of the changing political climate, and partly due to the financial constraints of astronomical research in the UK. Also it became necessary to upgrade the Lovell Telescope to the Mark IA, which overran in terms of cost. Other single dishes A 50 ft (15 m) alt-azimuth dish was constructed in 1964 for astronomical research and to track the Zond 1, Zond 2, Ranger 6 and Ranger 7 space probes and Apollo 11. After an accident that irreparably damaged the 50 ft telescope's surface, it was demolished in 1982 and replaced with a more accurate telescope, the "42 ft". The 42 ft (12.8 m) dish is mainly used to observe pulsars, and continually monitors the Crab Pulsar. When the 42 ft was installed, a smaller dish, the "7 m" (actually 6.4 m, or 21 ft, in diameter) was installed and is used for undergraduate teaching. The 42 ft and 7 m telescopes were originally used at the Woomera Rocket Testing Range in South Australia. The 7 m was originally constructed in 1970 by the Marconi Company. A Polar Axis telescope was built in 1962. It had a circular 50 ft (15.2 m) dish on a polar mount, and was mostly used for moon radar experiments. It has been decommissioned. An reflecting optical telescope was donated to the observatory in 1951 but was not used much, and was donated to the Salford Astronomical Society around 1971. 
MERLIN The Multi-Element Radio Linked Interferometer Network (MERLIN) is an array of radio telescopes spread across England and the Welsh borders. The array is run from Jodrell Bank on behalf of the Science and Technology Facilities Council as a National Facility. The array consists of up to seven radio telescopes and includes the Lovell Telescope, the Mark II, Cambridge, Defford, Knockin, Darnhall, and Pickmere (previously known as Tabley). The longest baseline is 217 km, and MERLIN can operate at frequencies between 151 MHz and 24 GHz. At a wavelength of 6 cm (5 GHz frequency), MERLIN has a resolution of 50 milliarcseconds, which is comparable to that of the Hubble Space Telescope (HST) at optical wavelengths (a short calculation illustrating this figure follows below). Very Long Baseline Interferometry Jodrell Bank has been involved with Very Long Baseline Interferometry (VLBI) since the late 1960s; the Lovell telescope took part in the first transatlantic interferometer experiment in 1968, with other telescopes at Algonquin and Penticton in Canada. The Lovell Telescope and the Mark II telescope are regularly used for VLBI with telescopes across Europe (the European VLBI Network), giving a resolution of around 0.001 arcseconds. Square Kilometre Array In April 2011, Jodrell Bank was named as the location of the control centre for the planned Square Kilometre Array, or SKA Project Office (SPO). The SKA is planned by a collaboration of 20 countries and, when completed, is intended to be the most powerful radio telescope ever built. In April 2015 it was announced that Jodrell Bank would be the permanent home of the SKA headquarters for the period of operation expected for the telescope (over 50 years). Research The Jodrell Bank Centre for Astrophysics, of which the Observatory is a part, is one of the largest astrophysics research groups in the UK. About half of the research of the group is in the area of radio astronomy – including research into pulsars, the cosmic microwave background radiation, gravitational lenses, active galaxies and astrophysical masers. The group also carries out research at different wavelengths, looking into star formation and evolution, planetary nebulae and astrochemistry. The first director of Jodrell Bank was Bernard Lovell, who established the observatory in 1945. He was succeeded in 1980 by Sir Francis Graham-Smith, followed by Professor Rod Davies around 1990 and Professor Andrew Lyne in 1999. Professor Phil Diamond took over the role on 1 October 2006, at the time when the Jodrell Bank Centre for Astrophysics was formed. Prof Ralph Spencer was Acting Director during 2009 and 2010. In October 2010, Prof. Albert Zijlstra became Director of the Jodrell Bank Centre for Astrophysics. Professor Lucio Piccirillo was the Director of the Observatory from October 2010 to October 2011. Prof. Simon Garrington is the JBCA Associate Director for the Jodrell Bank Observatory. In 2016, Prof. Michael Garrett was appointed as the inaugural Sir Bernard Lovell chair of Astrophysics and Director of the Jodrell Bank Centre for Astrophysics. As Director of JBCA, Prof. Garrett also has overall responsibility for Jodrell Bank Observatory. In May 2017 Jodrell Bank entered into a partnership with the Breakthrough Listen initiative, under which information will be shared with Jodrell Bank's team, who wish to conduct an independent SETI search using the 76 m Lovell Telescope and the e-MERLIN array. There is an active development programme researching and constructing telescope receivers and instrumentation.
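The 50-milliarcsecond figure quoted above for MERLIN, and the ~0.001 arcsecond figure for European VLBI, are both roughly reproduced by the usual diffraction estimate θ ≈ λ/B for an interferometer of maximum baseline B. A minimal sketch of that arithmetic, assuming a 217 km MERLIN baseline and a representative 10,000 km VLBI baseline:

import math

def resolution_mas(wavelength_m, baseline_m):
    """Approximate interferometer resolution theta ~ lambda / B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

print(round(resolution_mas(0.06, 217_000.0)))        # ~57 mas at 6 cm on a 217 km baseline
print(round(resolution_mas(0.06, 10_000_000.0), 1))  # ~1.2 mas on a ~10,000 km VLBI baseline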
The observatory has been involved in the construction of several Cosmic Microwave Background experiments, including the Tenerife Experiment, which ran from the 1980s to 2000, and the amplifiers and cryostats for the Very Small Array. It has also constructed the front-end modules of the 30 and 44 GHz receivers for the Planck spacecraft. Receivers were also designed at Jodrell Bank for the Parkes Telescope in Australia. Visitor facilities, and events A visitors' centre, opened on 19 April 1971 by the Duke of Devonshire, attracted around 120,000 visitors per year. It covered the history of Jodrell Bank and had a planetarium and 3D theatre hosting simulated trips to Mars. Asbestos in the visitors' centre buildings led to its demolition in 2003 leaving a remnant of its far end. A marquee was set up in its grounds while a new science centre was planned. The plans were shelved when Victoria University of Manchester and UMIST merged to become the University of Manchester in 2004, leaving the interim centre, which received around 70,000 visitors a year. In October 2010, work on a new visitor centre started and the Jodrell Bank Discovery Centre opened on 11 April 2011. It includes an entrance building, the Planet Pavilion, a Space Pavilion for exhibitions and events, a glass-walled cafe with a view of the Lovell Telescope and an outdoor dining area, an education space, and landscaped gardens including the Galaxy Maze. A large orrery was installed in 2013. It does not, however, include a planetarium, though a small inflatable planetarium dome has been in use on the site in recent years. The visitor centre is open Tuesday to Sunday and Mondays during school and bank holidays and organises public outreach events, including public lectures, star parties, and "ask an astronomer" sessions. A path around the Lovell telescope is approximately 20 m from the telescope's outer railway, information boards explain how the telescope works and the research that is done with it. The arboretum, created in 1972, houses the UK's national collections of crab apple Malus and mountain ash Sorbus species, and the Heather Society's Calluna collection. The arboretum also has a small scale model of the Solar System, the scale is approximately 1:5,000,000,000. At Jodrell Bank, as part of the SpacedOut project, is the Sun in a 1:15,000,000 scale model of the Solar System covering Britain. On 7 July 2010, it was announced that the observatory was being considered for the 2011 United Kingdom Tentative List for World Heritage Site status. It was announced on 22 March 2011 that it was on the UK government's shortlist. In January 2018, it became the UK's candidate for World Heritage status. In July 2011 the visitor centre and observatory hosted "Live from Jodrell Bank - Transmission 001" – a rock concert with bands including The Flaming Lips, British Sea Power, Wave Machines, OK GO and Alice Gold. On 23 July 2012, Elbow performed live at the observatory and filmed a documentary of the event and the facility which was released as a live CD/DVD of the concert. On 6 July 2013, Transmission 4 featured Australian Pink Floyd, Hawkwind, The Time & Space Machine and The Lucid Dream. On 7 July 2013, Transmission 5 featured New Order, Johnny Marr, The Whip, Public Service Broadcasting, Jake Evans and Hot Vestry. On 30 August 2013, Transmission 6 featured Sigur Ros, Polca and Daughter. On 31 August 2013, Jodrell Bank hosted a concert performed by the Hallé Orchestra to commemorate what would have been Lovell's 100th birthday. 
As well as a number of operatic performances during the day, the evening Halle performance saw numbers such as themes from Star Trek, Star Wars and Doctor Who among others. The main Lovell telescope was rotated to face the onlooking crowd and used as a huge projection screen showing various animated planetary effects. During the interval the 'screen' was used to show a history of Lovell's work and Jodrell Bank. There is an astronomy podcast from the observatory, named The Jodcast. The BBC television programme Stargazing Live was hosted in the control room of the observatory from 2011 to 2016. Since 2016, the observatory hosted Bluedot, a music and science festival, featuring musical acts such as Public Service Broadcasting, The Chemical Brothers, as well as talks by scientists and scientific communicators such as Jim Al-Khalili and Richard Dawkins. Threat of closure On 3 March 2008, it was reported that Britain's Science and Technology Facilities Council (STFC), faced with an £80 million shortfall in its budget, was considering withdrawing its planned £2.7 million annual funding of Jodrell Bank's e-MERLIN project. The project, which aimed to replace the microwave links between Jodrell Bank and a number of other radio telescopes with high-bandwidth fibre-optic cables, greatly increasing the sensitivity of observations, was seen as critical to the survival of the facility. Bernard Lovell said "It will be a disaster … The fate of the Jodrell Bank telescope is bound up with the fate of e-MERLIN. I don't think the establishment can survive if the e-MERLIN funding is cut". On 9 July 2008, it was reported that, following an independent review, STFC had reversed its initial position and would now guarantee funding of £2.5 million annually for three years. Fictional references Jodrell Bank has been mentioned in several works of fiction, including Doctor Who (The Tenth Planet, Remembrance of the Daleks, "The Poison Sky", "The Eleventh Hour", "Spyfall") and Birthday Boy by David Baddiel. It was intended to be a filming location for Logopolis (Tom Baker's final Doctor Who serial) but budget restrictions prevented this and another location with a superimposed model of a radio telescope was used instead. It was also mentioned in The Hitchhiker's Guide to the Galaxy (as well as The Hitchhiker's Guide to the Galaxy film), The Creeping Terror and Meteor. Jodrell Bank was also featured heavily in the 1983 music video "Secret Messages" by Electric Light Orchestra and also "Are We Ourselves?" by The Fixx. The Prefab Sprout song Technique (from debut album Swoon) opens with the line "Her husband works at Jodrell Bank/He's home late in the morning". The observatory is the site of several episodes in the novel Boneland by the local novelist Alan Garner (2012), and the central character, Colin Whisterfield, is an astrophysicist on its staff. Jodrell bank made an appearance in the CBBC series Bitsa. Appraisal Since 13 July 1988 the Lovell Telescope has been designated as a Grade I listed building. On 10 July 2017 the Mark II Telescope was also designated at the same grade. On the same date five other buildings on the site were designated at Grade II; namely the Searchlight Telescope, the Control Building, the Park Royal Building, the Electrical Workshop, and the Link Hut. Grade I is the highest of the three grades of listing, and is applied to buildings that are of "exceptional interest", and Grade II, the lowest grade, is applied to buildings "of special interest". 
At the 43rd Session of the UNESCO World Heritage Committee in Baku on 7 July 2019, the Jodrell Bank Observatory was adopted as a World Heritage Site on the basis of four criteria. Criterion (i): Jodrell Bank Observatory is a masterpiece of human creative genius related to its scientific and technical achievements. Criterion (ii): Jodrell Bank Observatory represents an important interchange of human values, over a span of time and on a global scale, on developments. Criterion (iv): Jodrell Bank Observatory represents an outstanding example of a technological ensemble which illustrates a significant stage in human history. Criterion (vi): Jodrell Bank Observatory is directly and tangibly associated with events and ideas of outstanding universal significance.
Technology
Ground-based observatories
null
165384
https://en.wikipedia.org/wiki/Curie%20temperature
Curie temperature
In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism is lost at a critical temperature. The force of magnetism is determined by the magnetic moment, a dipole moment within an atom that originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction. Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law. In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature. Curie temperatures of materials History That heating destroys magnetism was already described in De Magnete (1600):Iron filings, after being heated for a long time, are attracted by a loadstone, yet not so strongly or from so great a distance as when not heated. A loadstone loses some of its virtue by too great a heat; for its humour is set free, whence its peculiar nature is marred. (Book 2, Chapter 23). Magnetic moments At the atomic level, there are two contributors to the magnetic moment, the electron magnetic moment and the nuclear magnetic moment. Of these two terms, the electron magnetic moment dominates, and the nuclear magnetic moment is insignificant. At higher temperatures, electrons have higher thermal energy. This has a randomizing effect on aligned magnetic domains, leading to the disruption of order, and the phenomena of the Curie point. Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic materials have different intrinsic magnetic moment structures. At a material's specific Curie temperature (), these properties change. The transition from antiferromagnetic to paramagnetic (or vice versa) occurs at the Néel temperature (), which is analogous to Curie temperature. Materials with magnetic moments that change properties at the Curie temperature Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic structures are made up of intrinsic magnetic moments. If all the electrons within the structure are paired, these moments cancel out due to their opposite spins and angular momenta. Thus, even with an applied magnetic field, these materials have different properties and no Curie temperature. Paramagnetic A material is paramagnetic only above its Curie temperature. Paramagnetic materials are non-magnetic when a magnetic field is absent and magnetic when a magnetic field is applied. 
When a magnetic field is absent, the material has disordered magnetic moments; that is, the magnetic moments are asymmetrical and not aligned. When a magnetic field is present, the magnetic moments are temporarily realigned parallel to the applied field; the magnetic moments are symmetrical and aligned. The magnetic moments being aligned in the same direction are what causes an induced magnetic field. For paramagnetism, this response to an applied magnetic field is positive and is known as magnetic susceptibility. The magnetic susceptibility only applies above the Curie temperature for disordered states. Sources of paramagnetism (materials which have Curie temperatures) include: All atoms that have unpaired electrons; Atoms that have inner shells that are incomplete in electrons; Free radicals; Metals. Above the Curie temperature, the atoms are excited, and the spin orientations become randomized but can be realigned by an applied field, i.e., the material becomes paramagnetic. Below the Curie temperature, the intrinsic structure has undergone a phase transition, the atoms are ordered, and the material is ferromagnetic. The paramagnetic materials' induced magnetic fields are very weak compared with ferromagnetic materials' magnetic fields. Ferromagnetic Materials are only ferromagnetic below their corresponding Curie temperatures. Ferromagnetic materials are magnetic in the absence of an applied magnetic field. When a magnetic field is absent the material has spontaneous magnetization which is a result of the ordered magnetic moments; that is, for ferromagnetism, the atoms are symmetrical and aligned in the same direction creating a permanent magnetic field. The magnetic interactions are held together by exchange interactions; otherwise thermal disorder would overcome the weak interactions of magnetic moments. The exchange interaction has a zero probability of parallel electrons occupying the same point in time, implying a preferred parallel alignment in the material. The Boltzmann factor contributes heavily as it prefers interacting particles to be aligned in the same direction. This causes ferromagnets to have strong magnetic fields and high Curie temperatures of around . Below the Curie temperature, the atoms are aligned and parallel, causing spontaneous magnetism; the material is ferromagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition. Ferrimagnetic Materials are only ferrimagnetic below their corresponding Curie temperature. Ferrimagnetic materials are magnetic in the absence of an applied magnetic field and are made up of two different ions. When a magnetic field is absent the material has a spontaneous magnetism which is the result of ordered magnetic moments; that is, for ferrimagnetism one ion's magnetic moments are aligned facing in one direction with certain magnitude and the other ion's magnetic moments are aligned facing in the opposite direction with a different magnitude. As the magnetic moments are of different magnitudes in opposite directions there is still a spontaneous magnetism and a magnetic field is present. Similar to ferromagnetic materials the magnetic interactions are held together by exchange interactions. The orientations of moments however are anti-parallel which results in a net momentum by subtracting their momentum from one another. 
Below the Curie temperature the atoms of each ion are aligned anti-parallel with moments of different magnitudes, causing a spontaneous magnetism; the material is ferrimagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition. Antiferromagnetic and the Néel temperature Materials are only antiferromagnetic below their corresponding Néel temperature or magnetic ordering temperature, TN. This is similar to the Curie temperature, as above the Néel temperature the material undergoes a phase transition and becomes paramagnetic. That is, the thermal energy becomes large enough to destroy the microscopic magnetic ordering within the material. It is named after Louis Néel (1904–2000), who received the 1970 Nobel Prize in Physics for his work in the area. The material has equal magnetic moments aligned in opposite directions, resulting in a zero net magnetic moment and a net magnetism of zero at all temperatures below the Néel temperature. Antiferromagnetic materials are weakly magnetic in the absence or presence of an applied magnetic field. Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions, preventing thermal disorder from overcoming the weak interactions of magnetic moments. When disorder occurs, it is at the Néel temperature. Curie–Weiss law The Curie–Weiss law is an adapted version of Curie's law. The Curie–Weiss law is a simple model derived from a mean-field approximation; this means it works well when the material's temperature T is much greater than its corresponding Curie temperature TC (T ≫ TC). It fails, however, to describe the magnetic susceptibility χ in the immediate vicinity of the Curie point, because of correlations in the fluctuations of neighbouring magnetic moments. Neither Curie's law nor the Curie–Weiss law holds for T < TC. Curie's law for a paramagnetic material gives the susceptibility as χ = C/T, where the Curie constant C is a material-specific quantity determined by the density of magnetic moments and their effective moment. The Curie–Weiss law is then derived from Curie's law to be χ = C/(T − TC), where TC = Cλ and λ is the Weiss molecular field constant. For the full derivation see Curie–Weiss law. Physics Approaching Curie temperature from above As the Curie–Weiss law is an approximation, a more accurate model is needed when the temperature T approaches the material's Curie temperature TC. Magnetic susceptibility occurs above the Curie temperature. An accurate model of the critical behaviour of the magnetic susceptibility uses a critical exponent γ: χ ∝ 1/(T − TC)^γ. The critical exponent differs between materials, and for the mean-field model it is taken as γ = 1. As the susceptibility is inversely proportional to (T − TC), when T approaches TC the denominator tends to zero and the magnetic susceptibility approaches infinity, allowing magnetism to occur. This is a spontaneous magnetism, which is a property of ferromagnetic and ferrimagnetic materials. Approaching Curie temperature from below Magnetism depends on temperature, and spontaneous magnetism occurs below the Curie temperature. An accurate model of the critical behaviour of the spontaneous magnetization uses a critical exponent β: M ∝ (TC − T)^β for T < TC. The critical exponent differs between materials, and for the mean-field model it is taken as β = 1/2. The spontaneous magnetization approaches zero as the temperature increases towards the material's Curie temperature.
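As a small numerical companion to the Curie–Weiss discussion above, the sketch below evaluates χ = C/(T − TC) for temperatures above TC and shows the divergence as T approaches TC from above. The Curie constant and Curie temperature used here are arbitrary illustrative values, not data for any particular material.

# Hedged sketch of the Curie-Weiss law: chi = C / (T - Tc), valid only for T > Tc.
C_CURIE = 1.0     # Curie constant, arbitrary units (illustrative)
T_C = 1000.0      # Curie temperature in kelvin (illustrative)

def curie_weiss_susceptibility(T):
    if T <= T_C:
        raise ValueError("Curie-Weiss law applies only above the Curie temperature")
    return C_CURIE / (T - T_C)

for T in (2000.0, 1500.0, 1100.0, 1010.0, 1001.0):
    print(T, curie_weiss_susceptibility(T))  # the susceptibility grows as T approaches Tc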
Approaching absolute zero (0 kelvin) The spontaneous magnetism, occurring in ferromagnetic, ferrimagnetic, and antiferromagnetic materials, approaches zero as the temperature increases towards the material's Curie temperature. Spontaneous magnetism is at its maximum as the temperature approaches 0 K. That is, the magnetic moments are completely aligned and at their strongest magnitude of magnetism due to the lack of thermal disturbance. In paramagnetic materials thermal energy is sufficient to overcome the ordered alignments. As the temperature approaches 0 K, the entropy decreases to zero, that is, the disorder decreases and the material becomes ordered. This occurs without the presence of an applied magnetic field and obeys the third law of thermodynamics. Both Curie's law and the Curie–Weiss law fail as the temperature approaches 0 K. This is because they depend on the magnetic susceptibility, which only applies when the state is disordered. Gadolinium sulfate continues to satisfy Curie's law at 1 K. Between 0 and 1 K the law fails to hold and a sudden change in the intrinsic structure occurs at the Curie temperature. Ising model of phase transitions The Ising model is mathematically based and can analyse the critical points of phase transitions in ferromagnetic order due to spins of electrons having magnitudes of ±1/2. The spins interact with their neighbouring dipole electrons in the structure, and here the Ising model can predict their behaviour with each other. This model is important for solving and understanding the concepts of phase transitions and hence solving for the Curie temperature. As a result, many different dependencies that affect the Curie temperature can be analysed. For example, the surface and bulk properties depend on the alignment and magnitude of spins, and the Ising model can determine the effects of magnetism in this system. One should note that in 1D the Curie (critical) temperature for a magnetic order phase transition is found to be at zero temperature, i.e. the magnetic order takes over only at T = 0. In 2D, the critical temperature, below which a finite magnetization appears, can be calculated from the relation sinh(2J1/kBTC) sinh(2J2/kBTC) = 1, where J1 and J2 are the interaction energies along the two lattice directions. Weiss domains and surface and bulk Curie temperatures Materials' structures consist of intrinsic magnetic moments which are separated into domains called Weiss domains. This can result in ferromagnetic materials having no spontaneous magnetism, as domains could potentially balance each other out. The position of particles can therefore have different orientations around the surface than in the main part (bulk) of the material. This property directly affects the Curie temperature, as there can be a bulk Curie temperature and a different surface Curie temperature for a material. This allows the surface to remain ferromagnetic above the bulk Curie temperature when the main state is disordered, i.e. ordered and disordered states occur simultaneously. The surface and bulk properties can be predicted by the Ising model, and electron capture spectroscopy can be used to detect the electron spins and hence the magnetic moments on the surface of the material. An average total magnetism is taken from the bulk and surface temperatures to calculate the Curie temperature of the material, noting that the bulk contributes more. The spin angular momentum of an electron is either +ħ/2 or −ħ/2 due to it having a spin of 1/2, which gives a specific size of magnetic moment to the electron: the Bohr magneton.
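For the isotropic square lattice (J1 = J2 = J), the relation above reduces to sinh(2J/kBTC) = 1, whose solution is the well-known Onsager result kBTC = 2J/ln(1 + √2) ≈ 2.269 J. A minimal numerical sketch, with J and kB set to 1 for illustration:

import math

# Solve sinh(2*J/(kB*T)) = 1 for T by bisection, taking J = kB = 1.
def f(T):
    return math.sinh(2.0 / T) - 1.0

lo, hi = 1.0, 5.0          # f(lo) > 0 and f(hi) < 0 bracket the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print(0.5 * (lo + hi))                        # ~2.269, the 2D Ising critical temperature
print(2.0 / math.log(1.0 + math.sqrt(2.0)))   # closed-form check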
Electrons orbiting around the nucleus in a current loop create a magnetic field which depends on the Bohr magneton and magnetic quantum number. Therefore, the magnetic moments are related between angular and orbital momentum and affect each other. Angular momentum contributes twice as much to magnetic moments than orbital. For terbium which is a rare-earth metal and has a high orbital angular momentum the magnetic moment is strong enough to affect the order above its bulk temperatures. It is said to have a high anisotropy on the surface, that is it is highly directed in one orientation. It remains ferromagnetic on its surface above its Curie temperature (219 K) while its bulk becomes antiferromagnetic and then at higher temperatures its surface remains antiferromagnetic above its bulk Néel Temperature (230 K) before becoming completely disordered and paramagnetic with increasing temperature. The anisotropy in the bulk is different from its surface anisotropy just above these phase changes as the magnetic moments will be ordered differently or ordered in paramagnetic materials. Changing a material's Curie temperature Composite materials Composite materials, that is, materials composed from other materials with different properties, can change the Curie temperature. For example, a composite which has silver in it can create spaces for oxygen molecules in bonding which decreases the Curie temperature as the crystal lattice will not be as compact. The alignment of magnetic moments in the composite material affects the Curie temperature. If the material's moments are parallel with each other, the Curie temperature will increase and if perpendicular the Curie temperature will decrease as either more or less thermal energy will be needed to destroy the alignments. Preparing composite materials through different temperatures can result in different final compositions which will have different Curie temperatures. Doping a material can also affect its Curie temperature. The density of nanocomposite materials changes the Curie temperature. Nanocomposites are compact structures on a nano-scale. The structure is built up of high and low bulk Curie temperatures, however will only have one mean-field Curie temperature. A higher density of lower bulk temperatures results in a lower mean-field Curie temperature, and a higher density of higher bulk temperature significantly increases the mean-field Curie temperature. In more than one dimension the Curie temperature begins to increase as the magnetic moments will need more thermal energy to overcome the ordered structure. Particle size The size of particles in a material's crystal lattice changes the Curie temperature. Due to the small size of particles (nanoparticles) the fluctuations of electron spins become more prominent, which results in the Curie temperature drastically decreasing when the size of particles decreases, as the fluctuations cause disorder. The size of a particle also affects the anisotropy causing alignment to become less stable and thus lead to disorder in magnetic moments. The extreme of this is superparamagnetism which only occurs in small ferromagnetic particles. In this phenomenon, fluctuations are very influential causing magnetic moments to change direction randomly and thus create disorder. 
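The Bohr magneton referred to above is fixed entirely by fundamental constants, μB = eħ/2me. A quick numerical check using rounded CODATA values:

# Bohr magneton from fundamental constants: mu_B = e * hbar / (2 * m_e).
E_CHARGE = 1.602176634e-19     # elementary charge, C
HBAR = 1.054571817e-34         # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

mu_B = E_CHARGE * HBAR / (2.0 * M_ELECTRON)
print(mu_B)  # ~9.274e-24 J/T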
The Curie temperature of nanoparticles is also affected by the crystal lattice structure: body-centred cubic (bcc), face-centred cubic (fcc), and a hexagonal structure (hcp) all have different Curie temperatures due to magnetic moments reacting to their neighbouring electron spins. fcc and hcp have tighter structures and as a results have higher Curie temperatures than bcc as the magnetic moments have stronger effects when closer together. This is known as the coordination number which is the number of nearest neighbouring particles in a structure. This indicates a lower coordination number at the surface of a material than the bulk which leads to the surface becoming less significant when the temperature is approaching the Curie temperature. In smaller systems the coordination number for the surface is more significant and the magnetic moments have a stronger effect on the system. Although fluctuations in particles can be minuscule, they are heavily dependent on the structure of crystal lattices as they react with their nearest neighbouring particles. Fluctuations are also affected by the exchange interaction as parallel facing magnetic moments are favoured and therefore have less disturbance and disorder, therefore a tighter structure influences a stronger magnetism and therefore a higher Curie temperature. Pressure Pressure changes a material's Curie temperature. Increasing pressure on the crystal lattice decreases the volume of the system. Pressure directly affects the kinetic energy in particles as movement increases causing the vibrations to disrupt the order of magnetic moments. This is similar to temperature as it also increases the kinetic energy of particles and destroys the order of magnetic moments and magnetism. Pressure also affects the density of states (DOS). Here the DOS decreases causing the number of electrons available to the system to decrease. This leads to the number of magnetic moments decreasing as they depend on electron spins. It would be expected because of this that the Curie temperature would decrease; however, it increases. This is the result of the exchange interaction. The exchange interaction favours the aligned parallel magnetic moments due to electrons being unable to occupy the same space in time and as this is increased due to the volume decreasing the Curie temperature increases with pressure. The Curie temperature is made up of a combination of dependencies on kinetic energy and the DOS. The concentration of particles also affects the Curie temperature when pressure is being applied and can result in a decrease in Curie temperature when the concentration is above a certain percent. Orbital ordering Orbital ordering changes the Curie temperature of a material. Orbital ordering can be controlled through applied strains. This is a function that determines the wave of a single electron or paired electrons inside the material. Having control over the probability of where the electron will be allows the Curie temperature to be altered. For example, the delocalised electrons can be moved onto the same plane by applied strains within the crystal lattice. The Curie temperature is seen to increase greatly due to electrons being packed together in the same plane, they are forced to align due to the exchange interaction and thus increases the strength of the magnetic moments which prevents thermal disorder at lower temperatures. 
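The link drawn above between packing and the Curie temperature can be illustrated with the standard mean-field estimate, in which kBTC is proportional to zJS(S+1), with z the coordination number and J the exchange constant. The sketch below compares lattices only through this proportionality (the exact prefactor depends on the Hamiltonian convention); J and S are assumed illustrative values.

# Mean-field illustration: Tc proportional to z * J * S * (S + 1).
# Only ratios between lattices are meaningful here; J and S are assumed.
J = 1.0   # exchange constant, arbitrary units
S = 0.5   # spin-1/2

def mean_field_tc(z):
    return z * J * S * (S + 1.0)

for name, z in (("bcc", 8), ("fcc", 12), ("hcp", 12)):
    print(name, z, mean_field_tc(z))
# fcc and hcp (z = 12) come out 1.5x higher than bcc (z = 8), matching the
# qualitative statement that tighter packing raises the Curie temperature.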
Curie temperature in ferroelectric materials In analogy to ferromagnetic and paramagnetic materials, the term Curie temperature (TC) is also applied to the temperature at which a ferroelectric material transitions to being paraelectric. Hence, TC is the temperature where ferroelectric materials lose their spontaneous polarisation as a first- or second-order phase change occurs. In the case of a second-order transition, the Curie–Weiss temperature T0, which defines the maximum of the dielectric constant, is equal to the Curie temperature. However, the Curie temperature can be 10 K higher than T0 in the case of a first-order transition. Ferroelectric and dielectric Materials are only ferroelectric below their corresponding transition temperature. Ferroelectric materials are all pyroelectric and therefore have a spontaneous electric polarisation as their structures are unsymmetrical. Ferroelectric materials' polarisation is subject to hysteresis (Figure 4); that is, it is dependent on its past state as well as its current state. As an electric field is applied, the dipoles are forced to align and a polarisation is created; when the electric field is removed, the polarisation remains. The hysteresis loop depends on temperature; as a result, as the temperature is increased and reaches the transition temperature, the two curves become one curve, as shown in the dielectric polarisation (Figure 5). Relative permittivity A modified version of the Curie–Weiss law applies to the dielectric constant, also known as the relative permittivity: above the transition it varies approximately as εr ≈ C/(T − T0), where C is a material-dependent Curie constant and T0 is the Curie–Weiss temperature. Applications A heat-induced ferromagnetic–paramagnetic transition is used in magneto-optical storage media for erasing and writing of new data. Famous examples include the Sony MiniDisc format as well as the now-obsolete CD-MO format. Curie point electromagnets have been proposed and tested for actuation mechanisms in passive safety systems of fast breeder reactors, where control rods are dropped into the reactor core if the actuation mechanism heats up beyond the material's Curie point. Other uses include temperature control in soldering irons and stabilizing the magnetic field of tachometer generators against temperature variation.
Physical sciences
Phase transitions
Physics
165423
https://en.wikipedia.org/wiki/Digestion
Digestion
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogencarbonate (), which provides the ideal conditions of pH for amylase to work; and other electrolytes (, , ). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid while also aiding lubrication. Hydrochloric acid provides acidic pH for pepsin. At the same time protein digestion is occurring, mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which is further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake. When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum where it mixes with digestive enzymes from the pancreas and bile juice from the liver and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine) where the pH is slightly acidic (about 5.6 ~ 6.9). Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon are also absorbed into the blood in the colon. Absorption of water, simple sugar and alcohol also takes place in stomach. Waste material (feces) is eliminated from the rectum during defecation. Digestive system Digestive systems take many forms. 
There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled. Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted to a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient. Secretion systems Bacteria use several systems to obtain nutrients from other organisms in the environments. Channel transport system In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer membrane protein. This secretion system transports various chemical species, from ions, drugs, to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V, (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa. Molecular syringe A type III secretion system means that a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium. Conjugation machinery The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system. In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant. The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria. 
Release of outer membrane vesicles In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective. Gastrovascular cavity The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut. In a plant such as the Venus flytrap that can make its own food through photosynthesis, it does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat. Phagosome A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells. Specialised organs and behaviours To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others. Beaks Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak. The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid. Tongue The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is used to roll food particles into a bolus before being transported down the esophagus through peristalsis. The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. 
The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract. Teeth Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape, milk and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This is the ability of sensation when chewing, for example if we were to bite into something too hard for our teeth, such as a chipped plate mixed in food, our teeth send a message to our brain and we realise that it cannot be chewed, so we stop trying. The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat. Crop A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds. Certain insects may have a crop or enlarged esophagus. Abomasum Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size. Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream. The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine. Specialised behaviours Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation. Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins). Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. 
Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten. Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components. In earthworms An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically breakdown the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body. Overview of vertebrate digestion In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps: Ingestion: placing food into the mouth (entry of food in the digestive system), Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures, Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation. Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.). The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digests before excretion. In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. 
The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation. Human digestion process The human gastrointestinal tract is around long. Food digestion physiology varies between individuals and upon other factors such as the characteristics of the food and size of the meal, and the process of digestion normally takes between 24 and 72 hours. Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by the mechanical mastication and swallowed into the esophagus from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin which could damage the stomach lining, but mucus and bicarbonates are secreted for protection. In the stomach further release of enzymes break down the food further and this is combined with the churning action of the stomach. Mainly proteins are digested in stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place and this is helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in emulsification of fats and also activates lipases. In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus. Neural and biochemical control mechanisms Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase. The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata. After this it is routed through the vagus nerve and release of acetylcholine. Gastric secretion at this phase rises to 40% of maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal (secretes acid) and G cell (secretes gastrin) activity via D cell secretion of somatostatin. The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, presence of food in stomach and decrease in pH. Distention activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine. The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes. 
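The figure quoted above, gastric acid at about 0.5% hydrochloric acid giving a pH of 1–3, can be sanity-checked with elementary acid arithmetic, treating the HCl as fully dissociated. This is a rough, idealized check (real gastric juice is buffered and diluted by food and mucus), and reading 0.5% as weight per volume is an assumption.

import math

# Rough check: 0.5% w/v HCl, assumed fully dissociated in water.
GRAMS_PER_LITRE = 5.0     # 0.5% w/v (assumed interpretation)
HCL_MOLAR_MASS = 36.46    # g/mol

molarity = GRAMS_PER_LITRE / HCL_MOLAR_MASS   # ~0.14 mol/L
ph = -math.log10(molarity)                    # ~0.9

print(round(molarity, 3), round(ph, 2))
# ~0.9 is an idealized lower limit; dilution and buffering in the stomach bring
# the working pH up into the 1-3 range quoted in the text.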
Breakdown into nutrients Protein digestion Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides. Fat digestion Digestion of some fats can begin in the mouth where lingual lipase breaks down some short chain lipids into diglycerides. However fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver which helps in the emulsification of fats for absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results a mixture of fatty acids, mono- and di-glycerides, but no glycerol. Carbohydrate digestion In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine. Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent. Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine. DNA and RNA digestion DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas. Non-destructive digestion Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes. After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood. Digestive hormones There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. 
Connections to metabolic control (largely the glucose-insulin system) have been uncovered. Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in stomach. The secretion is inhibited by low pH. Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme. Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme. Gastric inhibitory peptide (GIP) – is in the duodenum and decreases the stomach churning in turn slowing the emptying in the stomach. Another function is to induce insulin secretion. Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin. Significance of pH Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment. The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens. In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.
Biology and health sciences
Biology
null
165450
https://en.wikipedia.org/wiki/Phytophthora%20infestans
Phytophthora infestans
Phytophthora infestans is an oomycete or water mold, a fungus-like microorganism that causes the serious potato and tomato disease known as late blight or potato blight. Early blight, caused by Alternaria solani, is also often called "potato blight". Late blight was a major culprit in the 1840s European, the 1845–1852 Irish, and the 1846 Highland potato famines. The organism can also infect some other members of the Solanaceae. The pathogen is favored by moist, cool environments: sporulation is optimal at in water-saturated or nearly saturated environments, and zoospore production is favored at temperatures below . Lesion growth rates are typically optimal at a slightly warmer temperature range of . Etymology The genus name Phytophthora comes from the Greek (), meaning "plant" – plus the Greek (), meaning "decay, ruin, perish". The species name infestans is the present participle of the Latin verb , meaning "attacking, destroying", from which the word "to infest" is derived. The name Phytophthora infestans was coined in 1876 by the German mycologist Heinrich Anton de Bary (1831–1888). Life cycle, signs and symptoms The asexual life cycle of Phytophthora infestans is characterized by alternating phases of hyphal growth, sporulation, sporangia germination (either through zoospore release or direct germination, i.e. germ tube emergence from the sporangium), and the re-establishment of hyphal growth. There is also a sexual cycle, which occurs when isolates of opposite mating type (A1 and A2, see below) meet. Hormonal communication triggers the formation of the sexual spores, called oospores. The different types of spores play major roles in the dissemination and survival of P. infestans. Sporangia are spread by wind or water and enable the movement of P. infestans between different host plants. The zoospores released from sporangia are biflagellated and chemotactic, allowing further movement of P. infestans on water films found on leaves or soils. Both sporangia and zoospores are short-lived, in contrast to oospores which can persist in a viable form for many years. People can observe P. infestans produce dark green, then brown then black spots on the surface of potato leaves and stems, often near the tips or edges, where water or dew collects. The sporangia and sporangiophores appear white on the lower surface of the foliage. As for tuber blight, the white mycelium often shows on the tubers' surface. Under ideal conditions, P. infestans completes its life cycle on potato or tomato foliage in about five days. Sporangia develop on the leaves, spreading through the crop when temperatures are above and humidity is over 75–80% for 2 days or more. Rain can wash spores into the soil where they infect young tubers, and the spores can also travel long distances on the wind. The early stages of blight are easily missed. Symptoms include the appearance of dark blotches on leaf tips and plant stems. White mold will appear under the leaves in humid conditions and the whole plant may quickly collapse. Infected tubers develop grey or dark patches that are reddish brown beneath the skin, and quickly decay to a foul-smelling mush caused by the infestation of secondary soft bacterial rots. Seemingly healthy tubers may rot later when in store. P. infestans survives poorly in nature apart from on its plant hosts. Under most conditions, the hyphae and asexual sporangia can survive for only brief periods in plant debris or soil, and are generally killed off during frosts or very warm weather. 
The exceptions involve oospores, and hyphae present within tubers. The persistence of viable pathogen within tubers, such as those that are left in the ground after the previous year's harvest or left in cull piles is a major problem in disease management. In particular, volunteer plants sprouting from infected tubers are thought to be a major source of inoculum (or propagules) at the start of a growing season. This can have devastating effects by destroying entire crops. Mating types The mating types are broadly divided into A1 and A2. Until the 1980s populations could only be distinguished by virulence assays and mating types, but since then more detailed analysis has shown that mating type and genotype are substantially decoupled. These types each produce a mating hormone of their own. Pathogen populations are grouped into clonal lineages of these mating types and includes: A1 A1 produces a mating hormone, a diterpene α1. Clonal lineages of A1 include: CN-1, -2, -4, -5, -6, -7, -8 – mtDNA haplotype Ia, China in 1996–97 – Ia, China, 1996–97 – Ia, China, 2004 – IIb, China, 2000 & 2002 – IIa, China, 2004–09 – Ia/IIb, China, 2004–09 – (only presumed to be A1), mtDNA haplo Ia subtype , Japan, Philippines, India, China, Malaysia, Nepal, present some time before 1950 – Ia, India, Nepal, 1993 – Ia, India, 1993 JP-2/SIB-1/RF006 – mtDNA haplo IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, China, Korea, Thailand, 1996–present – IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present – IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present sensu Zhang (not to be confused with #KR-1 sensu Gotoh below) – IIa, Korea, 2002–04 KR_1_A1 – mtDNA haplo unknown, Korea, 2009–16 – Ia, China, 2004 – Ia, India, Nepal, 1993, 1996–97 – Ia, Nepal, 1997 – Ia, Nepal, 1999–2000 – (Also A2, see #the A2 type of NP2 below) Ia, Nepal, 1999–2000 (not to be confused with #US-1 below) – Ib, Nepal, 1999–2000 (not to be confused with #NP3/US-1 above) – Ib, China, India, Nepal, Japan, Taiwan, Thailand, Vietnam, 1940–2000 – Ia, Nepal, 1999–2000 – mtDNA haplo unknown, Nepal, 1999–2000 – IIb, Taiwan, Korea, Vietnam, 1998–2016 – IIb, China, 2002 & 2004 – IIa, Korea, 2003–04 – Ia, Indonesia, 2016–19 A2 Discovered by John Niederhauser in the 1950s, in the Toluca Valley in Central Mexico, while working for the Rockefeller Foundation's Mexican Agriculture Program. Published in Niederhauser 1956. A2 produces a mating hormone α2. Clonal lineages of A2 include: CN02 – See #13_A2/CN02 below – with mtDNA haplotype H-20 – IIa, Japan, Korea, Indonesia, late 1980s–present sensu Gotoh (not to be confused with #KR-1 sensu Zhang above) – IIa, differs from JP-1 by one RG57 band, Korea, 1992 – mtDNA haplo unknown, Korea, 2009–16 – Ia, China, 2001 – (Also A1, see #the A1 type of NP2 above) Ia, Nepal, 1999–2000 – Ib, Nepal, 1999–2000 – Ia, Nepal, 1999–2000 – Ia, Thailand, China, Nepal, 1994 & 1997 Unknown – Ib, India, 1996–2003 – Brazil – IIa, Korea, 2002–03 /CN02 – Ia, China, India, Bangladesh, Nepal, Pakistan, Myanmar, 2005–19 Self-fertile A self-fertile type was present in China between 2009 and 2013. Physiology is the in P. infestans. Hosts respond with autophagy upon detection of this elicitor, Liu et al. 2005 finding this to be the only alternative to mass hypersensitivity leading to mass programmed cell death. Genetics P. infestans is diploid, with about 8–10 chromosomes, and in 2009 scientists completed the sequencing of its genome. 
The genome was found to be considerably larger (240 Mbp) than that of most other Phytophthora species whose genomes have been sequenced; P. sojae has a 95 Mbp genome and P. ramorum had a 65 Mbp genome. About 18,000 genes were detected within the P. infestans genome. It also contained a diverse variety of transposons and many gene families encoding for effector proteins that are involved in causing pathogenicity. These proteins are split into two main groups depending on whether they are produced by the water mold in the symplast (inside plant cells) or in the apoplast (between plant cells). Proteins produced in the symplast included RXLR proteins, which contain an arginine-X-leucine-arginine (where X can be any amino acid) sequence at the amino terminus of the protein. Some RXLR proteins are avirulence proteins, meaning that they can be detected by the plant and lead to a hypersensitive response which restricts the growth of the pathogen. P. infestans was found to encode around 60% more of these proteins than most other Phytophthora species. Those found in the apoplast include hydrolytic enzymes such as proteases, lipases and glycosylases that act to degrade plant tissue, enzyme inhibitors to protect against host defence enzymes and necrotizing toxins. Overall the genome was found to have an extremely high repeat content (around 74%) and to have an unusual gene distribution in that some areas contain many genes whereas others contain very few. The pathogen shows high allelic diversity in many isolates collected in Europe. This may be due to widespread trisomy or polyploidy in those populations. Research Study of P. infestans presents sampling difficulties in the United States. It occurs only sporadically and usually has significant founder effects due to each epidemic starting from introduction of a single genotype. Origin and diversity The highlands of central Mexico are considered by many to be the center of origin of P. infestans, although others have proposed its origin to be in the Andes, which is also the origin of potatoes. A recent study evaluated these two alternate hypotheses and found conclusive support for central Mexico being the center of origin. Support for Mexico specifically the Toluca Valley comes from multiple observations including the fact that populations are genetically most diverse in Mexico, late blight is observed in native tuber-bearing Solanum species, populations of the pathogen are in Hardy–Weinberg equilibrium, the two mating (see § Mating types above) types occur in a 1:1 ratio, and detailed phylogeographic and evolutionary studies. Furthermore, the closest relatives of P. infestans, namely P. mirabilis and P. ipomoeae are endemic to central Mexico. On the other hand, the only close relative found in South America, namely P. andina, is a hybrid that does not share a single common ancestor with P. infestans. Finally, populations of P. infestans in South America lack genetic diversity and are clonal. Migrations from Mexico to North America or Europe have occurred several times throughout history, probably linked to the movement of tubers. Until the 1970s, the A2 mating type was restricted to Mexico, but now in many regions of the world both A1 and A2 isolates can be found in the same region. The co-occurrence of the two mating types is significant due to the possibility of sexual recombination and formation of oospores, which can survive the winter. Only in Mexico and Scandinavia, however, is oospore formation thought to play a role in overwintering. 
In other parts of Europe, increasing genetic diversity has been observed as a consequence of sexual reproduction. This is notable since different forms of P. infestans vary in their aggressiveness on potato or tomato, in sporulation rate, and in sensitivity to fungicides. Variation in such traits also occurs in North America; however, importation of new genotypes from Mexico appears to be the predominant cause of genetic diversity there, as opposed to sexual recombination within potato or tomato fields. In 1976, a summer drought in Europe caused a potato production shortfall, and eating potatoes were imported to fill it. It is thought that this was the vehicle for mating type A2 to reach the rest of the world. Until then there had been little diversity: the population consisted of the US-1 strain, with only one type each of mating type, mtDNA, restriction fragment length polymorphism, and di-locus isozyme. Then, in 1980, greater diversity and the A2 type suddenly appeared in Europe. In 1981 it was found in the Netherlands and the United Kingdom, in 1985 in Sweden, in the early 1990s in Norway and Finland, in 1996 in Denmark, and in 1999 in Iceland. In the UK, new A1 lineages only replaced the old lineage by the end of the 1980s, and A2 spread even more slowly, with Britain having low levels and Ireland (north and Republic) having none-to-trace detections through the 1990s. Many of the strains that have appeared outside Mexico since the 1980s have been more aggressive, leading to increased crop losses. In Europe the populations have been tracked since 2013 by the EuroBlight network. Some of the differences between strains may be related to variation in the RXLR effectors that are present. Disease management P. infestans is still a difficult disease to control. There are many chemical options in agriculture for the control of damage to the foliage as well as the fruit (for tomatoes) and the tuber (for potatoes). A few of the most common foliar-applied fungicides are Ridomil, a Gavel/SuperTin tank mix, and Previcur Flex. All of the aforementioned fungicides need to be tank-mixed with a broad-spectrum fungicide, such as mancozeb or chlorothalonil, not just for resistance management but also because the potato plants will be attacked by other pathogens at the same time. If adequate field scouting occurs and late blight is found soon after disease development, localized patches of potato plants can be killed with a desiccant (e.g. paraquat) through the use of a backpack sprayer. This management technique can be thought of as a field-scale hypersensitive response, similar to what occurs in some plant-viral interactions whereby cells surrounding the initial point of infection are killed in order to prevent proliferation of the pathogen. If infected tubers make it into a storage bin, there is a very high risk to the storage life of the entire bin. Once in storage, there is not much that can be done besides emptying the parts of the bin that contain tubers infected with Phytophthora infestans. To increase the probability of successfully storing potatoes from a field where late blight was known to occur during the growing season, some products can be applied just prior to entering storage (e.g., Phostrol). Around the world, the disease causes approximately $6 billion of damage to crops each year. 
Resistant plants Breeding for resistance, particularly in potato plants, has had limited success, in part due to difficulties in crossing cultivated potato with its wild relatives, which are the source of potential resistance genes. In addition, most resistance genes work only against a subset of P. infestans isolates, since effective plant disease resistance results only when the pathogen expresses an RXLR effector gene that matches the corresponding plant resistance (R) gene; effector-R gene interactions trigger a range of plant defenses, such as the production of compounds toxic to the pathogen. Potato and tomato varieties vary in their susceptibility to blight. Most early varieties are very vulnerable; they should be planted early so that the crop matures before blight starts (usually in July in the Northern Hemisphere). Many old crop varieties, such as King Edward potato, are also very susceptible but are grown because they are wanted commercially. Maincrop varieties which are very slow to develop blight include Cara, Stirling, Teena, Torridon, Remarka, and Romano. Some so-called resistant varieties can resist some strains of blight and not others, so their performance may vary depending on which are around. These crops have had polygenic resistance bred into them, and are known as "field resistant". New varieties, such as Sarpo Mira and Sarpo Axona, show great resistance to blight even in areas of heavy infestation. Defender is an American cultivar whose parentage includes Ranger Russet and Polish potatoes resistant to late blight. It is a long white-skinned cultivar with both foliar and tuber resistance to late blight. Defender was released in 2004. Genetic engineering may also provide options for generating resistant cultivars. A resistance gene effective against most known strains of blight has been identified from a wild relative of the potato, Solanum bulbocastanum, and introduced by genetic engineering into cultivated varieties of potato. This is an example of cisgenic genetic engineering. Melatonin in the plant/P. infestans co-environment reduces the stress tolerance of the parasite. Reducing inoculum Blight can be controlled by limiting the source of inoculum. Only good-quality seed potatoes and tomatoes obtained from certified suppliers should be planted. Often discarded potatoes from the previous season and self-sown tubers can act as sources of inoculum. Compost, soil or potting medium can be heat-treated to kill oomycetes such as Phytophthora infestans. The recommended sterilisation temperature for oomycetes is for 30 minutes. Environmental conditions There are several environmental conditions that are conducive to P. infestans. An example of this took place in the United States during the 2009 growing season. With colder than average temperatures and greater than average rainfall for the season, there was a major infestation of tomato plants, specifically in the eastern states. Weather forecasting systems, such as BLITECAST, recommend the use of fungicides to prevent an epidemic if the following conditions occur as the canopy of the crop closes. A Beaumont period is a period of 48 consecutive hours, in at least 46 of which the hourly readings of temperature and relative humidity at a given place have not been less than a threshold temperature and 75%, respectively. A Smith period is at least two consecutive days on which the minimum temperature is at or above a threshold and on each of which there are at least 11 hours when the relative humidity is greater than 90%. 
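As a purely illustrative sketch, and not part of BLITECAST or any published forecasting tool, the Smith-period rule above can be expressed as a simple check over hourly weather readings. All class and function names are invented for the example, and the temperature threshold is left as a parameter because its exact value is not given in the text.

# Hypothetical sketch of a Smith-period check over hourly weather readings.
# The temperature threshold is a parameter (its value is not stated above);
# the 11-hour / 90% relative-humidity criterion follows the definition given.

from dataclasses import dataclass
from typing import List

@dataclass
class HourlyReading:
    temperature_c: float
    relative_humidity: float  # percent

def day_meets_smith_criteria(day: List[HourlyReading],
                             min_temp_c: float,
                             humid_hours_required: int = 11,
                             humidity_threshold: float = 90.0) -> bool:
    """True if one day's readings keep the minimum temperature at or above
    min_temp_c and include at least humid_hours_required hours with relative
    humidity above humidity_threshold."""
    if min(r.temperature_c for r in day) < min_temp_c:
        return False
    humid_hours = sum(1 for r in day if r.relative_humidity > humidity_threshold)
    return humid_hours >= humid_hours_required

def is_smith_period(days: List[List[HourlyReading]], min_temp_c: float) -> bool:
    """A Smith period requires two consecutive qualifying days."""
    return len(days) >= 2 and all(
        day_meets_smith_criteria(day, min_temp_c) for day in days[:2])

Under these assumptions, a decision-support script would simply flag a warning whenever two consecutive days both pass the check.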
The Beaumont and Smith periods have traditionally been used by growers in the United Kingdom, with different criteria developed by growers in other regions. The Smith period has been the preferred system used in the UK since its introduction in the 1970s. Based on these conditions and other factors, several tools have been developed to help growers manage the disease and plan fungicide applications. Often these are deployed as part of decision support systems accessible through web sites or smart phones. Several studies have attempted to develop systems for real-time detection via flow cytometry or microscopy of airborne sporangia collected in air samplers. Whilst these methods show potential to allow detection of sporangia in advance of the occurrence of detectable disease symptoms on plants, and would thus be useful in enhancing existing decision support systems, none have been commercially deployed to date. Use of fungicides Fungicides for the control of potato blight are normally used only in a preventative manner, optionally in conjunction with disease forecasting. In susceptible varieties, fungicide applications may sometimes be needed weekly. An early spray is most effective. The choice of fungicide can depend on the nature of local strains of P. infestans. Metalaxyl is a fungicide that was marketed for use against P. infestans, but suffered serious resistance issues when used on its own. In some regions of the world during the 1980s and 1990s, most strains of P. infestans became resistant to metalaxyl, but in subsequent years many populations shifted back to sensitivity. To reduce the occurrence of resistance, it is strongly advised to use single-target fungicides such as metalaxyl along with carbamate compounds. A combination of other compounds is recommended for managing metalaxyl-resistant strains. These include mandipropamid, chlorothalonil, fluazinam, triphenyltin, mancozeb, and others. In the United States, the Environmental Protection Agency has approved oxathiapiprolin for use against late blight. In African smallholder production, fungicide application can be necessary up to once every three days. In organic production In the past, copper(II) sulfate solution (called 'bluestone') was used to combat potato blight. Copper pesticides remain in use on organic crops, both in the form of copper hydroxide and copper sulfate. Given the dangers of copper toxicity, other organic control options that have been shown to be effective include horticultural oils, phosphorous acids, and rhamnolipid biosurfactants, while sprays containing "beneficial" microbes such as Bacillus subtilis or compounds that encourage the plant to produce defensive chemicals (such as knotweed extract) have not performed as well. During the 2008 crop year, many of the organic potatoes produced in the United Kingdom and certified by the Soil Association were sprayed with a copper pesticide to control potato blight. According to the Soil Association, the total copper that can be applied to organic land is /year. Control of tuber blight Ridging is often used to reduce tuber contamination by blight. This normally involves piling soil or mulch around the stems of the potato plant, meaning the pathogen has farther to travel to get to the tuber. Another approach is to destroy the canopy around five weeks before harvest, using a contact herbicide or sulfuric acid to burn off the foliage. Eliminating infected foliage reduces the likelihood of tuber infection. 
Historical impact The first recorded instances of the disease were in the United States, in Philadelphia and New York City in early 1843. Winds then spread the spores, and in 1845 it was found from Illinois to Nova Scotia, and from Virginia to Ontario. It crossed the Atlantic Ocean with a shipment of seed potatoes for Belgian farmers in 1845. The disease was first identified in Europe around Kortrijk, Belgium, in June 1845, and resulted in the Flemish potato harvest failing that summer, with yields declining 75–80% and an estimated forty thousand deaths in the locale. All of the potato-growing countries in Europe would be affected within a year. The effect of Phytophthora infestans in Ireland in 1845–52 was one of the factors which caused more than one million people to starve to death and forced another two million to emigrate. Most commonly referenced is the Great Irish Famine, during the late 1840s. Implicated in Ireland's fate was the island's disproportionate dependency on a single variety of potato, the Irish Lumper. The lack of genetic variability created a susceptible host population for the organism after the blight strains originating in the Chiloé Archipelago replaced earlier potatoes of Peruvian origin in Europe. During the First World War, all of the copper in Germany was used for shell casings and electric wire, and therefore none was available for making copper sulfate to spray potatoes. A major late blight outbreak on potato in Germany therefore went untreated, and the resulting scarcity of potatoes contributed to the deaths from the blockade. Since 1941, Eastern Africa has been suffering potato production losses because of strains of P. infestans from Europe. France, Canada, the United States, and the Soviet Union researched P. infestans as a biological weapon in the 1940s and 1950s. Potato blight was one of more than 17 agents that the United States researched as potential biological weapons before the nation suspended its biological weapons program. Whether a weapon based on the pathogen would be effective is questionable, due to the difficulties in delivering viable pathogen to an enemy's fields, and the role of uncontrollable environmental factors in spreading the disease. Late blight (A2 type) has not yet been detected in Australia, and strict biosecurity measures are in place. The disease has been seen in China, India and south-east Asian countries. A large outbreak of P. infestans occurred on tomato plants in the Northeast United States in 2009. In light of the periodic epidemics of P. infestans ever since its first emergence, it may be regarded as a periodically emerging pathogen, or a periodically "re-emerging pathogen".
Biology and health sciences
SAR supergroup
Plants
165585
https://en.wikipedia.org/wiki/Lock%20%28water%20navigation%29
Lock (water navigation)
A lock is a device used for raising and lowering boats, ships and other watercraft between stretches of water of different levels on river and canal waterways. The distinguishing feature of a lock is a fixed chamber in which the water level can be varied; whereas in a caisson lock, a boat lift, or on a canal inclined plane, it is the chamber itself (usually then called a caisson) that rises and falls. Locks are used to make a river more easily navigable, or to allow a canal to cross land that is not level. Later canals used more and larger locks to allow a more direct route to be taken. Pound lock A pound lock is most commonly used on canals and rivers today. A pound lock has a chamber with gates at both ends that control the level of water in the pound. In contrast, an earlier design with a single gate was known as a flash lock. Pound locks were first used in China during the Song dynasty (960–1279 CE), having been pioneered by the Song politician and naval engineer Qiao Weiyue in 984. They replaced earlier double slipways that had caused trouble and are mentioned by the Chinese polymath Shen Kuo (1031–1095) in his book Dream Pool Essays (published in 1088), and fully described in the Chinese historical text Song Shi (compiled in 1345): The distance between the two locks was rather more than 50 paces, and the whole space was covered with a great roof like a shed. The gates were 'hanging gates'; when they were closed the water accumulated like a tide until the required level was reached, and then when the time came it was allowed to flow out. The water level could differ by at each lock and in the Grand Canal the level was raised in this way by . In medieval Europe a sort of pound lock was built in 1373 at Vreeswijk, Netherlands. This pound lock serviced many ships at once in a large basin. Yet the first true pound lock was built in 1396 at Damme near Bruges, Belgium. The Italian Bertola da Novate (c. 1410–1475) constructed 18 pound locks on the Naviglio di Bereguardo (part of the Milan canal system sponsored by Francesco Sforza) between 1452 and 1458. In Ancient Egypt, the river-locks was probably part of the Canal of the Pharaohs: Ptolemy II is credited by some for being the first to solve the problem of keeping the Nile free of salt water when his engineers invented the lock around 274/273 BC. Basic construction and operation All pound locks have three elements: A watertight chamber connecting the upper and lower canals, and large enough to enclose one or more boats. The position of the chamber is fixed, but its water level can vary. A gate (often a pair of "pointing" half-gates) at each end of the chamber. A gate is opened to allow a boat to enter or leave the chamber; when closed, the gate is watertight. A set of lock gear to empty or fill the chamber as required. This is usually a simple valve (traditionally, a flat panel (paddle) lifted by manually winding a rack and pinion mechanism) which allows water to drain into or out of the chamber. Larger locks may use pumps. The principle of operating a lock is simple. For instance, if a boat travelling downstream finds the lock already full of water: The entrance gates are opened and the boat moves in. The entrance gates are closed. A valve is opened, this lowers the boat by draining water from the chamber. The exit gates are opened and the boat moves out. If the lock were empty, the boat would have had to wait 5 to 10 minutes while the lock was filled. 
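As a purely illustrative sketch, and not any canal authority's operating software, the downstream sequence just listed can be written as a tiny state machine; every name below is invented for the example, and the timings are deliberately omitted. The upstream passage described next is the same sequence with filling and draining swapped.

# Illustrative sketch of one pound-lock cycle for a boat heading downstream.
# All names are hypothetical; no real control system is being described.

class PoundLock:
    def __init__(self, full: bool = True):
        self.full = full  # True: water at the upper level; False: at the lower level

    def lock_down(self):
        """Pass a boat travelling downstream through the lock."""
        if not self.full:
            self.fill()              # boat waits while the chamber is filled first
        self.open_gates("upper")     # entrance gates open and the boat moves in
        self.close_gates("upper")
        self.drain()                 # a valve drains the chamber, lowering the boat
        self.open_gates("lower")     # exit gates open and the boat moves out
        self.close_gates("lower")

    def fill(self):
        self.full = True

    def drain(self):
        self.full = False

    def open_gates(self, end: str):
        print(f"opening {end} gates")

    def close_gates(self, end: str):
        print(f"closing {end} gates")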
For a boat travelling upstream, the process is reversed; the boat enters the empty lock, and then the chamber is filled by opening a valve that allows water to enter the chamber from the upper level. The whole operation will usually take between 10 and 20 minutes, depending on the size of the lock and whether the water in the lock was originally set at the boat's level. Boaters approaching a lock are usually pleased to meet another boat coming towards them, because this boat will have just exited the lock on their level and therefore set the lock in their favour, saving about 5 to 10 minutes. However, this is not true for staircase locks, where it is quicker for boats to go through in convoy, and it also uses less water. Details and terminology Rise The rise is the change in water level in the lock. The two deepest locks on the English canal system are Bath deep lock on the Kennet and Avon Canal and Tuel Lane Lock on the Rochdale Canal, which both have a rise of nearly . Both locks are amalgamations of two separate locks, which were combined when the canals were restored to accommodate changes in road crossings. By comparison, the Carrapatelo and Valeira locks on the Douro river in Portugal, which are long and wide, have maximum lifts of respectively. The two Ardnacrusha locks near Limerick on the Shannon navigation in Ireland have a rise of . The upper chamber rises and is connected to the lower chamber by a tunnel, which, when descending, does not become visible until the chamber is nearly empty. Pound A pound is the level stretch of water between two locks (also known as a reach). Cill The cill, also spelled sill, is a narrow horizontal ledge protruding a short way into the chamber from below the upper gates. Allowing the rear of the boat to "hang" on the cill is the main danger when descending a lock, and the position of the forward edge of the cill is usually marked on the lock side by a white line. The edge of the cill is usually curved, protruding less in the center than at the edges. In some locks, there is a piece of oak about thick which protects the solid part of the lock cill. On the Oxford Canal it is called a babbie; on the Grand Union Canal it is referred to as the cill bumper. Some canal operation authorities, primarily in the United States and Canada, call the ledge a miter sill (mitre sill in Canada). Gates Gates are the watertight doors which seal off the chamber from the upper and lower pounds. Each end of the chamber is equipped with a gate, or pair of half-gates (traditionally made of oak or elm, but now usually made of steel). The most common arrangement, usually called miter gates, was invented by Leonardo da Vinci sometime around the late 15th century. Paddle On the old Erie Canal, there was a danger of injury when operating the paddles: water, on reaching a certain position, would push the paddles with a force which could tear the windlass (or handle) out of one's hands or, if one was standing in the wrong place, could knock one into the canal, leading to injuries and drownings. Windlass ("lock key") On the Chesapeake and Ohio Canal, the lockkeepers were required to remove the windlasses from all lock paddles at night, to prevent unauthorized use. Swell or swelling A swell was caused by suddenly opening the paddle valves in the lock gates, or when emptying a lock. To help boats traveling downstream exit a lock, the locksman would sometimes open the paddles to create a swell, which would help "flush" the boat out of the lock. 
A boatsman might ask for a back swell, that is, to open and shut the paddles a few times to create some waves, to help him get off the bank where he was stuck. If boats ran aground (from being overloaded) they sometimes asked passing crews to tell the upstream lock to give them an extra heavy swell, which consisted of opening all the paddles on the lock gate, creating a surge that affected the whole pound below. On the Erie Canal, some loaded boats needed a swell to get out of the lock. Lumber boats in particular, being top heavy, would list to one side and get stuck in the lock, and needed a swell to get them out. Some lockkeepers would give a swell to anyone to help them on the way, but some would ask for money for the swell. The Erie Canal management did not like swelling for two reasons. First, it used too much water, lowering the level of the pound above and sometimes causing boats to run aground. In addition, it raised the water level on the pound below, causing some boats to strike bridges or get stuck. Snubbing posts On horse-drawn and mule-drawn canals, snubbing posts were used to slow or stop a boat in the lock. A 200-ton boat moving at a few miles an hour could destroy the lock gate. To prevent this, a rope was wound around the snubbing post as the boat entered the lock. Pulling on the rope slowed the boat, due to the friction of the rope against the post. A rope in diameter and about long was typically used on the Erie Canal to snub a boat in a lock. One incident, which took place in June 1873 on the Chesapeake and Ohio Canal, involved the boat the Henry C. Flagg and its drunk captain. That boat was already leaking; the crew, having partially pumped the water out, entered Lock 74, moving in front of another boat. Because they failed to snub the boat, it crashed into and knocked out the downstream gates. The outrush of water from the lock caused the upstream gates to slam shut, breaking them also, and sending a cascade of water over the boat, sinking it. This suspended navigation on the canal for 48 hours until the lock gates could be replaced and the boat removed from the lock. Variations Composite locks To economise, especially where good stone would be prohibitively expensive or difficult to obtain, composite locks were made, i.e. they were constructed using rubble or inferior stone, dressing the inside walls of the lock with wood so as not to abrade the boats. This was done, for instance, on the Chesapeake and Ohio Canal with the locks near the Paw Paw Tunnel, and also on the Chenango Canal. Powered operation On large modern canals, especially very large ones such as ship canals, the gates and paddles are too large to be hand operated, and are operated by hydraulic or electrical equipment. On the Caledonian Canal the lock gates were operated by man-powered capstans, one connected by chains to open the gate and another to draw it closed. By 1968 these had been replaced by hydraulic power acting through steel rams. Fish ladders The construction of locks (or weirs and dams) on rivers obstructs the passage of fish. Some fish such as lampreys, trout and salmon go upstream to spawn. Measures such as a fish ladder are often taken to counteract this. Navigation locks also have the potential to be operated as fishways to provide increased access for a range of biota. Special cases Doubled, paired or twinned locks Locks can be built side by side on the same waterway. This is variously called doubling, pairing, or twinning. The Panama Canal has three sets of double locks. 
Doubling gives advantages in speed, avoiding hold-ups at busy times and increasing the chance of a boat finding a lock set in its favour. There can also be water savings: the locks may be of different sizes, so that a small boat does not need to empty a large lock; or each lock may be able to act as a side pond (water-saving basin) for the other. In this latter case, the word used is usually "twinned": here indicating the possibility of saving water by synchronising the operation of the chambers so that some water from the emptying chamber helps to fill the other. This facility has long been withdrawn on the English canals, although the disused paddle gear can sometimes be seen, as at Hillmorton on the Oxford Canal. Elsewhere they are still in use; a pair of twinned locks was opened in 2014 on the Dortmund–Ems Canal near Münster, Germany. The once-famous staircase at Lockport, New York, was also a doubled set of locks. Five twinned locks allowed east- and west-bound boats to climb or descend the Niagara Escarpment, a considerable engineering feat in the nineteenth century. While Lockport today has two large steel locks, half of the old twin stair acts as an emergency spillway and can still be seen, with the original lock gates having been restored in early 2016. Lock flights Loosely, a flight of locks is simply a series of locks in close-enough proximity to be identified as a single group. For many reasons, a flight of locks is preferable to the same number of locks spread more widely: crews are put ashore and picked up once, rather than multiple times; transition involves a concentrated burst of effort, rather than a continually interrupted journey; a lock keeper may be stationed to help crews through the flight quickly; and where water is in short supply, a single pump can recycle water to the top of the whole flight. The need for a flight may be determined purely by the lie of the land, but it is possible to group locks purposely into flights by using cuttings or embankments to "postpone" the height change. Examples: Caen Hill locks, Devizes. "Flight" is not synonymous with "Staircase" (see below). A set of locks is only a staircase if successive lock chambers share a gate (i.e. do not have separate top and bottom gates with a pound between them). Most flights are not staircases, because each chamber is a separate lock (with its own upper and lower gates), there is a navigable pound (however short) between each pair of locks, and the locks are operated in the conventional way. However, some flights include (or consist entirely of) staircases. On the Grand Union (Leicester) Canal, the Watford flight consists of a four-chamber staircase and three separate locks; and the Foxton flight consists entirely of two adjacent 5-chamber staircases. Staircase locks Where a very steep gradient has to be climbed, a lock staircase is used. There are two types of staircase, "real" and "apparent". A "real" staircase can be thought of as a "compressed" flight, where the intermediate pounds have disappeared, and the upper gate of one lock is also the lower gate of the one above it. However, it is incorrect to use the terms staircase and flight interchangeably: because of the absence of intermediate pounds, operating a staircase is very different from operating a flight. It can be more useful to think of a staircase as a single lock with intermediate levels (the top gate is a normal top gate, and the intermediate gates are all as tall as the bottom gate). 
As there is no intermediate pound, a chamber can only be filled by emptying the one above, or emptied by filling the one below: thus the whole staircase has to be full of water (except for the bottom chamber) before a boat starts to ascend, or empty (except for the top chamber) before a boat starts to descend. In an "apparent" staircase the chambers still have common gates, but the water does not pass directly from one chamber to the next, going instead via side ponds. This means it is not necessary to ensure that the flight is full or empty before starting. Examples of famous "real" staircases in England are Bingley and Grindley Brook. Two-rise staircases are more common: Snakeholme Lock and Struncheon Hill Lock on the Driffield Navigation were converted to staircase locks after low water levels hindered navigation over the bottom cill at all but the higher tides – the new bottom chamber rises just far enough to get the boat over the original lock cill. In China, the recently completed Three Gorges Dam includes a double five-step staircase for large ships, and a ship lift for vessels of less than 3000 metric tons. Examples of "apparent" staircases are Foxton Locks and Watford Locks on the Leicester Branch of the Grand Union. Operation of a staircase is more involved than a flight. Inexperienced boaters may find operating staircase locks difficult. Key concerns are either sending down more water than the lower chambers can cope with (flooding the towpath, or sending a wave along the canal) or completely emptying an intermediate chamber (although this shows that a staircase lock can be used as an emergency dry dock). To avoid these mishaps, it is usual to have the whole staircase empty before starting to descend, or full before starting to ascend, apart from the initial chamber. One difference in using a staircase of either type (compared with a single lock, or a flight) is the optimal sequence for letting boats through. In a single lock (or a flight with room for boats to pass) boats should ideally alternate in direction. In a staircase, however, it is quicker for a boat to follow a previous one going in the same direction. Partly for this reason staircase locks such as Grindley Brook, Foxton, Watford and Bratch are supervised by lockkeepers, at least during the main cruising season, in which keepers try to alternate as many boats up, followed by down as there are chambers in the flight. As with a flight, it is possible on a broad canal for more than one boat to be in a staircase at the same time, but managing this without waste of water requires expertise. On English canals, a staircase of more than two chambers is usually staffed: the lockkeepers at Bingley (looking after both the "5-rise" and the "3-rise") ensure that there are no untoward events and that boats are moved through as speedily and efficiently as possible. Such expertise permits unusual feats, such as boats travelling in opposite directions can pass each other halfway up the staircase by moving sideways around each other; or at peak times, one can have all the chambers full simultaneously with boats travelling in the same direction. Stop locks When variable conditions meant that a higher water level in the new canal could not be guaranteed, then the older company would also build a stop lock (under its own control, with gates pointing towards its own canal) which could be closed when the new canal was low. 
This resulted in a sequential pair of locks, with gates pointing in opposite directions: one example was at Hall Green near Kidsgrove, where the southern terminus of the Macclesfield Canal joined the Hall Green Branch of the earlier Trent and Mersey Canal. The four gate stop lock near Kings Norton Junction, between the Stratford-upon-Avon Canal and the Worcester and Birmingham Canal was replaced in 1914 by a pair of guillotine lock gates which stopped the water flow regardless of which canal was higher. These gates have been permanently open since nationalisation. Round locks The best known example of a round lock is the Agde Round Lock on the Canal du Midi in France. This serves as a lock on the main line of the canal and allows access to the Hérault River. A second French round lock can be found in the form of the now-disused Écluse des Lorraines, connecting the Canal latéral à la Loire with the River Allier. Drop locks A drop lock can consist of two conventional lock chambers leading to a sump pound, or a single long chamber incorporating the sump – although the term properly applies only to the second case. As the pounds at either end of the structure are at the same height, the lock can only be emptied either by allowing water to run to waste from the sump to a lower stream or drain, or (less wastefully) by pumping water back up to the canal. Particularly in the two-chamber type, there would be a need for a bypass culvert, to allow water to move along the interrupted pound and so supply locks further down the canal. In the case of the single-chamber type, this can be achieved by keeping the lock full and leaving the gates open while not in use. While the concept has been suggested in a number of cases, the only example in the world of a drop lock that has actually been constructed is at Dalmuir on the Forth and Clyde Canal in Scotland. This lock, of the single-chamber type, was incorporated during the restoration of the canal, to allow the replacement of a swing bridge (on a busy A road) by a fixed bridge, and so answer criticisms that the restoration of the canal would cause frequent interruptions of the heavy road traffic. It can be emptied by pumping – but as this uses a lot of electricity the method used when water supplies are adequate is to drain the lock to a nearby burn. Very large locks In 2016 the Kieldrecht Lock in the Port of Antwerp in Belgium took over the title of the world's largest lock from the Berendrecht Lock in the same port and still has the title for largest volume. In 2022 the IJmuiden sea lock serving the Port of Amsterdam became the world's largest lock by surface area. The lock is long, wide and has sliding lock gates creating a usable depth of . The size of locks cannot be compared without considering the difference in water level that they are designed to operate under. For example, the Bollène lock on the River Rhône has a fall of at least , the Leerstetten, Eckersmühlen and Hilpoltstein locks on the Rhine–Main–Danube Canal have a fall of , each and the Oskemen Lock on the Irtysh River in Kazakhstan has a drop of . History and development Pound lock The natural extension of the flash lock, or staunch, was to provide an upper gate (or pair of gates) to form an intermediate "pound" which was all that need be emptied when a boat passed through. 
This type of lock, called a pound lock was known in Imperial China and ancient Europe and was used by Greek engineers in the Canal of the Pharaohs under Ptolemy II (284 to 246 BC), when engineers solved the problem of overcoming the difference in height through canal locks. Pound locks were first used in medieval China during the Song dynasty (960–1279 CE). The Songshi or History of the Song Dynasty, volume 307, biography 66, records how Qiao Weiyue, a high-ranking tax administrator, was frustrated at the frequent losses incurred when his grain barges were wrecked on the West River near Huai'an in Jiangsu. The soldiers at one double slipway, he discovered, had plotted with bandits to wreck heavy imperial barges so that they could steal the spilled grain. In 984 Qiao installed a pair of sluice-gates two hundred and fifty feet apart, the entire structure roofed over like a building. By siting two staunch gates so close to one another, Qiao had created a short stretch of canal, effectively a pound-lock, filled from the canal above by raising individual wooden baulks in the top gate and emptied into the canal below by lowering baulks in the top gate and raising ones in the lower. Turf-sided lock A turf-sided lock is an early form of canal lock design that uses earth banks to form the lock chamber, subsequently attracting grasses and other vegetation, instead of the now more familiar and widespread brick, stone, or concrete lock wall constructions. This early lock design was most often used on river navigations in the early 18th century before the advent of canals in Britain. The sides of the turf-lock are sloping so, when full, the lock is quite wide. Consequently, this type of lock needs more water to operate than vertical-sided brick- or stone-walled locks. On British canals and waterways most turf-sided locks have been subsequently rebuilt in brick or stone, and so only a few good examples survive, such as at Garston Lock, and Monkey Marsh Lock, on the Kennet and Avon Canal. Use of water Water saving basins On English canals, these reservoirs are called "side ponds". The Droitwich Canal, reopened in 2011, has a flight of three locks at Hanbury which all have operational side ponds. Alternatives Inclined plane There are no working waterway inclined planes in the UK at the moment, but the remains of a famous one can be seen at Foxton in Leicestershire on the Leicester arm of the Grand Union Canal. The plane enabled wide-beam boats to bypass the flight of ten narrow locks, but failure to make improvements at the other end of the arm and high running costs led to its early demise. There are plans to restore it, and some funding has been obtained. Caisson lock Around 1800 the use of caisson locks was proposed by Robert Weldon for the Somerset Coal Canal in England. In this underwater lift, the chamber was long and deep and contained a completely enclosed wooden box big enough to take a barge. This box moved up and down in the deep pool of water. Apart from inevitable leakage, the water never left the chamber, and using the lock wasted no water. Instead, the boat entered the box and was sealed in by the door closing behind it, and the box itself was moved up or down through the water. When the box was at the bottom of the chamber, it was under almost of water – at a pressure of , in total. One of these "locks" was built and demonstrated to the Prince Regent (later George IV), but it had various engineering problems and the design was not put into use on the Coal Canal. 
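The pressure on Weldon's submerged caisson follows from the hydrostatic relation pressure = density x gravity x depth. The text above does not state the chamber's depth, so the figure in the sketch below is a placeholder assumption used only to show the arithmetic, not a measurement of the Somerset Coal Canal lock.

# Illustrative hydrostatic-pressure calculation for a submerged caisson.
# The depth is an assumed placeholder, not a figure taken from the text above.

RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2

def gauge_pressure_pa(depth_m: float) -> float:
    """Gauge pressure (above atmospheric) under a given depth of water, in pascals."""
    return RHO_WATER * G * depth_m

if __name__ == "__main__":
    assumed_depth_m = 18.0   # placeholder depth for illustration only
    p = gauge_pressure_pa(assumed_depth_m)
    print(f"{p / 1000:.0f} kPa, i.e. about {p / 101325:.1f} atmospheres above atmospheric")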
Hydro-pneumatic canal lift Possibly inspired by Weldon's caisson lock, William Congreve in 1813 patented a "hydro-pneumatic double balance lock" in which two adjacent locks containing pneumatic caissons could be raised and lowered in counterbalance by the movement of compressed air from one caisson to the other. In about 1817 the Regents Canal Company built one of these locks at the site of the present-day Camden Lock, north London. Here the motivation was, again, water supply problems. The company insisted on various modifications to Congreve's design; the resulting installation proved to be unsatisfactory, and was soon replaced by conventional locks. Shaft lock Looking superficially similar to the caisson lock is the shaft lock. Shaft locks consist of a deep shaft with conventional upper gates. The lower gates are reached through a short tunnel. The gates only close off this approach tunnel so do not have to reach the full height of the lock. Notable examples have been built at Saint Denis (Paris, France), Horin (near Melnik, Czech Republic) and Anderten (Hannover Germany). The shaft lock at Minden has a fall of and has eight tanks linked in pairs to the lock chamber. As the lock is emptied water is run into each chamber in turn, for filling the water is released from the chambers thus saving the waste of a complete lockfull of water. An earlier attempt at a shaft lock had been made at Trollhättan in Sweden on the line of the present Göta canal. The fall would have been , astonishing in 1749. However the approach tunnel proved to be unusable in times of flood and the shaft lock was replaced by a 2-rise staircase in 1768. Diagonal lock This is similar to a shaft lock, but having the shaft built on an incline. Boats are moored to floating bollards which guide them along the shaft as it fills or empties. The "Diagonal Lock Advisory Group" has identified several sites in Britain where the new design could be installed, either on new waterways or canals under restoration. Projects under consideration include the restoration of the Lancaster Canal to Kendal and the proposed new branch of the Grand Union Canal between Bedford and Milton Keynes.
Technology
Maritime transport
null
165744
https://en.wikipedia.org/wiki/Kelp
Kelp
Kelps are large brown algae or seaweeds that make up the order Laminariales. There are about 30 different genera. Despite its appearance and use of photosynthesis in chloroplasts, kelp is technically not a plant but a stramenopile (a group containing many protists). Kelp grow from stalks close together, forming very dense, forest-like stands in shallow temperate and Arctic oceans. They were previously thought to have appeared in the Miocene, 5 to 23 million years ago, based on fossils from California. New fossils of kelp holdfasts from early Oligocene rocks in Washington State show that kelps were present in the northeastern Pacific Ocean by at least 32 million years ago. The organisms require nutrient-rich water with temperatures between . They are known for their high growth rate: the genera Macrocystis and Nereocystis can grow as fast as half a metre a day (that is, about 20 inches a day), ultimately reaching . Through the 19th century, the word "kelp" was closely associated with seaweeds that could be burned to obtain soda ash (primarily sodium carbonate). The seaweeds used included species from both the orders Laminariales and Fucales. The word "kelp" was also used directly to refer to these processed ashes. Description The thallus (or body) consists of flat or leaf-like structures known as blades that originate from elongated stem-like structures, the stipes. A root-like structure, the holdfast, anchors the kelp to the substrate of the ocean. Gas-filled bladders (pneumatocysts) form at the base of blades of American species, such as Nereocystis luetkeana (Mert.) Post. & Rupr., to hold the kelp blades close to the surface. Growth and reproduction Growth occurs at the base of the meristem, where the blades and stipe meet. Growth may be limited by grazing. Sea urchins, for example, can reduce entire areas to urchin barrens. The kelp life cycle involves a diploid sporophyte and a haploid gametophyte stage. The haploid phase begins when the mature organism releases many spores, which then germinate to become male or female gametophytes. Sexual reproduction then results in the beginning of the diploid sporophyte stage, which will develop into a mature individual. The parenchymatous thalli are generally covered with a mucilage layer, rather than a cuticle. Taxonomy Phylogeny Seaweeds were generally considered homologues of terrestrial plants, but are only very distantly related to plants, and have evolved plant-like structures through convergent evolution. Where plants have leaves, stems, and reproductive organs, kelp have independently evolved blades, stipes, and sporangia. Using radiometric dating (with Ma denoting millions of years ago) and an "unequivocal minimum constraint for total group Pinaceae", vascular plants have been estimated to have evolved around 419–454 Ma, while the ancestors of Laminariales are much younger, at about 189 Ma. Although these groups are only distantly related and differ in evolutionary age, comparisons can still be made between the structures of terrestrial plants and kelp; in terms of evolutionary history, most of these similarities come from convergent evolution. Some kelp species, including giant kelp, have evolved transport mechanisms for organic as well as inorganic compounds, similar to mechanisms of transport in trees and other vascular plants. In kelp this transportation network uses trumpet-shaped sieve elements (SEs). 
A 2015 study aiming to evaluate the efficiency of giant kelp (Macrocystis pyrifera) transport anatomy looked at six different Laminariales species to see if they had typical vascular plant allometric relationships (that is, whether SE size correlated with the size of the organism). Researchers expected the kelp's phloem to work similarly to a plant's xylem and therefore to display similar allometric trends that minimize the pressure gradient. The study found no universal allometric scaling between all tested structures of the Laminariales species, which implies that the transport network of brown algae is only just beginning to evolve to fit their current niches efficiently. Apart from undergoing convergent evolution with plants, species of kelp have undergone convergent evolution within their own phylogeny that has led to niche conservatism. This niche conservatism means that some species of kelp have convergently evolved to share similar niches, as opposed to all species diverging into distinct niches through adaptive radiation. A 2020 study looked at functional traits (blade mass per area, stiffness, strength, etc.) of 14 species of kelp and found that many of these traits evolved convergently across kelp phylogeny. With different species of kelp filling slightly different environmental niches, specifically along a wave disturbance gradient, many of these convergently evolved traits for structural reinforcement also correlate with distribution along that gradient. The wave disturbance gradient that this study refers to is the varying level of perturbation, from the tide and waves pulling at the kelp, in the environments that the kelp inhabit. It can be assumed from these results that niche partitioning along wave disturbance gradients is a key driver of divergence between closely related kelp. Due to the often varied and turbulent habitats that kelp inhabit, plasticity of certain structural traits has been key to the evolutionary history of the group. Plasticity underlies an important aspect of kelp adaptation to ocean environments: the unusually high levels of morphological homoplasy between lineages, which has in fact made classifying brown algae difficult. Kelp often have morphological features similar to those of other species within their own area, owing to the local wave disturbance regime, but can look fairly different from members of their own species found in different wave disturbance regimes. Plasticity in kelps most often involves blade morphology, such as the width, ruffle, and thickness of blades. Just one example is the giant bull kelp Nereocystis luetkeana, which has evolved to change blade shape in order to increase drag in water and interception of light when exposed to certain environments. Bull kelp are not unique in this adaptation; many kelp species have evolved a genetic plasticity for blade shapes for different water-flow habitats. Individuals of the same species will therefore differ from one another depending on the habitat in which they grow. Many species have different morphologies for different wave disturbance regimes, but the giant kelp Macrocystis integrifolia has been found to have plasticity allowing for four distinct types of blade morphology depending on habitat, where many species have only two or three different blade shapes for maximizing efficiency in only two or three habitats. These different blade shapes were found to decrease breakage and increase the ability to photosynthesize. 
Blade adaptations like these show how kelp have evolved structurally efficient forms for a turbulent ocean environment, to the point where their stability can shape entire habitats. Apart from these structural adaptations, the evolution of structure-related dispersal methods has also been important for the success of kelp. Kelp have had to evolve dispersal methods that make successful use of ocean currents. Buoyancy of certain kelp structures allows species to disperse with the flow of water. Certain kelp form kelp rafts, which can travel great distances away from the source population and colonize other areas. The bull kelp genus Durvillaea includes six species, some that have evolved buoyancy and others that have not. Those that are buoyant owe this to the evolution of gas-filled structures called pneumatocysts, which allow the kelp to float closer to the surface to photosynthesize and also aid dispersal by floating kelp rafts. In Macrocystis pyrifera, pneumatocysts and raft formation have made dispersal so successful that the immense stretch of coast along which the species is found appears to have been colonized only very recently, as indicated by the low genetic diversity in the subantarctic region. Dispersal by rafts of buoyant species also explains some of the evolutionary history of non-buoyant kelp species: because these rafts commonly carry hitchhikers of other diverse species, they provide a dispersal mechanism for species that lack buoyancy. A study using genomic analysis recently confirmed this mechanism as the cause of some kelp dispersal and evolutionary history. Studies of the evolution of kelp structure have helped explain the adaptations that have made kelp not only extremely successful as a group of organisms but also successful as ecosystem engineers of kelp forests, some of the most diverse and dynamic ecosystems on Earth. Prominent species Bull kelp, Nereocystis luetkeana, a northwestern American species. Used by coastal indigenous peoples to create fishing nets. Giant kelp, Macrocystis pyrifera, the largest seaweed. Found along the Pacific coasts of North America and South America. Kombu, Saccharina japonica (formerly Laminaria japonica) and others, several edible species of kelp found in Japan. Species of Laminaria in the British Isles: Laminaria digitata (Hudson) J.V. Lamouroux (Oarweed; Tangle) Laminaria hyperborea (Gunnerus) Foslie (Curvie) Laminaria ochroleuca Bachelot de la Pylaie Saccharina latissima (Linnaeus) J.V.Lamouroux (sea belt; sugar kelp; sugarwack) Species of Laminaria worldwide, listing of species at AlgaeBase: Laminaria agardhii (NE. America) Laminaria bongardina Postels et Ruprecht (Bering Sea to California) Laminaria cuneifolia (NE. America) Laminaria dentigera Kjellm. (California - America) Laminaria digitata (NE. America) Laminaria ephemera Setchell (Sitka, Alaska, to Monterey County, California - America) Laminaria farlowii Setchell (Santa Cruz, California, to Baja California - America) Laminaria groenlandica (NE. America) Laminaria longicruris (NE. America) Laminaria nigripes (NE. America) Laminaria intermedia (NE. America) Laminaria pallida Greville ex J. Agardh (South Africa) Laminaria platymeris (NE. America) Laminaria saccharina (Linnaeus) Lamouroux, synonym of Saccharina latissima (north east Atlantic Ocean, Barents Sea south to Galicia - Spain) Laminaria setchellii Silva (Aleutian Islands, Alaska, to Baja California - America) Laminaria sinclairii (Harvey ex Hooker f. ex Harvey) Farlow, Anderson et Eaton (Hope Island, British Columbia to Los Angeles, California - America) Laminaria solidungula (NE. America) Laminaria stenophylla (NE. America) Other species in the Laminariales that may be considered as kelp: Alaria esculenta (North Atlantic) Alaria marginata Post. & Rupr. (Alaska and California - America) Costaria costata (C.Ag.) Saunders (Japan; Alaska, California - America) Ecklonia brevipes J. Agardh (Australia; New Zealand) Ecklonia maxima (Osbeck) Papenfuss (South Africa) Ecklonia radiata (C.Agardh) J. Agardh (Australia; Tasmania; New Zealand; South Africa) Eisenia arborea Aresch. (Vancouver Island, British Columbia, Monterey, Santa Catalina Island, California - America) Egregia menziesii (Turn.) Aresch. Hedophyllum sessile (C.Ag.) Setch. (Alaska, California - America) Macrocystis pyrifera (Linnaeus) C.Agardh (Australia; Tasmania and South Africa) Pleurophycus gardneri Setch. & Saund. (Alaska, California - America) Pterygophora californica Rupr. (Vancouver Island, British Columbia to Bahia del Rosario, Baja California and California - America) Non-Laminariales species that may be considered as kelp: Durvillaea antarctica, Fucales (New Zealand, South America, and Australia) Durvillaea willana, Fucales (New Zealand) Durvillaea potatorum (Labillardière) Areschoug, Fucales (Tasmania; Australia) Ecology Kelp forests Kelp may develop dense forests with high production, biodiversity and ecological function (Abdullah, M.I., Fredriksen, S., 2004. Production, respiration and exudation of dissolved organic matter by the kelp Laminaria hyperborea along the west coast of Norway. Journal of the Marine Biological Association of the UK 84: 887). Along the Norwegian coast these forests cover 5,800 km2, and they support large numbers of animals (Jørgensen, N.M., Christie, H., 2003. Diurnal, horizontal and vertical dispersal of kelp associated fauna. Hydrobiologia 50, 69-76). Numerous sessile animals (sponges, bryozoans and ascidians) are found on kelp stipes, and mobile invertebrate fauna are found in high densities on epiphytic algae on the kelp stipes and on kelp holdfasts. More than 100,000 mobile invertebrates per square meter are found on kelp stipes and holdfasts in well-developed kelp forests. While larger invertebrates, and in particular sea urchins (Strongylocentrotus droebachiensis), are important secondary consumers controlling large barren ground areas on the Norwegian coast, they are scarce inside dense kelp forests. Interactions Some animals are named after the kelp, either because they inhabit the same habitat as kelp or because they feed on kelp. These include: Northern kelp crab (Pugettia producta) and graceful kelp crab (Pugettia gracilis), Pacific coast of North America. Kelpfish (blenny) (e.g., Heterostichus rostratus, genus Gibbonsia), Pacific coast of North America. Kelp goose (kelp hen) (Chloephaga hybrida), South America and the Falkland Islands. Kelp pigeon (sheathbill) (Chionis alba and Chionis minor), Antarctic. Conservation Overfishing of nearshore ecosystems leads to the degradation of kelp forests. Herbivores are released from their usual population regulation, leading to over-grazing of kelp and other algae. 
This can quickly result in barren landscapes where only a small number of species can thrive (Sala, E., C.F. Boudouresque and M. Harmelin-Vivien. 1998. Fishing, trophic cascades, and the structure of algal assemblages: evaluation of an old but untested paradigm. Oikos 82: 425-439). Other major factors threatening kelp include marine pollution and water quality, climate change, and certain invasive species. Kelp forests are some of the most productive ecosystems in the world; they are home to a great diversity of species. Many groups, like those at the Seattle Aquarium, are studying kelp health, habitat, and population trends in order to understand why certain kelp (such as bull kelp) thrive in some areas and not others. Remotely operated vehicles are used to survey sites, and the data collected are used to learn which conditions are best suited for kelp restoration. Uses Giant kelp can be harvested fairly easily because of its surface canopy and growth habit of staying in deeper water. Kelp ash is rich in iodine and alkali. In large quantities, kelp ash can be used in soap and glass production. Until the Leblanc process was commercialized in the early 19th century, burning of kelp in Scotland was one of the principal industrial sources of soda ash (predominantly sodium carbonate). Around 23 tons of seaweed were required to produce 1 ton of kelp ash, and the kelp ash would consist of around 5% sodium carbonate; at those figures, on the order of 460 tons of seaweed were needed for each ton of sodium carbonate. Once the Leblanc process became commercially viable in Britain during the 1820s, common salt replaced kelp ash as the raw material for sodium carbonate. Though the price of kelp ash went into steep decline, seaweed remained the only commercial source of iodine. To supply this new iodine industry, kelp ash production continued in some parts of western and northern Scotland, northwestern Ireland and Guernsey. The species Saccharina latissima yielded the greatest amount of iodine (between 10 and 15 lbs per ton) and was most abundant in Guernsey. Iodine was extracted from kelp ash using a lixiviation process. As with sodium carbonate, however, mineral sources eventually supplanted seaweed in iodine production. Alginate, a kelp-derived carbohydrate, is used to thicken products such as ice cream, jelly, salad dressing, and toothpaste, and is also an ingredient in exotic dog food and in manufactured goods. Alginate powder is also used frequently in general dentistry and orthodontics for making impressions of the upper and lower arches. Kelp polysaccharides are used in skin care as gelling ingredients and because of the benefits provided by fucoidan. Kombu (昆布 in Japanese, and 海带 in Chinese; Saccharina japonica and others), several Pacific species of kelp, is a very important ingredient in Chinese, Japanese, and Korean cuisines. Kombu is used to flavor broths and stews (especially dashi), as a savory garnish (tororo konbu) for rice and other dishes, as a vegetable, and as a primary ingredient in popular snacks (such as tsukudani). Transparent sheets of kelp (oboro konbu) are used as an edible decorative wrapping for rice and other foods. Kombu can be used to soften beans during cooking, and to help convert indigestible sugars and thus reduce flatulence. In Russia, especially in the Russian Far East, and in former Soviet Union countries, several types of kelp are of commercial importance: Saccharina latissima, Laminaria digitata, and Saccharina japonica. 
Known locally as "Sea Cabbage" (Морская капуста in Russian), it is sold in retail trade in dried, frozen, and canned forms and is used as a filler in different types of salads, soups and pastries. Because of its high concentration of iodine, brown kelp (Laminaria) has been used to treat goiter, an enlargement of the thyroid gland caused by a lack of iodine, since medieval times. An intake of roughly 150 micrograms of iodine per day is beneficial for preventing hypothyroidism. Overconsumption can lead to kelp-induced thyrotoxicosis. In 2010, researchers found that alginate, the soluble fibre substance in sea kelp, was better at preventing fat absorption than most over-the-counter slimming treatments in laboratory trials. As a food additive, it may be used to reduce fat absorption and thus obesity. Kelp in its natural form has not yet been demonstrated to have such effects. Kelp's rich iron content can help prevent iron deficiency. Commercial production Commercial production of kelp harvested from its natural habitat has taken place in Japan for over a century. Many countries today produce and consume laminaria products; the largest producer is China. Laminaria japonica, an important commercial seaweed, was first introduced into China in the late 1920s from Hokkaido, Japan. Yet mariculture of this alga on a very large commercial scale was realized in China only in the 1950s. Between the 1950s and the 1980s, kelp production in China increased from about 60 to over 250,000 dry weight metric tons annually. In culture Some of the earliest evidence for human use of marine resources, coming from Middle Stone Age sites in South Africa, includes the harvesting of foods such as abalone, limpets, and mussels associated with kelp forest habitats. In 2007, Erlandson et al. suggested that kelp forests around the Pacific Rim may have facilitated the dispersal of anatomically modern humans following a coastal route from Northeast Asia to the Americas. This "kelp highway hypothesis" suggested that highly productive kelp forests supported rich and diverse marine food webs in nearshore waters, including many types of fish, shellfish, birds, marine mammals, and seaweeds that were similar from Japan to California. Erlandson and his colleagues also argued that coastal kelp forests reduced wave energy and provided a linear dispersal corridor entirely at sea level, with few obstacles to maritime peoples. Archaeological evidence from California's Channel Islands confirms that islanders were harvesting kelp forest shellfish and fish beginning as much as 12,000 years ago. During the Highland Clearances, many Scottish Highlanders were moved onto areas of estates known as crofts and turned to industries such as fishing and kelping (producing soda ash from the ashes of kelp). At least until the 1840s, when there were steep falls in the price of kelp, landlords wanted to create pools of cheap or virtually free labour, supplied by families subsisting in new crofting townships. Kelp collection and processing was a very profitable way of using this labour, and landlords petitioned successfully for legislation designed to stop emigration. The profitability of kelp harvesting meant that landlords began to subdivide their land for small tenant kelpers, who could now afford to pay higher rents than their gentleman-farmer counterparts. But the economic collapse of the kelp industry in northern Scotland during the 1820s led to further emigration, especially to North America. 
Natives of the Falkland Islands are sometimes nicknamed "Kelpers". This designation is primarily applied by outsiders rather than by the natives themselves. In Chinese slang, "kelp" is used to describe an unemployed returnee. It has negative overtones, implying that the person is drifting aimlessly, and is also a homophonic expression (literally "sea waiting"). This expression is contrasted with that for the employed returnee, seen as having a dynamic ability to travel across the ocean: the "sea turtle", which is homophonic with another word (literally "sea return").
Biology and health sciences
Other organisms
null
165767
https://en.wikipedia.org/wiki/Towel
Towel
A towel is a piece of absorbent cloth or paper used for drying or wiping a surface. Towels draw moisture through direct contact. Bathing towels and hand towels are usually made of cotton, linen, bamboo and synthetic microfibers. In households, several types of towels are used, such as hand towels, bath towels, and kitchen towels. Paper towels are provided in commercial or office bathrooms via a dispenser for users to dry their hands. They are also used for duties such as wiping, cleaning, and drying. History According to archaeological studies of the Middle Ages, "... closely held personal items included the ever present knife and a towel." However, the invention of the towel is commonly associated with the city of Bursa, Turkey, in the 17th century. These Turkish towels began as a flat, woven piece of cotton or linen called a peshtamal, often hand-embroidered. Long enough to wrap around the body, peshtamals were originally fairly narrow, but are now wider and commonly measure . Peshtamals were used in Turkish baths as they stayed light when wet and were very absorbent. As the Ottoman Empire grew, so did the use of the towel. Weavers were asked to embroider more elaborate designs, aided by their knowledge of carpet-weaving. By the 18th century, towels began to feature loops sticking up from the pile of the material. These looped towels became known as havly; over time, this word has changed to havlu, the Turkish word for towel, and means 'with loops'. Towels did not become affordable until the 19th century, with the cotton trade and industrialization. With mechanization, cotton terry-towelling became available by the yard as well as being stocked in shops as pre-made towels. Today, towels come in a variety of sizes, materials and designs. Types A bath towel is used for drying the body when it is wet, such as after bathing or showering. It is typically rectangular, with a typical size around , and is made of terrycloth. A beach towel is usually somewhat larger than a bath towel, and often has a colorful pattern. Although often used for drying off after being in the water, its main purpose is to provide a surface on which to lie down. Beach towels are also worn for privacy while changing clothes in a public area, and are used for wiping sand from the body or objects. A bath sheet (or sheet towel) is larger than a bath towel. The classic bath sheet size is 80×160 cm. A large bath sheet that can wrap the entire body is 100×150 cm or 90×160 cm. They are used after bathing, in saunas, on beaches, and for massage. A foot towel is a small, rectangular towel that, in the absence of a rug, carpet or bathroom mat, is placed on the bathroom floor to stand on after finishing a shower or bath. A hand towel is significantly smaller than a bath towel (perhaps ), and is used for drying the hands after washing them. An oven towel or confectioner's mitten is a multipurpose household towel used for kitchen or shop applications. A paper towel is a piece of paper that can be used once as a towel and then be disposed of. A perforated roll of paper towels is usually mounted on a rod slightly longer than the width of the roll, or in an alternative type of hanger that has protrusions on its ears, the protrusions fitting into the ends of the paper towel roll. Paper towels might also be packaged as facial tissues are (as individual folded sheets). A disposable towel (or non-woven towel) is a towel intended for a single user, but not necessarily for a single use, as it can be reused but not washed. 
It is often made of non-woven fibres and often is used in hospitals, hotels, geriatric and salon or beauty settings, for their hygienic properties. A show towel is a bath or hand towel with a trim (such as satin, lace or linen) stitched onto it, or embroidery done on it—mostly for visual appeal. These types of towels are used to add a decorative touch, most commonly in the United States. They are generally not to be used for drying, as regular washing ruins the added trim, and the towels usually shrink differently than the trim. A sports towel is a synthetic or semi-synthetic towel originally developed for swimmers and divers, favoured for its super-absorbent qualities. Sports towels can be wrung out when saturated, leaving the towel able to absorb water again, though not dry. These qualities, along with their compact nature, have further popularized sports towels among general outdoor and athletic enthusiasts. The absorbent material in sports towels may be composed of viscose, PVA or microfiber, with polyester woven in for durability. Some manufacturers incorporate a silver ion or compound treatment into their towels to better inhibit microbial growth and associated odors. The term kitchen towel refers to a dish towel in American English (called a tea towel in UK and Canadian English), and to a paper towel in British English. A tea towel or tea cloth (UK and Canadian English), called dishtowel or dish towel in America, is an absorbent towel made from soft, lint-free linen. They are used in the kitchen to dry dishes, cutlery, etc. after they are washed. The towels are also used during tea time. They can be wrapped around the tea pot to keep the tea warm, prevent drips, and keep one's hand from being burned by the hot tea pot handle when serving the tea. They are commonly made of cotton rather than linen. They are also used for drying glassware, but sometimes a special glass cloth is used for that purpose. Tea towels originated in 18th-century England. A tenugui is a variety of hand towel that originates from Japan. It is most often used in the same way as a tea towel or flannel (washcloth), but can also be used for decoration, as a headband, or for wrapping bottles and other items to be given as gifts. A cloth towel dispenser or continuous cloth towel is a towel manipulated by a series of rollers, used as an alternative to paper towels and hand dryers in public washrooms. These may have a lower environmental impact than paper towels, though concerns over hygiene mean they are not used by some organisations and have greatly declined in popularity. A bar towel is an absorbent, usually small, towel used in bars and often given away free as promotional items. A fingertip towel or finger towel is a small towel that is folded and placed next to the sink or in the guest bedroom. Hosts often pin a note to these towels indicating that they are for guest use. A golf towel is a small towel which usually comes with a loop or clip to attach to a golf bag for drying hands, golfballs, and clubs. A baby towel is a smaller towel with an extra sewn-on hood at one corner to cover a baby's head. A peshtemal (or pestemal) is a unique multipurpose towel from Anatolia. A poncho towel is a wearable towel made for drying off and changing, often used poolside, at the beach or after swimming. A fouta towel is a Tunisian hammam and beach towel, which is also used as a pareo. 
In fiction In Douglas Adams's book The Hitchhiker's Guide to the Galaxy, there is an in-world emphasis on towels and their importance to hitchhikers: if a hitchhiker has a towel, a non-hitchhiker will infer that they also have a toothbrush, soap, washcloth, raincoat, and related things, and will be happy to lend the hitchhiker any of those items they have "lost".
Biology and health sciences
Hygiene products
Health
165926
https://en.wikipedia.org/wiki/San%20Francisco%E2%80%93Oakland%20Bay%20Bridge
San Francisco–Oakland Bay Bridge
The San Francisco–Oakland Bay Bridge, commonly referred to as the Bay Bridge, is a complex of bridges spanning San Francisco Bay in California. As part of Interstate 80 and the direct road between San Francisco and Oakland, it carries about 260,000 vehicles a day on its two decks. It includes one of the longest bridge spans in the United States. The toll bridge was conceived as early as the California gold rush days, with "Emperor" Joshua Norton famously advocating for it, but construction did not begin until 1933. Designed by Charles H. Purcell, and built by American Bridge Company, it opened on Thursday, November 12, 1936, six months before the Golden Gate Bridge. It originally carried automobile traffic on its upper deck, with trucks, cars, buses and commuter trains on the lower, but after the Key System abandoned its rail service on April 20, 1958, the lower deck was converted to all-road traffic as well. On October 12, 1963, traffic was reconfigured to one way traffic on each deck, westbound on the upper deck, and eastbound on the lower deck, with trucks and buses also allowed on the upper deck. In 1986, the bridge was unofficially dedicated to former California governor James Rolph. The bridge has two sections of roughly equal length; the older western section, officially known as the Willie L. Brown Jr. Bridge (after former San Francisco Mayor and California State Assembly Speaker Willie L. Brown Jr.), connects downtown San Francisco to Yerba Buena Island, and the newer east bay section connects the island to Oakland. The western section is a double suspension bridge with two decks, westbound traffic being carried on the upper deck while eastbound is carried on the lower one. The largest span of the original eastern section was a cantilever bridge. During the 1989 Loma Prieta earthquake, a portion of the eastern section's upper deck collapsed onto the lower deck and the bridge was closed for a month. Reconstruction of the eastern section of the bridge as a causeway connected to a self-anchored suspension bridge began in 2002; the new eastern section opened September 2, 2013, at a reported cost of over $6.5 billion; the original estimate of $250 million was for a seismic retrofit of the existing span. Unlike the western section and the original eastern section of the bridge, the new eastern section is a single deck carrying all eastbound and westbound lanes. Demolition of the old east span was completed on September 8, 2018. Description The bridge consists of two crossings, east and west of Yerba Buena Island, a natural mid-bay outcropping inside San Francisco city limits. The western crossing between Yerba Buena and downtown San Francisco has two complete suspension spans connected at a center anchorage. Rincon Hill is the western anchorage and touch-down for the San Francisco landing of the bridge connected by three shorter truss spans. The eastern crossing, between Yerba Buena Island and Oakland, was a cantilever bridge with a double-tower span, five medium truss spans, and a 14-section truss causeway. Due to earthquake concerns, the eastern crossing was replaced by a new crossing that opened on Labor Day 2013. On Yerba Buena Island, the double-decked crossing is a concrete viaduct east of the west span's cable anchorage, the Yerba Buena Tunnel through the island's rocky central hill, another concrete viaduct, and a longer curved high-level steel truss viaduct that spans the final to the cantilever bridge. 
The toll plaza on the Oakland side (westbound traffic only since 1969) has eighteen toll lanes, with all charges now made either through the FasTrak electronic toll collection system or through invoices mailed through the USPS, based on the license plate of the car per Department of Motor Vehicle records. Metering signals are about west of the toll plaza. Two full-time bus-only lanes bypass the toll booths and metering lights around the right (north) side of the toll plaza; other high occupancy vehicles can use these lanes during weekday morning and afternoon commute periods. The two far-left toll lanes are high-occupancy vehicle lanes during weekday commute periods. Radio and television traffic reports will often refer to congestion at the toll plaza, metering lights, or a parking lot in the median of the road for bridge employees; the parking lot is about long, stretching from about east of the toll plaza to about west of the metering lights. During the morning commute hours, traffic congestion on the westbound approach from Oakland stretches back through the MacArthur Maze interchange at the east end of the bridge onto the three feeder highways, Interstate 580, Interstate 880, and I-80 toward Richmond. Since the number of lanes on the eastbound approach from San Francisco is structurally restricted, eastbound backups are also frequent during evening commute hours. The eastbound bottleneck is not the bridge itself, but the approach, which has just three lanes in each direction, in contrast to the bridge's five. The western section of the Bay Bridge is currently restricted to motorized freeway traffic. Pedestrians, bicycles, and other non-freeway vehicles are not allowed to cross this section. A project to add bicycle/pedestrian lanes to the western section has been proposed but is not finalized. A Caltrans bicycle shuttle operates between Oakland and San Francisco during peak commute hours for $1.00 each way. Freeway ramps next to the tunnel provide access to Yerba Buena Island and Treasure Island. Because the toll plaza is on the Oakland side, the western span is a de facto non-tolled bridge; traffic between the island and the main part of San Francisco can freely cross back and forth. Those who only travel from Oakland to Yerba Buena Island, and not the entire length to the main part of San Francisco, still must pay the full toll. Early history Developed at the entrance to the bay, San Francisco was well placed to prosper during the California Gold Rush. Almost all goods not produced locally arrived by ship, as did numerous travelers and erstwhile miners. But after the first transcontinental railroad was completed in May 1869, San Francisco was on the wrong side of the Bay, and separated from the new rail link. Many San Franciscans feared that the city would lose its position as the regional center of trade. Businessmen had considered the concept of a bridge spanning the San Francisco Bay since the Gold Rush days. During the 1870s, several newspaper articles explored the idea. In early 1872, a "Bay Bridge Committee" was hard at work on plans to construct a railroad bridge. The April 1872 issue of the San Francisco Real Estate Circular reported on this committee: The self-proclaimed Emperor Norton decreed three times in 1872 that a suspension bridge be constructed to connect Oakland with San Francisco. 
In the third of these decrees, in September 1872, Norton, frustrated that nothing had happened, proclaimed: Unlike most of Emperor Norton's eccentric ideas, his decree to build a bridge had a widespread public and political appeal. Yet the task was too much of an engineering and economic challenge, since the bay was too wide and too deep there. In 1921, more than forty years after Norton's death, an underground tube was considered, but it became clear that one would be inadequate for vehicular traffic. Support for a trans-bay crossing increased in the 1920s based on the popularity and availability of automobiles. Planning The California State Legislature and governor enacted a law, effective in 1929, to establish the California Toll Bridge Authority (Stats. 1929, Chap 763) and to authorize it and the State Department of Public Works to build a bridge connecting San Francisco and Alameda County (Stats. 1929, Chap 762). The state appointed a commission to evaluate the idea and various designs for a bridge across the Bay, the Hoover-Young Commission. Its conclusions were made public in 1930. In January 1931, Charles H. Purcell, the State Highway Engineer of California, who had also served as the secretary of the Hoover-Young Commission, assumed the position of Chief Engineer for the Bay Bridge. Glenn B. Woodruff served as design engineer for the project. He explained in a 1936 article that several elements of the bridge required not only new designs, but also new theories of design. To make the bridge feasible, a route was chosen via Yerba Buena Island, which would reduce both the material and the labor needed. Since Yerba Buena Island was a U.S. Navy base at the time, the state had to gain approval from Congress for this purpose as it regulates and controls all federal lands and the armed services. After a great deal of lobbying, California received Congressional approval to use the island on February 20, 1931, subject to final approvals by the Departments of War, Navy, and Commerce. The state applied for permits from the 3 federal departments as required. The permits were granted in January 1932, and formally presented in a ceremony on Yerba Buena Island on February 24, 1932. On May 25, 1931, Governor James Rolph Jr. signed into law two acts: one providing for the financing of state bridges by revenue bonds, and another creating the San Francisco–Oakland Bay Bridge Division of the State Department of Public Works. On September 15, 1931, this new division opened its offices at 500 Sansome Street in San Francisco. During 1931, numerous aerial photographs were taken of the chosen route for the bridge and its approaches. That year, engineers had not determined the final design concept for the western span between San Francisco and Yerba Buena Island, although the idea of a double-span suspension bridge was already favored. In April 1932, the preliminary final plan and design of the bridge was presented by Chief Engineer Charles Purcell to Col. Walter E. Garrison, Director of the State Department of Public Works, and to Ralph Modjeski, head of the Board of Engineering Consultants. Both agencies approved and preparation of the final design proceeded. In 1932, Joseph R. Knowland, a former U.S. Congressman from California, traveled to Washington to help persuade President Herbert Hoover and the Reconstruction Finance Corporation to advance $62 million to build the bridge. 
Construction Before work began, 12 massive underwater telephone cables were moved out of the path of the proposed bridge route by crews of the Pacific Telephone and Telegraph Co. during the summer of 1931. Construction began on July 9, 1933, after a groundbreaking ceremony attended by former president Herbert Hoover, dignitaries, and local beauty queens. The western section of the bridge between San Francisco and Yerba Buena Island presented an enormous engineering challenge. The bay was up to deep in places, and the soil required new foundation-laying techniques. A single main suspension span some in length was considered but rejected, as it would have required too much fill and reduced wharfage space at San Francisco, had less vertical clearance for shipping, and cost more than the design ultimately adopted. The solution was to construct a massive concrete anchorage halfway between San Francisco and the island, and to build a main suspension span on each side of this central anchorage. East of Yerba Buena Island, the bay to Oakland was spanned by a combination of a double cantilever, five long-span through-trusses, and a truss causeway, forming the longest bridge of its kind at the time. The cantilever section was the longest in the nation and the third-longest anywhere. Much of the original eastern section was founded upon treated wood pilings. Because of the very deep mud on the bay bottom, it was not practical to reach bedrock, although the lower levels of the mud are quite firm. Long wooden pilings were crafted from entire old-growth Douglas fir trees, which were driven through the soft mud to the firmer bottom layers. The construction project had casualties: twenty-four men died while constructing the bridge. Yerba Buena Tunnel State Highway Engineer C.H. Purcell served as chief engineer for the Bay Bridge, including the construction of the Yerba Buena Tunnel. Before excavation started, the ground through which the western half of the tunnel would be bored was stabilized by injecting cement grout under pressure through 25 holes bored into the loose rock over the crown of the tunnel. After excavating the western and eastern open portals, three drifts were bored from west to east along the path of the tunnel: one at the crown and the other two at the lower corners. The first drift broke through in July 1934, approximately one year after the start of construction. A ceremonial party led by Governor Merriam celebrated the completion of the first drift on July 24 by walking through it, followed by a short speech. The space between the three drifts was then excavated, resulting in a single arch-shaped bore (in cross-section), and the tunnel roof was constructed using steel I-beam ribs spaced apart to support the rock, which were then embedded in concrete up to thick at the crown. No cave-ins occurred during the excavation of the tunnel. After the roof was completed, the remaining core of rock between the tunnel roof and lower deck was excavated using a power shovel. By May 1935, work on removing the core was progressing and 40 steel ribs had been placed; concrete embedment was just starting. Removal of the core was completed on November 18, 1935. Once the excavation was complete, the upper deck was placed and the interior ceiling above the upper deck was lined with tiles. The last concrete poured during the construction of the Bay Bridge was part of the upper deck lining in late summer 1936. 
This included the emplacement of regularly spaced refuge bays ("deadman holes") along the south wall of the lower-deck tunnel, escape alcoves common in all railway tunnels into which track maintenance workers could duck if a train came along. These reminders of the long-gone bridge railway remain and are visible to eastbound motorists today. (The north side, which always carried only motor traffic, lacks these holes.) The completed tunnel bore is wide and high overall, and the dimensions of the tunnel interior are wide and high. The cross-sectional area of the upper half is , and that of the lower half is . In 1936, it was hailed as the world's largest-bore tunnel, and it remains the largest-diameter transportation bore tunnel in the world. The large amount of material excavated in boring the tunnel was used for a portion of the landfill over the shoals lying adjacent to Yerba Buena Island to its north, a project which created the artificial Treasure Island. The contract to build the Yerba Buena Cable Anchorage, Tunnel & Viaduct segment was opened for bids on March 28, 1933, and awarded to the low bidder, Clinton Construction Company of California, for $1,821,129.50 (equivalent to $ in ). Yerba Buena Island was the main site of the official groundbreaking for the Bay Bridge on July 9, 1933, when President Franklin D. Roosevelt remotely set off a dynamite blast on the eastern side of the island at 12:58 p.m. local time. Former President Herbert Hoover and Governor James Rolph were on site; the two men were the first to turn over the earth with ceremonial golden spades. Other ceremonies took place simultaneously in San Francisco (on Rincon Hill) and Oakland Harbor. The Yerba Buena Tunnel opened, along with the rest of the Bay Bridge, on November 12, 1936. The tunnel lacks an official name. Opening day The bridge opened on November 12, 1936, at 12:30 p.m. In attendance were former US president Herbert Hoover, Senator William G. McAdoo, and the Governor of California, Frank Merriam. Governor Merriam opened the bridge by cutting gold chains across it with an acetylene cutting torch. The San Francisco Chronicle report of November 13, 1936, read: The total cost was US$77 million (equivalent to $ in ). Before opening, the bridge was blessed by the Cardinal Secretary of State, Eugenio Cardinal Pacelli, later Pope Pius XII. Because it was in effect two bridges strung together, the western spans were ranked the second and third largest suspension bridges. Only the George Washington Bridge had a longer span between towers. As part of the celebration, a United States commemorative coin was produced by the San Francisco Mint. A half dollar, its obverse portrays California's symbol, the grizzly bear, while the reverse presents a picture of the bridge spanning the bay. A total of 71,369 coins were sold, some from the bridge's tollbooths. Post-opening history 1930s–1960s The Bridge Railway Construction of the Bridge Railway began on November 29, 1937, with the laying of the first ties. The first train was run across the Bay Bridge on September 23, 1938, a test run utilizing a Key System train consisting of two articulated units, with California Governor Frank Merriam at the controls. 
On January 14, 1939, the San Francisco Transbay Terminal was dedicated. The following morning, January 15, 1939, the electric interurban trains started in revenue service, running along the south side of the lower deck of the bridge. The terminal originally was supposed to open at the same time as the Bay Bridge, but had been delayed. Trains over the Bridge Railway were operated by the Sacramento Northern Railroad (Western Pacific), the Interurban Electric Railway (Southern Pacific) and the Key System. Freight trains never used the bridge. The tracks left the lower deck in San Francisco just southwest of the end of 1st St. They then went along an elevated viaduct above city streets, looping around and into the terminal on its east end. Departing trains exited on the loop back onto the bridge. The loop continued to be used by buses until the terminal's closure in 2010. The tracks left the lower deck in Oakland. The Interurban Electric Railway tracks ran along Engineer Road and over the Southern Pacific yard on trestles (some of it is still standing and visible from nearby roadways) onto the streets and dedicated right-of-ways in Berkeley, Albany, Oakland and Alameda. The Sacramento Northern and Key System tracks went under the SP tracks through a tunnel (which still exists and is in use as an access to the EBMUD treatment plant) and onto 40th St. Due to falling ridership, Sacramento Northern and IER service ended in 1941. On September 13, 1942, a stop was opened at Yerba Buena Island to serve expanded wartime needs on adjacent Treasure Island. Despite the vital role the railroad played, the last train went over the bridge in April 1958. The tracks were removed and replaced with pavement on the Transbay Terminal ramps and Bay Bridge. The Key System handled buses over the bridge until 1960 when its successor, AC Transit, took over operations. It still handles service today, running to a new transbay terminal located in the same vicinity in San Francisco, the Salesforce Transit Center. Emperor Norton plaque and relocation In 1872, the San Francisco entrepreneur and eccentric Emperor Norton issued three proclamations calling for the design and construction of a suspension bridge between San Francisco and Oakland via Yerba Buena Island (formerly Goat Island). A 1939 plaque honoring Emperor Norton for the original idea for the Bay Bridge was dedicated by the fraternal society E Clampus Vitus and was installed at The Cliff House in February 1955. In November 1986, in connection with the bridge's 50th anniversary, the plaque was moved to the Transbay Terminal, the public transit and Greyhound bus depot at the west end of the bridge in downtown San Francisco. When the terminal was closed in 2010, the plaque was placed in storage. 1960s–2010s Roadway retrofit Until the 1960s, the upper deck ( wide between curbs) carried three lanes of traffic in each direction and was restricted to automobiles only. The lower deck carried three lanes of truck and bus traffic, with autos allowed, on the north side of the bridge. In the 1950s traffic lights were added to set the direction of travel in the middle lane, but there still remained no divider. Two interurban railroad tracks on the south half of the lower deck carried the electric commuter trains. In 1958 the tracks were replaced with pavement, but the reconfiguration to what the traffic eventually became did not take place until 1963. The Federal highway on the bridge was originally a concurrency of U.S. Highway 40 and U.S. Highway 50. 
The bridge was re-designated as Interstate 80 in 1964, and the western ends of U.S. 40 and U.S. 50 are now in Silver Summit, Utah, and West Sacramento, California, respectively. Reconstruction of approaches The original western approach to (and exit from) the upper deck of the bridge was a long ramp to Fifth Street, branching to Harrison St for westward traffic off the bridge and Bryant St for eastward traffic entering. There was also an on-ramp to the upper deck on Rincon Hill from Fremont Street (which later became an off-ramp) and an off-ramp to First Street (later extended over First St to Fremont St). The lower deck ended at Essex and Harrison St; just southwest of there, the tracks of the bridge railway left the lower deck and curved northward into the elevated loop through the Transbay Terminal that was paved for buses after rail service ended. The eastern approach to the bridge included a causeway landing for the "incline" section, and the construction of three feeder highways, interlinked by an extensive interchange, which in later years became known as "The MacArthur Maze". A massive landfill was emplaced, extending along the north edge of the existing Key System rail mole to the existing bayshore, and continuing northward along the shore to the foot of Ashby Avenue in Berkeley. The fill was continued northward to the foot of University Avenue as a causeway which enclosed an artificial lagoon, subsequently developed by the WPA as "Aquatic Park". The three feeder highways were U.S. Highway 40 (Eastshore Highway) which led north through Berkeley, U.S. Highway 50 (38th Street, later MacArthur Blvd.) which led through Oakland, and State Route 17 which ran parallel to U.S. 50, along the Oakland Estuary and through the industrial and port sections of the city. The current approaches were constructed in the 1960s, as the original ones were not up to interstate highway standards and were designed mainly for local use. Yerba Buena Tunnel Reconstruction As originally completed, the upper deck was reserved for automobile traffic, and carried six lanes, each wide. The lower deck was further divided into three lanes of traffic for heavy trucks (each wide), and the two railroad tracks on the south side ( wide for both tracks). The initial design in 1932 called for the two rail tracks to flank a central truck deck on the lower level. After Key System trains stopped running over the bridge in 1958, bids were opened on October 11, 1960, to rebuild the tunnel. The rebuild consisted of multiple stages of work: Remove Key System rails, lower rail deck and repave Lower the truck traffic half of the lower deck by and repave Remove center columns supporting upper deck Lower the upper deck by by placing precast concrete units After the reconstruction, the tunnel would handle only road traffic. The upper deck was lowered to accommodate heavy truck traffic, as each deck would now carry five lanes of unidirectional traffic. The upper deck was dedicated to westbound traffic, and the lower deck was dedicated to eastbound traffic. The impact to traffic during reconstruction of the tunnel was minimized mainly by working outside normal commuting hours and through the use of a portable steel bridge long and wide, designed to fit between the curbs of the existing upper deck. The bridge spanned the gap between the new upper deck and old upper deck, and the overall elevation change of caused drivers to slow to , resulting in traffic jams. 
The first accident caused by "The Hump", the nickname the bridge acquired from the prominent warning signs advertising its presence, occurred just twelve minutes after it was first deployed on November 25, 1961. The new precast upper deck units were each long, and were installed in two halves. One side of each half rested on a temporary falsework erected in the middle of the lower deck, and the other side rested on the shoulder of the tunnel wall previously used to support the old upper deck. After the two halves were fastened together, a steel form was used to close the gap between halves, and concrete was poured in the gap. The upper deck rests on shoulders built into the tunnel wall, padded by Masonite. The planned completion date for tunnel reconstruction was July 1962, but "The Hump" was not dismantled until October 27, 1962. The San Francisco Chronicle marked the occasion by quipping "[The Hump] produced more jams than Grandma ever made." After reconstruction, both the upper and lower decks featured of vertical clearance. Upper deck clearance is restricted by the tunnel portal, and lower deck clearance is restricted by the upper deck. Rail removal Automobile traffic increased dramatically in the decades following the bridge's opening. This, among other things, resulted in the Key System's decline, and by the 1960s the rails on the bridge had become obsolete and a detriment to traffic, as no trains ran on them. Work began on removing the tracks in October 1963. After the work was completed, the Bay Bridge was reconfigured with five lanes of westbound traffic on the upper deck and five lanes of eastbound traffic on the lower deck. The Key System had originally planned to end train operations in 1948, when it replaced its streetcars with buses, but state authorities did not approve. The ban on trucks was lifted, and they were allowed on the top deck for the first time. Because of this, the upper deck was retrofitted to handle the increased loads, with understringers added and prestressing applied to the bottom of the floor beams. This retrofit is still in place today and is visible to eastbound traffic on the western span. There have since been attempts to restore rail service to the bridge, but none has been successful. A study released in 2000 estimated the cost of restoring rail service across the bridge at up to $8 billion. 1968 aircraft accident On February 11, 1968, a U.S. Navy training aircraft crashed into the cantilever span of the bridge, killing both reserve officers aboard. The T2V SeaStar, based at NAS Los Alamitos in southern California, was on a routine weekend mission and had just taken off in the fog from nearby NAS Alameda. The plane struck the bridge about above the upper deck roadway and then sank in the bay north of the bridge. There were no injuries among the motorists on the bridge. One of the truss sections of the bridge was replaced due to damage from the impact. 1986 Cable lighting The series of lights adorning the suspension cables of the western spans was added in 1986 as part of the bridge's 50th-anniversary celebration. James B. Rolph Jr. designation The bridge was unofficially "dedicated" to James B. "Sunny Jim" Rolph, Jr., but this was not widely recognized until the bridge's 50th-anniversary celebrations in 1986. The official name of the bridge for all functional purposes has always been the "San Francisco–Oakland Bay Bridge", and, by most local people, it is referred to simply as "the Bay Bridge". 
Rolph, a mayor of San Francisco from 1912 to 1931, was the Governor of California at the time construction of the bridge began. He died in office on June 2, 1934, two years before the bridge opened, leaving the bridge to be named for him out of respect. 1989 Loma Prieta Earthquake and emergency repairs On the evening of October 17, 1989, during the Loma Prieta earthquake, which measured 6.9 on the moment magnitude scale, a section of the upper deck of the eastern truss portion of the bridge at Pier E9 collapsed onto the deck below, indirectly causing one death. The bridge was closed for just over a month while construction crews repaired the section, and it reopened to traffic on November 18, 1989. 2001 terrorism threat On November 2, 2001, in the aftermath of the September 11 attacks, Governor Gray Davis announced a threat of a rush hour attack against a West Coast suspension bridge (a group which includes the Bay Bridge and the Golden Gate Bridge) some time between November 2 and 7, resulting in an increase in openly armed law enforcement patrols. A small fraction of drivers shifted to ferries and BART. It was later revealed that crews had secretly been working under armed guard for several weeks to harden the suspension cable attachment points, which were vulnerable to cutting with common weapons and tools. An anchor room was filled with concrete, doors were welded shut, and a razor wire fence was added. A blast wall was also added to defend against a potential truck bomb. In the end, no attack occurred. Emperor Norton naming campaign In November 2004, after a campaign by San Francisco Chronicle cartoonist Phil Frank, then-San Francisco District 3 Supervisor Aaron Peskin introduced a resolution to the San Francisco Board of Supervisors calling for the entire two-bridge system, from San Francisco to Oakland, to be named for Emperor Norton. On December 14, 2004, the Board approved a modified version of this resolution, calling for only "new additions" (i.e., the new eastern crossing) to be named "The Emperor Norton Bridge". Neither the City of Oakland nor Alameda County passed any similar resolution, so the effort went no further. Western span retrofit The western section has undergone extensive seismic retrofitting. During the retrofit, much of the structural steel supporting the bridge deck was replaced while the bridge remained open to traffic. Engineers accomplished this by using methods similar to those employed on the Chicago Skyway. The entire bridge was fabricated using hot steel rivets, which are impossible to heat treat and so remain relatively soft. Analysis showed that these could fail by shearing under extreme stress. Therefore, at most locations, rivets were replaced with high-strength bolts. Most bolts had domed heads placed facing traffic so they looked similar to the rivets that were removed. This work had to be performed with great care, as the steel of the structure had for many years been painted with lead paint, which had to be carefully removed and contained by workers with extensive protective gear so that they would not suffocate. Most of the beams were originally constructed of two plate I-beams joined with lattices of flat strip or angle stock, depending upon structural requirements. These have all been reconstructed by replacing the riveted lattice elements with bolted steel plate and so converting the lattice beams into box beams. 
This replacement included adding face plates to the large diagonal beams joining the faces of the main towers, which now have an improved appearance when viewed from certain angles. Diagonal box beams have been added to each bay of the upper and lower decks of the western spans. These add stiffness to reduce side-to-side motion during an earthquake and reduce the probability of damage to the decking surfaces. Analysis showed that some massive concrete supports could burst and crumble under likely stresses. In particular the western supports were extensively modified. First, the location of existing reinforcing bar is determined using magnetic techniques. In areas between bars holes are drilled. Into these holes is inserted and glued an L-shaped bar that protrudes . This bar is retained in the hole with a high-strength epoxy adhesive. The entire surface of the structure is thus covered with closely spaced protrusions. A network of horizontal and vertical reinforcing bars is then attached to these protrusions. Mold surface plates are then positioned to retain high-strength concrete, which is then pumped into the void. After removal of the formwork the surface appears similar to the original concrete. This technique has been applied elsewhere throughout California to improve freeway overpass abutments and some overpass central supports that have unconventional shapes. (Other techniques such as jacket and grout are applied to simple vertical posts; see the seismic retrofit article.) The western approaches have also been retrofitted in part, but mostly these have been replaced with new construction of reinforced concrete. 2007 Cosco Busan oil spill In 2007, a container ship then named the Cosco Busan, and subsequently renamed the Hanjin Venezia, collided with the Delta Tower fender, resulting in the Cosco Busan oil spill. October 2009 eyebar crack, repair failure and bridge closure During the 2009 Labor Day weekend closure for a portion of the replacement, a major crack was found in an eyebar, significant enough to warrant bridge closure. Working in parallel with the retrofit, California Department of Transportation (Caltrans), and its contractors and subcontractors, were able to design, engineer, fabricate, and install the pieces required to repair the bridge, delaying its planned opening by only hours. The repair was not inspected by the Federal Highway Administration, which relied on state inspection reports to ensure safety guidelines were met. On October 27, 2009, during the evening commute, the steel crossbeam and two steel tie rods repaired over Labor Day weekend snapped off the Bay Bridge's eastern section and fell to the upper deck. This may have been due to metal-on-metal vibration from bridge traffic and wind gusts of up to , which resulted in one of the rods breaking off and caused one of the metal sections to come crashing down. Three vehicles were either struck by or hit the fallen debris, though there were no injuries. On November 1, Caltrans announced that the bridge would probably stay closed at least through the morning commute of Monday, November 2 after repairs performed during the weekend failed a stress test on Sunday. BART and the Golden Gate Ferry systems added supplemental service to accommodate the increased passenger load during the bridge closure. The bridge reopened to traffic on November 2, 2009. The pieces that broke off on October 27 were a saddle, crossbars, and two tension rods. 2010s–present Willie L. 
Brown, Jr., Bridge naming resolution In June 2013, nine state assemblymen, joined by two state senators, introduced Assembly Concurrent Resolution No. 65 (ACR 65) to name the western crossing of the bridge for former California Assembly Speaker and former San Francisco Mayor Willie Brown. Six weeks later, a grassroots petition was launched seeking to name the entire two-bridge system for Emperor Norton. In September 2013, the petition's author launched a nonprofit, The Emperor's Bridge Campaign — now known as The Emperor Norton Trust — that advocates for adding "Emperor Norton Bridge" as an honorary name (rather than "renaming" the bridge) and that undertakes other efforts to advance Norton's legacy. The state legislative resolution naming the western section of the Bay Bridge the "Willie L. Brown, Jr., Bridge" passed the Assembly in August 2013 and the Senate in September 2013. A ceremony was held on February 11, 2014, marking the resolution and the installation of signs on either end of the section. Eastern span replacement For various reasons, the eastern section would have been too expensive to retrofit compared to replacing it, so the decision was made to replace it. The replacement section underwent a series of design changes, both progressive and regressive, with increasing cost estimates and contractor bids. The final design included a single-towered self-anchored suspension span starting at Yerba Buena island, leading to a long inclined viaduct to the Oakland touchdown. Separated and protected bicycle lanes are a visually prominent feature on the south side of the new eastern section. The bikeway and pedestrian path across the eastern span opened in October 2016 and carries recreational and commuter cyclists between Oakland and Yerba Buena Island. The original eastern cantilever span had firefighting dry standpipes installed. No firefighting dry or wet standpipes were designed for the eastern section replacement, although, the firefighting wet standpipes do exist on the original western section visible on both the north-side upper and lower decks. The original eastern section closed permanently to traffic on August 28, 2013, and the replacement span opened for traffic five days later. The original eastern section was dismantled between January 2014 and November 2017. 2013 public "light sculpture" installation On March 5, 2013, a public art installation called "The Bay Lights" was activated on the western span's vertical cables. The installation was designed by artist Leo Villareal and consists of 25,000 LED lights originally scheduled to be on nightly display until March 2015. However, on December 17, 2014, the non-profit Illuminate The Arts announced that it had raised the $4 million needed to make the lights permanent; the display was temporarily turned off starting in March 2015 in order to perform maintenance and install sturdier bulbs and then re-lit on January 30, 2016. In order to reduce driver distractions, the privately funded display is not visible to users of the bridge, only to distant observers. This lighting effort is intended to form part of a larger project to "light the bay". Villareal used various algorithms to generate patterns such as rainfall, reflections on water, bird flight, expanding rings, and others. Villareal's patterns and transitions will be sequenced and their duration determined by computerized random number generator to make each viewing experience unique. 
Owing to the efficiency of the LED system employed, the estimated operating cost is only US$15.00 per night. The lights were switched off permanently at 8 pm on March 5, 2023 – the 10th anniversary of the artwork. This was done because of their deteriorating condition and the increasing cost of maintaining them properly. There is a plan to raise additional funds and install a new set of lights later in the year. Alexander Zuckermann Bike Path The pedestrian and bicycle route on the eastern section opened on September 3, 2013, and is named after Alexander Zuckermann, founding chair of the East Bay Bicycle Coalition. This forms a transbay route for the San Francisco Bay Trail. Until October 2016, the path did not connect to Yerba Buena and Treasure Island sidewalks, due to the need to demolish more of the old eastern section before final construction. On May 2, 2017, public access was extended to seven days a week, 6 a.m. to 9 p.m., with occasional closure days for continued demolition of the old bridge foundations. This work was completed on November 11, 2017. Yerba Buena Tunnel closure and repair On January 30, 2016, a chunk of concrete the size of an automobile tire fell from the tunnel wall into the slow lane of eastbound traffic on the lower deck of the Yerba Buena Tunnel, causing a minor accident. The concrete fell from where the upper deck is connected to the tunnel wall. Based on an examination of photographs, a professor from Georgia Tech postulated that water infiltration into the concrete wall had caused the reinforcing steel to corrode and expand, forcing a chunk of the tunnel wall out. A subsequent California Department of Transportation (Caltrans) investigation identified 12 spots on both sides of the tunnel wall in the lower deck space that showed signs of corrosion-induced damage, but no immediate risk of further spalling. The apparent cause was rainwater leaking from upper deck drains. Caltrans engineers speculated that the Masonite pads had swelled due to rainwater infiltration, cracking the tunnel walls and allowing moisture to reach the reinforcing steel. Repairs to the degraded concrete started in February 2017. Drains and catch basins were replaced to reduce the likelihood of clogging, and fiberglass-reinforced mortar was used to patch removed concrete. The repairs, which required some daytime lane closures, were expected to last until June 2017. 2020 bus lane proposal In January 2020, the AC Transit and BART boards of directors supported the establishment of dedicated bus lanes on the bridge. In February 2020, Rob Bonta introduced state legislation to begin planning bus lanes on the bridge. Opening of the Judge John Sutter Regional Shoreline On October 21, 2020, the Judge John Sutter Regional Shoreline park opened to the public. The park is located at the foot of the bridge, and its opening has made the bike and pedestrian path easier to reach thanks to improved parking and pedestrian access. 2016–2023 exit reconstructions In the 1960s directional reconfiguration, there were three off-ramps added to Yerba Buena Island and Treasure Island: a single left-hand side exit in the western direction at the east end of the tunnel, a left-hand side exit in the eastern direction at the west end of the tunnel (originally signed as just "Treasure Island"), and a right-hand side exit in the eastern direction at the east end of the tunnel (originally signed as just "Yerba Buena Island").
The eastbound left exit in particular presented an unusual hazard – drivers had to slow within the normal traffic flow and move into a very short off-ramp that ended in a short-radius left turn; accordingly, a 15 MPH advisory was posted there. The turn had been further narrowed from its original design by the installation of crash pads on the island side. The eastbound and westbound on-ramps were then on the usual right-hand side, but they did not have dedicated merge lanes, forcing drivers to await gaps in traffic and then accelerate from a stop sign to traffic speeds in a short distance. In 2016, a new on-ramp and off-ramp at the east end of the tunnel were opened in the western direction on the right-hand side of the roadway, replacing the left-hand side off-ramp in that direction. Meanwhile, the eastbound right-hand side off-ramp and on-ramp at the east end of the tunnel were demolished during the reconstruction of the eastern span of the bridge. A new on-ramp on this side was built with a dedicated merge lane, but the off-ramp's replacement was not completed until early May 2023, well after the bridge's bike path from the Oakland side to the island was fully completed. The eastbound left-hand side off-ramp and westbound on-ramp at the west end of the tunnel were then scheduled to close as early as late May 2023 while the western span undergoes a seismic retrofit. Financing and tolls Current toll rates Tolls are only collected from westbound traffic at the toll plaza on the Oakland side of the bridge. Those just traveling between Yerba Buena Island and the main part of San Francisco are not tolled. All-electronic tolling has been in effect since 2020, and drivers may either pay using the FasTrak electronic toll collection device or using the license plate tolling program. It is not yet a true open-road tolling system: until the remaining unused toll booths are removed, drivers must slow substantially from freeway speeds while passing through. Effective , the toll rate for passenger cars is $8. During peak traffic hours on weekdays between 5:00 am and 10:00 am, and between 3:00 pm and 7:00 pm, carpool vehicles carrying three or more people, clean air vehicles, or motorcycles may pay a discounted toll of $4 if they have FasTrak and use the designated carpool lane. Drivers without FasTrak or a license plate account must open and pay via a "short term" account within 48 hours after crossing the bridge or they will be sent an invoice for the unpaid toll. No additional toll violation penalty will be assessed if the invoice is paid within 21 days. Historical toll rates When the Bay Bridge opened in 1936, the toll was 65 cents (), collected in each direction by men in booths fronting each lane of traffic. Within months, the toll was lowered to 50 cents in order to compete with the ferry system, and finally to 25 cents since this was shown sufficient to pay off the original revenue bonds on schedule (equivalent to $ and $ in respectively). In 1951 there were eighty collectors working various shifts. On Monday, September 1, 1969 (Labor Day), a change of policy resulted in the toll being collected thereafter only from westbound traffic, at twice the previous rate; eastbound vehicles were toll-exempt. Tolls were subsequently raised to finance improvements to the bridge approaches, required to connect with new freeways, and to subsidize public transit in order to reduce the traffic over the bridge.
The toll was increased by a quarter dollar to 75 cents in 1978 (), where it remained for a decade. Caltrans, the state highway transportation agency, maintains seven of the eight San Francisco Bay Area bridges. (The Golden Gate Bridge is owned and maintained by the Golden Gate Bridge, Highway and Transportation District.) The basic toll (for automobiles) on the seven state-owned bridges, including the San Francisco–Oakland Bay Bridge, was standardized to $1 by Regional Measure 1, approved by Bay Area voters in 1988 (). A $1 seismic retrofit surcharge was added in 1998 by the state legislature, increasing the toll to $2 (), originally for eight years, but since then extended to December 2037 (AB1171, October 2001). On March 2, 2004, voters approved Regional Measure 2 to fund various transportation improvement projects, raising the toll by another dollar to $3 (). An additional dollar was added to the toll starting January 1, 2007, to cover cost overruns on the eastern span replacement of the Bay Bridge, increasing the toll to $4 (). The Metropolitan Transportation Commission, a regional transportation agency, in its capacity as the Bay Area Toll Authority, administers RM1 and RM2 funds, a significant portion of which are allocated to public transit capital improvements and operating subsidies in the transportation corridors served by the bridges. Caltrans administers the "second dollar" seismic surcharge, and receives some of the MTC-administered funds to perform other maintenance work on the bridges. The Bay Area Toll Authority is made up of appointed officials put in place by various city and county governments, and is not subject to direct voter oversight. Due to further funding shortages for seismic retrofit projects, the Bay Area Toll Authority again raised tolls on all seven of the state-owned bridges (this excludes the Golden Gate Bridge) in July 2010. The toll rate for autos on other Bay Area bridges was increased to $5 (), but in the Bay Bridge a congestion pricing tolling scheme was implemented. This variable pricing system was not truly congestion priced because toll rates came from a preset schedule and are not based on actual congestion. A $6 toll () was charged from 5 a.m. to 10 a.m. and 3 p.m. to 7 p.m., Monday through Friday. During weekends cars paid the standard $5 toll like the other bridges. Carpools before the implementation were exempted but began to pay $2.50 (), and the carpool toll discount became available only to drivers with FasTrak electronic toll devices. The toll remained at the previous toll of $4 at all other times on weekdays (now ). The Bay Area Toll Authority reported that by October 2010 fewer users are driving during the peak hours and more vehicles are crossing the Bay Bridge before and after the 5–10 a.m. period in which the congestion toll goes into effect. Commute delays in the first six months dropped by an average of 15% compared with 2009. For vehicles with at least 3 axles, the toll rate was $5 per axle. In June 2018, Bay Area voters approved Regional Measure 3 to further raise the tolls on all seven of the state-owned bridges to fund $4.5 billion worth of transportation improvements in the area. Under the passed measure, the tolls on the Bay Bridge were raised by $1 on January 1, 2019, then again on January 1, 2022, and again on January 1, 2025. 
Thus under the congestion pricing scheme, the tolls for autos during the peak weekday rush hours were set to $7 in 2019, $8 in 2022, and $9 in 2025; for the non-rush periods, $5 in 2019, $6 in 2022, and $7 in 2025; and on weekends, $6 in 2019, $7 in 2022, and $8 in 2025. Congestion pricing was then suspended indefinitely in April 2020 due to the COVID-19 pandemic, leaving the weekend toll rates in effect for all days and times. In September 2019, the MTC approved a $4 million plan to eliminate toll takers and convert all seven of the state-owned bridges to all-electronic tolling, citing that 80 percent of drivers are now using Fastrak and the change would improve traffic flow. On March 20, 2020, accelerated by the COVID-19 pandemic, all-electronic tolling was placed in effect for all seven state-owned toll bridges. The MTC then installed new systems at all seven bridges to make them permanently cashless by the start of 2021. In April 2022, the Bay Area Toll Authority announced plans to remove all remaining unused toll booths and create an open-road tolling system which functions at highway speeds; until then, drivers must still slow substantially while passing through the toll plaza. The Bay Area Toll Authority then approved a plan in December 2024 to implement 50-cent annual toll increases on all seven state-owned bridges between 2026 and 2030 to help pay for bridge maintenance. The standard toll rate for autos will thus rise to $8.50 on January 1, 2026; $9 in 2027; $9.50 in 2028; $10 in 2029; and then to $10.50 in 2030. And becoming effective in 2027, a 25-cent surcharge will be added to any toll charged to a license plate account, and a 50-cent surcharge added to a toll violation invoice, due to the added cost of processing these payment methods.
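The toll structure described above is essentially a small lookup keyed on day of week, time of day, vehicle class, and payment method. The sketch below is only an illustration of how the published schedule composes; the function name and the restriction to the 2025 auto rates ($9 peak, $7 off-peak, $8 weekend, $4 peak carpool with FasTrak) are assumptions made for the example, not an official Bay Area Toll Authority calculator, and it ignores both the April 2020 suspension of congestion pricing and the surcharges adopted for later years.

```python
from datetime import datetime

# Illustrative 2025 weekday auto rates on the Bay Bridge, as described above:
# $9 peak, $7 off-peak, $8 weekend; $4 peak carpool (FasTrak + carpool lane assumed).
RATES_2025 = {"peak": 9.00, "offpeak": 7.00, "weekend": 8.00}
CARPOOL_PEAK_TOLL = 4.00

def bay_bridge_auto_toll(when: datetime, carpool: bool = False) -> float:
    """Rough westbound auto toll estimate (hypothetical helper, 2025 schedule only)."""
    weekday = when.weekday() < 5                     # Monday-Friday
    peak = weekday and (5 <= when.hour < 10 or 15 <= when.hour < 19)
    if carpool and peak:
        return CARPOOL_PEAK_TOLL
    if not weekday:
        return RATES_2025["weekend"]
    return RATES_2025["peak"] if peak else RATES_2025["offpeak"]

# Example: a Tuesday 8:30 am crossing falls inside the 5-10 am peak window.
print(bay_bridge_auto_toll(datetime(2025, 3, 4, 8, 30)))  # 9.0
```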
Technology
Bridges
null
165927
https://en.wikipedia.org/wiki/Flamingo
Flamingo
Flamingos or flamingoes () are a type of wading bird in the family Phoenicopteridae, which is the only extant family in the order Phoenicopteriformes. There are four flamingo species distributed throughout the Americas (including the Caribbean), and two species native to Afro-Eurasia. A group of flamingoes is called a "flamboyance". Etymology The name flamingo comes from Portuguese or Spanish , which in turn comes from Provençal – a combination of and a Germanic-like suffix -ing. The word may also have been influenced by the Spanish ethnonym or . The name of the genus, Phoenicopterus, is ; other genera names include Phoeniconaias, which means , and Phoenicoparrus, which means . Taxonomy and systematics The family Phoenicopteridae was introduced by the French zoologist Charles Lucien Bonaparte in 1831, with Phoenicopterus as the type genus. Traditionally, the long-legged Ciconiiformes, probably a paraphyletic assemblage, have been considered the flamingos' closest relatives and the family was included in the order. Usually, the ibises and spoonbills of the Threskiornithidae were considered their closest relatives within this order. Earlier genetic studies, such as those of Charles Sibley and colleagues, also supported this relationship. Relationships to the waterfowl were considered as well, especially as flamingos are parasitized by feather lice of the genus Anaticola, which are otherwise exclusively found on ducks and geese. The peculiar presbyornithids were used to argue for a close relationship between flamingos, waterfowl, and waders. A 2002 paper concluded they are waterfowl, but a 2014 comprehensive study of bird orders found that flamingos and grebes are not waterfowl, but rather are part of Columbea, along with doves, sandgrouse, and mesites. Relationship with grebes Recent molecular studies have suggested a relation with grebes, while morphological evidence also strongly supports a relationship between flamingos and grebes. They hold at least 11 morphological traits in common, which are not found in other birds. Many of these characteristics have been previously identified on flamingos, but not on grebes. The fossil palaelodids can be considered evolutionarily, and ecologically, intermediate between flamingos and grebes. For the grebe-flamingo clade, the taxon Mirandornithes ("miraculous birds" due to their extreme divergence and apomorphies) has been proposed. Alternatively, they could be placed in one order, with Phoenocopteriformes taking priority. Phylogeny The cladogram below showing the phylogenetic relationships between the six extant flamingo species is based on a study by Roberto Frias-Soler and collaborators that was published in 2022. Species Six extant flamingo species are recognized by most sources, and were formerly placed in one genus (have common characteristics) – Phoenicopterus. As a result of a 2014 publication, the family was reclassified into two genera. In 2020, the family had three recognized genera, according to HBW. Prehistoric species of flamingo: Elornis? Milne-Edwards, 1868 (Late Oligocene of France, Europe) Harrisonavis (Gervais, 1852) (Middle Oligocene–Middle Miocene of C. Europe) Leakeyornis (Harrison and Walker, 1976) (Early to Middle Miocene of Lake Victoria, Kenya) Phoeniconaias proeses (De Vis 1905) (Pliocene of Lake Kanunka, Australia) Phoeniconaias siamensis Cheneval et al. 
1991 (Early Miocene of Mae Long Reservoir, Thailand) Phoeniconotius Miller 1963 (Late Oligocene of South Australia) Phoenicopterus copei (Miller 1963) (Late Pleistocene of North America and Mexico) Phoenicopterus floridanus (Brodkorb 1953) (Early Pliocene of Florida) Phoenicopterus minutus Howard 1955 (Late Pleistocene of California, US) Phoenicopterus novaehollandiae Miller 1963 (Late Oligocene of South Australia) Phoenicopterus stocki (Miller 1944) (Middle Pliocene of Rincón, Mexico) Xenorhynchopsis De Vis 1905 (Pliocene to Pleistocene of Australia) Description Flamingos usually stand on one leg with the other tucked beneath the body. The reason for this behaviour is not fully understood. One theory is that standing on one leg allows the birds to conserve more body heat, given that they spend a significant amount of time wading in cold water. However, the behaviour also takes place in warm water and is also observed in birds that do not typically stand in water. An alternative theory is that standing on one leg reduces the energy expenditure for producing muscular effort to stand and balance on one leg. A study on cadavers showed that the one-legged pose could be held without any muscle activity, while living flamingos demonstrate substantially less body sway in a one-legged posture. While walking, a flamingo's legs may appear to bend backwards. This appearance is due to the middle joint on their legs being their ankle, not their knee. Flamingos also have webbed feet that aid with swimming and they may stamp their feet in the mud to stir up food from the bottom. Flamingos are capable flyers, and flamingos in captivity often require wing clipping to prevent escape. A pair of African flamingos which had not yet had their wings clipped escaped from the Wichita, Kansas, zoo in 2005. One was spotted in Texas 14 years later. It had been seen previously by birders in Texas, Wisconsin and Louisiana. Young flamingos hatch with grayish-red plumage, but adults range from light pink to bright red due to aqueous bacteria and beta-carotene obtained from their food supply. A well-fed, healthy flamingo is more vibrantly colored, thus a more desirable mate; a white or pale flamingo, however, is usually unhealthy or malnourished. Captive flamingos are a notable exception; even if adequately nourished, they may turn a pale pink if they are not fed carotene at levels comparable to the wild. The greater flamingo is the tallest of the six different species of flamingos, standing at with a weight up to , and the shortest flamingo species (the lesser) has a height of and weighs . Flamingos can have a wingspan as small as to as big as . Flamingos can open their bills by raising the upper jaw as well as by dropping the lower. Behavior and ecology Feeding Flamingos are omnivores who filter-feed on brine shrimp and blue-green algae as well as insect larvae, small insects, mollusks and crustaceans. Their bills are specially adapted to separate mud and silt from the food they eat, and are uniquely used upside-down. The filtering of food items is assisted by hairy structures called lamellae, which line the mandibles, and the large, rough-surfaced tongue. The pink or reddish color of flamingos comes from carotenoids in their diet of animal and plant plankton. American flamingos are a brighter red color because of the beta carotene availability in their food while the lesser flamingos are a paler pink due to ingesting a smaller amount of this pigment. These carotenoids are broken down into pigments by liver enzymes. 
The source of this varies by species, and affects the color saturation. Flamingos whose sole diet is blue-green algae are darker than those that get it second-hand by eating animals that have digested blue-green algae. Though flamingos prefer to drink freshwater, they are equipped with glands under their eyes that remove extra salt from their bodies. This organ allows them to drink saltwater as well. Vocalization sounds Flamingos are considered very noisy birds with their noises and vocalizations ranging from grunting or growling to nasal honking. Vocalizations play an important role in parent-chick recognition, ritualized displays, and keeping large flocks together. Variations in vocalizations exist in the voices of different species of flamingos. Life cycle Flamingos are very social birds; they live in colonies whose population can number in the thousands. These large colonies are believed to serve three purposes for the flamingos: avoiding predators, maximizing food intake, and using scarce suitable nesting sites more efficiently. Before breeding, flamingo colonies split into breeding groups of about 15 to 50 birds. Both males and females in these groups perform synchronized ritual displays. The members of a group stand together and display to each other by stretching their necks upwards, then uttering calls while head-flagging, and then flapping their wings. The displays do not seem directed towards an individual, but occur randomly. These displays stimulate "synchronous nesting" (see below) and help pair up those birds that do not already have mates. Flamingos form strong pair bonds, although in larger colonies, flamingos sometimes change mates, presumably because more mates are available to choose. Flamingo pairs establish and defend nesting territories. They locate a suitable spot on the mudflat to build a nest (the female usually selects the place). Copulation usually occurs during nest building, which is sometimes interrupted by another flamingo pair trying to commandeer the nesting site for their use. Flamingos aggressively defend their nesting sites. Both the male and the female contribute to building the nest, and to protecting the nest and egg. Same-sex pairs have been reported. After the chicks hatch, the only parental expense is feeding. Both the male and the female feed their chicks with a kind of crop milk, produced in glands lining the whole of the upper digestive tract (not just the crop). The hormone prolactin stimulates production. Crop milk contains both fat and protein, as with mammalian milk, but unlike mammalian milk, it contains no carbohydrates. (Pigeons and doves also produce crop milk, though just in the glands lining the crop, which contains less fat and more protein than flamingo crop milk.) For the first six days after the chicks hatch, the adults and chicks stay in the nesting sites. At around 7–12 days old, the chicks begin to move out of their nests and explore their surroundings. When they are two weeks old, the chicks congregate in groups, called "microcrèches", and their parents leave them alone. After a while, the microcrèches merge into "crèches" containing thousands of chicks. Chicks that do not stay in their crèches are vulnerable to predators. When young flamingos are around three to three and a half months old, their flight feathers will finish growing in, allowing them to fly. Status and conservation In captivity The first flamingo hatched in a European zoo was a Chilean flamingo at Zoo Basel in Switzerland in 1958. 
Since then, over 389 flamingos have grown up in Basel and been distributed to other zoos around the globe. Greater, an at least 83-year-old greater flamingo, believed to be the oldest in the world, died at the Adelaide Zoo in Australia in January 2014. Zoos have used mirrors to improve flamingo breeding behaviour. The mirrors are thought to give the flamingos the impression that they are in a larger flock than they actually are. Relationship with humans Ancient Roman cuisine While many different kinds of birds were valued items in Roman food, flamingos were among the most prized in Ancient Roman cuisine. An early reference to their consumption, and especially of their tongues, is found in Pliny the Elder, who states in the Natural History: Although a few recipes for flamingos are found in Apicius' extant works, none refer specifically to flamingo tongues. The three flamingo recipes in the (On the Subject of Cooking) involve the whole creature: 220: roasted with an egg sauce, a recipe for wood pigeons, squabs, fattened fowl; flamingo is an afterthought. 230: boiled; parrot may be substituted. 231: roasted with a must sauce. Suetonius mentions flamingo tongues in his Life of Vitellius: Martial, the poet, devoted an ironic epigram, alluding to flamingo tongues: There is also a mention of flamingo brains in a later, highly contentious source, detailing, in the life of Elagabalus, a food item not apparently to his liking as much as camels' heels and parrot tongues, in the belief that the latter was a prophylactic: Other In the Americas, the Moche people of ancient Peru worshipped nature. They placed emphasis on animals, and often depicted flamingos in their art. The Ancient Egyptian god Set is depicted with a flamingo head in the Book of the Faiyum. Flamingos are the national bird of the Bahamas. Andean miners have killed flamingos for their fat, believing that it would cure tuberculosis. In the United States, pink plastic flamingos are sometimes used as lawn ornaments. They were first designed by Don Featherstone in 1957. Their popularity was influenced in part by the prevalence of flamingo souvenirs in Florida along with the Flamingo grand hotel in Miami Beach, prompting the correlation of flamingos with style and wealth.
Biology and health sciences
Phoenicopteriformes
null
166008
https://en.wikipedia.org/wiki/Cramer%27s%20rule
Cramer's rule
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729. Cramer's rule, implemented in a naive way, is computationally inefficient for systems of more than two or three equations. In the case of equations in unknowns, it requires computation of determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, Cramer's rule can be implemented with the same complexity as Gaussian elimination, (consistently requires twice as many arithmetic operations and has the same numerical stability when the same permutation matrices are applied). General case Consider a system of linear equations for unknowns, represented in matrix multiplication form as follows: where the matrix has a nonzero determinant, and the vector is the column vector of the variables. Then the theorem states that in this case the system has a unique solution, whose individual values for the unknowns are given by: where is the matrix formed by replacing the -th column of by the column vector . A more general version of Cramer's rule considers the matrix equation where the matrix has a nonzero determinant, and , are matrices. Given sequences and , let be the submatrix of with rows in and columns in . Let be the matrix formed by replacing the column of by the column of , for all . Then In the case , this reduces to the normal Cramer's rule. The rule holds for systems of equations with coefficients and unknowns in any field, not just in the real numbers. Proof The proof for Cramer's rule uses the following properties of the determinants: linearity with respect to any given column and the fact that the determinant is zero whenever two columns are equal, which is implied by the property that the sign of the determinant flips if you switch two columns. Fix the index of a column, and consider that the entries of the other columns have fixed values. This makes the determinant a function of the entries of the th column. Linearity with respect of this column means that this function has the form where the are coefficients that depend on the entries of that are not in column . So, one has (Laplace expansion provides a formula for computing the but their expression is not important here.) If the function is applied to any other column of , then the result is the determinant of the matrix obtained from by replacing column by a copy of column , so the resulting determinant is 0 (the case of two equal columns). 
Now consider a system of linear equations in unknowns , whose coefficient matrix is , with det(A) assumed to be nonzero: If one combines these equations by taking times the first equation, plus times the second, and so forth until times the last, then for every the resulting coefficient of becomes So, all coefficients become zero, except the coefficient of that becomes Similarly, the constant coefficient becomes and the resulting equation is thus which gives the value of as As, by construction, the numerator is the determinant of the matrix obtained from by replacing column by , we get the expression of Cramer's rule as a necessary condition for a solution. It remains to prove that these values for the unknowns form a solution. Let be the matrix that has the coefficients of as th row, for (this is the adjugate matrix for ). Expressed in matrix terms, we have thus to prove that is a solution; that is, that For that, it suffices to prove that where is the identity matrix. The above properties of the functions show that one has , and therefore, This completes the proof, since a left inverse of a square matrix is also a right-inverse (see Invertible matrix theorem). For other proofs, see below. Finding inverse matrix Let be an matrix with entries in a field . Then where denotes the adjugate matrix, is the determinant, and is the identity matrix. If is nonzero, then the inverse matrix of is This gives a formula for the inverse of , provided . In fact, this formula works whenever is a commutative ring, provided that is a unit. If is not a unit, then is not invertible over the ring (it may be invertible over a larger ring in which some non-unit elements of may be invertible). Applications Explicit formulas for small systems Consider the linear system which in matrix format is Assume is nonzero. Then, with the help of determinants, and can be found with Cramer's rule as The rules for matrices are similar. Given which in matrix format is Then the values of and can be found as follows: Differential geometry Ricci calculus Cramer's rule is used in the Ricci calculus in various calculations involving the Christoffel symbols of the first and second kind. In particular, Cramer's rule can be used to prove that the divergence operator on a Riemannian manifold is invariant with respect to change of coordinates. We give a direct proof, suppressing the role of the Christoffel symbols. Let be a Riemannian manifold equipped with local coordinates . Let be a vector field. We use the summation convention throughout. Theorem. The divergence of , is invariant under change of coordinates. Let be a coordinate transformation with non-singular Jacobian. Then the classical transformation laws imply that where . Similarly, if , then . Writing this transformation law in terms of matrices yields , which implies . Now one computes In order to show that this equals , it is necessary and sufficient to show that which is equivalent to Carrying out the differentiation on the left-hand side, we get: where denotes the matrix obtained from by deleting the th row and th column. But Cramer's Rule says that is the th entry of the matrix . Thus completing the proof. Computing derivatives implicitly Consider the two equations and . When u and v are independent variables, we can define and An equation for can be found by applying Cramer's rule. First, calculate the first derivatives of F, G, x, and y: Substituting dx, dy into dF and dG, we have: Since u, v are both independent, the coefficients of du, dv must be zero. 
So we can write out equations for the coefficients: Now, by Cramer's rule, we see that: This is now a formula in terms of two Jacobians: Similar formulas can be derived for Integer programming Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integer, has integer basic solutions. This makes the integer program substantially easier to solve. Ordinary differential equations Cramer's rule is used to derive the general solution to an inhomogeneous linear differential equation by the method of variation of parameters. Geometric interpretation Cramer's rule has a geometric interpretation that can be considered also a proof or simply giving insight about its geometric nature. These geometric arguments work in general and not only in the case of two equations with two unknowns presented here. Given the system of equations it can be considered as an equation between vectors The area of the parallelogram determined by and is given by the determinant of the system of equations: In general, when there are more variables and equations, the determinant of vectors of length will give the volume of the parallelepiped determined by those vectors in the -th dimensional Euclidean space. Therefore, the area of the parallelogram determined by and has to be times the area of the first one since one of the sides has been multiplied by this factor. Now, this last parallelogram, by Cavalieri's principle, has the same area as the parallelogram determined by and Equating the areas of this last and the second parallelogram gives the equation from which Cramer's rule follows. Other proofs A proof by abstract linear algebra This is a restatement of the proof above in abstract language. Consider the map where is the matrix with substituted in the th column, as in Cramer's rule. Because of linearity of determinant in every column, this map is linear. Observe that it sends the th column of to the th basis vector (with 1 in the th place), because determinant of a matrix with a repeated column is 0. So we have a linear map which agrees with the inverse of on the column space; hence it agrees with on the span of the column space. Since is invertible, the column vectors span all of , so our map really is the inverse of . Cramer's rule follows. A short proof A short proof of Cramer's rule can be given by noticing that is the determinant of the matrix On the other hand, assuming that our original matrix is invertible, this matrix has columns , where is the n-th column of the matrix . Recall that the matrix has columns , and therefore . Hence, by using that the determinant of the product of two matrices is the product of the determinants, we have The proof for other is similar. Using Geometric Algebra Inconsistent and indeterminate cases A system of equations is said to be inconsistent when there are no solutions and it is called indeterminate when there is more than one solution. For linear equations, an indeterminate system will have infinitely many solutions (if it is over an infinite field), since the solutions can be expressed in terms of one or more parameters that can take arbitrary values. Cramer's rule applies to the case where the coefficient determinant is nonzero. In the 2×2 case, if the coefficient determinant is zero, then the system is incompatible if the numerator determinants are nonzero, or indeterminate if the numerator determinants are zero. 
For 3×3 or higher systems, the only thing one can say when the coefficient determinant equals zero is that if any of the numerator determinants are nonzero, then the system must be inconsistent. However, having all determinants zero does not imply that the system is indeterminate. A simple example where all determinants vanish (equal zero) but the system is still incompatible is the 3×3 system x+y+z=1, x+y+z=2, x+y+z=3.
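As a concrete check on the rule, the following sketch solves a small system by literally replacing one column of the coefficient matrix at a time, which is what Cramer's rule prescribes, and compares the result with a standard solver. It is a minimal illustration (the function name and the 2×2 example are chosen here for demonstration), not an efficient method; as noted above, Gaussian elimination is preferable beyond a few unknowns.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Coefficient determinant is zero; Cramer's rule does not apply.")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b          # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

# 2x2 example: x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 4.5
A = [[1, 2], [3, 4]]
b = [5, 6]
print(cramer_solve(A, b))       # [-4.   4.5]
print(np.linalg.solve(A, b))    # same result via Gaussian elimination
```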
Mathematics
Linear algebra
null
166010
https://en.wikipedia.org/wiki/Vorticity
Vorticity
In continuum mechanics, vorticity is a pseudovector (or axial vector) field that describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. It is an important quantity in the dynamical theory of fluids and provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion of vortex rings. Mathematically, the vorticity is the curl of the flow velocity : where is the nabla operator. Conceptually, could be determined by marking parts of a continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. By its own definition, the vorticity vector is a solenoidal field since In a two-dimensional flow, is always perpendicular to the plane of the flow, and can therefore be considered a scalar field. Mathematical definition and properties Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by , defined as the curl of the velocity field describing the continuum motion. In Cartesian coordinates: In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it. In a two-dimensional flow where the velocity is independent of the -coordinate and has no -component, the vorticity vector is always parallel to the -axis, and therefore can be expressed as a scalar field multiplied by a constant unit vector : The vorticity is also related to the flow's circulation (line integral of the velocity) along a closed path by the (classical) Stokes' theorem. Namely, for any infinitesimal surface element with normal direction and area , the circulation along the perimeter of is the dot product where is the vorticity at the center of . Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensor (the so-called vorticity or rotation tensor), which is said to be the dual of . The relation between the two quantities, in index notation, are given by where is the three-dimensional Levi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the tensor , i.e., Examples In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, in the central core of a Rankine vortex. The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. 
A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that its mean angular velocity about its center of mass is zero. [Example flows (figure): a rigid-body-like vortex and a parallel flow with shear, both with vorticity ≠ 0, and an irrotational vortex with vorticity = 0; each is shown with the absolute velocities and the magnified relative velocities around a highlighted point.] Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow. In the figure below, the left subfigure demonstrates no vorticity, and the right subfigure demonstrates existence of vorticity. Evolution The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations. In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensional potential flow (i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane. Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation. Vortex lines and vortex tubes A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation where is the vorticity vector in Cartesian coordinates. A vortex tube is the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also called vortex flux) is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence. In a three-dimensional flow, vorticity (as measured by the volume integral of the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known as vortex stretching.
This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents. Vorticity meters Rotating-vane vorticity meter A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity and demonstrated a motion-picture photography of the float's motion on the water surface in a model of a river bend. Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity" and "Fundamental Principles of Flow" by Iowa Institute of Hydraulic Research). Specific sciences Aeronautics In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density. Atmospheric sciences The relative vorticity is the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally scalar rotation quantity perpendicular to the ground. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere. The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter. The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant (potential) temperature (or entropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity is conserved in an adiabatic flow. As adiabatic flow predominates in the atmosphere, the potential vorticity is useful as an approximate tracer of air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy. The barotropic vorticity equation is the simplest way for forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation. In modern numerical weather forecasting models and general circulation models (GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation. Related to the concept of vorticity is the helicity , defined as where the integral is over a given volume . In atmospheric science, helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
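As a small numerical illustration of the definition of vorticity as the curl of the velocity field, the sketch below evaluates the scalar vorticity of a two-dimensional rigid-body rotation on a grid using finite differences. The grid, the angular velocity, and the use of NumPy's gradient routine are assumptions made for the example; the result simply recovers the statement above that rigid-body rotation has vorticity equal to twice the angular velocity.

```python
import numpy as np

# Grid over [-1, 1] x [-1, 1]; with indexing="xy", axis 0 is y and axis 1 is x.
n = 101
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="xy")

# Rigid-body rotation with angular velocity Omega: u = -Omega*y, v = Omega*x
Omega = 0.5
u = -Omega * Y
v = Omega * X

# Scalar vorticity omega_z = dv/dx - du/dy, via second-order central differences.
_, dv_dx = np.gradient(v, y, x)   # np.gradient returns derivatives along axis 0 (y), then axis 1 (x)
du_dy, _ = np.gradient(u, y, x)
omega_z = dv_dx - du_dy

print(omega_z.mean())   # ~1.0, i.e. twice the angular velocity (2 * Omega)
```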
Physical sciences
Fluid mechanics
Physics
166017
https://en.wikipedia.org/wiki/Avocado
Avocado
The avocado, alligator pear or avocado pear (Persea americana) is an evergreen tree in the laurel family (Lauraceae). It is native to the Americas and was first domesticated in Mesoamerica more than 5,000 years ago. It was prized for its large and unusually oily fruit. The tree likely originated in the highlands bridging south-central Mexico and Guatemala. Avocado trees have a native growth range from Mexico to Costa Rica. Its fruit, sometimes also referred to as an alligator pear or avocado pear, is botanically a large berry containing a single large seed. Sequencing of its genome showed that the evolution of avocados was shaped by polyploidy events and that commercial varieties have a hybrid origin. Avocado trees are partly self-pollinating, and are often propagated through grafting to maintain consistent fruit output. Avocados are presently cultivated in the tropical and Mediterranean climates of many countries. Mexico is the world's leading producer of avocados as of 2020, supplying nearly 30% of the global harvest in that year. The fruit of domestic varieties have smooth, buttery, golden-green flesh when ripe. Depending on the cultivar, avocados have green, brown, purplish, or black skin, and may be pear-shaped, egg-shaped, or spherical. For commercial purposes the fruits are picked while unripe and ripened after harvesting. The nutrient density and extremely high fat content of avocado flesh are useful to a variety of cuisines and are often eaten to enrich vegetarian diets. In major production regions like Chile, Mexico and California the water demands of avocado farms place strain on local resources. Avocado production is also implicated in other externalities, including deforestation and human rights concerns associated with the partial control of their production in Mexico by organized crime. Global warming is expected to result in significant changes to the suitable growing zones for avocados, and place additional pressures on the locales in which they are produced due to heat waves and drought. Description Persea americana is a tree that grows to with a trunk diameter between . The leaves are long and alternately arranged. Flower Panicles of flowers with deciduous bracts arise from new growth or the axils of leaves. The tree flowers thousands of blossoms every year. Avocado blossoms sprout from racemes near the leaf axils; they are small and inconspicuous wide. They have no petals but instead two whorls of three pale-green or greenish-yellow downy perianth lobes, each blossom has 9 stamens with 2 basal orange nectar glands. Fruit The avocado fruit is a climacteric, single-seeded berry, due to the imperceptible endocarp covering the seed, rather than a drupe. The pear-shaped fruit is usually long, weighs between , and has a large central seed, long. Early wild avocados prior to domestication had much smaller seeds around in diameter, likely corresponding to smaller fruit size. The species produces various cultivars with larger, fleshier fruits with a thinner exocarp because of selective breeding by humans. Taxonomy and evolution The species was scientifically named by the British botanist Philip Miller in 1768. The genus Persea to which the avocado belongs is considered to have a North American origin, with Persea suggested to have diversified in Central America during the Pleistocene epoch. The modern avocado is thought to have speciated from other Persea during the Pleistocene, estimated at around either 1.3 million or 430,000 years ago. 
A number of authors, including Connie Barlow in her 2001 book The Ghosts of Evolution, have speculated that the avocado is an "evolutionary anachronism" with megafaunal dispersal syndrome (a concept originally proposed in the 1980s by Paul S. Martin and Daniel H. Janzen), arguing that the avocado likely coevolved with now-extinct megafauna that dispersed its large seed. Barlow proposed that the dispersers included the gomphothere (elephant relative) Cuvieronius, as well as ground sloths, toxodontids, and glyptodonts. The concept of evolutionary anachronisms/megafaunal dispersal syndrome has been criticised by some authors, who note that many large fruits are readily dispersed by non-megafaunal animals; living agoutis disperse avocado seeds, and spectacled bears have also been observed eating domestic avocados. History The earliest known written account of the avocado in Europe is that of Martín Fernández de Enciso (1528) in 1519 in his book, Suma De Geographia Que Trata De Todas Las Partidas Y Provincias Del Mundo, while describing the native settlement of Yaharo (present-day Dibulla, Colombia). The first detailed account that unequivocally describes the avocado was given by Gonzalo Fernández de Oviedo y Valdés in his work Sumario de la natural historia de las Indias in 1526, while holding administrative Spanish colonial duties in Santo Domingo and visiting Castilla de Oro. The first written record in English of the use of the word 'avocado' was by Hans Sloane, who coined the term, in a 1696 index of Jamaican plants. Etymology The word avocado comes from the Spanish , which derives from the Nahuatl (Mexican) word , which goes back to the proto-Aztecan . In Molina's Nahuatl dictionary "auacatl" is given also as the translation for compañón "testicle", and this has been taken up in popular culture where a frequent claim is that testicle was the word's original meaning. This is not the case, as the original meaning can be reconstructed as "avocado" – rather the word seems to have been used in Nahuatl as a euphemism for "testicle". The modern English name comes from a rendering of the Spanish as . The earliest known written use in English is attested from 1697 as avogato pear, later avocado pear (due to its shape), a term sometimes corrupted to alligator pear. Regional names In Central American and Caribbean Spanish-speaking countries and in Spain it is known by the Mexican Spanish name , while the South American Spanish-speaking countries Argentina, Chile, Perú and Uruguay use a Quechua-derived word, . In Portuguese, it is . The Nahuatl can be compounded with other words, as in , meaning avocado soup or sauce, from which the Spanish word derives. In the United Kingdom the term avocado pear, applied when avocados first became commonly available in the 1960s, is sometimes used. Originating as a diminutive in Australian English, a clipped form, , has since become a common colloquialism in South Africa and the United Kingdom. It is known as "butter fruit" in parts of India and Hong Kong. Cultivation History Domestication, leading to genetically distinct cultivars, possibly originated in the Tehuacan Valley in the state of Puebla, Mexico. There is evidence for three possible separate domestications of the avocado, resulting in the currently recognized Guatemalan (quilaoacatl), Mexican (aoacatl) and West Indian (tlacacolaocatl) landraces.
The Guatemalan and Mexican and landraces originated in the highlands of those countries, while the West Indian landrace is a lowland variety that ranges from Guatemala, Costa Rica, Colombia, Ecuador to Peru, achieving a wide range through human agency before the arrival of the Europeans. The three separate landraces were most likely to have already intermingled in pre-Columbian America and were described in the Florentine Codex. As a result of artificial selection, the fruit and correspondingly the seeds of cultivated avocados became considerably larger relative to their earlier wild forebears millennia before the Columbian exchange. The earliest residents of northern coastal Peru were living in temporary camps in an ancient wetland and eating avocados, along with chilies, mollusks, sharks, birds, and sea lions. The oldest discovery of an avocado pit comes from Coxcatlan Cave, dating from around 9,000 to 10,000 years ago. Other caves in the Tehuacan Valley from around the same time period also show early evidence for the presence and consumption of avocado. There is evidence for avocado use at Norte Chico civilization sites in Peru by at least 3,200 years ago and at Caballo Muerto in Peru from around 3,800 to 4,500 years ago. The avocado tree also has a long history of cultivation in Central and South America, likely beginning as early as 5,000 BC. A water jar shaped like an avocado, dating to AD 900, was discovered in the pre-Inca city of Chan Chan. The plant was introduced to Spain in 1601, Indonesia around 1750, Mauritius in 1780, Brazil in 1809, the United States mainland in 1825, South Africa and Australia in the late 19th century, and the Ottoman Empire in 1908. In the United States, the avocado was introduced to Florida and Hawaii in 1833 and in California in 1856. The name avocado has been used in English since at least 1764, with minor spelling variants such as avogato attested even earlier. The avocado was commonly referred to in California as ahuacate and in Florida as alligator pear until 1915, when the California Avocado Association popularized the term avocado. Requirements As a subtropical species, avocados need a climate without frost and with little wind. High winds reduce the humidity, dehydrate the flowers, and affect pollination. When even a mild frost occurs, premature fruit drop may occur; although the 'Hass' cultivar can tolerate temperatures down to −1 °C. Several cold-hardy varieties are planted in the region of Gainesville, Florida, which survive temperatures as low as with only minor leaf damage. The trees also need well-aerated soils, ideally more than 1 m deep. However, Guatemalan varieties such as "MacArthur", "Rincon", or "Nabal" can withstand temperatures down to . According to information published by the Water Footprint Network, it takes an average of approximately of applied fresh ground or surface water, not including rainfall or natural moisture in the soil, to grow one avocado (). However, the amount of water needed depends on where it is grown; for example, in the main avocado-growing region of Chile, about of applied water are needed to grow one avocado (). Increasing demand and production of avocados may cause water shortages in some avocado production areas, such as the Mexican state of Michoacán. Avocados may also cause environmental and socioeconomic impacts in major production areas, illegal deforestation, and water disputes. Water requirements for growing avocados are three times higher than for apples, and 18 times higher than for tomatoes. 
Harvest and postharvest Commercial orchards produce an average of seven tonnes per hectare each year, with some orchards achieving 20 tonnes per hectare. Biennial bearing can be a problem, with heavy crops in one year being followed by poor yields the next. Like the banana, the avocado is a climacteric fruit, which matures on the tree, but ripens off the tree. Avocados used in commerce are picked hard and green and kept in coolers at until they reach their final destination. Avocados must be mature to ripen properly. Avocados that fall off the tree ripen on the ground. Generally, the fruit is picked once it reaches maturity; Mexican growers pick 'Hass' avocados when they have more than 23% dry matter, and other producing countries have similar standards. Once picked, avocados ripen in one to two weeks (depending on the cultivar) at room temperature (faster if stored with other fruits such as apples or bananas, because of the influence of ethylene gas). Some supermarkets sell ripened avocados which have been treated with synthetic ethylene to hasten ripening. The use of an ethylene gas "ripening room", which is now an industry standard, was pioneered in the 1980s by farmer Gil Henry of Escondido, California, in response to footage from a hidden supermarket camera which showed shoppers repeatedly squeezing hard, unripe avocados, putting them "back in the bin", and moving on without making a purchase. In some cases, avocados can be left on the tree for several months, which is an advantage to commercial growers who seek the greatest return for their crop, but if the fruit remains unpicked for too long, it falls to the ground. Breeding The species is only partially able to self-pollinate because of dichogamy in its flowering. This limitation, added to the long juvenile period, makes the species difficult to breed. Most cultivars are propagated by grafting, having originated from random seedling plants or minor mutations derived from cultivars. Modern breeding programs tend to use isolation plots where the chances of cross-pollination are reduced. That is the case for programs at the University of California, Riverside, as well as the Volcani Centre and the Instituto de Investigaciones Agropecuarias in Chile. The avocado is unusual in that the timing of the male and female flower phases differs among cultivars. The two flowering types are A and B. A-cultivar flowers open as female on the morning of the first day and close in late morning or early afternoon. Then they open as male in the afternoon of the second day. B varieties open as female on the afternoon of the first day, close in late afternoon and reopen as male the following morning. A cultivars: 'Hass', 'Gwen', 'Lamb Hass', 'Pinkerton', 'Reed' B cultivars: 'Fuerte', 'Sharwil', 'Zutano', 'Bacon', 'Ettinger', 'Sir Prize', 'Walter Hole' Certain cultivars, such as the 'Hass', have a tendency to bear well only in alternate years. After a season with a low yield, due to factors such as cold (which the avocado does not tolerate well), the trees tend to produce abundantly the next season. In addition, due to environmental circumstances during some years, seedless avocados may appear on the trees. Known in the avocado industry as "cukes", they are usually discarded commercially due to their small size. Propagation and rootstocks Avocados can be propagated by seed, taking roughly four to six years to bear fruit, although in some cases seedlings can take 10 years to come into bearing. 
The offspring is unlikely to be identical to the parent cultivar in fruit quality. Prime quality varieties are therefore propagated by grafting to rootstocks that are propagated by seed (seedling rootstocks) or by layering (clonal rootstocks). After about a year of growing in a greenhouse, the young rootstocks are ready to be grafted. Terminal and lateral grafting is normally used. The scion cultivar grows for another 6–12 months before the tree is ready to be sold. Clonal rootstocks are selected for tolerance of specific soil and disease conditions, such as poor soil aeration or resistance to the soil-borne disease (root rot) caused by Phytophthora cinnamomi. Advances in cloning techniques that can produce up to 500 new plants from a single millimetre of tree cutting have the potential to increase the availability of rootstocks. Commercial avocado production is limited to a small fraction of the vast genetic diversity in the species. Conservation of this genetic diversity has relied largely on field collection, as avocado seeds often do not survive storage in seed banks. This is problematic, as field preservation of living cultivars is expensive, and habitat loss threatens wild cultivars. More recently, an alternate method of conservation has been developed based on cryopreservation of avocado somatic embryos with reliable methods for somatic embryogenesis and reconstitution into living trees. As a houseplant The avocado tree can be grown domestically and used as a decorative houseplant. The pit germinates in normal soil conditions or partially submerged in a small glass (or container) of water. In the latter method, the pit sprouts in four to six weeks, at which time it is planted in standard houseplant potting soil. The plant normally grows large enough to be prunable; it does not bear fruit unless it has ample sunlight. Home gardeners can graft a branch from a fruit-bearing plant to speed maturity, which typically takes four to six years to bear fruit. Pests and diseases Avocado trees are vulnerable to bacterial, viral, fungal, and nutritional diseases (excesses and deficiencies of key minerals). Disease can affect all parts of the plant, causing spotting, rotting, cankers, pitting, and discoloration. The pyriform scale insect (Protopulvinaria pyriformis) is known from Australia, South Africa, Israel, Italy, France, Spain, Cuba, Florida, and Peru. It is normally found on avocado, and in Peru it is said to be the worst insect pest of the fruit. Certain cultivars of avocado seem more susceptible to attack by the scale than others. Cultivation by location Cultivation in Mexico Mexico is by far the world's largest avocado growing country, producing several times more than the second largest producer. In 2013, the total area dedicated to avocado production was , and the harvest was 2.03 million tonnes in 2017. The states that produce the most are México, Morelos, Nayarit, Puebla, and Michoacan, accounting for 86% of the total. In Michoacán, the cultivation is complicated by the existence of drug cartels that extort protection fees from cultivators. They are reported to exact 2,000 Mexican pesos per hectare from avocado farmers and 1 to 3 pesos/kg of harvested fruit. It is such a problem that the phrase blood guacamole has been adopted to describe the social effects in Mexico of the vast worldwide demand for its fruits. 
Cultivation in California Avocados were introduced to California from Nicaragua in the early 1850s, when avocado trees imported from the Central American country were observed and reported growing near San Gabriel. The avocado has since become a successful cash crop. About – as of 2015, some 80% of United States avocado production – is located in Southern California. Avocado is the official fruit of the state of California. Fallbrook, California, claims, without official recognition, the title of "Avocado Capital of the World" (also claimed by the town of Uruapan in Mexico), and both it and Carpinteria, California, host annual avocado festivals. The California Avocado Commission and the California Avocado Society are the two major grower organizations and Calavo Growers is a major distributor. Cultivation in Peru 'Hass' avocado production in Peru encompasses thousands of hectares in central and western Peru. Peru has now become the largest supplier of avocados imported to the European Union and the second largest supplier to Asia and the United States. The country's location near the equator and along the Pacific Ocean creates consistently mild temperatures all year. 'Hass' avocados from Peru are seasonally available to consumers from May through September and are promoted under the auspices of the Peruvian Avocado Commission, headquartered in Washington, D.C. Cultivation in Chile Chile has produced avocados for over 100 years with production increasing dramatically in the early 1980s due to global demand. New York magazine reported in 2015 that "Large avocado growers are draining the country's groundwater and rivers faster than they can replenish themselves." 88% of total production and 99% of exported avocados from Chile are Hass avocados. Avocados are a staple fruit in Chile with 30% of production destined for the domestic market. No import tariffs are imposed on Chilean avocados by China, the United States, or the European Union due to free trade agreements. Cultivars A cultivars 'Choquette': A seedling from Miami, Florida. 'Choquette' bore large fruit of good eating quality in large quantities and had good disease resistance, and thus became a major cultivar. Today 'Choquette' is widely propagated in south Florida both for commercial growing and for home growing. 'Gwen': A seedling bred from 'Hass' x 'Thille' in 1982, 'Gwen' is higher yielding and more dwarfing than 'Hass' in California. The fruit has an oval shape, slightly smaller than 'Hass' (), with a rich, nutty flavor. The skin texture is more finely pebbled than 'Hass', and is dull green when ripe. It is frost-hardy down to . 'Hass': The 'Hass' is the most common cultivar of avocado. It produces fruit year-round and accounts for 80% of cultivated avocados in the world. All 'Hass' trees are descended from a single "mother tree" raised by a mail carrier named Rudolph Hass, of La Habra Heights, California. Hass patented the productive tree in 1935. The "mother tree", of uncertain subspecies, died of root rot and was cut down in September 2002. 'Lula': A seedling reportedly grown from a 'Taft' avocado planted in Miami on the property of George Cellon, it is named after Cellon's wife, Lula. It was likely a cross between Guatemalan and Mexican types. 'Lula' was recognized for its flavor and high oil content and propagated commercially in Florida. 'Maluma': A relatively new cultivar, it was discovered in South Africa in the early 1990s by Mr. A.G. (Dries) Joubert. It is a chance seedling of unknown parentage. 
'Pinkerton': First grown on the Pinkerton Ranch in Saticoy, California, in the early 1970s, 'Pinkerton' is a seedling of 'Hass' x 'Rincon'. The large fruit has a small seed, and its green skin deepens in color as it ripens. The thick flesh has a smooth, creamy texture, pale green color, good flavor, and high oil content. It shows some cold tolerance, to and bears consistently heavy crops. A hybrid Guatemalan type, it has excellent peeling characteristics. 'Reed': Developed from a chance seedling found in 1948 by James S. Reed in California, this cultivar has large, round, green fruit with a smooth texture and dark, thick, glossy skin. Smooth and delicate, the flesh has a slightly nutty flavor. The skin ripens green. A Guatemalan type, it is hardy to . Tree size is about . B cultivars 'Fuerte': Commercialized in the U.S. from budwood imported from Atlixco, Mexico in 1911, Fuerte was the dominant commercial variety in the U.S. for the first half of the 20th century. 'Sharwil': Developed by James Cockburn Wilson (died 1990) with Frank Victor Sharpe in Tamborine Mountain, Queensland, Australia, in the 1950s, a portmanteau of Sharpe and Wilson. Wilson also developed the Willard variety (Wilson and Hazzard), imported the Reed variety into Australia, and developed the Shepard variety. Sharpe was later awarded a CMG in 1972 for services to the avocado industry. The variety originated in Guatemala. Other cultivars Other avocado cultivars include 'Spinks'. Historically attested varieties (which may or may not survive among horticulturists) include the 'Challenge', 'Dickinson', 'Kist', 'Queen', 'Rey', 'Royal', 'Sharpless', and 'Taft'. Stoneless avocado A stoneless avocado, marketed as a "cocktail avocado", which does not contain a pit, is available on a limited basis. They are five to eight centimetres long; the whole fruit may be eaten, including the skin. It is produced from an unpollinated blossom in which the seed does not develop. Seedless avocados regularly appear on trees. Known in the avocado industry as "cukes", they are usually discarded commercially due to their small size. Production In 2020, world production of avocados was 8.1 million tonnes, led by Mexico with 30% (2.4 million tonnes) of the total (table). Other major producers were Colombia, Dominican Republic, Peru, and Indonesia, together producing 35% of the world total. Despite market effects of the 2020 COVID-19 pandemic, volume production of avocados in Mexico increased by 40% over 2019 levels. In 2018, the US Department of Agriculture estimated that in total were under cultivation for avocado production in Mexico, a 6% increase over the previous year, and that 2 million tonnes would be exported. The Mexican state of Michoacán is the world leader in avocado production, accounting for 80% of all Mexican output. Most Mexican growers produce the Hass variety due to its longer shelf life for shipping and high demand among consumers. Market Seventy-six percent of Mexico's avocado exports go to the United States, with the free trade agreement between the US, Canada and Mexico in July 2020 facilitating avocado shipments within the North American free trade zone. The Mexican domestic market was expanding during 2020. Mexican avocado exports are challenged by growth of production by Peru and the Dominican Republic to supply the US and European markets. During the COVID-19 pandemic, Mexican avocado farmers restricted harvesting as the overall demand and supply chain slowed due to labor and shipping restrictions. 
Later in 2020, demand in the United States and within Mexico increased at a time when American retail prices continued to rise. During 2020 in the United States, month-to-month volume sales of avocados were similar to those of tomatoes at about 250 million pounds (110 million kg) per month. A report issued in mid-2020 forecast that the worldwide market, which was US$13.7 billion in 2018, would recover after the end of the pandemic and rise to US$21.6 billion by 2026. Toxicity Allergies Some people have allergic reactions to avocado. There are two main forms of allergy: those with a tree-pollen allergy develop local symptoms in the mouth and throat shortly after eating avocado; the second, known as latex-fruit syndrome, is related to latex allergy and symptoms include generalised urticaria, abdominal pain, and vomiting and can sometimes be life-threatening. Toxicity to animals Avocado leaves, bark, skin, or pit are documented to be harmful to animals; cats, dogs, cattle, goats, rabbits, rats, guinea pigs, birds, fish, and horses can be severely harmed or even killed when they consume them. The avocado fruit is poisonous to some birds, and the American Society for the Prevention of Cruelty to Animals (ASPCA) lists it as toxic to horses. Avocado leaves contain a toxic fatty acid derivative, persin, which in sufficient quantity can cause colic in horses and without veterinary treatment, death. The symptoms include gastrointestinal irritation, vomiting, diarrhea, respiratory distress, congestion, fluid accumulation around the tissues of the heart, and even death. Birds also seem to be particularly sensitive to this toxic compound. The leaves of the Guatemalan variety of P. americana are toxic to goats, sheep, and horses. Uses Nutrition Raw avocado flesh is 73% water, 15% fat, 9% carbohydrates, and 2% protein (table). In a 100-gram reference amount, avocado supplies , and is a rich source (20% or more of the Daily Value, DV) of several B vitamins (such as 28% DV in pantothenic acid) and vitamin K (20% DV), with moderate contents (10–19% DV) of vitamin C, vitamin E, and potassium. Avocados also contain phytosterols and carotenoids, such as lutein and zeaxanthin. Fat composition Avocados have diverse fats. For a typical one: About 75% of an avocado's energy comes from fat, most of which (67% of total fat) is monounsaturated fat as oleic acid (table). Other predominant fats include palmitic acid and linoleic acid. The saturated fat content amounts to 14% of the total fat. Typical total fat composition is roughly: 1% ω-3, 14% ω-6, 71% ω-9 (65% oleic and 6% palmitoleic), and 14% saturated fat (palmitic acid). Although costly to produce, nutrient-rich avocado oil has a multitude of uses for salads or cooking and in cosmetics and soap products. Research In 2022, a prospective cohort study following 110,487 people for 30 years found that eating two servings of avocado per week reduced the risk of developing cardiovascular diseases by 16–22%. The study involved replacing half a daily serving of saturated fat sources, including margarine, butter, egg, yogurt, cheese, or processed meats, with an equivalent amount of avocado. Culinary The fruit of horticultural cultivars has a markedly higher fat content than most other fruit, mostly monounsaturated fat, and as such serves as an important staple in the diet of consumers who have limited access to other fatty foods (high-fat meats and fish, dairy products). 
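The macronutrient figures given under Nutrition above (per 100 g: roughly 15 g fat, 9 g carbohydrate, 2 g protein) can be checked against the statement that about 75% of an avocado's energy comes from fat, using the standard Atwater energy factors. This is a rough sketch that ignores fibre and rounding.

```python
# Rough check of how the quoted macronutrient profile leads to
# roughly three-quarters of the energy coming from fat.
ATWATER_KCAL_PER_G = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

per_100g = {"fat": 15.0, "carbohydrate": 9.0, "protein": 2.0}  # grams per 100 g flesh

energy = {k: grams * ATWATER_KCAL_PER_G[k] for k, grams in per_100g.items()}
total = sum(energy.values())

for nutrient, kcal in energy.items():
    print(f"{nutrient:12s}: {kcal:5.0f} kcal ({100 * kcal / total:.0f}% of energy)")
print(f"total        : {total:5.0f} kcal per 100 g (approximate)")
# Fat contributes ~135 of ~179 kcal, i.e. roughly 75% of the energy.
```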
Avocado oil has a high smoke point but is expensive compared to common salad and cooking oils, and is mostly used for salads or dips. A ripe avocado yields to gentle pressure when held in the palm of the hand and squeezed. The flesh is prone to enzymatic browning, quickly turning brown after exposure to air. To prevent this, lime or lemon juice can be added to avocados after peeling. The fruit is not sweet, but distinctly and subtly flavored, with smooth texture. It is used in both savory and sweet dishes, though in many countries not for both. The avocado is common in vegetarian cuisine as a substitute for meats in sandwiches and salads because of its high fat content. Generally, avocado is served raw, though some cultivars, including the common 'Hass', can be cooked for a short time without becoming bitter. The flesh of some avocados may be rendered inedible by heat. Prolonged cooking induces this chemical reaction in all cultivars. It is used as the base for the Mexican dip known as guacamole, as well as a spread on corn tortillas or toast, served with spices. Avocado is a primary ingredient in avocado soup. Avocado slices are frequently added to hamburgers and tortas, and avocado is a key ingredient in California rolls and other makizushi ("maki", or rolled sushi). International In Mexico and Central America, avocados are served mixed with white rice, in soups, salads, or on the side of chicken and meat. They are also commonly added to pozole. In Peru, they are consumed with tequeños as mayonnaise, served as a side dish with parrillas, used in salads and sandwiches, or as a whole dish when filled with tuna, shrimp, or chicken. In Chile, avocado is used as a puree-like sauce with chicken, hamburgers, and hot dogs; and in slices for celery or lettuce salads. The Chilean version of Caesar salad contains large slices of mature avocado. Avocados in savory dishes, often seen as exotic, are a relative novelty in Portuguese-speaking countries, such as Brazil, where the traditional preparation is mashed with sugar and lime, and eaten as a dessert or snack. This contrasts with Spanish-speaking countries such as Chile, Mexico, or Argentina, where the opposite is true and sweet preparations are rare; the exception is the Philippines, a former Spanish colony where avocados are traditionally used in sweet preparations and savory uses are seen as exotic. In the Philippines (where avocados were introduced from Mexico before the 1700s), Brazil, Indonesia, Vietnam, and southern India (especially the coastal Kerala, Tamil Nadu and Karnataka region), avocados are frequently used for milkshakes and occasionally added to ice cream and other desserts. In Brazil, the Philippines, Vietnam, and Indonesia, a dessert drink is made with sugar, milk or water, and pureed avocado. Chocolate syrup is sometimes added. In Morocco, a similar chilled avocado and milk drink is sweetened with confectioner's sugar and flavored with a touch of orange flower water. In Ethiopia, avocados are made into juice by mixing them with sugar and milk or water, usually served with Vimto and a slice of lemon. It is also common to serve layered multiple fruit juices in a glass (locally called Spris) made of avocados, mangoes, bananas, guavas, and papayas. Avocados are also used to make salads. In Kenya and Nigeria, the avocado is often eaten as a fruit alone or mixed with other fruits in a fruit salad, or as part of a vegetable salad. In Ghana, they are often eaten alone on sliced bread as a sandwich. 
In Sri Lanka, their well-ripened flesh, thoroughly mashed or pureed with milk and kitul treacle (a liquid jaggery made from the sap of the inflorescence of jaggery palms), is a common dessert. In Haiti, they are often consumed with cassava or regular bread for breakfast. In the United Kingdom, the avocado became available during the 1960s when introduced by Sainsbury's under the name 'avocado pear'. Much of the success of avocados in the UK is attributed to a long-running promotional campaign initiated by South African growers in 1995. In Australia and New Zealand, avocados are commonly served on sandwiches, sushi, toast, or with chicken. Leaves In addition to the fruit, the leaves of Mexican avocados (Persea americana var. drymifolia) are used in some cuisines as a spice, with a flavor somewhat reminiscent of anise. They are sold both dried and fresh, toasted before use, and either crumbled or used whole, commonly in bean dishes.
Biology and health sciences
Magnoliids
null
166189
https://en.wikipedia.org/wiki/Autonomic%20nervous%20system
Autonomic nervous system
The autonomic nervous system (ANS), sometimes called the visceral nervous system and formerly the vegetative nervous system, is a division of the nervous system that operates internal organs, smooth muscle and glands. The autonomic nervous system is a control system that acts largely unconsciously and regulates bodily functions, such as the heart rate, its force of contraction, digestion, respiratory rate, pupillary response, urination, and sexual arousal. The fight-or-flight response, also known as the acute stress response, is set into action by the autonomic nervous system. The autonomic nervous system is regulated by integrated reflexes through the brainstem to the spinal cord and organs. Autonomic functions include control of respiration, cardiac regulation (the cardiac control center), vasomotor activity (the vasomotor center), and certain reflex actions such as coughing, sneezing, swallowing and vomiting. Those are then subdivided into other areas and are also linked to autonomic subsystems and the peripheral nervous system. The hypothalamus, just above the brain stem, acts as an integrator for autonomic functions, receiving autonomic regulatory input from the limbic system. Although conflicting reports about its subdivisions exist in the literature, the autonomic nervous system has historically been considered a purely motor system, and has been divided into three branches: the sympathetic nervous system, the parasympathetic nervous system, and the enteric nervous system. Some textbooks do not include the enteric nervous system as part of this system. The sympathetic nervous system is responsible for setting off the fight-or-flight response. The parasympathetic nervous system is responsible for the body's rest and digestion response. In many cases, both of these systems have "opposite" actions where one system activates a physiological response and the other inhibits it. An older simplification of the sympathetic and parasympathetic nervous systems as "excitatory" and "inhibitory" was overturned due to the many exceptions found. A more modern characterization is that the sympathetic nervous system is a "quick response mobilizing system" and the parasympathetic is a "more slowly activated dampening system", but even this has exceptions, such as in sexual arousal and orgasm, wherein both play a role. There are inhibitory and excitatory synapses between neurons. A third subsystem of neurons, described as non-noradrenergic, non-cholinergic (because they use nitric oxide as a neurotransmitter), is integral to autonomic function, in particular in the gut and the lungs. Although the ANS is also known as the visceral nervous system and although most of its fibers carry non-somatic information to the CNS, many authors still consider it only connected with the motor side. Most autonomic functions are involuntary but they can often work in conjunction with the somatic nervous system which provides voluntary control. Structure The autonomic nervous system has been classically divided into the sympathetic nervous system and parasympathetic nervous system only (i.e., exclusively motor). The sympathetic division emerges from the spinal cord in the thoracic and lumbar areas, terminating around L2-3. The parasympathetic division has craniosacral "outflow", meaning that the neurons begin at the cranial nerves (specifically the oculomotor nerve, facial nerve, glossopharyngeal nerve and vagus nerve) and sacral (S2-S4) spinal cord. 
The autonomic nervous system is unique in that it requires a sequential two-neuron efferent pathway; the preganglionic neuron must first synapse onto a postganglionic neuron before innervating the target organ. The preganglionic, or first, neuron will begin at the "outflow" and will synapse at the postganglionic, or second, neuron's cell body. The postganglionic neuron will then synapse at the target organ. Sympathetic division The sympathetic nervous system consists of cells with bodies in the lateral grey column from T1 to L2/3. These cell bodies are "GVE" (general visceral efferent) neurons and are the preganglionic neurons. There are several locations upon which preganglionic neurons can synapse for their postganglionic neurons: paravertebral ganglia (3) of the sympathetic chain (these run on either side of the vertebral bodies) cervical ganglia (3) thoracic ganglia (12) and rostral lumbar ganglia (2 or 3) caudal lumbar ganglia and sacral ganglia prevertebral ganglia (celiac ganglion, aorticorenal ganglion, superior mesenteric ganglion, inferior mesenteric ganglion) chromaffin cells of the adrenal medulla (this is the one exception to the two-neuron pathway rule: the synapse is directly efferent onto the target cell bodies) These ganglia provide the postganglionic neurons from which innervation of target organs follows. Examples of splanchnic (visceral) nerves are: cervical cardiac nerves and thoracic visceral nerves, which synapse in the sympathetic chain thoracic splanchnic nerves (greater, lesser, least), which synapse in the prevertebral ganglia lumbar splanchnic nerves, which synapse in the prevertebral ganglia sacral splanchnic nerves, which synapse in the inferior hypogastric plexus These all contain afferent (sensory) nerves as well, known as GVA (general visceral afferent) neurons. Parasympathetic division The parasympathetic nervous system consists of cells with bodies in one of two locations: the brainstem (cranial nerves III, VII, IX, X) or the sacral spinal cord (S2, S3, S4). These are the preganglionic neurons, which synapse with postganglionic neurons in these locations: parasympathetic ganglia of the head: ciliary (cranial nerve III), geniculate (cranial nerve VII), pterygopalatine (cranial nerve VII and IX), and submandibular (cranial nerve VII and IX), otic in inner ear space (cranial nerve IX) tympanic nerve of VII with C9, C10, C5 (cranial nerves VII, XI, X, V) in promontory plexus in middle ear space trigeminal ganglion specially sensory (only mastication motor) is common with other ones in or near the wall of an organ innervated by the vagus (cranial nerve X) or sacral nerves plexus (S2, S3, S4) These ganglia provide the postganglionic neurons from which innervation of target organs follows. Examples are: the postganglionic parasympathetic splanchnic (visceral) nerves the vagus nerve, which passes through the thorax and abdominal regions innervating, among other organs, the heart, lungs, liver and stomach Enteric Nervous System Development of the Enteric Nervous System: The intricate process of enteric nervous system (ENS) development begins with the migration of cells from the vagal section of the neural crest. These cells embark on a journey from the cranial region to populate the entire gastrointestinal tract. Concurrently, the sacral section of the neural crest provides an additional layer of complexity by contributing input to the hindgut ganglia. 
Throughout this developmental journey, numerous receptors exhibiting tyrosine kinase activity, such as Ret and Kit, play indispensable roles. Ret, for instance, plays a critical role in the formation of enteric ganglia derived from cells known as vagal neural crest. In mice, targeted disruption of the RET gene results in renal agenesis and the absence of enteric ganglia, while in humans, mutations in the RET gene are associated with megacolon. Similarly, Kit, another receptor with tyrosine kinase activity, is implicated in Cajal interstitial cell formation, influencing the spontaneous, rhythmic, electrical excitatory activity known as slow waves in the gastrointestinal tract. Understanding the molecular intricacies of these receptors provides crucial insights into the delicate orchestration of ENS development. Structure of the Enteric Nervous System: The structural complexity of the enteric nervous system (ENS) is a fascinating aspect of its functional significance. Originally perceived as postganglionic parasympathetic neurons, the ENS earned recognition for its autonomy in the early 1900s. Boasting approximately 100 million neurons, a quantity comparable to the spinal cord, the ENS is often described as a "brain of its own." This description is rooted in the ENS's ability to communicate independently with the central nervous system through parasympathetic and sympathetic neurons. At the core of this intricate structure are the myenteric plexus (Auerbach's) and the submucous plexus (Meissner's), two main plexuses formed by the grouping of nerve-cell bodies into tiny ganglia connected by bundles of nerve processes. The myenteric plexus extends the full length of the gut, situated between the circular and longitudinal muscle layers. Beyond its primary motor and secretomotor functions, the myenteric plexus exhibits projections to submucosal ganglia and enteric ganglia in the pancreas and gallbladder, showcasing the interconnectivity within the ENS. Additionally, the myenteric plexus plays a unique role in innervating motor end plates with the inhibitory neurotransmitter nitric oxide in the striated-muscle segment of the esophagus, a feature exclusive to this organ. Meanwhile, the submucous plexus, most developed in the small intestine, occupies a crucial position in secretory regulation. Positioned in the submucosa between the circular muscle layer and the muscularis mucosa, the submucous plexus's neurons innervate intestinal endocrine cells, submucosal blood arteries, and the muscularis mucosa, emphasizing its multifaceted role in gastrointestinal function. Furthermore, ganglionated plexuses in the pancreatic, cystic duct, common bile duct, and gallbladder, resembling submucous plexuses, contribute to the overall complexity of the ENS structure. In this intricate landscape, glial cells emerge as key players, outnumbering enteric neurons and covering the majority of the surface of enteric neuronal-cell bodies with laminar extensions. Resembling the astrocytes of the central nervous system, enteric glial cells respond to cytokines by expressing MHC class II antigens and generating interleukins. This underlines their pivotal role in modulating inflammatory responses in the intestine, adding another layer of sophistication to the functional dynamics of the ENS. The varied morphological shapes of enteric neurons further contribute to the structural diversity of the ENS, with neurons capable of exhibiting up to eight different morphologies. 
These neurons are primarily categorized into type I and type II, where type II neurons are multipolar with numerous long, smooth processes, and type I neurons feature numerous club-shaped processes along with a single long, slender process. The rich structural diversity of enteric neurons highlights the complexity and adaptability of the ENS in orchestrating a wide array of gastrointestinal functions, reflecting its status as a dynamic and sophisticated component of the nervous system. Sensory neurons The visceral sensory system - technically not a part of the autonomic nervous system - is composed of primary neurons located in cranial sensory ganglia: the geniculate, petrosal and nodose ganglia, appended respectively to cranial nerves VII, IX and X. These sensory neurons monitor the levels of carbon dioxide, oxygen and sugar in the blood, arterial pressure and the chemical composition of the stomach and gut content. They also convey the sense of taste and smell, which, unlike most functions of the ANS, is a conscious perception. Blood oxygen and carbon dioxide are in fact directly sensed by the carotid body, a small collection of chemosensors at the bifurcation of the carotid artery, innervated by the petrosal (IXth) ganglion. Primary sensory neurons project (synapse) onto "second order" visceral sensory neurons located in the medulla oblongata, forming the nucleus of the solitary tract (nTS), that integrates all visceral information. The nTS also receives input from a nearby chemosensory center, the area postrema, that detects toxins in the blood and the cerebrospinal fluid and is essential for chemically induced vomiting or conditional taste aversion (the memory that ensures that an animal that has been poisoned by a food never touches it again). All this visceral sensory information constantly and unconsciously modulates the activity of the motor neurons of the ANS. Innervation Autonomic nerves travel to organs throughout the body. Most organs receive parasympathetic supply by the vagus nerve and sympathetic supply by splanchnic nerves. The sensory part of the latter reaches the spinal column at certain spinal segments. Pain in any internal organ is perceived as referred pain, more specifically as pain from the dermatome corresponding to the spinal segment. Motor neurons Motor neurons of the autonomic nervous system are found in "autonomic ganglia". Those of the parasympathetic branch are located close to the target organ whilst the ganglia of the sympathetic branch are located close to the spinal cord. The sympathetic ganglia here, are found in two chains: the pre-vertebral and pre-aortic chains. The activity of autonomic ganglionic neurons is modulated by "preganglionic neurons" located in the central nervous system. Preganglionic sympathetic neurons are located in the spinal cord, at the thorax and upper lumbar levels. Preganglionic parasympathetic neurons are found in the medulla oblongata where they form visceral motor nuclei; the dorsal motor nucleus of the vagus nerve; the nucleus ambiguus, the salivatory nuclei, and in the sacral region of the spinal cord. Function Sympathetic and parasympathetic divisions typically function in opposition to each other. But this opposition is better termed complementary in nature rather than antagonistic. For an analogy, one may think of the sympathetic division as the accelerator and the parasympathetic division as the brake. The sympathetic division typically functions in actions requiring quick responses. 
The parasympathetic division functions with actions that do not require immediate reaction. The sympathetic system is often considered the "fight or flight" system, while the parasympathetic system is often considered the "rest and digest" or "feed and breed" system. However, many instances of sympathetic and parasympathetic activity cannot be ascribed to "fight" or "rest" situations. For example, standing up from a reclining or sitting position would entail an unsustainable drop in blood pressure if not for a compensatory increase in the arterial sympathetic tonus. Another example is the constant, second-to-second, modulation of heart rate by sympathetic and parasympathetic influences, as a function of the respiratory cycles. In general, these two systems should be seen as permanently modulating vital functions, in a usually antagonistic fashion, to achieve homeostasis. Higher organisms maintain their integrity via homeostasis which relies on negative feedback regulation which, in turn, typically depends on the autonomic nervous system. Some typical actions of the sympathetic and parasympathetic nervous systems are listed below. Sympathetic nervous system Promotes a fight-or-flight response, corresponds with arousal and energy generation, and inhibits digestion Diverts blood flow away from the gastro-intestinal (GI) tract and skin via vasoconstriction Blood flow to skeletal muscles and the lungs is enhanced (by as much as 1200% in the case of skeletal muscles) Dilates bronchioles of the lung through circulating epinephrine, which allows for greater alveolar oxygen exchange Increases heart rate and the contractility of cardiac cells (myocytes), thereby providing a mechanism for enhanced blood flow to skeletal muscles Dilates pupils and relaxes the ciliary muscle to the lens, allowing more light to enter the eye and enhances far vision Provides vasodilation for the coronary vessels of the heart Constricts all the intestinal sphincters and the urinary sphincter Inhibits peristalsis Stimulates orgasm The pattern of innervation of the sweat gland—namely, the postganglionic sympathetic nerve fibers—allows clinicians and researchers to use sudomotor function testing to assess dysfunction of the autonomic nervous system, through electrochemical skin conductance. Parasympathetic nervous system The parasympathetic nervous system has been said to promote a "rest and digest" response: it calms the nerves, returns the body to regular function, and enhances digestion. Functions of nerves within the parasympathetic nervous system include: Dilating blood vessels leading to the GI tract, increasing the blood flow. Constricting the bronchiolar diameter when the need for oxygen has diminished Dedicated cardiac branches of the vagus and thoracic spinal accessory nerves impart parasympathetic control of the heart (myocardium) Constriction of the pupil and contraction of the ciliary muscles, facilitating accommodation and allowing for closer vision Stimulating salivary gland secretion, and accelerates peristalsis, mediating digestion of food and, indirectly, the absorption of nutrients Sexual. Nerves of the peripheral nervous system are involved in the erection of genital tissues via the pelvic splanchnic nerves 2–4. They are also responsible for stimulating sexual arousal. Enteric nervous system The enteric nervous system is the intrinsic nervous system of the gastrointestinal system. It has been described as "the Second Brain of the Human Body". 
Its functions include: Sensing chemical and mechanical changes in the gut Regulating secretions in the gut Controlling peristalsis and some other movements Neurotransmitters At the effector organs, sympathetic ganglionic neurons release noradrenaline (norepinephrine), along with other cotransmitters such as ATP, to act on adrenergic receptors, with the exception of the sweat glands and the adrenal medulla: Acetylcholine is the preganglionic neurotransmitter for both divisions of the ANS, as well as the postganglionic neurotransmitter of parasympathetic neurons. Nerves that release acetylcholine are said to be cholinergic. In the parasympathetic system, ganglionic neurons use acetylcholine as a neurotransmitter to stimulate muscarinic receptors. At the adrenal medulla, there is no postsynaptic neuron. Instead, the presynaptic neuron releases acetylcholine to act on nicotinic receptors. Stimulation of the adrenal medulla releases adrenaline (epinephrine) into the bloodstream, which acts on adrenoceptors, thereby indirectly mediating or mimicking sympathetic activity. A full table is found at Table of neurotransmitter actions in the ANS. Autonomic nervous system and the immune system Recent studies indicate that ANS activation is critical for regulating the local and systemic immune-inflammatory responses and may influence acute stroke outcomes. Therapeutic approaches modulating the activation of the ANS or the immune-inflammatory response could promote neurologic recovery after stroke. History The specialised system of the autonomic nervous system was recognised by Galen. In 1665, Thomas Willis used the terminology, and in 1900, John Newport Langley used the term, defining the two divisions as the sympathetic and parasympathetic nervous systems. Caffeine effects Caffeine is a bioactive ingredient found in commonly consumed beverages such as coffee, tea, and sodas. Short-term physiological effects of caffeine include increased blood pressure and sympathetic nerve outflow. Habitual consumption of caffeine may inhibit physiological short-term effects. Consumption of caffeinated espresso increases parasympathetic activity in habitual caffeine consumers; however, decaffeinated espresso inhibits parasympathetic activity in habitual caffeine consumers. It is possible that other bioactive ingredients in decaffeinated espresso may also contribute to the inhibition of parasympathetic activity in habitual caffeine consumers. Caffeine is capable of increasing work capacity while individuals perform strenuous tasks. In one study, caffeine provoked a greater maximum heart rate while a strenuous task was being performed compared to a placebo. This tendency is likely due to caffeine's ability to increase sympathetic nerve outflow. Furthermore, this study found that recovery after intense exercise was slower when caffeine was consumed prior to exercise. This finding is indicative of caffeine's tendency to inhibit parasympathetic activity in non-habitual consumers. The caffeine-stimulated increase in nerve activity is likely to evoke other physiological effects as the body attempts to maintain homeostasis. The effects of caffeine on parasympathetic activity may vary depending on the position of the individual when autonomic responses are measured. One study found that the seated position inhibited autonomic activity after caffeine consumption (75 mg); however, parasympathetic activity increased in the supine position. 
This finding may explain why some habitual caffeine consumers (75 mg or less) do not experience short-term effects of caffeine if their routine requires many hours in a seated position. It is important to note that the data supporting increased parasympathetic activity in the supine position was derived from an experiment involving participants between the ages of 25 and 30 who were considered healthy and sedentary. Caffeine may influence autonomic activity differently for individuals who are more active or elderly.
Biology and health sciences
Nervous system
Biology
166194
https://en.wikipedia.org/wiki/Petrol%20engine
Petrol engine
A petrol engine (gasoline engine in American and Canadian English) is an internal combustion engine designed to run on petrol (gasoline). Petrol engines can often be adapted to also run on fuels such as liquefied petroleum gas and ethanol blends (such as E10 and E85). They may be designed to run on petrol with a higher octane rating, as sold at petrol stations. Most petrol engines use spark ignition, unlike diesel engines which run on diesel fuel and typically use compression ignition. Another key difference from diesel engines is that petrol engines typically have a lower compression ratio. History The first practical petrol engine was built in 1876 in Germany by Nicolaus August Otto and Eugen Langen, although there had been earlier attempts by Étienne Lenoir in 1860, Siegfried Marcus in 1864 and George Brayton in 1873. Design Thermodynamic cycle Most petrol engines use either the four-stroke Otto cycle or the two-stroke cycle. Petrol engines have also been produced using the Miller cycle and Atkinson cycle. Layout Most petrol-powered piston engines are straight engines or V engines. However, flat engines, W engines and other layouts are sometimes used. Wankel engines are classified by the number of rotors used. Compression ratio Cooling Petrol engines are either air-cooled or water-cooled. Ignition Petrol engines use spark ignition. The high voltage for the spark may be provided by a magneto or an ignition coil. In modern car engines, the ignition timing is managed by an electronic Engine Control Unit. Ignition modules can also function as a rev limiter in some cases to prevent over-revving and its consequences, such as valve float and connecting rod failure. Primer Primers may be used to help start the engine. They can draw fuel from fuel tanks and vaporize fuel directly into piston cylinders. Engines are difficult to start during cold weather, and the fuel primer helps because otherwise there will not be enough heat available to vaporize the fuel in the carburetor. Power output and efficiency The power output of small- and medium-sized petrol engines (along with equivalent engines using other fuels) is usually measured in kilowatts or horsepower. Typically, petrol engines have a thermodynamic efficiency of about 20–30% (approximately half that of some diesel engines). Applications Applications of petrol engines include automobiles, motorcycles, aircraft, motorboats and small engines (such as lawn mowers, chainsaws and portable generators). Petrol engines have also been used as "pony engines", a type of engine used to start a larger, stationary diesel engine.
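The 20–30% efficiency range quoted above can be illustrated with a small sketch that estimates the useful (brake) power of an engine from its fuel consumption. The lower heating value and the fuel-flow figure used here are typical assumed values, not data from this article.

```python
# Illustrative sketch: brake power = fuel energy input rate x thermal efficiency,
# reported in both kilowatts and metric horsepower. Assumed values throughout.

LHV_PETROL_MJ_PER_KG = 44.0   # approximate lower heating value of petrol (assumed)
KW_PER_METRIC_HP = 0.7355     # 1 metric horsepower is about 0.7355 kW

def brake_power_kw(fuel_kg_per_hour: float, thermal_efficiency: float) -> float:
    """Useful shaft power for a given fuel flow and overall thermal efficiency."""
    fuel_power_kw = fuel_kg_per_hour * LHV_PETROL_MJ_PER_KG * 1000.0 / 3600.0
    return fuel_power_kw * thermal_efficiency

fuel_flow = 8.0  # kg of petrol per hour (assumed)
for eta in (0.20, 0.25, 0.30):  # the 20-30% range mentioned above
    p_kw = brake_power_kw(fuel_flow, eta)
    print(f"efficiency {eta:.0%}: ~{p_kw:.0f} kW (~{p_kw / KW_PER_METRIC_HP:.0f} hp)")
```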
Technology
Engines
null
166394
https://en.wikipedia.org/wiki/Aerobics
Aerobics
Aerobics is a form of physical exercise that combines rhythmic aerobic exercise with stretching and strength training routines with the goal of improving all elements of fitness (flexibility, muscular strength, and cardio-vascular fitness). It is usually performed to music and may be practiced in a group setting led by an instructor (fitness professional). With the goal of preventing illness and promoting physical fitness, practitioners perform various routines. Formal aerobics classes are divided into different levels of intensity and complexity and will have five components: warm-up (5–10 minutes), cardiovascular conditioning (25–30 minutes), muscular strength and conditioning (10–15 minutes), cool-down (5–8 minutes) and stretching and flexibility (5–8 minutes). Aerobics classes may allow participants to select their level of participation according to their fitness level. Many gyms offer different types of aerobic classes. Each class is designed for a certain level of experience and taught by a certified instructor with a specialty area related to their particular class. History Both the term and the specific exercise method were developed by Dr Kenneth H. Cooper, an exercise physiologist, and Col. Pauline Potts, a physical therapist, both of the United States Air Force. Cooper, an exercise enthusiast, was puzzled about why some people with good muscular strength were prone to perform poorly at activities such as long-distance running, swimming, and bicycling. He began using a bicycle ergometer to measure sustained performance in terms of a person's ability to use oxygen. In 1968, he published Aerobics, which included exercise programs using running, walking, swimming and bicycling. At the time the book was published there was increasing awareness of the need for increased exercise due to widespread weakness and inactivity. Cooper published a mass-market version The New Aerobics in 1979. Aerobic dancing was invented by Jacki Sorensen in 1969, inspired by Cooper's book. Sorensen began teaching her method and spreading it throughout the U.S. in the hands of hundreds of instructors in the 1970s. At the same time, Judi Missett's Jazzercise was taking off in the form of dance studio franchises in the U.S. Aerobics gained greater popularity, spreading worldwide after the release of Jane Fonda's Workout video in 1982, sparking an industry boom. Step aerobics Step aerobics is a form of aerobic exercise that uses a low elevated platform, the step, of height tailored to individual needs by inserting risers. Step aerobics classes are offered at many gyms. Step aerobics was developed independently by a few American exercise instructors working separately in the mid-1980s, especially Gin Miller and Connie Collins Williams in Atlanta, and Cathe Friedrich in New Jersey. Shoe manufacturer Reebok popularized the exercise method, selling a plastic step unit starting in 1990. Step aerobics can also be involved in dancing games, such as Dance Dance Revolution, In the Groove and Wii Fit. Moves and techniques Often moves are referred to as Reebok step moves. The "basic" step involves raising one foot onto the step, then the other so that they are both on the step, then stepping the first foot back, followed by the second. A "right basic" would involve stepping right foot up, then the left, then returning to the floor alternating right then left. 
Some instructors switch immediately between different moves, for example between a right basic and a left basic without any intervening moves, effectively "tapping" the foot without shifting weight; tap-free or smooth stepping alternates the feet without "taps" Common moves include: Basic Step Corner knee (or corner kick) Repeater knee (aka Triple knee) T-Step Over-the-Top Lunges V-Step Straddle Down L-Step Split Step I-Step Choreography Many instructors will prepare a set of moves that will be executed together to form the choreography of the class. Usually, the choreography will be timed to 32 beats in a set, ideally switching legs so that the set can be repeated in a mirrored fashion. A set may consist of many different moves and the different moves may have different durations. For example, a basic step as described above takes 4 beats (for the 4 steps the person takes). Similarly, the "knee up" move also takes 4 beats. Another common move, the repeater knee, is an 8-beat move. Classes vary in the level of choreography. Basic level classes will tend to have a series of relatively basic moves strung together into a sequence. More advanced classes incorporate dance elements such as turns, mambos, and stomps. These elements are put together into 2–3 routines in each class. One learns the routines during the class and then all are performed at the end of the class. Regardless of the complexity of the choreography, most instructors offer various options for different levels of intensity/dance ability while teaching the routines. Aerobic dances Aerobic dances are musical fitness routines in which an instructor choreographs several short dance combinations and teaches them to a class. This is usually achieved by teaching the class one to two movements at a time and repeating the movements until the class is able to join the whole choreography together. Popular music is used throughout the class. This is sometimes followed by a strength section which uses body weight exercises to strengthen muscles and a stretch routine to cool down and improve flexibility. Classes are usually 30–60 minutes in length and may include the use of equipment such as a barbell, aerobic step, or small weights. In freestyle aerobics, the instructor choreographs the routine and adjusts it to the needs and wants of their class. There is often no difference between base movements in freestyle and pre-choreographed programs. It is practiced to improve cardio and strength. Aerobic gymnastics Aerobic gymnastics, also known as sport aerobics and competitive aerobics, may combine complicated choreography, rhythmic and acrobatic gymnastics with elements of aerobics. Performance is divided into categories by age, sex and groups (individual, mixed pairs and trios) and are judged on the following elements: dynamic and static strength, jumps and leaps, kicks, balance and flexibility. Ten exercises are mandatory: four consecutive high leg kicks, patterns. A maximum of ten elements from following families are allowed: push-ups, supports and balances, kicks and splits, jumps and leaps. Elements of tumbling such as handsprings, handstands, back flips, and aerial somersaults are prohibited. Scoring is by judging of artistic quality, creativity, execution, and difficulty of routines. Sport aerobics has state, national, and international competitions, but is not an Olympic sport.
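The 32-beat choreography bookkeeping described above can be sketched in a few lines. The durations for the basic step, knee up, and repeater knee follow the text (4, 4, and 8 beats); the other move durations are assumptions for illustration only.

```python
# Small sketch: check whether a sequence of step-aerobics moves fills a
# standard 32-beat set. Durations marked "assumed" are not from the text.

MOVE_BEATS = {
    "basic step": 4,
    "knee up": 4,
    "repeater knee": 8,
    "v-step": 4,        # assumed duration
    "over-the-top": 4,  # assumed duration
}

def check_set(moves: list[str]) -> None:
    total = sum(MOVE_BEATS[m] for m in moves)
    verdict = "OK" if total == 32 else "does not fill a 32-beat set"
    print(" -> ".join(moves))
    print(f"  total beats: {total} ({verdict})")

# Example combinations an instructor might string together.
check_set(["basic step", "knee up", "repeater knee", "v-step",
           "over-the-top", "basic step", "knee up"])   # 32 beats
check_set(["basic step", "repeater knee", "v-step"])   # only 16 beats
```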
Biology and health sciences
Physical fitness
Health
166404
https://en.wikipedia.org/wiki/First%20law%20of%20thermodynamics
First law of thermodynamics
The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. The law distinguishes two principal forms of energy transfer, heat and thermodynamic work, that modify a thermodynamic system containing a constant amount of matter. The law also defines the internal energy of a system, an extensive property for taking account of the balance of heat and work in the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an isolated system the sum of all forms of energy is constant. An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system to sustain the work of the system continuously. The ideal isolated system, of which the entire universe is an example, is often only used as a model. Many systems in practical applications require the consideration of internal chemical or nuclear reactions, as well as transfers of matter into or out of the system. For such considerations, thermodynamics also defines the concept of open systems, closed systems, and other types. Definition For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed by the algebraic sum of contributions to the internal energy, ΔU, from all work, W, done on or by the system, and the quantity of heat, Q, supplied or withdrawn from the system. The historical sign convention for the terms has been that heat supplied to the system is positive, but work done by the system is subtracted. This was the convention of Rudolf Clausius, so that a change in the internal energy, ΔU, is written ΔU = Q − W. Modern formulations, such as by Max Planck, and by IUPAC, often replace the subtraction with addition, and consider all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of the use of the system, for example as an engine. When a system expands in an isobaric process, the thermodynamic work, W, done by the system on the surroundings is the product, P ΔV, of system pressure, P, and system volume change, ΔV, whereas −W is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is: ΔU = Q − P ΔV, where Q denotes the quantity of heat supplied to the system from its surroundings. Work and heat express physical processes of supply or removal of energy, while the internal energy is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term Q is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, W denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, ΔU, can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while change in internal energy depends only on the initial and final states of the system, not on the path between. 
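The sign conventions and the isobaric-work relation above can be checked with a minimal numeric sketch; the pressure, volume change, and heat values used here are arbitrary illustrative numbers.

```python
# Minimal sketch of the two sign conventions for the first law,
# applied to an isobaric expansion. All numbers are arbitrary.

P = 100_000.0   # constant system pressure, Pa (isobaric process)
dV = 0.002      # volume change, m^3 (positive: the system expands)
Q = 500.0       # heat supplied to the system, J

W_by_system = P * dV           # thermodynamic work done BY the system on the surroundings
W_on_system = -W_by_system     # the same transfer counted as work done ON the system

dU_clausius = Q - W_by_system  # Clausius convention: dU = Q - W
dU_iupac = Q + W_on_system     # IUPAC/Planck convention: dU = Q + W, with W = work on the system

print(f"W done by system = {W_by_system:.0f} J")
print(f"dU (Clausius)    = {dU_clausius:.0f} J")
print(f"dU (IUPAC)       = {dU_iupac:.0f} J")
assert dU_clausius == dU_iupac  # same physics, different bookkeeping
```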
Thermodynamic work is measured by change in the system, and is not necessarily the same as work measured by forces and distances in the surroundings, though, ideally, such can sometimes be arranged; this distinction is noted in the term 'isochoric work', at constant system volume, with , which is not a form of thermodynamic work. History In the first half of the eighteenth century, French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Leibniz's concept of ' vis viva ', mv2, as distinct from Newton's momentum, mv. Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat. In the few years of his life (1796–1832) after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes. He wrote: At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes. In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work. In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, measuring the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water. The first full statements of the law came in 1850 from Rudolf Clausius, and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius. Original statements: the "thermodynamic approach" The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach. 
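Joule's paddle-wheel determination of the mechanical equivalent of heat, mentioned above, can be illustrated with assumed numbers: the work done by a falling weight is dissipated in a vat of water, and the measured temperature rise links mechanical units to calorimetric units.

```python
# Illustration (with assumed numbers) of the reasoning behind the
# mechanical equivalent of heat: ratio of mechanical work input to
# calorimetrically measured heat.

g = 9.81            # m/s^2
falling_mass = 20.0 # kg (assumed)
drop_height = 10.0  # m (assumed)
water_mass = 1.0    # kg of water in the vat (assumed)
temp_rise_c = 0.469 # measured temperature rise in degrees C (assumed)

work_joules = falling_mass * g * drop_height        # mechanical work input
heat_calories = water_mass * 1000.0 * temp_rise_c   # 1 cal warms 1 g of water by 1 C

print(f"work input : {work_joules:.0f} J")
print(f"heat gained: {heat_calories:.0f} cal")
print(f"mechanical equivalent ~ {work_joules / heat_calories:.2f} J/cal")
# With these assumed numbers the ratio comes out near the modern ~4.19 J/cal.
```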
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows: Reflecting the experimental work of Mayer and of Joule, Clausius wrote: Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system. The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation . In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy). Conceptual revision: the "mechanical approach" In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. This reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach". Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer. The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. 
Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic. This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others. Conceptually revised statement, according to the mechanical approach The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work. The revised statement is then For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes. This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines. Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers. Description Cyclic processes The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic. 
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system. In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units. The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat. Various statements of the law for closed systems The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author. For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'. There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another. An example of a physical statement is that of Planck (1897/1903): It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing. This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium. An example of a mathematical statement is that of Crawford (1963): For a given system we let large-scale mechanical energy, large-scale potential energy, and total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition For any finite process, whether reversible or irreversible, The first law in a form that involves the principle of conservation of energy more generally is Here and are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.[Warner, Am. J. Phys., 29, 124 (1961)] This statement by Crawford, for , uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state. The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921). 
The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date. Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors. Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated. The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat. According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude. 
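The Carathéodory-style statement just described can be put compactly in symbols. The notation below is assumed here for illustration only; it is not Carathéodory's own. For a closed system whose phases have internal energies \(U_1, U_2, \ldots\), and for an adiabatic process taking the system from state \(A\) to state \(B\) with work \(W^{\mathrm{ad}}_{A\to B}\) done adiabatically on it,
\[
U = \sum_k U_k, \qquad U(B) - U(A) = W^{\mathrm{ad}}_{A\to B}.
\]
The second relation says that the change of the total internal energy equals the adiabatic work done on the system, which is how the axiom expresses conservation of energy for such processes.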
Sometimes the concept of internal energy is not made explicit in the statement. Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process. A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature. A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903). Evidence for the first law of thermodynamics for closed systems The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes. The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work). Adiabatic processes In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures. For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature to the distance descended by the mass.
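As a rough numerical sketch of that relation (with illustrative values that are assumed here, not taken from Joule's actual apparatus), the potential energy lost by the descending weight is the adiabatic work done on the thermally isolated water, and it reappears entirely as a rise of internal energy, hence of temperature:

```python
# Illustrative sketch (not from the article): relating the descent of the
# driving weight in a Joule-style paddle-wheel experiment to the temperature
# rise of thermally isolated water. All values are assumed for illustration.

g = 9.81            # gravitational acceleration, m/s^2
c_water = 4186.0    # specific heat capacity of water, J/(kg*K), approximate

def paddle_wheel_temperature_rise(weight_mass_kg, descent_m, water_mass_kg):
    """Adiabatic work done on the water equals the potential energy lost by
    the descending weight; with no heat transfer, all of it appears as a rise
    of internal energy, and hence of temperature."""
    work_on_system = weight_mass_kg * g * descent_m        # W = m g h, joules
    delta_T = work_on_system / (water_mass_kg * c_water)   # dU = W = m_w c dT
    return delta_T

# Example: a 10 kg weight descending 20 m stirs 1 kg of water.
print(paddle_wheel_temperature_rise(10.0, 20.0, 1.0))      # about 0.47 K
```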
Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system. Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank. A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted". This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below. That important state variable was first recognized and denoted by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it ; and in 1851 by Kelvin who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states. 
In an adiabatic process, adiabatic work takes the system either from a reference state with internal energy to an arbitrary one with internal energy , or from the state to the state : Except under the special and, strictly speaking, fictional condition of reversibility, only one of the processes or is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article. The fact of such irreversibility may be dealt with in two main ways, according to different points of view: Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes (Planck, M. (1897/1903), Section 71, p. 52), as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, which transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula. Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula () above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions. The formula () above allows that to go by processes of quasi-static adiabatic work from the state to the state we can take a path that goes through the reference state , since the quasi-static adiabatic work is independent of the path. This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement: For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, . Adynamic processes A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic.
A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry...". When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy: General case for reversible processes Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system, , and the heat transferred reversibly to the system, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path, , through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously. Putting the two complementary aspects together, the first law for a particular reversible process can be written. This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems. In particular, if no work is done on a thermally isolated closed system we have . This is one aspect of the law of conservation of energy and can be stated: The internal energy of an isolated system remains constant. General case for irreversible processes If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, , and the heat transferred irreversibly to the system, , which belong to the same particular process defined by its particular irreversible path, , through the space of thermodynamic states.
This means that the internal energy is a function of state and that the internal energy change between two states is a function only of the two states. Overview of the weight of evidence for the law The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law. State functional formulation for infinitesimal processes When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by , rather than exact differentials denoted by , as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process. The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy may then be expressed as a function of the system's defining state variables , entropy, and , volume: . In these terms, , the system's temperature, and , its pressure, are partial derivatives of with respect to and . These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium. The first law requires that: Then, for the fictive case of a reversible process, can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by and the quantity of heat added can be expressed as . For these conditions While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as can be considered as a thermodynamic state function of the defining state variables and : Equation () is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are and , with respect to which and are partial derivatives of . It is only in the reversible case or for a quasistatic process without composition change that the work done and heat transferred are given by and . 
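The path dependence of heat and work, together with the path independence of the internal energy, can be made concrete with a small numerical sketch. The gas model, the two equilibrium states and the two quasi-static paths below are assumptions chosen purely for illustration; the sign convention is the IUPAC one mentioned earlier, with the work counted as done on the gas.

```python
# Numerical sketch (assumptions: monatomic ideal gas, quasi-static steps,
# IUPAC sign convention dU = Q + W with W the work done ON the gas).
# It illustrates that Q and W depend on the path while the change in
# internal energy does not.

R = 8.314          # gas constant, J/(mol*K)
n = 1.0            # moles
Cv = 1.5 * R       # molar heat capacity at constant volume (monatomic gas)

# Two equilibrium states A and B, defined by (P, V); T follows from PV = nRT.
P1, V1 = 1.0e5, 0.010   # state A: 100 kPa, 10 L
P2, V2 = 2.0e5, 0.020   # state B: 200 kPa, 20 L
T = lambda P, V: P * V / (n * R)

dU = n * Cv * (T(P2, V2) - T(P1, V1))   # path-independent

# Path I: isochoric pressure rise at V1, then isobaric expansion at P2.
W_I = -P2 * (V2 - V1)                   # work on the gas (isobaric leg only)
Q_I = dU - W_I

# Path II: isobaric expansion at P1, then isochoric pressure rise at V2.
W_II = -P1 * (V2 - V1)
Q_II = dU - W_II

print(dU, W_I, Q_I)    # same change of internal energy ...
print(dU, W_II, Q_II)  # ... but different W and Q for the two paths
```

With these assumed numbers, both paths give the same internal energy change of 4500 J, but path I takes in 6500 J as heat while doing 2000 J of work on the surroundings, whereas path II takes in 5500 J and does only 1000 J of work.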
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes: where dNi is the (small) increase in number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to: Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters and the xi are proportional to the size and called extensive parameters. For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems. A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system. It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement. Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero. The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy. Fluid dynamics In fluid dynamics, the first law of thermodynamics reads . Spatially inhomogeneous systems Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. 
For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if denotes the total energy of that component system, one may write where and denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and denotes its internal energy. Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system. A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction between the subsystems. Thus, in an obvious notation, one may write The quantity in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments. The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems. First law of thermodynamics for open systems For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed. There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system. 
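The decomposition described above for inhomogeneous systems can be written out explicitly; the symbols below are assumed here, since the formulas themselves are not reproduced in the text. For a single closed homogeneous component, and for a compound of two interacting closed homogeneous subsystems,
\[
E^{\mathrm{tot}} = E^{\mathrm{kin}} + E^{\mathrm{pot}} + U,
\qquad
E^{\mathrm{tot}}_{1+2} = E^{\mathrm{tot}}_{1} + E^{\mathrm{tot}}_{2} + E^{\mathrm{pot}}_{12},
\]
where \(E^{\mathrm{pot}}_{12}\) is the potential energy of interaction between the two subsystems; as noted above, it is this interaction term that in general cannot be assigned to either subsystem in a non-arbitrary way.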
Internal energy for an open system Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems. In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that where and denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above. For the thermodynamic operation of adding two systems with internal energies and , to produce a new system with internal energy , one may write ; the reference states for , and should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables. There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors. Also of course where and denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass. Process of transfer of matter between an open system and its surroundings A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. 
If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem. An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature. A thermodynamic process might be initiated by a thermodynamic operation in the surroundings, that mechanically increases in the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium. Open system with multiple contacts An open system can be in contact equilibrium with several other systems at once. This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work. 
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics: where ΔU0 denotes the change of internal energy of the system, and denotes the change of internal energy of each of the surrounding subsystems that are in open contact with the system, due to transfer between the system and that surrounding subsystem, and denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here. Combination of first and second laws If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj are defined as above.
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield where is the molar enthalpy of species . Non-equilibrium transfers The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined. Following the form of equation (3), the first law of thermodynamics for any such process can be written as where ΔU denotes the change of internal energy of the system, denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, denotes the work done by the system, and is the molar enthalpy of species , coming into the system from the surrounding subsystem that is in contact with the system. Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process is considered in the previous section, which in our terms defines. To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as was described above, a set of variables called internal variables has been introduced, which allows one to formulate it for the general case. Methods for the study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above, when there is no actual transfer of matter and the process can be treated as if for a closed system, it follows, in strictly defined thermodynamic terms, that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers; for example, transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient. Usually, transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical. The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous-flow as a system defined in the so-called Lagrangian way, moving with the local center of mass.
The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity. Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow and a conduction flow. This conduction flow is by definition the heat flow . Therefore: where denotes the [internal] energy per unit mass. [These authors actually use the symbols and to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. This is not the ad hoc definition of "reduced heat flux" of Rolf Haase. In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. 
In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
Physical sciences
Thermodynamics
Physics
166415
https://en.wikipedia.org/wiki/Heat%20equation
Heat equation
In mathematics and physics, the heat equation is a parabolic partial differential equation. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. Since then, the heat equation and its variants have been found to be fundamental in many parts of both pure and applied mathematics. Definition Given an open subset of and a subinterval of , one says that a function is a solution of the heat equation if where denotes a general point of the domain. It is typical to refer to as time and as spatial variables, even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as . For any given value of , the right-hand side of the equation is the Laplacian of the function . As such, the heat equation is often written more compactly as In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider the specific case of a function of three spatial variables and time variable . One then says that is a solution of the heat equation if in which is a positive coefficient called the thermal diffusivity of the medium. In addition to other physical phenomena, this equation describes the flow of heat in a homogeneous and isotropic medium, with being the temperature at the point and time . If the medium is not homogeneous and isotropic, then would not be a fixed coefficient, and would instead depend on ; the equation would also have a slightly different form. In the physics and engineering literature, it is common to use to denote the Laplacian, rather than . In mathematics as well as in physics and engineering, it is common to use Newton's notation for time derivatives, so that is used to denote , so the equation can be written Note also that the ability to use either or to denote the Laplacian, without explicit reference to the spatial variables, is a reflection of the fact that the Laplacian is independent of the choice of coordinate system. In mathematical terms, one would say that the Laplacian is translationally and rotationally invariant. In fact, it is (loosely speaking) the simplest differential operator which has these symmetries. This can be taken as a significant (and purely mathematical) justification of the use of the Laplacian and of the heat equation in modeling any physical phenomena which are homogeneous and isotropic, of which heat diffusion is a principal example. The diffusivity constant, , is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let be a function with Define a new function . Then, according to the chain rule, one has Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of and solutions of the heat equation with . As such, for the sake of mathematical analysis, it is often sufficient to only consider the case . Since there is another option to define a satisfying as in () above by setting . Note that the two possible means of defining the new function discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length. Steady-state equation The steady-state heat equation is by definition not dependent on time. 
In other words, it is assumed conditions exist such that: This condition depends on the time constant and the amount of time passed since boundary conditions have been imposed. Thus, the condition is fulfilled in situations in which thermal equilibration is fast enough that the more complex time-dependent heat equation can be approximated by the steady-state case. Equivalently, the steady-state condition exists for all cases in which enough time has passed that the thermal field u no longer evolves in time. In the steady-state case, a spatial thermal gradient may (or may not) exist, but if it does, it does not change in time. This equation therefore describes the end result in all thermal problems in which a source is switched on (for example, an engine started in an automobile), and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as, again, with an automobile in which the engine has been running for long enough). The other (trivial) solution is for all spatial temperature gradients to disappear as well, in which case the temperature becomes uniform in space. The steady-state equation is much simpler, and can help in understanding the physics of a material without focusing on the dynamics of the heat transport process. It is widely used for simple engineering problems in which the temperature field and the heat transport can be assumed to have reached equilibrium over time. Steady-state condition: The steady-state heat equation for a volume that contains a heat source (the inhomogeneous case) is Poisson's equation: where u is the temperature, k is the thermal conductivity and q is the rate of heat generation per unit volume. In electrostatics, this is equivalent to the case where the space under consideration contains an electrical charge. The steady-state heat equation without a heat source within the volume (the homogeneous case) is the equation in electrostatics for a volume of free space that does not contain a charge. It is described by Laplace's equation: Interpretation Informally, the Laplacian operator gives the difference between the average value of a function in the neighborhood of a point, and its value at that point. Thus, if is the temperature, conveys whether (and by how much) the material surrounding each point is hotter or colder, on the average, than the material at that point. By the second law of thermodynamics, heat will flow from hotter bodies to adjacent colder bodies, in proportion to the temperature difference and to the thermal conductivity of the material between them. When heat flows into (respectively, out of) a material, its temperature increases (respectively, decreases), in proportion to the amount of heat divided by the amount (mass) of material, with a proportionality factor called the specific heat capacity of the material. By the combination of these observations, the heat equation says the rate at which the material at a point will heat up (or cool down) is proportional to how much hotter (or cooler) the surrounding material is. The coefficient in the equation takes into account the thermal conductivity, specific heat, and density of the material. Interpretation of the equation The first half of the above physical thinking can be put into a mathematical form.
The key is that, for any fixed , one has where is the single-variable function denoting the average value of over the surface of the sphere of radius centered at ; it can be defined by in which denotes the surface area of the unit ball in -dimensional Euclidean space. This formalizes the above statement that the value of at a point measures the difference between the value of and the value of at points nearby to , in the sense that the latter is encoded by the values of for small positive values of . Following this observation, one may interpret the heat equation as imposing an infinitesimal averaging of a function. Given a solution of the heat equation, the value of for a small positive value of may be approximated as times the average value of the function over a sphere of very small radius centered at . Character of the solutions The heat equation implies that peaks (local maxima) of will be gradually eroded down, while depressions (local minima) will be filled in. The value at some point will remain stable only as long as it is equal to the average value in its immediate surroundings. In particular, if the values in a neighborhood are very close to a linear function , then the value at the center of that neighborhood will not be changing at that time (that is, the derivative will be zero). A more subtle consequence is the maximum principle, that says that the maximum value of in any region of the medium will not exceed the maximum value that previously occurred in , unless it is on the boundary of . That is, the maximum temperature in a region can increase only if heat comes in from outside . This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below). Another interesting property is that even if initially has a sharp jump (discontinuity) of value across some surface inside the medium, the jump is immediately smoothed out by a momentary, infinitesimally short but infinitely large rate of flow of heat through that surface. For example, if two isolated bodies, initially at uniform but different temperatures and , are made to touch each other, the temperature at the point of contact will immediately assume some intermediate value, and a zone will develop around that point where will gradually vary between and . If a certain amount of heat is suddenly applied to a point in the medium, it will spread out in all directions in the form of a diffusion wave. Unlike the elastic and electromagnetic waves, the speed of a diffusion wave drops with time: as it spreads over a larger region, the temperature gradient decreases, and therefore the heat flow decreases too. Specific examples Heat flow in a uniform rod For heat flow, the heat equation follows from the physical laws of conduction of heat and conservation of energy . By Fourier's law for an isotropic medium, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across it: where is the thermal conductivity of the material, is the temperature, and is a vector field that represents the magnitude and direction of the heat flow at the point of space and time . If the medium is a thin rod of uniform section and material, the position x is a single coordinate and the heat flow towards is a scalar field. The equation becomes Let be the internal energy (heat) per unit volume of the bar at each point and time. The rate of change in heat per unit volume in the material, , is proportional to the rate of change of its temperature, . 
That is, where is the specific heat capacity (at constant pressure, in case of a gas) and is the density (mass per unit volume) of the material. This derivation assumes that the material has constant mass density and heat capacity through space as well as time. Applying the law of conservation of energy to a small element of the medium centred at , one concludes that the rate at which heat changes at a given point is equal to the derivative of the heat flow at that point (the difference between the heat flows either side of the particle). That is, From the above equations it follows that which is the heat equation in one dimension, with diffusivity coefficient This quantity is called the thermal diffusivity of the medium. Accounting for radiative loss An additional term may be introduced into the equation to account for radiative loss of heat. According to the Stefan–Boltzmann law, this term is , where is the temperature of the surroundings, and is a coefficient that depends on the Stefan-Boltzmann constant and the emissivity of the material. The rate of change in internal energy becomes and the equation for the evolution of becomes Non-uniform isotropic medium Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or radiation). This form is more general and particularly useful to recognize which property (e.g. cp or ) influences which term. where is the volumetric heat source. Heat flow in non-homogeneous anisotropic media In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space. The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q, so that Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is Thus the rate of heat flow into V is also given by the surface integral where n(x) is the outward pointing normal vector at x. The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient where A(x) is a 3 × 3 real matrix that is symmetric and positive definite. By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ Putting these equations together gives the general equation of heat flow: Remarks The coefficient κ(x) is the inverse of specific heat of the substance at x × density of the substance at x: . In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity k. In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, then an explicit formula for the solution of the heat equation can seldom be written down, though it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). 
This is usually done by one-parameter semigroups theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup. Three-dimensional problem In the special cases of propagation of heat in an isotropic and homogeneous medium in a 3-dimensional space, this equation is where: is temperature as a function of space and time; is the rate of change of temperature at a point over time; , , and are the second spatial derivatives (thermal conductions) of temperature in the , , and directions, respectively; is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity , the specific heat capacity , and the mass density . The heat equation is a consequence of Fourier's law of conduction (see heat conduction). If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume additional conditions, for example an exponential bound on the growth of solutions or a sign condition (nonnegative solutions are unique by a result of David Widder). Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods. The heat equation is the prototypical example of a parabolic partial differential equation. Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of arbitrary number of dimensions, as where the Laplace operator, Δ or ∇2, the divergence of the gradient, is taken in the spatial variables. The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein-Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis. The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead – like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed. Internal heat generation The function u above represents temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units. 
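The qualitative behaviour described above, the gradual smoothing of the initial temperature distribution, can be checked numerically. The following is a minimal sketch, assuming a one-dimensional rod of unit length with its endpoints held at temperature zero and an arbitrary illustrative diffusivity; it uses the explicit (forward-time, centred-space) finite-difference scheme, which is stable only when α Δt/Δx² ≤ 1/2.

```python
import numpy as np

# Illustrative parameters (hypothetical values chosen only for the demonstration)
alpha = 1.0               # thermal diffusivity
L, n = 1.0, 101           # rod length and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # respects the stability condition alpha*dt/dx**2 <= 1/2

x = np.linspace(0.0, L, n)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)  # initial "hot spot" with sharp jumps

def step(u):
    """One explicit (forward-time, centred-space) update of u_t = alpha * u_xx."""
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # Homogeneous Dirichlet boundary conditions: endpoints held at temperature 0
    new[0] = new[-1] = 0.0
    return new

for _ in range(2000):
    u = step(u)

# The peak has been eroded and the sharp jumps smoothed out, as described above
print(round(u.max(), 4))
```

With an additional source term added to the right-hand side of the update, the same scheme accommodates the internal heat generation discussed below.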
Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts/litre - W/L) at a rate given by a known function q varying in space and time. Then the heat per unit volume u satisfies an equation For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero. Solving the heat equation using Fourier series The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is where u = u(x, t) is a function of two variables x and t. Here x is the space variable, so x ∈ [0, L], where L is the length of the rod. t is the time variable, so t ≥ 0. We assume the initial condition where the function f is given, and the boundary conditions Let us attempt to find a solution of that is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is: This solution technique is called separation of variables. Substituting u back into equation , Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus: and We will now show that nontrivial solutions for for values of λ ≤ 0 cannot occur: Suppose that λ < 0. Then there exist real numbers B, C such that From we get X(0) = 0 = X(L) and therefore B = 0 = C which implies u is identically 0. Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From equation we conclude in the same manner as in 1 that u is identically 0. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that and From we get C = 0 and that for some positive integer n, This solves the heat equation in the special case that the dependence of u has the special form . In general, the sum of solutions to that satisfy the boundary conditions also satisfies and . We can show that the solution to , and is given by where Generalizing the solution technique The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenfunctions. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators. Consider the linear operator Δu = uxx. The infinite sequence of functions for n ≥ 1 are eigenfunctions of Δ. Indeed, Moreover, any eigenfunction f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means Finally, the sequence {en}n ∈ N spans a dense linear subspace of L2((0, L)). This shows that in effect we have diagonalized the operator Δ. Mean-value property Solutions of the heat equations satisfy a mean-value property analogous to the mean-value properties of harmonic functions, solutions of though a bit more complicated. 
Precisely, if u solves and then where Eλ is a heat-ball, that is a super-level set of the fundamental solution of the heat equation: Notice that as λ → ∞ so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough. Fundamental solutions A fundamental solution of the heat equation is a solution that corresponds to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains (see, for instance, ). In one variable, the Green's function is a solution of the initial value problem (by Duhamel's principle, equivalent to the definition of Green's function as one with a delta function as solution to the first equation) where is the Dirac delta function. The fundamental solution to this problem is given by the heat kernel One can obtain the general solution of the one variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution: In several spatial variables, the fundamental solution solves the analogous problem The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e., The general solution of the heat equation on Rn is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has The general problem on a domain Ω in Rn is with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011). Some Green's function solutions in 1D A variety of elementary Green's function solutions in one-dimension are recorded here; many others are available elsewhere. In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation where f is some given function of x and t. Homogeneous heat equation Initial value problem on (−∞,∞) Comment. This solution is the convolution with respect to the variable x of the fundamental solution and the function g(x). (The Green's function number of the fundamental solution is X00.) Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for Moreover, so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. For instance, if g is assumed bounded and continuous on R then converges uniformly to g as t → 0, meaning that u(x, t) is continuous on with Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R, so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. The Green's function number of this solution is X10. Initial value problem on (0,∞) with homogeneous Neumann boundary conditions Comment. 
This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20. Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions Comment. This solution is the convolution with respect to the variable t of and the function h(t). Since Φ(x, t) is the fundamental solution of the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover, so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞) then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on with Inhomogeneous heat equation Problem on (-∞,∞) homogeneous initial conditions Comment. This solution is the convolution in R2, that is with respect to both the variables x and t, of the fundamental solution and the function f(x, t), both meant as defined on the whole R2 and identically 0 for all t → 0. One verifies that which expressed in the language of distributions becomes where the distribution δ is the Dirac's delta function, that is the evaluation at 0. Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions Comment. This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions Comment. This solution is obtained from the first formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. Examples Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions. For example, to solve let u = w + v where w and v solve the problems Similarly, to solve let u = w + v + r where w, v, and r solve the problems Applications As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. 
Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem. The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of artificial viscosity methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr. Particle diffusion One can model particle diffusion by an equation involving either: the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or the probability density function associated with the position of a single particle, denoted P. In either case, one uses the heat equation or Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in meters squared over second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation. Brownian motion Let the stochastic process be the solution to the stochastic differential equation where is the Wiener process (standard Brownian motion). The probability density function of is given at any time by which is the solution to the initial value problem where is the Dirac delta function. Schrödinger equation for a free particle With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way: , where i is the imaginary unit, ħ is the reduced Planck constant, and ψ is the wave function of the particle. This equation is formally similar to the particle diffusion equation, which one obtains through the following transformation: Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0: with Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying Schrödinger's equation might have an origin other than diffusion. 
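To make the role of the fundamental solution concrete, the following is a minimal numerical sketch, assuming the one-dimensional equation ∂u/∂t = ∂²u/∂x² on the whole line, for which the heat kernel is Φ(x, t) = (4πt)^(−1/2) exp(−x²/(4t)), and an arbitrary illustrative step initial profile; it evaluates the convolution u(·, t) = Φ(·, t) ∗ g by the trapezoidal rule.

```python
import numpy as np

def heat_kernel(x, t):
    """Fundamental solution of u_t = u_xx in one spatial dimension."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def solve_on_line(g, x, t):
    """Approximate u(x, t) = (Phi(., t) * g)(x) by the trapezoidal rule on the grid x."""
    u = np.empty_like(x)
    for i, xi in enumerate(x):
        u[i] = np.trapz(heat_kernel(xi - x, t) * g, x)
    return u

# Illustrative initial data: a step (discontinuous) temperature profile
x = np.linspace(-10.0, 10.0, 2001)
g = np.where(x < 0.0, 1.0, 0.0)

u_small_t = solve_on_line(g, x, 0.01)  # the jump is already smoothed out
u_large_t = solve_on_line(g, x, 5.0)   # the profile has spread over a wide region
print(round(u_small_t[1000], 3), round(u_large_t[1000], 3))  # both near 0.5 at x = 0
```

The immediate smoothing of the discontinuity and the spreading of the profile over time illustrate the character of solutions described earlier.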
Thermal diffusivity in polymers A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere where is the initial temperature of the sphere and the temperature at the surface of the sphere, of radius . This equation has also found applications in protein energy transfer and thermal modeling in biophysics. Financial Mathematics The heat equation arises in a number of phenomena and is often used in financial mathematics in the modeling of options. The Black–Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form with the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed form analytic solutions . Image Analysis The heat equation is also widely used in image analysis and in machine learning as the driving theory behind scale-space or graph Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method of . This method can be extended to many of the models with no closed form solution, see for instance . Riemannian geometry An abstract form of heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.
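As a complement to the Crank–Nicolson method mentioned in the image-analysis discussion above, the following is a minimal sketch of that scheme for ∂u/∂t = α ∂²u/∂x² with homogeneous Dirichlet boundary conditions; the grid, time step, diffusivity, and initial profile are arbitrary illustrative choices. Unlike the explicit scheme, Crank–Nicolson is implicit and unconditionally stable, which is why it is favoured when larger time steps are needed.

```python
import numpy as np

# Illustrative parameters (hypothetical, for demonstration only)
alpha, L, n, dt = 1.0, 1.0, 101, 1e-3
dx = L / (n - 1)
r = alpha * dt / (2.0 * dx**2)

# Interior-point matrices for (I - r*D2) u_new = (I + r*D2) u_old,
# where D2 is the standard second-difference matrix.
m = n - 2
D2 = np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = np.eye(m) - r * D2
B = np.eye(m) + r * D2

x = np.linspace(0.0, L, n)
u = np.sin(np.pi * x)  # illustrative initial condition, zero at both ends

for _ in range(100):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])  # endpoints stay at 0 (Dirichlet)

# For this initial condition the exact solution is sin(pi*x)*exp(-alpha*pi**2*t)
print(round(u.max(), 4), round(np.exp(-alpha * np.pi**2 * 0.1), 4))
```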
Physical sciences
Thermodynamics
Physics
166441
https://en.wikipedia.org/wiki/Lyapunov%20exponent
Lyapunov exponent
In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by where is the Lyapunov exponent. The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents, equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time. The exponent is named after Aleksandr Lyapunov. Definition of the maximal Lyapunov exponent The maximal Lyapunov exponent can be defined as follows: The limit ensures the validity of the linear approximation at any time. For discrete-time systems (maps or fixed point iterations) , for an orbit starting with this translates into: Definition of the Lyapunov spectrum For a dynamical system with evolution equation in an n–dimensional phase space, the spectrum of Lyapunov exponents depends, in general, on the starting point . However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. (For Hamiltonian systems, which do not have attractors, this is not a concern.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix ; this Jacobian defines the evolution of the tangent vectors, given by the matrix , via the equation with the initial condition . The matrix describes how a small change at the point propagates to the final point . The limit defines a matrix (the conditions for the existence of the limit are given by the Oseledets theorem). The Lyapunov exponents are defined by the eigenvalues of . The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system. Lyapunov exponent for time-varying linearization To introduce the Lyapunov exponent, consider a fundamental matrix (e.g., for linearization along a stationary solution in a continuous system) consisting of the linearly independent solutions of the first-order approximation of the system. The singular values of the matrix are the square roots of the eigenvalues of the matrix . The largest Lyapunov exponent is as follows Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable. Later, it was stated by O. Perron that the requirement of regularity of the first approximation is substantial. 
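For a one-dimensional map, the discrete-time definition above reduces to the time average of ln|f′| along the orbit. The following is a minimal sketch (the map, parameter values, initial point, and orbit length are arbitrary illustrative choices) estimating the maximal Lyapunov exponent of the logistic map x_{k+1} = r x_k (1 − x_k); for r = 4 the exact value is ln 2 ≈ 0.693.

```python
import math

def logistic_mle(r, x0=0.2, n=100_000, discard=1_000):
    """Estimate the maximal Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(discard):          # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log of |f'(x)| along the orbit
        x = r * x * (1.0 - x)
    return total / n

print(logistic_mle(4.0))   # approximately ln 2 ~ 0.693 (chaotic)
print(logistic_mle(3.2))   # negative: the orbit converges to a stable 2-cycle
```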
Perron effects of largest Lyapunov exponent sign inversion In 1930 O. Perron constructed an example of a second-order system, where the first approximation has negative Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. Also, it is possible to construct a reverse example in which the first approximation has positive Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov stable. The effect of sign inversion of Lyapunov exponents of solutions of the original system and the system of the first approximation with the same initial data was subsequently called the Perron effect. Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that a positive largest Lyapunov exponent does not, in general, indicate chaos. Therefore, time-varying linearization requires additional justification. Basic properties If the system is conservative (i.e., there is no dissipation), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative. If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero: the Lyapunov exponent corresponding to the eigenvalue of with an eigenvector in the direction of the flow. Significance of the Lyapunov spectrum The Lyapunov spectrum can be used to give an estimate of the rate of entropy production, of the fractal dimension, and of the Hausdorff dimension of the considered dynamical system. In particular, from the knowledge of the Lyapunov spectrum it is possible to obtain the so-called Lyapunov dimension (or Kaplan–Yorke dimension) , which is defined as follows: where is the maximum integer such that the sum of the largest exponents is still non-negative. represents an upper bound for the information dimension of the system. Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy according to Pesin's theorem. Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach, which is based on the direct Lyapunov method with special Lyapunov-like functions. The Lyapunov exponents of a bounded trajectory and the Lyapunov dimension of an attractor are invariant under diffeomorphisms of the phase space. The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits, the Lyapunov time will be finite, whereas for regular orbits it will be infinite. Numerical calculation Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964. Currently, the most commonly used numerical procedure estimates the matrix based on averaging several finite time approximations of the limit defining . 
One of the most used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion. The Lyapunov spectrum of various models are described. Source codes for nonlinear systems such as the Hénon map, the Lorenz equations, a delay differential equation and so on are introduced. For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods and such problems should be approached with care. The main difficulty is that the data does not fully explore the phase space, rather it is confined to the attractor which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum, provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored. Local Lyapunov exponent Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point in phase space. This may be done through the eigenvalues of the Jacobian matrix . These eigenvalues are also called local Lyapunov exponents. Local exponents are not invariant under a nonlinear change of coordinates. Conditional Lyapunov exponent This term is normally used regarding synchronization of chaos, in which there are two systems that are coupled, usually in a unidirectional manner so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system with the drive system treated as simply the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative.
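The Gram–Schmidt (QR) re-orthonormalization procedure described above can be written compactly for a two-dimensional map. The following is a minimal sketch, assuming the Hénon map with its classical parameters and arbitrary illustrative iteration counts; it estimates both Lyapunov exponents and the resulting Kaplan–Yorke dimension. For a = 1.4, b = 0.3 the exponents are approximately 0.42 and −1.62, and their sum equals ln b ≈ −1.20.

```python
import numpy as np

def henon_spectrum(a=1.4, b=0.3, n=100_000, discard=1_000):
    """Lyapunov spectrum of the Henon map via QR re-orthonormalization of tangent vectors."""
    x, y = 0.1, 0.1
    for _ in range(discard):
        x, y = 1.0 - a * x * x + y, b * x
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n):
        J = np.array([[-2.0 * a * x, 1.0],
                      [b,            0.0]])  # Jacobian of the map at (x, y)
        Q, R = np.linalg.qr(J @ Q)           # re-orthonormalize the tangent frame
        sums += np.log(np.abs(np.diag(R)))   # accumulate local stretching rates
        x, y = 1.0 - a * x * x + y, b * x
    return sums / n

def kaplan_yorke(spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a spectrum, sorted in decreasing order."""
    lam = np.sort(spectrum)[::-1]
    s = np.cumsum(lam)
    if s[0] < 0:
        return 0.0
    k = int(np.max(np.nonzero(s >= 0.0)[0]))  # largest index with non-negative partial sum
    if k == len(lam) - 1:
        return float(len(lam))
    return k + 1 + s[k] / abs(lam[k + 1])

spec = henon_spectrum()
print(spec)                # roughly [ 0.42, -1.62 ]
print(kaplan_yorke(spec))  # roughly 1.26
```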
Mathematics
Dynamical systems
null
13407628
https://en.wikipedia.org/wiki/Metroxylon%20sagu
Metroxylon sagu
Metroxylon sagu, the true sago palm, is a species of palm in the genus Metroxylon, native to tropical southeastern Asia. The tree is a major source of sago starch. Description True sago palm is a suckering (multiple-stemmed) palm, each stem only flowering once (hapaxanthic) with a large upright terminal inflorescence. A stem grows tall before it ends in an inflorescence. Before flowering, a stem bears about 20 pinnate leaves up to long. Each leaf has about 150–180 leaflets up to long. The inflorescence, tall and wide, consists of the continuation of the stem and 15–30 upwardly-curving (first-order) branches spirally arranged on it. Each first-order branch has 15–25 rigid, distichously arranged second-order branches; each second-order branch has 10–12 rigid, distichously arranged third-order branches. Flower pairs are spirally arranged on the third-order branches, each pair consisting of one male and one hermaphrodite flower. The fruit is drupe-like, about in diameter, covered in scales which turn from bright green to straw-coloured upon ripening. The sago palm reproduces by fruiting. Each stem (trunk) in a sago palm clump flowers and fruits at the end of its life, but the sago palm as an individual organism lives on through its suckers (shoots that are continuously branching off a stem at or below ground level). Distribution and habitat Metroxylon sagu is native to the Maluku Islands and New Guinea. It has been naturalised in other parts of tropical Asia, including Sumatra, Borneo and Thailand. Its habitat is in lowland swamp forests. Uses The tree is of commercial importance as the main source of sago, a starch obtained from the trunk by washing the starch kernels out of the pulverized pith with water. A trunk cut just prior to flowering contains enough sago to feed a person for a year. Sago is used in cooking for puddings, noodles, breads, and as a thickener. In the Sepik River region of New Guinea, pancakes made from sago are a staple food, often served with fresh fish. Its leaflets are also used as thatching which can remain intact for up to five years. The dried petioles (called gaba-gaba in Indonesian) are used to make walls and ceilings; they are very light, and therefore also used in the construction of rafts. To harvest the starch in the stem, it is felled shortly before or early during its final flowering stage when starch content is highest. Sago palm is propagated by man by collecting (cutting) and replanting young suckers rather than by seed. The upper portion of the trunk's core can be roasted for food; the young nuts, fresh shoots and palm cabbage are also edible. Research published in 2013 indicates that the sago palm was an important food source for the ancient people of coastal China, in the period prior to the cultivation of rice. Culture Sago was noted by the Chinese historian Zhao Rukuo (1170–1231) during the Song dynasty. In his Zhu Fan Zhi (1225), a collection of descriptions of foreign countries, he writes that the "kingdom of Boni (i.e. Brunei) ... produces no wheat, but hemp and rice, and they use sha-hu (sago) for grain".
Biology and health sciences
Arecales (inc. Palms)
Plants
1002536
https://en.wikipedia.org/wiki/Freeze%20drying
Freeze drying
Freeze drying, also known as lyophilization or cryodesiccation, is a low temperature dehydration process that involves freezing the product and lowering pressure, thereby removing the ice by sublimation. This is in contrast to dehydration by most conventional methods that evaporate water using heat. Because of the low temperature used in processing, the rehydrated product retains many of its original qualities. When solid objects like strawberries are freeze dried the original shape of the product is maintained. If the product to be dried is a liquid, as often seen in pharmaceutical applications, the properties of the final product are optimized by the combination of excipients (i.e., inactive ingredients). Primary applications of freeze drying include biological (e.g., bacteria and yeasts), biomedical (e.g., surgical transplants), food processing (e.g., coffee), and preservation. History The Inca were freeze drying potatoes into chuño since the 13th century. The process involved multiple cycles of exposing potatoes to below freezing temperatures on mountain peaks in the Andes during the evening, and squeezing water out and drying them in the sunlight during the day. The Inca people also used the unique climate of the Altiplano to freeze dry meat. The Japanese koya-dofu, freeze-dried tofu, dates to the mid-1500s in Nagano and the 1600s on Mount Koya. Modern freeze drying began as early as 1890 by Richard Altmann who devised a method to freeze dry tissues (either plant or animal), but went virtually unnoticed until the 1930s. In 1909, L. F. Shackell independently created the vacuum chamber by using an electrical pump. No further freeze drying information was documented until Tival in 1927 and Elser in 1934 had patented freeze drying systems with improvements to freezing and condenser steps. A significant turning point for freeze drying occurred during World War II when blood plasma and penicillin were needed to treat the wounded in the field. Because of the lack of refrigerated transport, many serum supplies spoiled before reaching their recipients. The freeze-drying process was developed as a commercial technique that enabled blood plasma and penicillin to be rendered chemically stable and viable without refrigeration. In the 1950s–1960s, freeze drying began to be viewed as a multi-purpose tool for both pharmaceuticals and food processing. In 2020, freeze dried candy saw a major surge in popularity due to viral popularity on social media with freeze dried versions of popular candies such as Skittles, Nerd Gummy Clusters, and SweeTarts appearing in stores. Early uses in food Freeze-dried foods became a major component of astronaut and military rations. What began for astronaut crews as tubed meals and freeze-dried snacks that were difficult to rehydrate, were transformed into hot meals in space by improving the process of rehydrating freeze-dried meals with water. As technology and food processing improved, NASA looked for ways to provide a complete nutrient profile while reducing crumbs, disease-producing bacteria, and toxins. The complete nutrient profile was improved with the addition of an algae-based vegetable-like oil to add polyunsaturated fatty acids. Polyunsaturated fatty acids are beneficial in mental and vision development and, as they remain stable during space travel, can provide astronauts with added benefits. The crumb problem was solved with the addition of a gelatin coating on the foods to lock in and prevent crumbs. 
Disease-producing bacteria and toxins were reduced by quality control and the development of the Hazard Analysis and Critical Control Points (HACCP) plan, which is widely used today to evaluate food material before, during, and after processing. With the combination of these three innovations, NASA could provide safe and wholesome foods to their crews from freeze-dried meals. Military rations have also come a long way, from cured pork and corn meal to beefsteaks with mushroom gravy. How rations are chosen and developed is based on acceptance, nutrition, wholesomeness, producibility, cost, and sanitation. Additional requirements are that rations have a minimum shelf life of three years, be deliverable by air, be consumable in worldwide environments, and provide a complete nutritional profile. The new T-rations have been improved by increasing the number of acceptable items, and they provide high-quality meals in the field. Freeze-dried coffee was also incorporated, replacing spray-dried coffee in the Meal, Ready-to-Eat category. Stages of freeze drying There are four stages in the complete freeze drying process: pretreatment, freezing, primary drying, and secondary drying. Pretreatment Pretreatment includes any method of treating the product prior to freezing. This may include concentrating the product, formulation revision (i.e., addition of components to increase stability, preserve appearance, and/or improve processing), decreasing a high-vapor-pressure solvent, or increasing the surface area. Food pieces are often IQF treated to make them free flowing prior to freeze drying. Freeze-dried pharmaceutical products are in most cases parenterals administered by injection after reconstitution, which need to be sterile as well as free of particulate impurities. Pre-treatment in these cases consists of solution preparation followed by a multi-step filtration. Afterwards the liquid is filled under sterile conditions into the final containers, which in production-scale freeze dryers are loaded automatically onto the shelves. In many instances the decision to pretreat a product is based on theoretical knowledge of freeze-drying and its requirements, or is demanded by cycle time or product quality considerations. Freezing and annealing During the freezing stage, the material is cooled below its triple point, the temperature at which the solid, liquid, and gas phases of the material can coexist. This ensures that sublimation rather than melting will occur in the following steps. To facilitate faster and more efficient freeze drying, larger ice crystals are preferable. The large ice crystals form a network within the product which promotes faster removal of water vapor during sublimation. To produce larger crystals, the product should be frozen slowly or can be cycled up and down in temperature in a process called annealing. The freezing phase is the most critical in the whole freeze-drying process, as the freezing method can affect the speed of reconstitution, the duration of the freeze-drying cycle, product stability, and appropriate crystallization. Amorphous materials do not have a eutectic point, but they do have a critical point, below which the product must be maintained to prevent melt-back or collapse during primary and secondary drying. Structurally sensitive goods In the case of goods where preservation of structure is required, like food or objects with formerly-living cells, large ice crystals break the cell walls, resulting in increasingly poor texture and loss of nutritive content. 
In this case, rapidly freezing the material to below its eutectic point avoids the formation of large ice crystals. Usually, the freezing temperatures are between and . Primary drying During the primary drying phase, the pressure is lowered (to the range of a few millibars), and enough heat is supplied to the material for the ice to sublimate. The amount of heat necessary can be calculated using the sublimating molecules' latent heat of sublimation. In this initial drying phase, about 95% of the water in the material is sublimated. This phase may be slow (can be several days in the industry), because, if too much heat is added, the material's structure could be altered. In this phase, pressure is controlled through the application of partial vacuum. The vacuum speeds up the sublimation, making it useful as a deliberate drying process. Furthermore, a cold condenser chamber and/or condenser plates provide a surface(s) for the water vapor to re-liquify and solidify on. It is important to note that, in this range of pressure, the heat is brought mainly by conduction or radiation; the convection effect is negligible, due to the low air density. Secondary drying The secondary drying phase aims to remove unfrozen water molecules, since the ice was removed in the primary drying phase. This part of the freeze-drying process is governed by the material's adsorption isotherms. In this phase, the temperature is raised higher than in the primary drying phase, and can even be above , to break any physico-chemical interactions that have formed between the water molecules and the frozen material. Usually the pressure is also lowered in this stage to encourage desorption (typically in the range of microbars, or fractions of a pascal). However, there are products that benefit from increased pressure as well. After the freeze-drying process is complete, the vacuum is usually broken with an inert gas, such as nitrogen, before the material is sealed. At the end of the operation, the final residual water content in the product is extremely low, around 1–4%. Applications of freeze drying Freeze-drying causes less damage to the substance than other dehydration methods using higher temperatures. Nutrient factors that are sensitive to heat are lost less in the process as compared to the processes incorporating heat treatment for drying purposes. Freeze-drying does not usually cause shrinkage or toughening of the material being dried. In addition, flavors, smells, and nutritional content generally remain unchanged, making the process popular for preserving food. However, water is not the only chemical capable of sublimation, and the loss of other volatile compounds such as acetic acid (vinegar) and alcohols can yield undesirable results. Freeze-dried products can be rehydrated (reconstituted) much more quickly and easily because the process leaves microscopic pores. The pores are created by the ice crystals that sublimate, leaving gaps or pores in their place. This is especially important when it comes to pharmaceutical uses. Freeze-drying can also be used to increase the shelf life of some pharmaceuticals for many years. Pharmaceuticals and biotechnology Pharmaceutical companies often use freeze-drying to increase the shelf life of the products, such as live virus vaccines, biologics, and other injectables. By removing the water from the material and sealing the material in a glass vial, the material can be easily stored, shipped, and later reconstituted to its original form for injection. 
Another example from the pharmaceutical industry is the use of freeze drying to produce tablets or wafers, the advantage of which is less excipient as well as a rapidly absorbed and easily administered dosage form. Freeze-dried pharmaceutical products are produced as lyophilized powders for reconstitution in vials and more recently in prefilled syringes for self-administration by a patient. Examples of lyophilized biological products include many vaccines such as live measles virus vaccine, typhoid vaccine, and meningococcal polysaccharide vaccine groups A and C combined. Other freeze-dried biological products include antihemophilic factor VIII, interferon alfa, anti-blood clot medicine streptokinase, and wasp venom allergenic extract. Many bio-pharmaceutical products based on therapeutic proteins such as monoclonal antibodies require lyophilization for stability. Examples of lyophilized biopharmaceuticals include blockbuster drugs such as etanercept (Enbrel by Amgen), infliximab (Remicade by Janssen Biotech), rituximab, and trastuzumab (Herceptin by Genentech). Cell extracts that support cell-free biotechnology applications such as point-of-care diagnostics and biomanufacturing are also freeze-dried to improve stability under room temperature storage. Dry powders of probiotics are often produced by bulk freeze-drying of live microorganisms such as lactic acid bacteria and bifidobacteria. Lyophilized biologics can be pressed into pellets and tablets for anhydrous and high-density, solid-state storage of biological products. Freeze drying of food The primary purpose of freeze drying within the food industry is to extend the shelf-life of the food while maintaining the quality. Freeze-drying is known to result in the highest quality of foods of all drying techniques because structural integrity is maintained along with preservation of flavors. Because freeze drying is expensive, it is used mainly with high-value products. Examples of high-value freeze-dried products are seasonal fruits and vegetables because of their limited availability, coffee; and foods used for military rations, astronauts/cosmonauts, and/or hikers. NASA and military rations Because of its light weight per volume of reconstituted food, freeze-dried products are popular and convenient for hikers, as military rations, or astronaut meals. A greater amount of dried food can be carried compared to the same weight of wet food. In replacement of wet food, freeze dried food can easily be rehydrated with water if desired and shelf-life of the dried product is longer than fresh/wet product making it ideal for long trips taken by hikers, military personnel, or astronauts. The development of freeze drying increased meal and snack variety to include items like shrimp cocktail, chicken and vegetables, butterscotch pudding, and apple sauce. Coffee Coffee contains flavor and aroma qualities that are created due to the Maillard reaction during roasting and can be preserved with freeze-drying. Compared to other drying methods like room temperature drying, hot-air drying, and solar drying, Robusta coffee beans that were freeze-dried contained higher amounts of essential amino acids like leucine, lysine, and phenylalanine. Also, few non-essential amino acids that significantly contributed to taste were preserved. Fruits With conventional dehydration, berries can degrade in quality as their structure is delicate and contains high levels of moisture. 
Strawberries were found to have the highest quality when freeze dried; retaining color, flavor, and ability to be re-hydrated. Insects Freeze-drying is used extensively to preserve insects for the purposes of consumption. Whole freeze-dried insects are sold as exotic pet food, bird feed, fish bait, and increasingly for human consumption. Powdered freeze-dried insects are used as a protein base in animal feeds, and in some markets, as a nutritional supplement for human use. Farmed insects are generally used for all of the aforementioned purposes versus harvesting wild insects, except in the case of grasshoppers which are often harvested out of field crops. Technological industry In chemical synthesis, products are often freeze-dried to make them more stable, or easier to dissolve in water for subsequent use. In bioseparations, freeze-drying can be used also as a late-stage purification procedure, because it can effectively remove solvents. Furthermore, it is capable of concentrating substances with low molecular weights that are too small to be removed by a filtration membrane. Freeze-drying is a relatively expensive process. The equipment is about three times as expensive as the equipment used for other separation processes, and the high energy demands lead to high energy costs. Furthermore, freeze-drying also has a long process time, because the addition of too much heat to the material can cause melting or structural deformations. Therefore, freeze-drying is often reserved for materials that are heat-sensitive, such as proteins, enzymes, microorganisms, and blood plasma. The low operating temperature of the process leads to minimal damage of these heat-sensitive products. In nanotechnology, freeze-drying is used for nanotube purification to avoid aggregation due to capillary forces during regular thermal vaporization drying. Taxidermy Freeze-drying is among the methods used to preserve animals in the field of taxidermy. When animals are preserved in this manner they are called "freeze-dried taxidermy" or "freeze-dried mounts". Freeze-drying is commonly used to preserve crustaceans, fish, amphibians, reptiles, insects, and smaller mammals. Freeze-drying is also used as a means to memorialize pets after death. Rather than opting for a traditional skin mount when choosing to preserve their pet via taxidermy, many owners opt for freeze-drying because it is less invasive upon the pet's body. Other uses Organizations such as the Document Conservation Laboratory at the United States National Archives and Records Administration (NARA) have done studies on freeze-drying as a recovery method of water-damaged books and documents. While recovery is possible, restoration quality depends on the material of the documents. If a document is made of a variety of materials, which have different absorption properties, expansion will occur at a non-uniform rate, which could lead to deformations. Water can also cause mold to grow or make inks bleed. In these cases, freeze-drying may not be an effective restoration method. In bacteriology freeze-drying is used to conserve special strains. Advanced ceramics processes sometimes use freeze-drying to create a formable powder from a sprayed slurry mist. Freeze-drying creates softer particles with a more homogeneous chemical composition than traditional hot spray drying, but it is also more expensive. 
A new form of burial which previously freeze-dries the body with liquid nitrogen has been developed by the Swedish company Promessa Organic AB, which puts it forward as an environmentally friendly alternative to traditional casket and cremation burials. Advantages Freeze-drying is viewed as the optimal method of choice for dehydration of food because of the preservation of quality, meaning characteristics of the food product such as aroma, rehydration, and bioactivity, are noticeably higher compared to foods dried using other techniques. Shelf-life extension Shelf-life extension results from low processing temperatures in conjunction with rapid transition of water through sublimation. With these processing conditions, deterioration reactions, including nonenzymic browning, enzymatic browning, and protein denaturation, are minimized. When the product is successfully dried, packaged properly, and placed in ideal storage conditions the foods have a shelf life of greater than 12 months. Re-hydration If a dried product cannot be easily or fully re-hydrated, it is considered to be of lower quality. Because if the final freeze dried product is porous, complete re-hydration can occur in the food. This signifies greater quality of the product and makes it ideal for ready-to-eat instant meals. Effect on nutrients and sensory quality Due to the low processing temperatures and the minimization of deterioration reactions, nutrients are retained and color is maintained. Freeze-dried fruit maintains its original shape and has a characteristic soft crispy texture. Disadvantages Microbial growth Since the main method of microbial decontamination for freeze drying is the low temperature dehydration process, spoilage organisms and pathogens resistant to these conditions can remain in the product. Although microbial growth is inhibited by the low moisture conditions, it can still survive in the food product. An example of this is a viral hepatitis A outbreak that occurred in the United States in 2016, associated with frozen strawberries. If the product is not properly packaged and/or stored, the product can absorb moisture, allowing the once inhibited pathogens to begin reproducing as well. Cost Freeze-drying costs about five times as much as conventional drying, so it is most suitable for products which increase in value with processing. Costs are also variable depending on the product, the packaging material, processing capacity, etc. The most energy-intensive step is sublimation. Silicone oil leakage Silicone oil is the common fluid that is used to heat or cool shelves in the freeze-dryer. The continuous heat/cool cycle can lead to a leakage of silicone oil at weak areas that connect the shelf and hose. This can contaminate the product leading to major losses of pharmaceuticals and food products. Hence, to avoid this issue, mass spectrometers are used to identify vapors released by silicone oil to immediately take corrective action and prevent contamination of the product. Products Mammalian cells generally do not survive freeze drying even though they still can be preserved. Equipment and types of freeze dryers There are many types of freeze-dryers available, however, they usually contain a few essential components. These are a vacuum chamber, shelves, process condenser, shelf-fluid system, refrigeration system, vacuum system, and control system. Function of essential components Chamber The chamber is highly polished and contains insulation, internally. 
It is manufactured from stainless steel and contains multiple shelves for holding the product. A hydraulic or electric motor is in place to ensure the door is vacuum-tight when closed. Process condenser The process condenser consists of refrigerated coils or plates that can be external or internal to the chamber. During the drying process, the condenser traps water. For increased efficiency, the condenser temperature should be 20 °C (36 °F) lower than that of the product during primary drying, and the condenser should have a defrosting mechanism to ensure that the maximum amount of water vapor in the air is condensed. Shelf fluid The amount of heat energy needed during the primary and secondary drying phases is regulated by an external heat exchanger. Usually, silicone oil is circulated around the system with a pump. Refrigeration system This system works to cool the shelves and the process condenser by using compressors or liquid nitrogen, which supplies the energy necessary for the product to freeze. Vacuum system During the drying process, a vacuum of 50–100 microbar is applied by the vacuum system to remove the solvent. A two-stage rotary vacuum pump is used; however, if the chamber is large, multiple pumps are needed. This system compresses non-condensable gases through the condenser. Control system Finally, the control system sets up controlled values for shelf temperature, pressure, and time that are dependent on the product and/or the process. The freeze-dryer can run for a few hours or days depending on the product. Contact freeze dryers Contact freeze dryers use contact (conduction) of the food with the heating element to supply the sublimation energy. This type of freeze dryer is a basic model that is simple to set up for sample analysis. One of the main ways contact freeze dryers supply heat is through shelf-like platforms contacting the samples. The shelves play a major role as they behave like heat exchangers at different times of the freeze-drying process. They are connected to a silicone oil system that removes heat energy during freezing and provides energy during drying. Additionally, the shelf-fluid system works to provide specific temperatures to the shelves during drying by pumping a fluid (usually silicone oil) at low pressure. The downside to this type of freeze dryer is that the heat is only transferred from the heating element to the side of the sample immediately touching the heater. This problem can be minimized by maximizing the surface area of the sample touching the heating element by using a ribbed tray, slightly compressing the sample between two solid heated plates above and below, or compressing with a heated mesh from above and below. Radiant freeze dryers Radiant freeze dryers use infrared radiation to heat the sample in the tray. This type of heating allows simple flat trays to be used, since an infrared source can be located above the trays to radiate downwards onto the product. Infrared radiation heating allows for uniform heating of the surface of the product, but has little capacity for penetration, so it is used mostly with shallow trays and homogeneous sample matrices. Microwave-assisted freeze dryers Microwave-assisted freeze dryers utilize microwaves to allow for deeper penetration into the sample to expedite the sublimation and heating processes in freeze-drying. This method can be complicated to set up and run, as the microwaves can create an electric field capable of causing gases in the sample chamber to become a plasma.
This plasma could potentially burn the sample, so maintaining a microwave strength appropriate for the vacuum levels is imperative. The rate of sublimation in a product can affect the microwave impedance, in which case the microwave power must be adjusted accordingly.
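In practice, the control system described above steps the shelf-fluid temperature, chamber pressure, and hold time through the freezing, primary-drying, and secondary-drying phases. The minimal Python sketch below expresses such a recipe purely for illustration; every setpoint is a hypothetical placeholder, since real cycles are product-specific and developed experimentally.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    shelf_temp_c: float   # shelf-fluid temperature setpoint, deg C
    pressure_mbar: float  # chamber pressure setpoint, mbar
    hours: float          # hold time

# Hypothetical cycle; the numbers are placeholders, not a validated recipe.
CYCLE = [
    Phase("freezing",         -45.0, 1013.0,  4.0),  # at ambient pressure, refrigeration only
    Phase("primary drying",   -20.0,    0.08, 24.0), # sublimation under roughly 80 microbar of vacuum
    Phase("secondary drying",  25.0,    0.05,  8.0), # desorption of residual bound water
]

def describe(cycle):
    """Print the setpoints the shelf-fluid and vacuum systems would hold in each phase."""
    for p in cycle:
        print(f"{p.name:16s} shelf {p.shelf_temp_c:+6.1f} C  "
              f"pressure {p.pressure_mbar:7.2f} mbar  hold {p.hours:4.1f} h")

describe(CYCLE)
```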
Physical sciences
Phase separations
Chemistry
1002551
https://en.wikipedia.org/wiki/Flap%20%28aeronautics%29
Flap (aeronautics)
A flap is a high-lift device used to reduce the stalling speed of an aircraft wing at a given weight. Flaps are usually mounted on the wing trailing edges of a fixed-wing aircraft. Flaps are used to reduce the take-off distance and the landing distance. Flaps also cause an increase in drag, so they are retracted when not needed. The flaps installed on most aircraft are partial-span flaps, extending spanwise from near the wing root to the inboard end of the ailerons. When partial-span flaps are extended, they alter the spanwise lift distribution on the wing by causing the inboard half of the wing to supply an increased proportion of the lift, and the outboard half to supply a reduced proportion of the lift. Reducing the proportion of the lift supplied by the outboard half of the wing is accompanied by a reduction in the angle of attack on the outboard half. This is beneficial because it increases the margin above the stall of the outboard half, maintaining aileron effectiveness and reducing the likelihood of asymmetric stall and spinning. The ideal lift distribution across a wing is elliptical, and extending partial-span flaps causes a significant departure from the elliptical distribution. This increases lift-induced drag, which can be beneficial during approach and landing because it allows the aircraft to descend at a steeper angle. Extending the wing flaps increases the camber or curvature of the wing, raising the maximum lift coefficient or the upper limit to the lift a wing can generate. This allows the aircraft to generate the required lift at a lower speed, reducing the minimum speed (known as stall speed) at which the aircraft will safely maintain flight. For most aircraft configurations, a useful side effect of flap deployment is a decrease in aircraft pitch angle, which lowers the nose, thereby improving the pilot's view of the runway over the nose of the aircraft during landing. There are many different designs of flaps, with the specific choice depending on the size, speed and complexity of the aircraft on which they are to be used, as well as the era in which the aircraft was designed. Plain flaps, slotted flaps, and Fowler flaps are the most common. Krueger flaps are positioned on the leading edge of the wings and are used on many jet airliners. The Fowler, Fairey-Youngman and Gouge types of flap increase the wing area in addition to changing the camber. The larger lifting surface reduces wing loading, hence further reducing the stalling speed. Some flaps are fitted elsewhere. Leading-edge flaps form the wing leading edge and when deployed they rotate down to increase the wing camber. The de Havilland DH.88 Comet racer had flaps running beneath the fuselage and forward of the wing trailing edge. Many of the Waco Custom Cabin series biplanes have the flaps at mid-chord on the underside of the top wing. Principles of operation The general airplane lift equation, L = ½ ρ V² S C_L, demonstrates these relationships, where: L is the amount of lift produced, ρ is the air density, V is the true airspeed of the airplane (the velocity of the airplane relative to the air), S is the area of the wing, and C_L is the lift coefficient, which is determined by the shape of the airfoil used and the angle at which the wing meets the air (or angle of attack). Here, it can be seen that increasing the area (S) and lift coefficient (C_L) allows a similar amount of lift to be generated at a lower airspeed (V). Thus, flaps are used extensively for short takeoffs and landings (STOL). Extending the flaps also increases the drag coefficient of the aircraft.
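To make the relationship concrete, the short Python sketch below rearranges the lift equation into a stall-speed estimate, V_stall = sqrt(2W / (ρ S C_Lmax)), and compares a flaps-retracted case with a flaps-extended case. The weight, wing area, and lift coefficients are illustrative placeholder values, not figures for any particular aircraft.

```python
from math import sqrt

def stall_speed(weight_n, wing_area_m2, cl_max, rho=1.225):
    """Speed (m/s) at which lift just equals weight: V = sqrt(2W / (rho * S * CL_max))."""
    return sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

# Illustrative light-aircraft figures (assumed, not taken from the article):
weight = 1_100 * 9.81                                                # ~1,100 kg aircraft, in newtons
clean   = stall_speed(weight, wing_area_m2=16.2, cl_max=1.6)         # flaps retracted
flapped = stall_speed(weight, wing_area_m2=16.2 * 1.05, cl_max=2.1)  # flaps extended: a little more area, higher CL_max

print(f"stall speed, flaps up:   {clean:5.1f} m/s")
print(f"stall speed, flaps down: {flapped:5.1f} m/s")
```

With the higher maximum lift coefficient and slightly larger area assumed for the flapped case, the computed stall speed falls by roughly 15%, which is the effect described above.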
Therefore, for any given weight and airspeed, flaps increase the drag force. Flaps increase the drag coefficient of an aircraft due to higher induced drag caused by the distorted spanwise lift distribution on the wing with flaps extended. Some flaps increase the wing area and, for any given speed, this also increases the parasitic drag component of total drag. Flaps during takeoff Depending on the aircraft type, flaps may be partially extended for takeoff. When used during takeoff, flaps trade runway distance for climb rate: using flaps reduces ground roll but also reduces the climb rate. The amount of flap used on takeoff is specific to each type of aircraft, and the manufacturer will suggest limits and may indicate the reduction in climb rate to be expected. The Cessna 172S Pilot Operating Handbook recommends 10° of flaps on takeoff when the ground is soft or the runway is short; otherwise 0° is used. Flaps during landing Flaps may be fully extended for landing to give the aircraft a lower stall speed so the approach to landing can be flown more slowly, which also allows the aircraft to land in a shorter distance. The higher lift and drag associated with fully extended flaps allows a steeper and slower approach to the landing site, but imposes handling difficulties in aircraft with very low wing loading (i.e. having little weight and a large wing area). Winds across the line of flight, known as crosswinds, cause the windward side of the aircraft to generate more lift and drag, causing the aircraft to roll, yaw and pitch off its intended flight path, and as a result many light aircraft land with reduced flap settings in crosswinds. Furthermore, once the aircraft is on the ground, the flaps may decrease the effectiveness of the brakes since the wing is still generating lift and preventing the entire weight of the aircraft from resting on the tires, thus increasing stopping distance, particularly in wet or icy conditions. Usually, the pilot will raise the flaps as soon as possible to prevent this from occurring. Maneuvering flaps Some gliders not only use flaps when landing, but also in flight to optimize the camber of the wing for the chosen speed. While thermalling, flaps may be partially extended to reduce the stall speed so that the glider can be flown more slowly and thereby reduce the rate of sink, which lets the glider use the rising air of the thermal more efficiently and turn in a smaller circle to make best use of the core of the thermal. At higher speeds, a negative flap setting is used to reduce the nose-down pitching moment. This reduces the balancing load required on the horizontal stabilizer, which in turn reduces the trim drag associated with keeping the glider in longitudinal trim. Negative flap may also be used during the initial stage of an aerotow launch and at the end of the landing run in order to maintain better control by the ailerons. Like gliders, some fighters such as the Nakajima Ki-43 also use special flaps to improve maneuverability during air combat, allowing the fighter to create more lift at a given speed and make much tighter turns. The flaps used for this must be designed specifically to handle the greater stresses, and most flaps have a maximum speed at which they can be deployed. Control line model aircraft built for precision aerobatics competition usually have a type of maneuvering flap system that moves them in an opposing direction to the elevators, to assist in tightening the radius of a maneuver.
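The drag penalty described at the start of this section can be illustrated with the standard parabolic drag polar, C_D = C_D0 + C_L² / (π e AR). In the hedged Python sketch below, extending the flaps is represented simply by a larger zero-lift drag coefficient and a lower Oswald span-efficiency factor e, standing in for the distorted spanwise lift distribution; all numbers are assumed for illustration.

```python
from math import pi

RHO = 1.225  # sea-level air density, kg/m^3

def drag(weight_n, speed_ms, wing_area, aspect_ratio, cd0, oswald_e):
    """Total drag from the parabolic drag polar CD = CD0 + CL^2 / (pi * e * AR)."""
    q = 0.5 * RHO * speed_ms ** 2                       # dynamic pressure
    cl = weight_n / (q * wing_area)                     # lift coefficient needed to support the weight
    cd = cd0 + cl ** 2 / (pi * oswald_e * aspect_ratio)
    return q * wing_area * cd

# Illustrative figures only (assumed, not from the article):
W, V, S, AR = 10_800.0, 30.0, 16.2, 7.3

clean   = drag(W, V, S, AR, cd0=0.028, oswald_e=0.80)   # flaps retracted
flapped = drag(W, V, S, AR, cd0=0.070, oswald_e=0.70)   # flaps extended: more parasitic drag and a
                                                        # less efficient (distorted) spanwise loading

print(f"drag, flaps up:   {clean:6.0f} N")
print(f"drag, flaps down: {flapped:6.0f} N")
```

At the same weight and airspeed, the flapped case produces markedly more drag, consistent with the steeper, slower approach discussed above.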
Flap tracks Manufactured most often from PH steels and titanium, flap tracks control the flaps located on the trailing edge of an aircraft's wings. Extending flaps often run on guide tracks. Where these run outside the wing structure they may be faired in to streamline them and protect them from damage. Some flap track fairings are designed to act as anti-shock bodies, which reduce drag caused by local sonic shock waves where the airflow becomes transonic at high speeds. Thrust gates Thrust gates, or gaps, in the trailing edge flaps may be required to minimise interference between the engine flow and deployed flaps. In the absence of an inboard aileron, which provides a gap in many flap installations, a modified flap section may be needed. The thrust gate on the Boeing 757 was provided by a single-slotted flap in between the inboard and outboard double-slotted flaps. The A320, A330, A340 and A380 have no inboard aileron. No thrust gate is required in the continuous, single-slotted flap. Interference in the go-around case while the flaps are still fully deployed can cause increased drag, which must not compromise the climb gradient. Types of flap Plain flap The rear portion of the airfoil rotates downwards on a simple hinge mounted at the front of the flap. The Royal Aircraft Factory and National Physical Laboratory in the United Kingdom tested flaps in 1913 and 1914, but these were never installed in an actual aircraft. In 1916, the Fairey Aviation Company made a number of improvements to a Sopwith Baby they were rebuilding, including their Patent Camber Changing Gear, making the Fairey Hamble Baby, as they renamed it, the first aircraft to fly with flaps. These were full-span plain flaps which incorporated ailerons, making it also the first instance of flaperons. Fairey were not alone, however, as Breguet soon incorporated automatic flaps into the lower wing of their Breguet 14 reconnaissance/bomber in 1917. Owing to the greater efficiency of other flap types, the plain flap is normally only used where simplicity is required. Split flap The rear portion of the lower surface of the airfoil hinges downwards from the leading edge of the flap, while the upper surface stays immobile. This can cause large changes in longitudinal trim, pitching the nose either down or up. At full deflection, a split flap acts much like a spoiler, adding significantly to the drag coefficient. It also adds a little to the lift coefficient. It was invented by Orville Wright and James M. H. Jacobs in 1920, but only became common in the 1930s and was then quickly superseded. The Douglas DC-1 (progenitor to the DC-3 and C-47) was one of the first of many aircraft types to use split flaps. Slotted flap A gap between the flap and the wing forces high-pressure air from below the wing over the flap, helping the airflow remain attached to the flap and increasing lift compared to a split flap. Additionally, lift across the entire chord of the primary airfoil is greatly increased as the velocity of air leaving its trailing edge is raised, from the typical non-flap 80% of freestream, to that of the higher-speed, lower-pressure air flowing around the leading edge of the slotted flap. Any flap that allows air to pass between the wing and the flap is considered a slotted flap. The slotted flap was a result of research at Handley-Page, a variant of the slot that dates from the 1920s, but was not widely used until much later. Some flaps use multiple slots to further boost the effect.
Fowler flap A split flap that slides backwards, before hinging downward, thereby increasing first chord, then camber. The flap may form part of the upper surface of the wing, like a plain flap, or it may not, like a split flap, but it must slide rearward before lowering. As a defining feature – distinguishing it from the Gouge Flap – it always provides a slot effect. The flap was invented by Harlan D. Fowler in 1924, and tested by Fred Weick at NACA in 1932. First used on the Martin 146 prototype in 1935, it entered production on the 1937 Lockheed Super Electra, and remains in widespread use on modern aircraft, often with multiple slots. Junkers flap A slotted plain flap fixed below the trailing edge of the wing, and rotating about its forward edge. When not in use, it has more drag than other types, but is more effective at creating additional lift than a plain or split flap, while retaining their mechanical simplicity. Invented by Otto Mader at Junkers in the late 1920s, they were most often seen on the Junkers Ju 52 and the Junkers Ju 87 Stuka, though the same basic design can also be found on many modern ultralights, like the Denney Kitfox. This type of flap is sometimes referred to as an external-airfoil flap. Gouge flap A type of split flap that slides backward along curved tracks that force the trailing edge downward, increasing chord and camber without affecting trim or requiring any additional mechanisms. It was invented by Arthur Gouge for Short Brothers in 1936 and used on the Short Empire and Sunderland flying boats, which used the very thick Shorts A.D.5 airfoil. Short Brothers may have been the only company to use this type. Fairey-Youngman flap Drops down (becoming a Junkers Flap) before sliding aft and then rotating up or down. Fairey was one of the few exponents of this design, which was used on the Fairey Firefly and Fairey Barracuda. When in the extended position, it could be angled up (to a negative angle of incidence) so that the aircraft could be dived vertically without needing excessive trim changes. Zap flap The Zap flap was invented by Edward F. Zaparka while he was with Berliner/Joyce and tested on a General Airplanes Corporation Aristocrat in 1932 and on other types periodically thereafter, but it saw little use on production aircraft other than on the Northrop P-61 Black Widow. The leading edge of the flap is mounted on a track, while a point at mid chord on the flap is connected via an arm to a pivot just above the track. When the flap's leading edge moves aft along the track, the triangle formed by the track, the shaft and the surface of the flap (fixed at the pivot) gets narrower and deeper, forcing the flap down. Krueger flap A hinged flap which folds out from under the wing's leading edge while not forming a part of the leading edge of the wing when retracted. This increases the camber and thickness of the wing, which in turn increases lift and drag. This is not the same as a leading edge droop flap, as that is formed from the entire leading edge. Invented by Werner Krüger in 1943 and evaluated in Goettingen, Krueger flaps are found on many modern swept wing airliners. Gurney flap A small fixed perpendicular tab of between 1 and 2% of the wing chord, mounted on the high pressure side of the trailing edge of an airfoil. It was named for racing car driver Dan Gurney who rediscovered it in 1971, and has since been used on some helicopters such as the Sikorsky S-76B to correct control problems without having to resort to a major redesign. 
It boosts the efficiency of even basic theoretical airfoils (made up of a triangle and a circle overlapped) to the equivalent of a conventional airfoil. The principle was discovered in the 1930s, but was rarely used and was then forgotten. Late marks of the Supermarine Spitfire used a bead on the trailing edge of the elevators, which functioned in a similar manner. Leading edge flap The entire leading edge of the wing rotates downward, effectively increasing camber and also slightly reducing chord. Most commonly found on fighters with very thin wings unsuited to other leading-edge high-lift devices. Slats are one such device: extendable high-lift devices on the leading edge of the wings of some fixed-wing aircraft. Their purpose is to increase lift during low speed operations such as take-off, initial climb, approach and landing. Blown flap A type of boundary layer control system, blown flaps pass engine-generated air or exhaust over the flaps to increase lift beyond that attainable with mechanical flaps. Types include the original (internally blown flap), which blows compressed air from the engine over the top of the flap; the externally blown flap, which blows engine exhaust over the upper and lower surfaces of the flap; and upper surface blowing, which blows engine exhaust over the top of the wing and flap. While testing was done in Britain and Germany before the Second World War, and flight trials started, the first production aircraft with blown flaps did not appear until the 1957 Lockheed T2V SeaStar. Upper Surface Blowing was used on the Boeing YC-14 in 1976. Flexible flap Also known as the FlexFoil. A modern interpretation of wing warping, internal mechanical actuators bend a lattice that changes the airfoil shape. It may have a flexible gap seal at the transition between fixed and flexible airfoils. Flaperon A type of aircraft control surface that combines the functions of both flaps and ailerons. Continuous trailing-edge flap As of 2014, U.S. Army Research Laboratory (ARL) researchers at NASA's Langley Research Center had developed an active-flap design for helicopter rotor blades. The Continuous Trailing-Edge Flap (CTEF) uses components to change blade camber during flight, eliminating mechanical hinges in order to improve system reliability. Prototypes were constructed for wind-tunnel testing. A team from ARL completed a live-fire test of a rotor blade with individual blade control technology in January 2016. The live fire experiments explored the ballistic vulnerability of blade control technologies. Researchers fired three shots representative of typical ground fire on a 7-foot-span, 10-inch-chord rotor blade section with a 4-foot-long CTEF at ARL's Airbase Experimental Facility. Related devices Leading edge slats and slots are mounted on the top of the wings' leading edge, and while they may be either fixed or retractable, when deployed they provide a slot or gap under the slat to force air against the top of the wing, which is absent on a Krueger flap. They offer excellent lift and enhance controllability at low speeds. Leading edge slats allow the wing to fly at a higher angle of attack, which decreases takeoff and landing distances. Other types of flaps may be equipped with one or more slots to increase their effectiveness, a typical setup on many modern airliners. These are known as slotted flaps as described above. Frederick Handley Page experimented with fore and aft slot designs in the 1920s and 1930s.
Spoilers are intended to create drag and reduce lift by "spoiling" the airflow over the wing. A spoiler is much larger than a Gurney flap, and can be retracted. Spoilers are usually installed mid-chord on the upper surface of the wing, but may also be installed on the lower surface. Air brakes are used to increase drag, allowing the aircraft to decelerate rapidly. When installed on the wings, they differ from flaps and spoilers in that they are not intended to modify the lift and are built strongly enough to be deployed at much higher speeds. Ailerons are similar to flaps (and work the same way), but are intended to provide lateral control, rather than to change the lifting characteristics of both wings together, and so operate differentially – when an aileron on one wing increases the lift, the opposite aileron does not, and will often work to decrease lift. When ailerons are designed to lower in conjunction with flaps, they are usually called flaperons, while those that spoil lift (typically placed on the upper surface before the trailing edge) are called spoilerons.
Technology
Aircraft components
null
1002559
https://en.wikipedia.org/wiki/Asian%20giant%20hornet
Asian giant hornet
The Asian giant hornet (Vespa mandarinia) or northern giant hornet, including the color form referred to as the Japanese giant hornet, is the world's largest hornet. It is native to temperate and tropical East Asia, South Asia, Mainland Southeast Asia, and parts of the Russian Far East. It was also found in the Pacific Northwest of North America in late 2019 with a few more additional sightings in 2020, and nests found in 2021, prompting concern that it could become an invasive species, but in December 2024, it was announced that the hornets had been eradicated from the region as well as from the rest of the United States. Asian giant hornets prefer to live in low mountains and forests, while almost completely avoiding plains and high-altitude climates. V. mandarinia creates nests by digging, co-opting pre-existing tunnels dug by rodents, or occupying spaces near rotten pine roots. It feeds primarily on larger insects, colonies of other eusocial insects, tree sap, and honey from honey bee colonies. The hornet has a body length of , a wingspan around , and a stinger long, which injects a large amount of potent venom. Taxonomy and phylogeny V. mandarinia is a species in the genus Vespa, which comprises all true hornets. Along with seven other species, V. mandarinia is a part of the V. tropica species group, defined by the single notch located on the apical margin of the seventh gastral sternum of the male. The most closely related species within the species group is V. soror. The triangular shape of the apical margin of the clypeus of the female is diagnostic, the vertex of both species is enlarged, and the shape of the apex of the aedeagus is distinct and similar. Division of the genus into subgenera has been attempted in the past, but has been abandoned, due to the anatomical similarity among species and because behavioral similarity is not associated with phylogeny. The species has existed since the Miocene epoch, as indicated by fossils found in the Shanwang Formation. As of 2012, three subspecies were recognized: V. m. mandarinia, V. m. magnifica, and V. m. nobilis. The former subspecies referred to as V. m. japonica has not been considered valid since 1997. The most recent revision in 2020 eliminated all of the subspecies rankings entirely, with "japonica", "magnifica", and "nobilis" now relegated to informal non-taxonomic names for different color forms. Common names Since its discovery in North America, the scientific literature and official government sources refer to this species by its established common name, Asian giant hornet, whilst the mainstream media have taken to using the nickname "murder hornet". In July 2022, the Entomological Society of America stated that they will adopt the common name northern giant hornet for the species to avoid potentially discriminatory language, citing xenophobia and racism related to the COVID-19 pandemic. Description Regardless of sex, the hornet's head is a light shade of orange and its antennae are brown with a yellow-orange base. Its eyes and ocelli are dark brown to black. V. mandarinia is distinguished from other hornets by its pronounced clypeus and large genae. Its orange mandible contains a black tooth that it uses for digging. The thorax is dark brown, with two grey wings varying in span from . Its forelegs are brighter than the mid and hind legs. The base of the forelegs is darker than the rest. The abdomen alternates between bands of dark brown or black, and a yellow-orange hue (consistent with its head color). 
The sixth segment is yellow. Its stinger is typically long and delivers a potent venom that, in cases of multiple hornets stinging simultaneously, or by rare allergic reaction, can kill a human. Queens and workers The queens are considerably larger than workers. Queens can exceed , while workers are between . The reproductive anatomy is consistent between the two, but workers do not reproduce. Drones Drones (males) are similar to females, and can attain in length, but lack stingers. This is a consistent feature among the Hymenoptera. Larvae Larvae spin a silk cocoon when they complete development and are ready to pupate. Larval silk proteins have a wide variety of potential applications due to their wide variety of potential morphologies, including the native fiber form, but also sponge, film, and gel. Genome The mitochondrial genome is provided by Chen et al., 2015. This data has also been important to confirm the place of the wider Vespidae family in the Vespoidea superfamily, and confirms that Vespoidea is monophyletic. Misidentifications Within two days of the initial 2020 news report on V. mandarinia, insect identification centers in the eastern United States (where the wasp does not occur) began getting identification requests, and were swamped for the next several months, even though not one of the thousands of submitted photos or samples was of V. mandarinia, but were instead primarily wasps such as the European hornet (V. crabro), the eastern cicada killer (Sphecius speciosus), or the southern yellowjacket (Vespula squamosa). Submissions suspected by laypeople to be V. mandarinia also include other wasps of various sizes, bees, sawflies, horntails, wasp-mimicking flies, beetles, Jerusalem crickets, cicadas, and even a plastic children's toy that was wasp-like in appearance, all of which were routinely estimated to be 130-185% of their actual size. Reports of this species from other parts of the world appear to be erroneous identifications of other introduced hornet species, such as V. orientalis in several locations around the world, and V. velutina in Europe. Distribution Ecological distribution V. mandarinia is primarily a forest dweller. When it does live in urban landscapes, V. mandarinia is highly associated with green space. It is the most dependent upon green space of the Vespa species (with V. analis the least). Extremely urbanized areas provide a refuge for V. analis, whereas V. mandarinia – its predator – is entirely absent. Geographic distribution Asia The Asian giant hornet can be found in: Russia – Primorsky Krai, Khabarovsk Krai (southern part only), and Jewish Autonomous Oblast region Korea (where it is called (Jaŋsumalbôl) "general giant wasp, general hornet") Mainland China Taiwan () Laos Thailand Cambodia Myanmar Vietnam Afghanistan Pakistan Mongolia Bangladesh Nepal India Bhutan Sri Lanka Malaysia Indonesia Japan – It is common in Japan. It prefers rural areas where it can find trees in which to nest, and is known as the . At least as early as 2008, some popular and sensationalist media outlets in Japan also began referring to this wasp as . North America The first confirmed sightings of the Asian giant hornet in North America were confirmed in 2019 and were mainly concentrated in the Vancouver area, with nests also discovered in neighboring Whatcom County, Washington, in the United States. 
In August 2019, three hornets were found in Nanaimo on Vancouver Island, and a large nest was found and destroyed shortly thereafter. At the end of September, a worker was reported in Blaine, Washington. Another worker was found in Blaine in October. In December 2019, another worker was found in Blaine. Two specimens were collected in May 2020, one from Langley, British Columbia, about north of Blaine, and one from Custer, Washington, southeast of Blaine. A queen was sighted in June 2020 in Bellingham, Washington, south of Custer. An unmated queen was trapped in July 2020, near Birch Bay, Washington, west of Custer. A male hornet was captured in Custer, Washington in July 2020. A hornet of unknown caste was reported in August 2020, in Birch Bay, and another was trapped in the same area the following day. Three hornets were seen (and two killed) southeast of Blaine on 21 and 25 September 2020, and three more were found in the same area on 29 and 30 September, prompting officials to report that attempts were underway to pinpoint and destroy a nest believed to be in the area. In October 2020, the Washington State Department of Agriculture announced that a nest was found above ground in a cavity of a tree in Blaine, with dozens of hornets entering and leaving. The nest was eradicated the next day, including the immediate discovery and removal of about 100 hornets. At first, the owner of the land required the nest to be returned, and he advertised it for sale. A local beekeeper bought it from him and gave it back to the state entomology team. After further analysis, it was determined that the nest had contained about 500 live specimens, including about 200 queens. Some of these specimens were sent to the Smithsonian Institution to become a part of the NMNH Biorepository permanent cryogenic collection. It was announced that several undiscovered live nests were also believed to exist within Washington State, because the captures of individual hornets in Birch Bay, Blaine, and Custer were all relatively far from the discovered nest. However, officials expressed cautious optimism, adding that it might still be possible to eradicate the hornets before they became established in the area. A Canadian official said that although individual specimens had been found in Canada and some nests were suspected to exist there, the hornets' presence seemed to be only in areas near the US-Canadian border, while the center of the invasion appeared to be in Washington State. In November 2020, one individual was found in Abbotsford, BC. As a result, the BC government asked Abbotsford beekeepers and residents to report any sightings. In November 2020, a queen was found in Aldergrove, BC. In August 2021, a nest was discovered in Whatcom County, Washington, near Blaine, not far from the nest WSDA eradicated in 2020. This nest was destroyed two weeks later on 25 August, before it could produce new queens. In September 2021, two more nests were found near Blaine, in the vicinity of the nest found in August, and a "potential sighting" was reported from near Everson, some 25 miles east of Blaine. A mitochondrial DNA analysis was performed to determine the maternal population(s) ancestral to the British Columbia and Washington introduced populations. The high dissimilarity between these two was similar to the mutual distances between each of the Chinese, Japanese, and Korean native populations, suggesting that the specimens collected in 2019 were from two different maternal populations, Japanese in BC and South Korean in Washington.
This suggests that two separate introductions of the Asian giant hornet occurred in North America a short distance from one another and within a few months of each other. In April 2020, authorities in Washington State asked members of the public to be alert and report any sightings of these hornets, which are expected to become active in April if they are in the area. If they become established, the hornets "could decimate bee populations in the United States and establish such a deep presence that all hope for eradication could be lost." A "full-scale hunt" for the species by the WSDA was then underway. Two assessment models of their potential to spread from their present location on the US–Canadian border suggested that they could spread northward into coastal British Columbia and Southeast Alaska, and southward as far as southern Oregon. The USDA's Agricultural Research Service is engaged in lure/attractant development and molecular genetics research, both as part of its normal research mission, but also to further the near-term eradication goal in Washington. In 2020, the United States Congress considered specific legislation to eradicate V. mandarinia, including a proposal by the Interior Secretary, the Fish and Wildlife Director, and the other relevant agencies, which has been introduced as an amendment to the appropriations omnibus. British Columbia Agriculture is prepared for a "long fight" lasting years, if necessary. One advantage humans will have is the lack of diversity of such an invasive population – leaving the hornets less prepared for novel environments and challenges. In June 2021, a dead, desiccated male was found near Marysville, Snohomish County, Washington, and reported to WSDA. Its different, more reddish color form immediately suggested yet another parental population, distinct from the Japanese and Korean ones already known. USDA APHIS (Animal and Plant Health Inspection Service) performed a genetic analysis several days later and, together with WSDA, confirmed it was of a third, unrelated population. The discovery of a male in June is "perplexing", given that the earliest male emergence in 2020 was July, which was already earlier than normal for the home range. This and its desiccated state indicate it did not emerge in 2021 at all, but is instead a dead specimen that had already emerged in a previous year. The WSDA announced in December 2022 that there were "no confirmed sightings" of the hornet in the state for that year, and in December 2023 stated there were no sightings in 2023; in December 2024, it was announced that the hornets had been eradicated from North America. Nesting V. mandarinia nests in low mountain foothills and lowland forests. As V. mandarinia is a particularly dominant species, no efforts are directed toward conserving it or its habitats, as it is common in areas of low human disturbance. Unlike other species of Vespa, V. mandarinia almost exclusively inhabits subterranean nests. In 1978 it was still doubted that aerial nests were possible, as Matsuura and Sakagami had reported this to be unknown in Japan in 1973, and aerial nesting is still described as extremely rare in Japan, and yet all nests in the invasive range have been aerial. In a study of 31 nests, 25 were found around rotten pine roots, and another study found only 9 of 56 nests above ground. Additionally, rodents, snakes, or other burrowing animals previously made some of the tunnels. The depth of these nests was between .
The entrance at the ground surface may be horizontal, inclined, or vertical. The queens that found the nest prefer narrow cavities. Nests of V. mandarinia typically lack a developed envelope. During the initial stages of development, the envelope is in an inverted-bowl shape. As the nest develops, one to three rough sheets of combs are created. Often, single primordial combs are created simultaneously and then fused into a single comb. A system of one main pillar and secondary pillars connects the combs. Nests usually have four to seven combs. The top comb is abandoned after summer and left to rot. The largest comb is at the middle to bottom portion of the nest. The largest combs created by V. mandarinia measured with 1,192 cells (no obstacles, circular) and (elliptical; wrapped around a root system). Colony cycle The nesting cycle of V. mandarinia is fairly consistent with that of other eusocial insects. Six phases occur in each cycle. Pre-nesting period Inseminated and uninseminated queens enter hibernation following a cycle. They first appear in early to mid-April and begin feeding on the sap of Quercus (oak) trees. Although this timing is consistent among hornets, V. mandarinia dominates the order, receiving preference for premium sap sources. Among the V. mandarinia queens is a dominance hierarchy. The top-ranked queen begins feeding, while the other queens form a circle around her. Once the top queen finishes, the second-highest-ranking queen feeds. This process repeats until the last queen feeds at a poor hour. Solitary, cooperative, and polyethic periods Inseminated queens start to search for nesting sites in late April. The uninseminated queens do not search for nests, since their ovaries never fully develop. They continue to feed, but then disappear in early July. An inseminated queen begins to create relatively small cells in which she raises around 40 small workers. Workers do not begin to work outside of the hive until July. Queens participate in activities outside the hive until mid-July, when they stay inside the nest and allow workers to do extranidal activities. Early August marks a fully developed nest, containing three combs holding 500 cells and 100 workers. After mid-September, no more eggs are laid and the focus shifts to caring for larvae. The queens die in late October. Dissolution and hibernating period Males and new queens take on their responsibilities in mid-September and mid-October, respectively. During this time, their body color becomes intense and the weights of the queens increase about 20%. Once the males and queens leave the nest, they do not return. In V. mandarinia, males wait outside the nest entrance until the queens emerge, when males intercept them in midair, bring them to the ground, and copulate for 8 to 45 seconds. After this episode, the males return to the entrance for a second chance, while the now-mated queens leave to hibernate. Many queens (up to 65%) attempt to fight off the males and leave unfertilized, at least temporarily. After this episode, pre-hibernating queens are found in moist, subterranean habitats. When sexed individuals emerge, workers shift their focus from protein and animal foods to carbohydrates. The last sexed individuals to emerge may die of starvation. Sting The stinger of the Asian giant hornet is about long. Venom Their stinger injects an especially potent venom that contains mastoparan. Mastoparans are found in many bee and wasp venoms.
They are cytolytic peptides that can damage tissue by stimulating phospholipase action, in addition to the venom's own phospholipase. Masato Ono, an entomologist at Tamagawa University, described the sensation of being stung as feeling "like a hot nail being driven into my leg". Besides using their stingers to inject venom, Asian giant hornets are apparently able to spray venom into a person's eyes under certain circumstances, with one report in 2020 from Japan of long-term damage, though the exact extent of actual visual impairment still remains unassessed. The venom contains a neurotoxin called mandaratoxin, a single-chain polypeptide with a molecular weight around 20 kDa. While a single wasp cannot inject a lethal dose, multiple stings can be lethal even to people who are not allergic if the dose is sufficient, and allergy to the venom greatly increases the risk of death. Tests involving mice found that the venom falls short of being the most lethal of all wasp venoms, having an LD50 of 4.0 mg/kg. (In comparison, the deadliest wasp venom (at least to laboratory mice) by weight belongs to V. luctuosa at 1.6 mg/kg.) The potency of the V. mandarinia sting is due, rather, to the relatively large amount of venom injected. Immunogenicity Evidence is insufficient to believe that prophylactic immunotherapy for the venom of other Vespidae will prevent allergic reaction to V. mandarinia venom, because of wide differences in venom chemistry. Effects on humans In 1957, van der Vecht was under the impression that humans in the native range lived in constant fear of V. mandarinia, and Iwata reported in 1976 that research and removal were hampered by its attacks. Parasites The strepsipteran Xenos moutoni is a common parasite among Vespa species. In a study of parasites among species of Vespa, 4.3% of V. mandarinia females were parasitized. Males were not stylopized (parasitization by stylopid strepsipterans, such as X. moutoni) at all. The major consequence of being parasitized is the inability to reproduce, and stylopized queens follow the same fate as uninseminated queens. They do not search for an area to create a new colony and feed on sap until early July, when they disappear. In other species of Vespa, males also have a chance of being stylopized. The consequences between the two sexes are similar, as neither sex is able to reproduce. Communication and perception V. mandarinia uses both visual and chemical cues as a means of navigating itself and others to the desired location. Scent marking was discussed as a way for hornets to direct other members of the colony to a food source. Even with antennae damage, V. mandarinia was able to navigate itself. It was unable to find its destination only when vision impairment was induced. This implies that while chemical signaling is important, visual cues play an equally important role in guiding individuals. Other behaviors include the formation of a "royal court" consisting of workers that lick and bite the queen, thereby ingesting her pheromones. These pheromones could directly communicate between the queen and her court or indirectly between her court and other workers due to the ingested pheromones. This is merely speculation, as no direct evidence has been collected to suggest the latter. V. mandarinia communicates acoustically, as well. When larvae are hungry, they scrape their mandibles against the walls of the cell. Furthermore, adult hornets click their mandibles as a warning to other creatures that encroach upon their territories. Scent marking V.
mandarinia is the only species of social wasp known to apply a scent to direct its colony to a food source. The hornet secretes the chemical from the sixth sternal gland, also known as van der Vecht's gland. This behavior is observed during autumnal raids after the hornets begin hunting in groups instead of individually. The ability to apply scents may have arisen because the Asian giant hornet relies heavily on honey bee colonies as its main food source. A single hornet is unable to take on an entire colony of honey bees because species such as Apis cerana have a well-organized defense mechanism. The honey bees swarm one wasp and vibrate their thoracic muscles to heat up the hornet and raise carbon dioxide to a lethal level. So, organized attacks are much more effective and easily devastate a colony of tens of thousands of honey bees. Interspecies dominance In an experiment observing four different species of Vespa (V. ducalis, V. crabro, V. analis, and V. mandarinia), V. mandarinia was the dominant species. Multiple parameters were set to determine this. The first set parameter observed interaction-mediated departures, which are defined as scenarios wherein one species leaves its position due to the arrival of a more dominant individual. The proportion of interaction-mediated departures was the lowest for V. mandarinia. Another measured parameter was attempted patch entry. Over the observed time, conspecifics (interactions with the same species) resulted in refused entry far more than heterospecifics (interactions with different species). Lastly, when feeding at sap flows, fights between these hornets, Pseudotorynorrhina japonica, Neope goschkevitschii, and Lethe sicelis were observed, and once more V. mandarinia was the most dominant species. In 57 separate fights, one loss was observed to Neope goschkevitschii, giving V. mandarinia a win rate of 98.3%. Based on interaction-mediated departures, attempted patch entry, and interspecific fights, V. mandarinia is the most dominant Vespa species. Diet The Asian giant hornet is intensely predatory; it hunts medium- to large-sized insects, such as bees, other hornet and wasp species, beetles, hornworms, and mantises. The latter are favored targets in late summer and fall. Large insects such as mantises are key protein sources to feed queen and drone larvae. Workers forage to feed their larvae, and since their prey can include crop pests, the hornets are sometimes regarded as beneficial. This hornet often attacks colonies of other Vespa species (V. simillima being the usual prey species), Vespula species, and honey bee (such as Apis cerana and A. mellifera) hives to obtain the adults, pupae, and larvae as food for their own larvae. Sometimes, they cannibalize each other's colonies. A single scout, sometimes two or three, cautiously approaches the hive, producing pheromones to lead its nest-mates to the hive. The hornets can devastate a colony of honey bees, especially if it is the introduced western honey bee. A single hornet can kill as many as 40 bees per minute due to its large mandibles, which can quickly strike and decapitate prey. The honey bees' stings are ineffective because the hornets are five times their size and heavily armored. Only a few hornets (under 50) can exterminate a colony of tens of thousands of bees in a few hours. The hornets can fly up to in a single day, at speeds up to . The smaller Asian hornet similarly preys on honey bees, and has been spreading throughout Europe. 
Hornet larvae, but not adults, can digest solid protein. The adult hornets can only drink the juices of their victims, and they chew their prey into a paste to feed to their larvae. The workers dismember the bodies of their prey to return only the most nutrient-rich body parts, such as flight muscles, to the nest. Larvae of predatory social vespids generally, not just Vespa, secrete a clear liquid, sometimes referred to as Vespa amino acid mixture, the exact amino acid composition of which varies considerably from species to species, and which they produce to feed the adults on demand. Native honey bees Beekeepers in Japan attempted to introduce western honey bees (Apis mellifera) for the sake of their high productivity. Western honey bees have no innate defense against the hornets, which can rapidly destroy their colonies. Kakugo virus infection, though, may provide an extrinsic defence. Although a handful of Asian giant hornets can easily defeat the uncoordinated defenses of a western honey bee colony, the Japanese honey bee (Apis cerana japonica) has an effective strategy. When a hornet scout locates and approaches a Japanese honey bee hive, she emits specific pheromonal hunting signals. When the Japanese honey bees detect these pheromones, 100 or so gather near the entrance of the nest and set up a trap, keeping the entrance open. This permits the hornet to enter the hive. As the hornet enters, a mob of hundreds of bees surrounds it in a ball, completely covering it and preventing it from reacting effectively. The bees violently vibrate their flight muscles in much the same way as they do to heat the hive in cold conditions. This raises the temperature in the ball to the critical temperature of . In addition, the exertions of the honey bees raise the level of carbon dioxide (CO2) in the ball. The bees can tolerate up to even at that concentration of CO2, but the hornet cannot survive the combination of high temperature and high carbon dioxide level. Some honey bees do die along with the intruder, much as happens when they attack other intruders with their stings, but by killing the hornet scout, they prevent it from summoning reinforcements that would wipe out the entire colony. Detailed research suggests this account of the behavior of the honey bees and a few species of hornets is incomplete and that the honey bees and the predators are developing strategies to avoid expensive and mutually unprofitable conflict. Instead, when honey bees detect scouting hornets, they transmit an "I see you" signal that commonly warns off the predator. Another defence used by Apis cerana is speeding up dramatically when returning to the colony, to avoid midair attacks. Diet in North America Based on an examination of larval waste products, the Washington State Department of Agriculture determined that the prey of V. mandarinia included cluster fly, orange legged drone fly, bristle fly, bronze birch borer beetle, western honey bee, western yellowjacket, German yellowjacket, aerial yellowjacket, bald faced hornet, European paper wasp, golden paper wasp, paddle-tailed darner dragonfly, shadow darner dragonfly, large yellow underwing moth, blinded sphinx moth, and red admiral butterfly (Vanessa atalanta). They had also eaten cow's meat, but the WSDA suggests that this may have been beef from a hamburger. Predators The Asian giant hornet has very few natural predators. However, V. mandarinia nests are attacked by conspecific colonies, and crested honey buzzards may prey on this hornet. 
Besides the honey buzzard and each other, there are also instances of other insects such as mantises killing Asian giant hornets. Pollination V. mandarinia is not solely carnivorous, but also a pollinator. It is among the diurnal pollinators of the obligate plant parasite Mitrastemon yamamotoi. It is among the most common pollinators of Musella lasiocarpa in the Yunnan Province of China. Extermination methods As of 1973, six different methods were used to control hornets in Japan; these methods decrease damage done by V. mandarinia. Beating Hornets are crushed with wooden sticks with flat heads. Hornets do not counterattack when they are in the bee-hunting phase or the hive-attack phase ("slaughter"), but they aggressively guard a beehive once they kill the defenders and occupy it. The biggest expenditure in this method is time, as the process is inefficient. Nest removal Applying poisons or fires at night is an effective way of exterminating a colony. The most difficult part about this tactic is finding the subterranean nests. The most common method of discovering nests is giving a piece of frog or fish meat attached to a cotton ball to a wasp and following it back to its nest. With V. mandarinia, this is particularly difficult considering its common home flight radius of . V. mandarinia travels up to away from the nest. For the rare nest that is up in a tree, wrapping the tree in plastic and vacuuming the hornets out is used. Bait traps Bait traps can be placed in apiaries. The system consists of multiple compartments that direct the hornet into a one-sided hole which is difficult to return through once it is in the cul-de-sac compartment, an area located at the top of the box from which honey bees can escape through a mesh opening, but wasps cannot due to their large size. Baits used to attract the hornets include a diluted millet jelly solution or a crude sugar solution with a mixture of intoxicants, vinegar, or fruit essence. The WSDA has been using plastic bottle traps, baited with fruit juice and added alcohol. The alcohol is used because it repels bees, but not V. mandarinia, thus reducing the bycatch. Mass poisoning Hornets at the apiary are captured and fed a sugar solution or bee that has been poisoned with malathion. The toxin is expected to spread through trophallaxis. This method is good in principle, but has not been tested extensively. Trapping at hive entrances The trap is attached to the front of beehives. The effectiveness of the trap is determined by its ability to capture hornets while allowing honey bees to escape easily. The hornet enters the trap and catches a bee. When it tries to fly back through the entrance of the hive, it hits the front of the trap. The hornet flies upwards to escape and enters the capture chamber, where the hornets are left to die. Some hornets find a way to escape the trap through the front, so these traps can be very inefficient. Protective screens As explained in the trapping section, if met by resistance, hornets lose the urge to attack and instead retreat. Different measures of resistance include weeds, wire, or fishing nets or limiting the passage size so only honey bees can make it through. Experienced hornets catch on and eventually stay on these traps, awaiting the arrival of bees. The best method of controlling hornets is to combine protective screens with traps. 
Glue traps Some Japanese beekeepers place glue traps of the sort commonly used against mice atop the bees' artificial nesting box with a disarmed giant hornet stuck to the glue. The struggling hornet attracts more hornets who try to help and then get trapped on the glue sheet. Human consumption In some Japanese mountain villages, the nests are excavated and the larvae are considered a delicacy when fried. In the central Chūbu region, these wasps are sometimes eaten as snacks or an ingredient in drinks. The grubs are often preserved in jars, pan-fried or steamed with rice to make a savory dish called hebo-gohan or hebo-han (へぼ飯). The adults are fried on skewers, stinger and all, until the body becomes crunchy. Potential economic impact If V. mandarinia settles all suitable habitats in North America, potential control costs in the United States will be over US$113.7 million/year (possibly significantly higher). Washington is the only state with confirmed sightings, and there were no confirmed sightings in Washington in 2022 and 2023. If V. mandarinia reaches all suitable habitat in North America, bee products would bring in US$11.98 ± 0.64 million less per year, and bee-pollinated crops would produce US$101.8 million less per year. New York, Massachusetts, Pennsylvania, Connecticut, North Carolina, New Jersey, and Virginia would be most severely affected. By region, New England would be worst hit, and to a lesser degree the entire northeast and the entirety of eastern North America. New England would become by far the greatest concentration of V. mandarinia in the world, far surpassing the original introduction site (the Pacific Northwest), and even its home range of East Asia. Alfalfa/other hays, apples, grapes, tobacco, cotton, and blueberries would be the crops most severely affected.
Biology and health sciences
Hymenoptera
Animals
1002640
https://en.wikipedia.org/wiki/Abrasion%20%28medicine%29
Abrasion (medicine)
An abrasion is a partial-thickness wound caused by damage to the skin. It can range from superficial, involving only the epidermis, to deep, involving the deep dermis. Abrasions usually involve minimal bleeding. Mild abrasions, also known as grazes or scrapes, do not scar or bleed because the dermis is left intact, but deep abrasions that disrupt the normal dermal structures may lead to the formation of scar tissue. A more traumatic abrasion that removes all layers of skin is called an avulsion. Abrasion injuries most commonly occur when exposed skin comes into moving contact with a rough surface, causing a grinding or rubbing away of the upper layers of the epidermis. By degree A first-degree abrasion involves only epidermal injury. A second-degree abrasion involves the epidermis as well as the dermis and may bleed slightly. A third-degree abrasion involves damage to the subcutaneous layer and the skin and is often called an avulsion. Treatment The abrasion should be cleaned and any debris removed. A topical antibiotic (such as neomycin or bacitracin) should be applied to prevent infection and to keep the wound moist. Dressing the wound is beneficial because it helps keep the wound from drying out, providing a moist environment conducive to healing. If the abrasion is painful, a topical analgesic (such as lidocaine or benzocaine) can be applied, but for large abrasions, a systemic analgesic may be necessary. Abraded skin should not be exposed to the sun, as permanent hyperpigmentation can develop. Healing The gallery below shows the healing process for an abrasion on the palm caused by sliding on concrete.
Biology and health sciences
Types
Health
1002779
https://en.wikipedia.org/wiki/Azeotropic%20distillation
Azeotropic distillation
In chemistry, azeotropic distillation is any of a range of techniques used to break an azeotrope in distillation. In chemical engineering, azeotropic distillation usually refers to the specific technique of adding another component to generate a new, lower-boiling azeotrope that is heterogeneous (e.g., producing two immiscible liquid phases), such as the example below with the addition of benzene to water and ethanol. This practice of adding an entrainer which forms a separate phase is a specific subset of (industrial) azeotropic distillation methods, or a combination thereof. In some senses, adding an entrainer is similar to extractive distillation. Material separation agent The addition of a material separation agent, such as benzene to an ethanol/water mixture, changes the molecular interactions and eliminates the azeotrope. Added in the liquid phase, the new component can alter the activity coefficient of various compounds in different ways, thus altering a mixture's relative volatility. Greater deviations from Raoult's law make it easier to achieve significant changes in relative volatility with the addition of another component. In azeotropic distillation the volatility of the added component is the same as that of the mixture, and a new azeotrope is formed with one or more of the components based on differences in polarity. If the material separation agent is selected to form azeotropes with more than one component in the feed, then it is referred to as an entrainer. The added entrainer should be recovered by distillation, decantation, or another separation method and returned near the top of the original column. Distillation of ethanol/water A common historical example of azeotropic distillation is its use in dehydrating ethanol and water mixtures. For this, a near azeotropic mixture is sent to the final column where azeotropic distillation takes place. Several entrainers can be used for this specific process: benzene, pentane, cyclohexane, hexane, heptane, isooctane, acetone, and diethyl ether are all options as the entrainer. Of these, benzene and cyclohexane have been used most extensively, but since the identification of benzene as a carcinogen, toluene is used instead. Pressure-swing distillation Another method, pressure-swing distillation, relies on the fact that an azeotrope is pressure dependent. An azeotrope is not a range of concentrations that cannot be distilled, but the point at which the activity coefficients of the distillates cross one another. If the azeotrope can be "jumped over", distillation can continue, although because the activity coefficients have crossed, the component which is boiling will change. For instance, in a distillation of ethanol and water, water will boil out of the remaining ethanol, rather than the ethanol out of the water as at lower concentrations. Overall, pressure-swing distillation is a robust and relatively unsophisticated method compared with multicomponent distillation or membrane processes, but its energy demand is generally higher, and the investment cost of the distillation columns is also higher because of the pressure inside the vessels. Molecular sieves For low-boiling azeotropes, distillation may not allow the components to be fully separated, and separation methods that do not rely on distillation must be used. A common approach involves the use of molecular sieves. The sieves can be subsequently regenerated by dehydration using a vacuum oven.
Ethanol can be dried to over 95% ABV using molecular sieves such as 3A zeolite. Dehydration reactions In organic chemistry, some dehydration reactions are subject to unfavorable but fast equilibria. One example is the formation of dioxolanes from aldehydes: RCHO + (CH2OH)2 ⇌ RCH(OCH2)2 + H2O Such unfavorable reactions proceed when water is removed by azeotropic distillation.
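As a rough illustration of the thermodynamics invoked above (a sketch using the standard modified Raoult's law; the notation is introduced here and does not come from the text):

    y_i P = x_i \gamma_i P_i^{\mathrm{sat}}, \qquad
    \alpha_{12} = \frac{\gamma_1 P_1^{\mathrm{sat}}}{\gamma_2 P_2^{\mathrm{sat}}}

An azeotrope corresponds to \alpha_{12} = 1, where vapour and liquid have the same composition. Adding an entrainer changes the activity coefficients \gamma_1 and \gamma_2 and can push \alpha_{12} away from 1, while changing the operating pressure changes the saturation pressures P_i^{\mathrm{sat}} and shifts the composition at which \alpha_{12} = 1, which is the principle behind pressure-swing distillation.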
Physical sciences
Phase separations
Chemistry
1003338
https://en.wikipedia.org/wiki/Gillnetting
Gillnetting
Gillnetting is a fishing method that uses gillnets: vertical panels of netting that hang from a line with regularly spaced floaters that hold the line on the surface of the water. The floats are sometimes called "corks" and the line with corks is generally referred to as a "cork line." The line along the bottom of the panels is generally weighted. Traditionally this line has been weighted with lead and may be referred to as "lead line." A gillnet is normally set in a straight line. Gillnets can be characterized by mesh size, as well as colour and type of filament from which they are made. Fish may be caught by gillnets in three ways: Wedged – held by the mesh around the body. Gilled – held by mesh slipping behind the opercula. Tangled – held by teeth, spines, maxillaries, or other protrusions without the body penetrating the mesh. Most fish have gills. A fish swims into a net and passes only part way through the mesh. When it struggles to free itself, the twine slips behind the gill cover and prevents escape. Gillnets are so effective that their use is closely monitored and regulated by fisheries management and enforcement agencies. Mesh size, twine strength, as well as net length and depth are all closely regulated to reduce bycatch of non-target species. Gillnets have a high degree of size selectivity. Most salmon fisheries in particular have an extremely low incidence of catching non-target species. A fishing vessel rigged to fish by gillnetting is a gillnetter. A gillnetter which deploys its gillnet from the bow is a bowpicker, while one which deploys its gillnet from the stern is a sternpicker. Gillnets differ from seines in that the latter uses a tighter weave to trap fish in an enclosed space, rather than directly catching the fish as in a gillnet. History Gillnets existed in ancient times, as archaeological evidence from the Middle East demonstrates. In North America, Native American fishermen used cedar canoes and natural fibre nets, e.g., made with nettles or the inner bark of cedar. They would attach stones to the bottom of the nets as weights, and pieces of wood to the top, to use as floats. This allowed the net to suspend straight up and down in the water. Each net would be suspended either from shore or between two boats. Native fishers in the Pacific Northwest, Canada, and Alaska still commonly use gillnets in their fisheries for salmon and steelhead. Both drift gillnets and setnets have long been used by cultures around the world. There is evidence of fisheries exploitation, including gillnetting, going far back in Japanese history, with many specific details available from the Edo period (1603–1868). Fisheries in the Shetland Islands, which were settled by Norsemen during the Viking Age, share cultural and technological similarities with Norwegian fisheries, including gillnet fisheries for herring. Many of the Norwegian immigrant fishermen who came to fish in the great Columbia River salmon fishery during the second half of the 19th century did so because they had experience in the gillnet fishery for cod in the waters surrounding the Lofoten Islands of northern Norway. Gillnets were used as part of the seasonal round by Swedish fishermen as well. Welsh and English fishermen gillnetted for Atlantic salmon in the rivers of Wales and England in coracles, using hand-made nets, for at least several centuries. These are but a few of the examples of historic gillnet fisheries around the world. 
Gillnetting was an early fishing technology in colonial America, used, for example, in fisheries for Atlantic salmon and shad. Immigrant fishermen from northern Europe and the Mediterranean brought a number of different adaptations of the technology from their respective homelands with them to the rapidly expanding salmon fisheries of the Columbia River from the 1860s onward. The boats used by these fishermen were typically around long and powered by oars. Many of these boats also had small sails and were called "row-sail" boats. At the beginning of the 1900s, steam-powered ships would haul these smaller boats to their fishing grounds and retrieve them at the end of each day. However, at that time gas-powered boats were beginning to make their appearance, and by the 1930s, the row-sail boat had virtually disappeared, except in Bristol Bay, Alaska, where motors were prohibited in the gillnet fishery by territorial law until 1951. In 1931, the first powered drum was created by Laurie Jarelainen. The drum is a circular device that is set to the side of the boat and draws in the nets. The powered drum allowed the nets to be drawn in much faster and, along with the faster gas-powered boats, fishermen were able to fish in areas they had previously been unable to go into, thereby revolutionizing the fishing industry. During World War II, navigation and communication devices, as well as many other forms of maritime equipment (e.g., depth-sounding and radar) were improved and made more compact. These devices became much more accessible to the average fisherman, greatly increasing their range and mobility. It also served to make the industry much more competitive, as fishermen were forced to invest more in boats and equipment to stay current with developing technology. The introduction of fine synthetic fibres such as nylon in the construction of fishing gear during the 1960s marked an expansion in the commercial use of gillnets. The new materials were cheaper and easier to handle, lasted longer and required less maintenance than natural fibres. In addition, multifilament nylon, monofilament or multimonofilament fibres are almost invisible in water, so nets made with synthetic twines generally caught greater numbers of fish than natural fibre nets used in comparable situations. Nylon is highly resistant to abrasion and degradation, hence the netting has the potential to last for many years if it is not recovered. This ghost fishing is of environmental concern. Attaching the gillnet floats with biodegradable material can reduce the problem. However, it is difficult to generalize about the longevity of ghost-fishing gillnets due to the varying environments in which they are used. Some researchers have found gill-nets still catching fish and crustaceans over a year after loss, while others have found lost nets destroyed by wave action within one month or overgrown with seaweeds, increasing their visibility and reducing their catching potential to such an extent that they became a microhabitat used by small fish. This type of net was heavily used by many Japanese, South Korean, and Taiwanese fishing fleets on the high seas in the 1980s to target tunas. Although highly selective with respect to size class of animals captured, gillnets are associated with high numbers of incidental captures of cetaceans (whales and dolphins). In the Sri Lankan gillnet fishery, one dolphin is caught for every 1.7–4.0 tonnes of tuna landed.
This compares poorly with the rate of one dolphin per 70 tonnes of tuna landed in the eastern Pacific purse seine tuna fishery. Many types of gillnets are used by fisheries scientists to monitor fish populations. Vertical gillnets are designed to allow scientists to determine the depth distribution of the captured fish. Legal status United Nations General Assembly Resolution 46/215 called for the cessation of all "large-scale pelagic drift-net fishing" in international waters by the end of 1992. The laws of individual countries vary with regard to fishing in waters under their jurisdiction. Possession of gillnets is illegal in some U.S. states and heavily regulated in others. Oregon voters had the chance to decide whether gillnetting would continue in the Columbia River in November 2012 by voting on Measure 81. The measure was defeated, with 65% of Oregon voters voting against it, allowing commercial gillnet fishing to continue on the Columbia River. The Columbia River Basin is currently under a management agreement that spans from 2008 to December 31, 2017. This management agreement looks to gather information on fish harvesting through means including gillnets. The parties involved will convene again to decide on further action after the current agreement ends. The gill-netting season in Minnesota can vary from county to county and the net types used are regulated on a lake-by-lake basis by the Minnesota Department of Natural Resources. Virginia's gill-netting season is regulated by the Virginia Marine Resources Commission. During different months of the year, certain rivers have restricted mesh sizes, which vary by location. There have been proposed regulations to shut down drift gillnet fisheries whose by-catch numbers (which include dolphins, sea turtles and other marine life) were too high. In 2014, California lawmakers pushed for the banning of gillnet fishing through letters to federal fishing companies. Progress on these regulations was paused in California in mid-2017. Under the High Seas Fishing Compliance Act of 1996, a permit is required for all commercial fishing vessels registered in the United States, and vessels must keep a record of all their fishing efforts on the high seas. As of November 2017, there has been a bill introduced to improve the management of driftnets, with gillnets falling under the umbrella of this gear type. The bill's focus is to ban the use of large-scale nets while supporting the use of alternative methods of fishing to decrease the amount of bycatch. There is also a compensation plan proposed in the bill for fishery participants who stop using large-scale nets. Selectivity Gillnets are a series of panels of meshes with a weighted "foot rope" along the bottom, and a headline, to which floats are attached. By altering the ratio of floats to weights, buoyancy changes, and the net can therefore be set to fish at any depth in the water column. In commercial fisheries, the meshes of a gillnet are uniform in size and shape. Fish smaller than the mesh of the net pass through unhindered, while those too large to push their heads through the meshes as far as their gills are not retained. This gives gillnets the ability to target a specific size of fish, unlike other net gears such as trawls, in which smaller fish pass through the meshes and all larger fish are captured in the net.
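The size selectivity described above is often summarized with a selectivity curve. The following sketch is purely illustrative (the dome-shaped normal curve is one of several standard models in the fisheries literature; the lengths and parameters are invented and do not refer to any fishery discussed here):

    import math

    def gillnet_selectivity(length_cm, optimum_cm=55.0, spread_cm=5.0):
        # Dome-shaped (normal) selectivity: probability that a fish of a given
        # length is retained by a mesh tuned to an optimum length.
        return math.exp(-((length_cm - optimum_cm) ** 2) / (2 * spread_cm ** 2))

    for length in (40, 50, 55, 60, 70):
        print(length, round(gillnet_selectivity(length), 3))
    # Fish well below (40 cm) or well above (70 cm) the optimum are mostly not retained,
    # unlike a trawl, whose selectivity rises with size and stays high.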
Salmon Commercial gillnet fisheries are still an important method of harvesting salmon in Alaska, British Columbia, Washington, and Oregon. In the lower Columbia River, non-Indian commercial salmon fisheries for spring Chinook have developed methods of selectively harvesting adipose-fin-clipped hatchery salmon using small-mesh gillnets known as tangle nets or tooth nets. Non-adipose-fin-clipped fish (primarily natural origin salmon) must be released. Fishery management agencies estimate a relatively low release mortality rate on salmon and steelhead released from these small-mesh gillnets. Problems that can arise from selective harvesting include smaller reproducing adult fish, as well as the unintended mortality of fish that sustain injuries from the gillnet but are not retained in the fishery. Most salmon populations include several age classes, allowing fish of different ages and sizes to reproduce with each other. A 2009 study examined 59 years of catch and escapement data of Bristol Bay sockeye salmon to determine age and size at maturity trends attributable to the selectivity of commercial gillnet harvests. The study found that the larger females (>550 mm) of all age classes were most susceptible to harvest. The study suggests that smaller, younger fish were more likely to successfully traverse the gillnet fishery and reproduce than the larger fish. The study also found that the average length of sockeye harvested from 1946 to 2005 was 8 mm greater than that of the sockeye that escaped the gillnet fishery to spawn, reducing the fecundity of the average female by 5%, or 104 eggs. If a salmon enters a gillnet, but manages to escape, it can sustain injuries. These injuries can lead to a lower degree of reproductive success. A study aimed at quantifying mortality of Bristol Bay sockeye salmon due to gillnet-related injuries found that 11–29% of sockeye sustained fishery-related injuries attributable to gillnets, and 51% of those fish were expected not to reproduce. Gillnets are sometimes a controversial gear type, especially among sport fishers, who argue they are inappropriate for salmon fisheries. These arguments are often related to allocation issues between commercial and recreational (sport) fisheries and not conservation issues. Most salmon fisheries, especially those targeting Pacific salmon in North America, are strictly managed to minimize total impacts to specific populations, and salmon fishery managers continue to allow the use of gillnets in these fisheries. Swordfish Gillnets are also used in the deep sea by fisheries whose primary catch is swordfish. California driftnet fisheries have some of the highest rates of bycatch: 12 percent of the catch is the targeted swordfish, while up to 68 percent is bycatch that is tossed back to sea. Alternatives Given the selective properties of gillnet fishing, alternative methods of harvest are currently being studied. Recent WDF&W reports suggest that the purse seine is the most productive method, with the highest catch per unit effort (CPUE), but provide little information on the effectiveness of selectively harvesting hatchery-reared salmon. More conclusive research has been conducted jointly by the Confederated Tribes of the Colville Reservation and the Bonneville Power Administration in a 10-year study of selective harvest methods for hatchery-origin salmon in the Upper Columbia River using purse seines and tangle nets.
Their 2009 and 2010 findings show that purse seines have a higher percentage of survivability and higher CPUE than do tangle nets. A Colville Tribe biologist reports that during these two years the tribe harvested 3,163 hatchery Chinook while releasing 2,346 wild Chinook with only 1.4% direct or immediate mortality using purse seines, whereas the tangle net was far less productive and had a mortality of approximately 12.5%. Researchers commented that the use of recovery boxes and shortened periods between checking the nets would likely have decreased mortality rates. While there are data showing that selective harvest methods can protect wild and ESA-listed salmon, new fishing methods must still gain social acceptance. Studies have also examined whether different strategies could decrease the estimated 400,000 birds taken annually as bycatch in coastal fisheries. Three strategies could reduce avian bycatch by up to 75%: gear modifications, in which visual devices are placed near the top of the net so birds can see it; abundance-based fishery openings, in which the abundance of birds determines whether the nets are set out; and time-of-day restrictions, which build on abundance patterns, since bird bycatch tended to occur at dawn and dusk, whereas fish catch occurred mostly at dawn. For marine mammal by-catch, field experiments have shown that the use of pingers on nets resulted in significantly lower numbers of by-catch than nets without pingers. After this study was completed by Jay Barlow, it was determined that nets containing pingers showed a 12-fold decrease in short-beaked common dolphins caught, a 4-fold decrease in other cetaceans and a 3-fold decrease in pinnipeds. Types of gillnets The FAO classifies gillnet gear types as follows: Set gillnets Set gillnets consist of a single netting wall kept vertical by a floatline (upper line/headrope) and a weighted groundline (lower line/footrope). Small floats, usually shaped like eggs or cylinders and made of solid plastic, are evenly distributed along the floatline, while lead weights are evenly distributed along the groundline. The lower line can also be made of lead cored rope, which does not need additional weight. The net is set on the bottom, or at a distance above it, and held in place with anchors or weights on both ends. By adjusting the design, these nets can fish in surface layers, in mid water or at the bottom, targeting pelagic, demersal or benthic species. On small boats gillnets are handled by hand. Larger boats use hydraulic net haulers or net drums. Set gillnets are widely used all over the world, and are employed both in inland and sea waters. They are popular with artisanal fisheries because no specialized gear is needed, and they are low cost in terms of fuel consumed relative to fish caught. Encircling gillnets Encircling gillnets are gillnets set vertically in shallow water, with the floatline remaining at the surface so they encircle fish. Small open boats or canoes can be used to set the net around the fish. Once the fish are encircled, the fishers shout and splash the water to panic the fish so they gill or entangle themselves. There is little negative impact on the environment. As soon as the gear is set, the scaring takes place and the net is hauled back in. The fish are alive and discards can be returned to the sea. Encircling gillnets are commonly used by groups of small-scale fishers, and do not require other equipment.
Combined gillnets-trammel nets This bottom-set gear has two parts: the upper part is a standard gillnet where semi-demersal or pelagic fish can be gilled; the lower part is a trammel net where bottom fish can become entangled. The combined nets are maintained more or less vertically in the usual way by floats on the floatline and weights on the groundline. They are set on the bottom. After a time that depends on the target species, they are hauled on board. Traditional combined nets were hauled by hand, especially on smaller boats. Hydraulically driven net haulers are now common. The gilled, entangled and enmeshed fish are removed from the net by hand. Of some concern with this method is ghost fishing by lost nets and bycatch of diving seabirds. Nets combined in this way were first used in the Mediterranean. Drift nets A drift net consists of one or more panels of webbing fastened together. They are left free to drift with the current, usually near the surface or not far below it. Floats on the floatline and weights on the groundline keep them vertical. Drift nets drift with the current while they are connected to the operating vessel, the driftnetter or drifter. Drift nets are usually used to catch schooling forage fish such as herring and sardines, and also larger pelagic fish such as tuna, salmon and pelagic squid. Net haulers are usually used to set and haul driftnets, with a drifter capstan on the forepart of the vessel. In developing countries most nets are hauled by hand. The mesh size of the gillnets is very effective at selecting or regulating the size of fish caught. The drift net has a low fuel/fish energy consumption compared to other fishing gear. However, the issue of concern with this type of net is the bycatch of species that are not targeted, such as marine mammals, seabirds and, to a lesser extent, turtles. The use of drift nets longer than 2.5 kilometres on the high seas was banned by the United Nations in 1991. Prior to this ban, drift nets could reach lengths of 60 kilometres. However, there are still serious concerns with ongoing violations. Gillnets and entangling nets The tangle net, or tooth net, originated in British Columbia, Canada, as a gear specifically developed for selective fisheries. Tangle nets have smaller mesh sizes than standard gillnets. They are designed to catch fish by their nose or jaw, enabling bycatch to be resuscitated and released unharmed. Tangle nets as adapted to the mark-selective fishery for spring Chinook salmon on the lower Columbia River have a standard mesh size of . Short net lengths and soak times are used in an effort to land fish in good condition. Tangle nets are typically used in situations where the release of certain (usually wild) fish unharmed is desirable. In a typical situation calling for the use of a tangle net, for instance, all fish retaining their adipose fins (usually wild) must be returned to the water. Tangle nets are used in conjunction with a live recovery box, which acts as a resuscitation chamber for unmarked fish that appear lethargic or stressed before their release into the water. Historical images
Technology
Hunting and fishing
null
1003895
https://en.wikipedia.org/wiki/Juncus
Juncus
Juncus is a genus of monocotyledonous flowering plants, commonly known as rushes. It is the largest genus in the family Juncaceae, containing around 300 species. Description Rushes of the genus Juncus are herbaceous plants that superficially resemble grasses or sedges. They have historically received little attention from botanists; in his 1819 monograph, James Ebenezer Bicheno described the genus as "obscure and uninviting". The form of the flower differentiates rushes from grasses or sedges. The flowers of Juncus comprise five whorls of floral parts: three sepals, three petals (or, taken together, six tepals), two to six stamens (in two whorls) and a stigma with three lobes. The stems are round in cross-section, unlike those of sedges, which are typically somewhat triangular in cross-section. In Juncus section Juncotypus (formerly called Juncus subg. Genuini), which contains some of the most widespread and familiar species, the leaves are reduced to sheaths around the base of the stem and the bract subtending the inflorescence closely resembles a continuation of the stem, giving the appearance that the inflorescence is lateral. Distribution and ecology Juncus has a cosmopolitan distribution, with species found throughout the world, with the exception of Antarctica. They typically grow in cold or wet habitats, and in the tropics, are most common in montane environments. Fossil record Several fossil fruits of a Juncus species have been described from middle Miocene strata of the Fasterholt area near Silkeborg in Central Jutland, Denmark. Classification The genus Juncus was first named by Carl Linnaeus in his 1753 . The type species of the genus was designated by Frederick Vernon Coville, who in 1913 chose the first species in Linnaeus' account, Juncus acutus. Juncus can be divided into two major groups, one group with cymose inflorescences that include bracteoles, and one with racemose inflorescences with no bracteoles. The genus is divided into the following subgenera and sections: Juncus subg. Juncus sect. Juncus sect. Graminei (Engelm.) Engelm. sect. Caespitosi Cout. sect. Stygiopsis Kuntze sect. Ozophyllum Dumort. sect. Iridifolii Snogerup & Kirschner Juncus subg. Poiophylli Buchenau sect. Tenageia Dumort. sect. Steirochloa Griseb. sect. Juncotypus Dumort. sect. Forskalina Kuntze Species Plants of the World Online accepts the following species in the genus Juncus: Juncus acuminatus Michx. Juncus acutiflorus Ehrh. ex Hoffm. Juncus acutus L. Juncus aemulans Liebm. Juncus alatus Franch. & Sav. Juncus alexandri L.A.S.Johnson Juncus allioides Franch. Juncus alpigenus K.Koch Juncus × alpiniformis Fernald Juncus alpinoarticulatus Chaix Juncus amabilis Edgar Juncus amplifolius A.Camus Juncus amuricus (Maxim.) V.I.Krecz. & Gontsch. Juncus anatolicus Snogerup Juncus anceps Laharpe Juncus andersonii Buchenau Juncus andinus Balslev Juncus antarcticus Hook.f. Juncus anthelatus (Wiegand) R.E.Brooks & Whittem. Juncus arcticus Willd. Juncus aridicola L.A.S.Johnson Juncus articulatus L. Juncus astreptus L.A.S.Johnson Juncus atratus Krock. Juncus australis Hook.f. Juncus austrobrasiliensis Balslev Juncus baekdusanensis M.Kim Juncus balticus Willd. Juncus bassianus L.A.S.Johnson Juncus batrachium Veldkamp Juncus benghalensis Kunth Juncus beringensis Buchenau Juncus biflorus Elliott Juncus biglumis L. Juncus biglumoides H.Hara Juncus bolanderi Engelm. Juncus brachycarpus Engelm. Juncus brachycephalus (Engelm.) Buchenau Juncus brachyphyllus Wiegand Juncus brachyspathus Maxim. Juncus brachystigma Sam. 
Juncus brasiliensis Breistr. Juncus brevibracteus L.A.S.Johnson Juncus brevicaudatus (Engelm.) Fernald Juncus breviculmis Balslev Juncus breweri Engelm. Juncus × brueggeri Domin Juncus bryoides F.J.Herm. Juncus bryophilus W.W.Sm. Juncus bufonius L. Juncus bulbosus L. Juncus burkartii Barros Juncus caesariensis Coville Juncus caespiticius E.Mey. Juncus canadensis J.Gay ex Laharpe Juncus capensis Thunb. Juncus capillaceus Lam. Juncus capillaris F.J.Herm. Juncus capitatus Weigel Juncus castaneus Sm. Juncus cephalostigma Sam. Juncus cephalotes Thunb. Juncus chiapasensis Balslev Juncus chlorocephalus Engelm. Juncus chrysocarpus Buchenau Juncus clarkei Buchenau Juncus compressus Jacq. Juncus concinnus D.Don Juncus concolor Sam. Juncus confusus Coville Juncus conglomeratus L. Juncus continuus L.A.S.Johnson Juncus cooperi Engelm. Juncus cordobensis Barros Juncus coriaceus Mack. Juncus × correctus Steud. Juncus covillei Piper Juncus crassistylus A.Camus Juncus curtisiae L.A.S.Johnson Juncus curvatus Buchenau Juncus cyperoides Laharpe Juncus debilis A.Gray Juncus decipiens (Buchenau) Nakai Juncus × degenianus Boros Juncus densiflorus Kunth Juncus deosaicus Noltie Juncus diastrophanthus Buchenau Juncus dichotomus Elliott Juncus diemii Barros Juncus diffusissimus Buckley Juncus × diffusus Hoppe Juncus digitatus C.W.Witham & Zika Juncus distegus Edgar Juncus dolichanthus L.A.S.Johnson Juncus dongchuanensis K.F.Wu Juncus × donyanae Fern.-Carv. Juncus dregeanus Kunth Juncus drummondii E.Mey. Juncus dubius Engelm. Juncus dudleyi Wiegand Juncus dulongjiongensis Novikov Juncus durus L.A.S.Johnson & K.L.Wilson Juncus duthiei (C.B.Clarke) Noltie Juncus ebracteatus E.Mey. Juncus echinocephalus Balslev Juncus ecuadoriensis Balslev Juncus edgariae L.A.S.Johnson & K.L.Wilson Juncus effusus L. Juncus elbrusicus Galushko Juncus elliottii Chapm. Juncus emmanuelis A.Fern. & J.G.García Juncus engleri Buchenau Juncus ensifolius Wikstr. Juncus equisetinus Proskur. Juncus ernesti-barrosi Barros Juncus exiguus (Fernald & Wiegand) Lint ex Snogerup & P.F.Zika Juncus exsertus Buchenau Juncus falcatus E.Mey. Juncus × fallax Trab. Juncus fascinatus (M.C.Johnst.) W.M.Knapp Juncus fauriei H.Lév. & Vaniot Juncus fauriensis Buchenau Juncus fernandez-carvajaliae Romero Zarco & Arán Juncus filicaulis Buchenau Juncus filiformis L. Juncus filipendulus Buckley Juncus fimbristyloides Noltie Juncus firmus L.A.S.Johnson Juncus flavidus L.A.S.Johnson Juncus fockei Buchenau Juncus foliosus Desf. Juncus fominii Zoz Juncus fontanesii J.Gay ex Laharpe Juncus fugongensis S.Y.Bao Juncus × fulvescens Fernald Juncus ganeshii Miyam. & H.Ohba Juncus georgianus Coville Juncus gerardi Loisel. Juncus giganteus Sam. Juncus glaucoturgidus Noltie Juncus gonggae Miyam. & H.Ohba Juncus × gracilescens F.J.Herm. ex Wadmond Juncus gracilicaulis A.Camus Juncus gracillimus (Buchenau) V.I.Krecz. & Gontsch. Juncus greenei Oakes & Tuck. Juncus gregiflorus L.A.S.Johnson Juncus grisebachii Buchenau Juncus guadeloupensis Buchenau & Urb. Juncus gubanovii Novikov Juncus gymnocarpus Coville Juncus haenkei E.Mey. Juncus hallii Engelm. Juncus harae Miyam. & H.Ohba Juncus heldreichianus T.Marsson ex Parl. Juncus hemiendytus F.J.Herm. Juncus heptopotamicus V.I.Krecz. & Gontsch. Juncus hesperius (Piper) Lint Juncus heterophyllus Dufour Juncus himalensis Klotzsch Juncus holoschoenus R.Br. Juncus homalocaulis F.Muell. ex Benth. Juncus howellii F.J.Herm. Juncus hybridus Brot. Juncus hydrophilus Noltie Juncus imbricatus Laharpe Juncus inflexus L. Juncus ingens N.A.Wakef. 
Juncus interior Wiegand Juncus × inundatus Drejer Juncus jacquinii L. Juncus jaxarticus V.I.Krecz. & Gontsch. Juncus jinpingensis S.Y.Bao Juncus kelloggii Engelm. Juncus khasiensis Buchenau Juncus kingii Rendle Juncus kleinii Barros Juncus krameri Franch. & Sav. Juncus kraussii Hochst. Juncus kuohii M.J.Jung Juncus laccatus P.F.Zika Juncus laeviusculus L.A.S.Johnson Juncus lancangensis Y.Y.Qian Juncus × langii Erdner Juncus leiospermus F.J.Herm. Juncus × lemieuxii B.Boivin Juncus leptospermus Buchenau Juncus lesueurii Bol. Juncus leucanthus Royle ex D.Don Juncus leucomelas Royle ex D.Don Juncus liebmannii J.F.Macbr. Juncus littoralis C.A.Mey. Juncus llanquihuensis Barros Juncus lomatophyllus Spreng. Juncus longiflorus (A.Camus) Noltie Juncus longii Fernald Juncus longirostris Kuvaev Juncus longistamineus A.Camus Juncus longistylis Torr. Juncus luciensis Ertter Juncus luzuliformis Franch. Juncus macrandrus Coville Juncus macrantherus V.I.Krecz. & Gontsch. Juncus macrophyllus Coville Juncus marginatus Rostk. Juncus maritimus Lam. Juncus maroccanus Kirschner Juncus maximowiczii Buchenau Juncus megacephalus M.A.Curtis Juncus megalophyllus S.Y.Bao Juncus meianthus L.A.S.Johnson ex K.L.Wilson Juncus membranaceus Royle Juncus mertensianus Bong. Juncus micranthus Schrad. ex E.Mey. Juncus microcephalus Kunth Juncus milashanensis A.M.Lu & Z.Y.Zhang Juncus militaris Bigelow Juncus minimus Buchenau Juncus minutulus (Albert & Jahand.) Prain Juncus modicus N.E.Br. Juncus mollis L.A.S.Johnson Juncus × montellii Vierh. Juncus × montserratensis Marcet Juncus × murbeckii Sagorski Juncus mustangensis Miyam. & H.Ohba Juncus nepalicus Miyam. & H.Ohba Juncus nevadensis S.Watson Juncus nodatus Coville Juncus × nodosiformis Fernald Juncus nodosus L. Juncus novae-zelandiae Hook.f. Juncus nupela Veldkamp Juncus obliquus Adamson Juncus × obotritorum Rothm. Juncus occidentalis Wiegand Juncus ochraceus Buchenau Juncus ochrocoleus L.A.S.Johnson Juncus orchonicus Novikov Juncus × oronensis Fernald Juncus orthophyllus Coville Juncus oxycarpus E.Mey. ex Kunth Juncus oxymeris Engelm. Juncus pallescens Lam. Juncus pallidus R.Br. Juncus paludosus E.L.Bridges & Orzell Juncus papillosus Franch. & Sav. Juncus parryi Engelm. Juncus patens E.Mey. Juncus pauciflorus R.Br. Juncus pelocarpus E.Mey. Juncus perpusillus Sam. Juncus persicus Boiss. Juncus pervetus Fernald Juncus petrophilus Miyam. Juncus phaeanthus L.A.S.Johnson Juncus phaeocephalus Engelm. Juncus pictus Steud. Juncus planifolius R.Br. Juncus polyanthemus Buchenau Juncus polycephalus Michx. Juncus potaninii Buchenau Juncus prismatocarpus R.Br. Juncus procerus E.Mey. Juncus prominens (Buchenau) Miyabe & Kudô Juncus przewalskii Buchenau Juncus psammophilus L.A.S.Johnson Juncus punctorius L.f. Juncus pusillus Buchenau Juncus pygmaeus Rich. ex Thuill. Juncus pylaei Laharpe Juncus radula Buchenau Juncus ramboi Barros Juncus ranarius Songeon & E.P.Perrier Juncus ratkowskyanus L.A.S.Johnson Juncus rechingeri Snogerup Juncus regelii Buchenau Juncus remotiflorus L.A.S.Johnson Juncus repens Michx. Juncus requienii Parl. Juncus revolutus R.Br. Juncus rigidus Desf. Juncus roemerianus Scheele Juncus rohtangensis Goel & Aswal Juncus × ruhmeri Asch. & Graebn. Juncus rupestris Kunth Juncus × sallandiae Corporaal & Schaminée Juncus salsuginosus Turcz. ex C.A.Mey. Juncus sandwithii Lourteig Juncus sarophorus L.A.S.Johnson Juncus saximontanus A.Nelson Juncus scabriusculus Kunth Juncus scheuchzerioides Gaudich. Juncus scirpoides Lam. Juncus secundus P.Beauv. ex Poir. 
Juncus semisolidus L.A.S.Johnson Juncus setchuensis Buchenau Juncus sherei Miyam. & H.Ohba Juncus sikkimensis Hook.f. Juncus socotranus (Buchenau) Snogerup Juncus sonderianus Buchenau Juncus soranthus Schrenk Juncus sorrentinoi Parl. Juncus sparganiifolius Boiss. & Kotschy ex Buchenau Juncus spectabilis Rendle Juncus sphacelatus Decne. Juncus sphaerocarpus Nees Juncus spumosus Noltie Juncus squarrosus L. Juncus stenopetalus Adamson Juncus stipulatus Nees & Meyen Juncus striatus Schousb. ex E.Mey. Juncus × stuckeyi Reinking Juncus stygius L. Juncus subcaudatus (Engelm.) Coville & S.F.Blake Juncus subglaucus L.A.S.Johnson Juncus subnodulosus Schrank Juncus subsecundus N.A.Wakef. Juncus subtilis E.Mey. Juncus subulatus Forssk. Juncus subulitepalus Balslev Juncus supiniformis Engelm. Juncus taonanensis Satake & Kitag. Juncus tenageia Ehrh. ex L.f. Juncus tenuis Willd. Juncus texanus (Engelm.) Coville Juncus textilis Buchenau Juncus thomasii Ten. Juncus thompsonianus L.A.S.Johnson Juncus thomsonii Buchenau Juncus tiehmii Ertter Juncus tingitanus Maire & Weiller Juncus tobdeniorum Noltie Juncus torreyi Coville Juncus trachyphyllus Miyam. & H.Ohba Juncus trichophyllus W.W.Sm. Juncus triformis Engelm. Juncus triglumis L. Juncus trigonocarpus Steud. Juncus trilocularis Zika Juncus turkestanicus V.I.Krecz. & Gontsch. Juncus uncialis Greene Juncus uniflorus W.W.Sm. Juncus uruguensis Griseb. Juncus usitatus L.A.S.Johnson Juncus vaginatus R.Br. Juncus × valbrayi H.Lév. Juncus validus Coville Juncus valvatus Link Juncus vaseyi Engelm. Juncus venturianus Castillón Juncus virens Buchenau Juncus wallichianus J.Gay ex Laharpe Juncus xiphioides E.Mey. Juncus yui S.Y.Bao
Biology and health sciences
Poales
null
1004486
https://en.wikipedia.org/wiki/Pharmacogenomics
Pharmacogenomics
Pharmacogenomics, often abbreviated "PGx," is the study of the role of the genome in drug response. Its name (pharmaco- + genomics) reflects its combining of pharmacology and genomics. Pharmacogenomics analyzes how the genetic makeup of a patient affects their response to drugs. It deals with the influence of acquired and inherited genetic variation on drug response, by correlating DNA mutations (including point mutations, copy number variations, and structural variations) with pharmacokinetic (drug absorption, distribution, metabolism, and elimination), pharmacodynamic (effects mediated through a drug's biological targets), and/or immunogenic endpoints. Pharmacogenomics aims to develop rational means to optimize drug therapy, with regard to the patients' genotype, to achieve maximum efficacy with minimal adverse effects. It is hoped that by using pharmacogenomics, pharmaceutical drug treatments can deviate from what is dubbed the "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error in prescribing, allowing physicians to take into consideration their patient's genes, the functionality of these genes, and how this may affect the effectiveness of the patient's current or future treatments (and where applicable, provide an explanation for the failure of past treatments). Such approaches promise the advent of precision medicine and even personalized medicine, in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup. Whether used to explain a patient's response (or lack of it) to a treatment, or to act as a predictive tool, it hopes to achieve better treatment outcomes and greater efficacy, and reduce drug toxicities and adverse drug reactions (ADRs). For patients who do not respond to a treatment, alternative therapies can be prescribed that would best suit their requirements. In order to provide pharmacogenomic recommendations for a given drug, two possible types of input can be used: genotyping, or exome or whole genome sequencing. Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (early stop codon). Pharmacogenetics vs. pharmacogenomics The term pharmacogenomics is often used interchangeably with pharmacogenetics. Although both terms relate to drug response based on genetic influences, there are differences between the two. Pharmacogenetics is limited to monogenic phenotypes (i.e., single gene-drug interactions). Pharmacogenomics refers to polygenic drug response phenotypes and encompasses transcriptomics, proteomics, and metabolomics. Mechanisms of pharmacogenetic interactions Pharmacokinetics Pharmacokinetics involves the absorption, distribution, metabolism, and elimination of pharmaceuticals. These processes are often mediated by proteins such as drug transporters or drug-metabolizing enzymes (discussed in depth below). Variation in DNA loci responsible for producing these proteins can alter their expression or activity so that their functional status changes. An increase, decrease, or loss of function for transporters or metabolizing enzymes can ultimately alter the amount of medication in the body and at the site of action. This may cause the drug concentration to deviate from the medication's therapeutic window, resulting in either toxicity or loss of effectiveness.
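To make the pharmacokinetic consequence concrete, a minimal sketch (the relation AUC = dose / clearance for a fully absorbed dose in a one-compartment model is standard pharmacokinetics; the dose and clearance values are invented for illustration and do not describe any particular drug):

    def auc(dose_mg, clearance_l_per_h):
        # Area under the concentration-time curve (total exposure) for a fully
        # absorbed dose in a simple one-compartment model: AUC = dose / clearance.
        return dose_mg / clearance_l_per_h

    dose = 100.0              # mg, hypothetical
    normal_clearance = 10.0   # L/h, hypothetical normal-function enzyme
    reduced_clearance = 2.5   # L/h, hypothetical reduced-function variant

    print(auc(dose, normal_clearance))   # 10.0 mg*h/L
    print(auc(dose, reduced_clearance))  # 40.0 mg*h/L -- higher exposure, possible toxicity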
Drug-metabolizing enzymes The majority of clinically actionable pharmacogenetic variation occurs in genes that code for drug-metabolizing enzymes, including those involved in both phase I and phase II metabolism. The cytochrome P450 enzyme family is responsible for metabolism of 70-80% of all medications used clinically. CYP3A4, CYP2C9, CYP2C19, and CYP2D6 are major CYP enzymes involved in drug metabolism and are all known to be highly polymorphic. Additional drug-metabolizing enzymes that have been implicated in pharmacogenetic interactions include UGT1A1 (a UDP-glucuronosyltransferase), DPYD, and TPMT. Drug transporters Many medications rely on transporters to cross cellular membranes in order to move between body fluid compartments such as the blood, gut lumen, bile, urine, brain, and cerebrospinal fluid. The major transporters include the solute carrier, ATP-binding cassette, and organic anion transporters. Transporters that have been shown to influence response to medications include OATP1B1 (SLCO1B1) and breast cancer resistance protein (BCRP) (ABCG2). Pharmacodynamics Pharmacodynamics refers to the impact a medication has on the body, or its mechanism of action. Drug targets Drug targets are the specific sites where a medication carries out its pharmacological activity. The interaction between the drug and this site results in a modification of the target that may include inhibition or potentiation. Most of the pharmacogenetic interactions that involve drug targets are within the field of oncology and include targeted therapeutics designed to address somatic mutations (see also Cancer Pharmacogenomics). For example, EGFR inhibitors like gefitinib (Iressa) or erlotinib (Tarceva) are only indicated in patients carrying specific mutations to EGFR. Germline mutations in drug targets can also influence response to medications, though this is an emerging subfield within pharmacogenomics. One well-established gene-drug interaction involving a germline mutation to a drug target is warfarin (Coumadin) and VKORC1, which codes for vitamin K epoxide reductase (VKOR). Warfarin binds to and inhibits VKOR, which is an important enzyme in the vitamin K cycle. Inhibition of VKOR prevents reduction of vitamin K, which is a cofactor required in the formation of coagulation factors II, VII, IX and X, and inhibitors protein C and S. Off-target sites Medications can have off-target effects (typically unfavorable) that arise from an interaction between the medication and/or its metabolites and a site other than the intended target. Genetic variation in the off-target sites can influence this interaction. The main example of this type of pharmacogenomic interaction is glucose-6-phosphate-dehydrogenase (G6PD). G6PD is the enzyme involved in the first step of the pentose phosphate pathway which generates NADPH (from NADP). NADPH is required for the production of reduced glutathione in erythrocytes and it is essential for the function of catalase. Glutathione and catalase protect cells from oxidative stress that would otherwise result in cell lysis. Certain variants in G6PD result in G6PD deficiency, in which cells are more susceptible to oxidative stress. When medications that have a significant oxidative effect are administered to individuals who are G6PD deficient, they are at an increased risk of erythrocyte lysis that presents as hemolytic anemia. 
Immunologic The human leukocyte antigen (HLA) system, also referred to as the major histocompatibility complex (MHC), is a complex of genes important for the adaptive immune system. Mutations in the HLA complex have been associated with an increased risk of developing hypersensitivity reactions in response to certain medications. Clinical pharmacogenomics resources Clinical Pharmacogenetics Implementation Consortium (CPIC) The Clinical Pharmacogenetics Implementation Consortium (CPIC) is "an international consortium of individual volunteers and a small dedicated staff who are interested in facilitating use of pharmacogenetic tests for patient care. CPIC’s goal is to address barriers to clinical implementation of pharmacogenetic tests by creating, curating, and posting freely available, peer-reviewed, evidence-based, updatable, and detailed gene/drug clinical practice guidelines. CPIC guidelines follow standardized formats, include systematic grading of evidence and clinical recommendations, use standardized terminology, are peer-reviewed, and are published in a journal (in partnership with Clinical Pharmacology and Therapeutics) with simultaneous posting to cpicpgx.org, where they are regularly updated." The CPIC guidelines are "designed to help clinicians understand HOW available genetic test results should be used to optimize drug therapy, rather than WHETHER tests should be ordered. A key assumption underlying the CPIC guidelines is that clinical high-throughput and pre-emptive (pre-prescription) genotyping will become more widespread, and that clinicians will be faced with having patients’ genotypes available even if they have not explicitly ordered a test with a specific drug in mind. CPIC's guidelines, processes and projects have been endorsed by several professional societies." U.S. Food and Drug Administration Table of Pharmacogenetic Associations In February 2020 the FDA published the Table of Pharmacogenetic Associations. For the gene-drug pairs included in the table, "the FDA has evaluated and believes there is sufficient scientific evidence to suggest that subgroups of patients with certain genetic variants, or genetic variant-inferred phenotypes (such as affected subgroup in the table below), are likely to have altered drug metabolism, and in certain cases, differential therapeutic effects, including differences in risks of adverse events." "The information in this Table is intended primarily for prescribers, and patients should not adjust their medications without consulting their prescriber. This version of the table is limited to pharmacogenetic associations that are related to drug metabolizing enzyme gene variants, drug transporter gene variants, and gene variants that have been related to a predisposition for certain adverse events. The FDA recognizes that various other pharmacogenetic associations exist that are not listed here, and this table will be updated periodically with additional pharmacogenetic associations supported by sufficient scientific evidence." Table of Pharmacogenomic Biomarkers in Drug Labeling The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling lists FDA-approved drugs with pharmacogenomic information found in the drug labeling. 
"Biomarkers in the table include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; selected protein biomarkers that are used to select treatments for patients are also included." PharmGKB The Pharmacogenomics Knowledgebase (PharmGKB) is an "NIH-funded resource that provides information about how human genetic variation affects response to medications. PharmGKB collects, curates and disseminates knowledge about clinically actionable gene-drug associations and genotype-phenotype relationships." Commercial Pharmacogenetic Testing Laboratories There are many commercial laboratories around the world who offer pharmacogenomic testing as a laboratory developed test (LDTs). The tests offered can vary significantly from one lab to another, including genes and alleles tested for, phenotype assignment, and any clinical annotations provided. With the exception of a few direct-to-consumer tests, all pharmacogenetic testing requires an order from an authorized healthcare professional. In order for the results to be used in a clinical setting in the United States, the laboratory performing the test much be CLIA-certified. Other regulations may vary by country and state. Direct-to-Consumer Pharmacogenetic Testing Direct-to-consumer (DTC) pharmacogenetic tests allow consumers to obtain pharmacogenetic testing without an order from a prescriber. DTC pharmacogenetic tests are generally reviewed by the FDA to determine the validity of test claims. The FDA maintains a list of DTC genetic tests that have been approved. Common Pharmacogenomic-Specific Nomenclature Genotype There are multiple ways to represent a pharmacogenomic genotype. A commonly used nomenclature system is to report haplotypes using a star (*) allele (e.g., CYP2C19 *1/*2). Single-nucleotide polymorphisms (SNPs) may be described using their assignment reference SNP cluster ID (rsID) or based on the location of the base pair or amino acid impacted. Phenotype In 2017 CPIC published results of an expert survey to standardize terms related to clinical pharmacogenetic test results. Consensus for terms to describe allele functional status, phenotype for drug metabolizing enzymes, phenotype for drug transporters, and phenotype for high-risk genotype status was reached. Applications The list below provides a few more commonly known applications of pharmacogenomics: Improve drug safety, and reduce ADRs; Tailor treatments to meet patients' unique genetic pre-disposition, identifying optimal dosing; Improve drug discovery targeted to human disease; and Improve proof of principle for efficacy trials. Pharmacogenomics may be applied to several areas of medicine, including pain management, cardiology, oncology, and psychiatry. A place may also exist in forensic pathology, in which pharmacogenomics can be used to determine the cause of death in drug-related deaths where no findings emerge using autopsy. In cancer treatment, pharmacogenomics tests are used to identify which patients are most likely to respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and care givers to better manage medication selection and side effect amelioration. Pharmacogenomics is also known as companion diagnostics, meaning tests being bundled with drugs. Examples include KRAS test with cetuximab and EGFR test with gefitinib. 
Besides efficacy, germline pharmacogenetics can help to identify patients likely to experience severe toxicities when given cytotoxics whose detoxification is impaired by genetic polymorphism, such as 5-FU. In particular, genetic deregulations affecting genes coding for DPD, UGT1A1, TPMT, CDA and CYP2D6 are now considered critical issues for patients treated with 5-FU/capecitabine, irinotecan, mercaptopurine/azathioprine, gemcitabine/capecitabine/AraC and tamoxifen, respectively. In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins. In patients with reduced-function CYP2C19 variants who take clopidogrel, cardiovascular risk is elevated, which has led regulators to update the medication's package insert. In patients with type 2 diabetes, haptoglobin (Hp) genotyping shows an effect on cardiovascular disease, with Hp2-2 at higher risk and supplemental vitamin E reducing risk by affecting HDL. In psychiatry, as of 2010, research has focused particularly on 5-HTTLPR and DRD2. Clinical implementation Initiatives to spur adoption by clinicians include the Ubiquitous Pharmacogenomics (U-PGx) program in Europe and the Clinical Pharmacogenetics Implementation Consortium (CPIC) in the United States. In a 2017 survey of European clinicians, two-thirds had not ordered a pharmacogenetic test in the prior year. In 2010, Vanderbilt University Medical Center launched Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT); in a 2015 survey, two-thirds of the clinicians had ordered a pharmacogenetic test. In 2019, the largest private health insurer, UnitedHealthcare, announced that it would pay for genetic testing to predict response to psychiatric drugs. In 2020, Canada's fourth-largest health and dental insurer, Green Shield Canada, announced that it would pay for pharmacogenetic testing and its associated clinical decision support software to optimize and personalize mental health prescriptions. Reduction of polypharmacy A potential role for pharmacogenomics is to reduce the occurrence of polypharmacy: it is theorized that with tailored drug treatments, patients will not need to take several medications to treat the same condition. Thus they could potentially reduce the occurrence of adverse drug reactions, improve treatment outcomes, and save costs by avoiding purchase of some medications. For example, possibly due to inappropriate prescribing, psychiatric patients tend to receive more medications than age-matched non-psychiatric patients. The need for pharmacogenomically tailored drug therapies may be most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that, on average, 82% of adults in the United States are taking at least one medication (prescription or nonprescription drug, vitamin/mineral, herbal/natural supplement), and 29% are taking five or more. The study suggested that those aged 65 years or older continue to be the biggest consumers of medications, with 17-19% in this age group taking at least ten medications in a given week. Polypharmacy has also been shown to have increased from 23% to 29% since 2000. Example case studies Case A – Antipsychotic adverse reaction Patient A has schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine, and transitioned to quetiapine.
Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued, with ziprasidone re-introduced into their treatment, due to the excessive weight gain. Although the patient lost the excessive weight they had gained, they then developed muscle stiffness, cogwheeling, tremors and night sweats. When benztropine was added, they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation, and developed difficulty with their gait, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later showed that the patient had a CYP2D6 *1/*41 genotype, with a predicted intermediate metabolizer (IM) phenotype, and a CYP2C19 *1/*2 genotype, also with a predicted IM phenotype. Case B – Pain Management Patient B is a woman who gave birth by caesarean section. Her physician prescribed codeine for post-caesarean pain. She took the standard prescribed dose, but she experienced nausea and dizziness while she was taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms were no longer present. It is presumed that, had the patient undergone a pharmacogenomic test, it would have revealed a duplication of the CYP2D6 gene, placing her in the ultrarapid metabolizer (UM) category and explaining her reactions to codeine. Case C – FDA Warning on Codeine Overdose for Infants On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are CYP2D6 ultrarapid metabolizers and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). They released their strongest Boxed Warning to elucidate the dangers of CYP2D6 UMs consuming codeine. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes are in danger of producing large amounts of morphine due to the increased function of the gene. Morphine can rise to life-threatening or fatal levels, as became evident with the deaths of three children in August 2012. Challenges Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include: Limitations on how to apply the test in clinical practice and treatment; A general feeling of lack of availability of the test; The understanding and interpretation of evidence-based research; Combining test results with other patient data for prescription optimization; and Ethical, legal and social issues.
Issues surrounding the availability of the test include: The lack of availability of scientific data: Although there are a considerable number of drug-metabolizing enzymes involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and Demonstrating the cost-effectiveness of pharmacogenomics: Publications on the pharmacoeconomics of pharmacogenomics are scarce; therefore, sufficient evidence does not at this time exist to validate the cost-effectiveness and cost-consequences of the test. Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the above factors appear to be the most prevalent. Increasingly substantial evidence and industry body guidelines for the clinical use of pharmacogenetics have made it a population-wide approach to precision medicine. Cost, reimbursement, education, and ease of use at the point of care remain significant barriers to wide-scale adoption. Controversies Race-based medicine There have been calls to move away from race and ethnicity in medicine and to use genetic ancestry instead as a way to categorize patients. Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. As a result, some disease-specific guidelines only recommend pharmacogenetic testing for populations where high-risk alleles are more common and, similarly, certain insurance companies will only pay for pharmacogenetic testing for beneficiaries of high-risk populations. Genetic exceptionalism In the early 2000s, handling genetic information as exceptional, including legal or regulatory protections, garnered strong support. It was argued that genomic information may need special policy and practice protections within the context of electronic health records (EHRs). In 2008, the Genetic Information Nondiscrimination Act (GINA) was enacted to protect patients from discrimination by health insurance companies based on genetic information. More recently it has been argued that genetic exceptionalism is past its expiration date as we move into a blended genomic/big data era of medicine, yet exceptionalism practices continue to permeate clinical healthcare today. Garrison et al. recently issued a call to action to update the terminology from genetic exceptionalism to genomic contextualism, recognizing a fundamental duality of genetic information. This allows room in the argument for different types of genetic information to be handled differently while acknowledging that genomic information is similar to and yet distinct from other health-related information. Genomic contextualism would allow for a case-by-case analysis of the technology and the context of its use (e.g., clinical practice, research, secondary findings). Others argue that genetic information is indeed distinct from other health-related information but does not require special legal or regulatory protections, similar to other sensitive health-related data such as HIV status. Additionally, Evans et al. argue that the EHR has sufficient privacy standards to hold other sensitive information such as social security numbers and that the fundamental nature of an EHR is to house highly personal information. 
Similarly, a systematic review reported that the public had concerns over the privacy of genetic information, with 60% agreeing that maintaining privacy was not possible; however, 96% agreed that a direct-to-consumer testing company had protected their privacy, with 74% saying their information would be similarly or better protected in an EHR. With increasing technological capabilities in EHRs, it is possible to mask or hide genetic data from subsets of providers, but there is no consensus on how, when, or from whom genetic information should be masked. Rigorous protection and masking of genetic information is argued to impede further scientific progress and clinical translation into routine clinical practice. History Pharmacogenomics was first recognized by Pythagoras around 510 BC when he connected the dangers of fava bean ingestion with hemolytic anemia and oxidative stress. In the 1950s, this observation was validated and attributed to a deficiency of G6PD; the condition is called favism. Although the first official publication was not until 1961, the unofficial beginnings of this science were in the 1950s. Prolonged paralysis and fatal reactions linked to genetic variants in patients lacking butyrylcholinesterase ('pseudocholinesterase') following succinylcholine injection during anesthesia were first reported in 1956. The term pharmacogenetics was first coined in 1959 by Friedrich Vogel of Heidelberg, Germany (although some papers suggest it was 1957 or 1958). In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins sharing remarkable similarities in drug response compared to fraternal twins. The term pharmacogenomics first began appearing around the 1990s. The first FDA approval of a pharmacogenetic test was in 2005 (for alleles in CYP2D6 and CYP2C19). Future Computational advances have enabled cheaper and faster sequencing. Research has focused on combinatorial chemistry, genomic mining, omic technologies, and high throughput screening. As the cost per genetic test decreases, the development of personalized drug therapies will increase. Technology now allows for genetic analysis of hundreds of target genes involved in medication metabolism and response in less than 24 hours for under $1,000. This is a huge step towards bringing pharmacogenetic technology into everyday medical decisions. Likewise, companies like deCODE genetics, MD Labs Pharmacogenetics, Navigenics and 23andMe offer genome scans. The companies use the same genotyping chips that are used in GWAS studies and provide customers with a write-up of individual risk for various traits and diseases and testing for 500,000 known SNPs. Costs range from $995 to $2500 and include updates with new data from studies as they become available. The more expensive packages even include a telephone session with a genetics counselor to discuss the results. Ethics Pharmacogenetics has become a controversial issue in the area of bioethics. Privacy and confidentiality are major concerns. The evidence of benefit or risk from a genetic test may only be suggestive, which could cause dilemmas for providers. Drug development may be affected, with rare genetic variants possibly receiving less research. Access and patient autonomy are also open to discussion. Web-based resources
Technology
Biotechnology
null
7228251
https://en.wikipedia.org/wiki/Spotted%20ratfish
Spotted ratfish
The spotted ratfish (Hydrolagus colliei) is a chimaera found in the north-eastern Pacific Ocean. Often seen by divers at night in the Pacific Northwest, this cartilaginous fish gets its characteristic name from a pointed rat-like tail. The ratfish lays leathery egg cases on the bottom of muddy or sandy areas, which are often mistaken by divers for something inanimate. While mainly a deep-water species, it occurs at shallower depths in the northern part of its range. The generic name, Hydrolagus, comes from the Greek words ὕδωρ, meaning water, and λαγώς/λαγῶς, meaning hare, and the specific name honors Alexander Collie, who was a ship surgeon and early naturalist. The spotted ratfish is common in much of its range, not typically eaten by humans, and is not commercially caught. Description The spotted ratfish has a very distinct appearance compared to unrelated fish species. The female is up to long, much bigger than the male. These fish have a smooth and scaleless skin that is a silvery-bronze color, often with sparkling shades of gold, blue, and green. The speckled white spots along their backs contribute to their name. Dark edges outline both the caudal and dorsal fins, whereas the pectoral fins have a transparent outline. Their pectoral fins are large and triangular, and extend straight out from the sides of their bodies like airplane wings. They have a venomous spine located at the leading edge of their dorsal fin, which is used in defense. It does not present a serious danger to humans, but can cause painful wounds and has been known to kill harbor seals that ate spotted ratfish (caused by the spine penetrating vital tissue in the stomach or esophagus after the ratfish was swallowed). The tail of the ratfish constitutes almost half of its overall length and closely resembles a pointed, rat-like tail. The body of this fish is supported by cartilage rather than bone. It has a duckbill-shaped snout and a rabbit-like face. The mouth is small and contains one pair of forward-directed, incisor-shaped teeth in the bottom jaw and two pairs in the top jaw. Unlike sharks, which have sharp teeth that are easily replaceable, spotted ratfish teeth are plate-shaped, mineralized, and permanent, which assist them in grinding their prey. Like many bony fishes, but unlike its sister group, the Elasmobranchii, the upper jaw of the chimaera is fused with the skull. Although their jaws are soft and mouths are relatively small, they have the largest biting force and jaw leverage found within the Holocephali, which supports their ability to consume large prey. One of their most striking features is their large, emerald green eyes, which are able to reflect light, similar to the eyes of a cat. Distribution and habitat The spotted ratfish can be found in the northeastern Pacific Ocean, ranging from Alaska to Baja California, with an isolated population in the Gulf of California. They are abundant in much of their range. They are found most commonly off the Pacific Northwest. The range of depths in which this fish is found extends from below sea level, but it is most common between . Spotted ratfish typically live closer to the shore in the northern part of their range than in the southern, but they are also found as shallow as off California. Spotted ratfish tend to move closer to shallow water during the spring and autumn, then to deeper water in summer and winter. For most of the year, they prefer temperatures between , but seasonally they do move into slightly warmer water. 
They can most commonly be found living near the sea floor in sandy, muddy, or rocky reef environments. Unlike most of its relatives, which are entirely restricted to deep waters, the spotted ratfish has been held in public aquaria. It has also been bred in such aquaria, where two of the main issues are the requirements of low light and low temperature (generally kept at ). Diet The spotted ratfish swims slowly above the seafloor in search of food. They locate food by smell. Their usual hunting period is at night, when they move to shallow water to feed. They are particularly drawn to crunchy foods such as crabs and clams. Besides these, the spotted ratfish also feeds on shrimp, worms, small fish, small crustaceans, and sea stars. Species known to prey on the spotted ratfish include soupfin sharks, dogfish sharks, Pacific halibut, pinnipeds, and pigeon guillemots. Reproduction Like some sharks, spotted ratfish are oviparous. Their spawning season peaks from spring to autumn. During this time, the female releases up to two fertilized eggs into sand or mud areas of the seabed every 10–14 days. The extrusion process can last 18–30 hours and the actual laying can last another 4–6 days. The egg case is leather-like, long, and has a filament connected to it which is used to attach it to the ocean floor when it is released by the mother. A female may be seen swimming around her newly laid eggs, apparently to prevent predators from finding them. Development of the egg can take up to a year, a long period during which the eggs are vulnerable; divers sometimes mistake them for inanimate objects. When the young finally hatch, they are about in length and grow, reaching in length in their first year. Male spotted ratfish have multiple secondary sexual characteristics, which include paired pelvic claspers, a single frontal tentaculum, and paired pelvic tentacula. The pelvic claspers are located on the ventral side of the fish. They protrude from the pelvic fins and are responsible for the movement of sperm to the oviduct of the female. The interior of the pelvic clasper is supported by cartilage and separates into two branches, ultimately ending in a fleshy lobe on the posterior end. The cephalic clasper (tentaculum) is a unique, club-like organ not found in any other vertebrate. The cephalic clasper is located on the head of the fish, just anterior to the eyes. The tip of the retractable organ is fleshy and lined with numerous small, sharp barbs. For the male to stay attached during courtship, the clasper has been observed to clamp down on the pectoral fin of the female. Additional evidence for this use has been found in the form of scars and scratches on the dorsal sides of females. The significantly smaller body size of males, which is a sexually dimorphic characteristic, may be a contributing factor to this mating behavior. Behaviour Ratfish prefer to maintain a safe distance from divers and are usually not aggressive. However, if they feel their territory has been invaded, they are able to inflict a mildly toxic wound with their dorsal fin spines. As they swim, they perform barrel rolls and corkscrew turns, as if they are flying. Ratfish swim using large pectoral fins, and this has often been termed aquatic flight given the resemblance to a bird. Albino Puget Sound ratfish A rare albino Puget Sound ratfish was discovered near Whidbey Island, Washington. It is the only pure albino among the 7.2 million specimens in the University of Washington's fish collection.
Biology and health sciences
Chimaeriformes
Animals
4158193
https://en.wikipedia.org/wiki/Clastic%20rock
Clastic rock
Clastic rocks are composed of fragments, or clasts, of pre-existing minerals and rock. A clast is a fragment of geological detritus: a chunk or smaller grain of rock broken off other rocks by physical weathering. Geologists use the term clastic to refer to sedimentary rocks and particles in sediment transport, whether in suspension or as bed load, and in sediment deposits. Sedimentary clastic rocks Clastic sedimentary rocks are rocks composed predominantly of broken pieces or clasts of older weathered and eroded rocks. Clastic sediments or sedimentary rocks are classified based on grain size, clast and cementing material (matrix) composition, and texture. The classification factors are often useful in determining a sample's environment of deposition. An example of a clastic environment would be a river system in which the full range of grains being transported by the moving water consist of pieces eroded from solid rock upstream. Grain size varies from clay in shales and claystones; through silt in siltstones; sand in sandstones; and gravel, cobble, to boulder sized fragments in conglomerates and breccias. The Krumbein phi (φ) scale numerically orders these terms in a logarithmic size scale. Siliciclastic sedimentary rocks Siliciclastic rocks are clastic noncarbonate rocks that are composed almost exclusively of silica, either in the form of quartz or of other silicate minerals. Composition The composition of siliciclastic sedimentary rocks includes the chemical and mineralogical components of the framework as well as the cementing material that make up these rocks. Boggs divides them into four categories: major minerals, accessory minerals, rock fragments, and chemical sediments. Major minerals can be categorized into subdivisions based on their resistance to chemical decomposition. Those that possess a great resistance to decomposition are categorized as stable, while those that do not are considered less stable. The most common stable mineral in siliciclastic sedimentary rocks is quartz (SiO2). Quartz makes up approximately 65 percent of framework grains present in sandstones and about 30 percent of minerals in the average shale. Less stable minerals present in this type of rock are feldspars, including both potassium and plagioclase feldspars. Feldspars comprise a considerably lesser portion of framework grains and minerals. They only make up about 15 percent of framework grains in sandstones and 5 percent of minerals in shales. Clay mineral groups are mostly present in mudrocks (comprising more than 60 percent of the minerals) but can be found in other siliciclastic sedimentary rocks at considerably lower levels. Accessory minerals are those whose presence in the rock is not directly important to the classification of the specimen. These generally occur in smaller amounts in comparison to quartz and feldspars. Furthermore, those that do occur are generally heavy minerals or coarse-grained micas (both muscovite and biotite). Rock fragments also occur in the composition of siliciclastic sedimentary rocks and are responsible for about 10–15 percent of the composition of sandstone. They generally make up most of the gravel size particles in conglomerates but contribute only a very small amount to the composition of mudrocks. Though they sometimes are, rock fragments are not always sedimentary in origin. They can also be metamorphic or igneous. Chemical cements vary in abundance but are predominantly found in sandstones. The two major types are silicate-based and carbonate-based. 
The majority of silica cements are composed of quartz, but can include chert, opal, feldspars and zeolites. Composition includes the chemical and mineralogic make-up of the single or varied fragments and the cementing material (matrix) holding the clasts together as a rock. These differences are most commonly used in classifying the framework grains of sandstones. Sandstones rich in quartz are called quartz arenites, those rich in feldspar are called arkoses, and those rich in lithics are called lithic sandstones. Classification Siliciclastic sedimentary rocks are composed mainly of silicate particles derived from the weathering of older rocks and pyroclastic volcanism. While grain size, clast and cementing material (matrix) composition, and texture are all important factors, siliciclastic sedimentary rocks are classified according to grain size into three major categories: conglomerates, sandstones, and mudrocks. The term clay is used to classify particles smaller than 0.0039 millimeters. However, the term can also be used to refer to a family of sheet silicate minerals. Silt refers to particles that have a diameter between 0.0039 and 0.062 millimeters. The term mud is used when clay and silt particles are mixed in the sediment; mudrock is the name of the rock created with these sediments. Furthermore, particles that reach diameters between 0.062 and 2 millimeters fall into the category of sand. When sand is cemented together and lithified it becomes known as sandstone. Any particle that is larger than two millimeters is considered gravel. This category includes pebbles, cobbles and boulders. Like sandstone, when gravels are lithified they are considered conglomerates. Conglomerates and breccias Conglomerates are coarse-grained rocks dominantly composed of gravel-sized particles that are typically held together by a finer-grained matrix. These rocks are often subdivided into conglomerates and breccias. The major characteristic that divides these two categories is the amount of rounding. The gravel-sized particles that make up conglomerates are well rounded, while in breccias they are angular. Conglomerates are common in stratigraphic successions of most, if not all, ages but only make up one percent or less, by weight, of the total sedimentary rock mass. In terms of origin and depositional mechanisms they are very similar to sandstones. As a result, the two categories often contain the same sedimentary structures. Sandstones Sandstones are medium-grained rocks composed of rounded or angular fragments of sand size that often, but not always, have a cement uniting them. These sand-size particles are often quartz, but there are a few common categories and a wide variety of classification schemes that classify sandstones based on composition. Classification schemes vary widely, but most geologists have adopted the Dott scheme, which uses the relative abundance of quartz, feldspar, and lithic framework grains and the abundance of muddy matrix between these larger grains. 
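To make the grain-size boundaries quoted above concrete, the following minimal Python sketch classifies a particle diameter into the four broad classes and converts it to the Krumbein phi scale (phi equals minus log base 2 of the diameter in millimeters). The function names and the example value are illustrative only, and the full Wentworth/Krumbein scale subdivides these classes further.

```python
import math

# Grain-size boundaries quoted in the Classification section above (in millimeters).
CLAY_SILT_BOUNDARY = 0.0039   # particles finer than this are clay
SILT_SAND_BOUNDARY = 0.062    # silt: 0.0039-0.062 mm
SAND_GRAVEL_BOUNDARY = 2.0    # sand: 0.062-2 mm; coarser particles are gravel

def krumbein_phi(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(grain diameter in millimeters)."""
    return -math.log2(diameter_mm)

def size_class(diameter_mm: float) -> str:
    """Assign a particle to the broad size classes used for siliciclastic rocks."""
    if diameter_mm < CLAY_SILT_BOUNDARY:
        return "clay"
    if diameter_mm < SILT_SAND_BOUNDARY:
        return "silt"
    if diameter_mm < SAND_GRAVEL_BOUNDARY:
        return "sand"
    return "gravel"  # includes pebbles, cobbles and boulders

# Example: a 0.25 mm grain is sand, with phi = 2.0
print(size_class(0.25), round(krumbein_phi(0.25), 2))
```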
Mudrocks Rocks that are classified as mudrocks are very fine grained. Silt and clay represent at least 50% of the material that mudrocks are composed of. Classification schemes for mudrocks tend to vary, but most are based on the grain size of the major constituents. In mudrocks, these are generally silt and clay. According to Blatt, Middleton and Murray, mudrocks that are composed mainly of silt particles are classified as siltstones. In turn, rocks that possess clay as the majority particle are called claystones. In geology, a mixture of both silt and clay is called mud. Rocks that possess large amounts of both clay and silt are called mudstones. In some cases the term shale is also used to refer to mudrocks and is still widely accepted by most. However, others have used the term shale to further divide mudrocks based on the percentage of clay constituents. The plate-like shape of clay allows its particles to stack up one on top of another, creating laminae or beds. The more clay present in a given specimen, the more laminated a rock is. Shale, in this case, is reserved for mudrocks that are laminated, while mudstone refers to those that are not. Diagenesis of siliciclastic sedimentary rocks Siliciclastic rocks initially form as loosely packed sediment deposits including gravels, sands, and muds. The process of turning loose sediment into hard sedimentary rocks is called lithification. During the process of lithification, sediments undergo physical, chemical and mineralogical changes before becoming rock. The primary physical process in lithification is compaction. As sediment transport and deposition continues, new sediments are deposited atop previously deposited beds, burying them. Burial continues and the weight of overlying sediments causes an increase in temperature and pressure. This increase in temperature and pressure causes loose-grained sediments to become tightly packed, reducing porosity and essentially squeezing water out of the sediment. Porosity is further reduced by the precipitation of minerals into the remaining pore spaces. The final stage in the process is diagenesis and will be discussed in detail below. Cementation Cementation is the diagenetic process by which coarse clastic sediments become lithified or consolidated into hard, compact rocks, usually through the deposition or precipitation of minerals in the spaces between the individual grains of sediment. Cementation can occur simultaneously with deposition or at another time. Furthermore, once a sediment is deposited, it becomes subject to cementation through the various stages of diagenesis discussed below. Shallow burial (eogenesis) Eogenesis refers to the early stages of diagenesis. This can take place at very shallow depths, ranging from a few meters to tens of meters below the surface. The changes that occur during this diagenetic phase mainly relate to the reworking of the sediments. Compaction and grain repacking, bioturbation, as well as mineralogical changes all occur to varying degrees. Due to the shallow depths, sediments undergo only minor compaction and grain rearrangement during this stage. Organisms rework sediment near the depositional interface by burrowing, crawling, and in some cases sediment ingestion. This process can destroy sedimentary structures that were present upon deposition of the sediment. Structures such as lamination will give way to new structures associated with the activity of organisms. Despite being close to the surface, eogenesis does provide conditions for important mineralogical changes to occur. This mainly involves the precipitation of new minerals. Mineralogical changes during eogenesis Mineralogical changes that occur during eogenesis are dependent on the environment in which that sediment has been deposited. For example, the formation of pyrite is characteristic of reducing conditions in marine environments. Pyrite can form as cement, or replace organic materials, such as wood fragments. 
Other important reactions include the formation of chlorite, glauconite, illite and iron oxide (if oxygenated pore water is present). The precipitation of potassium feldspar, quartz overgrowths, and carbonate cements also occurs under marine conditions. In non-marine environments oxidizing conditions are almost always prevalent, meaning iron oxides are commonly produced along with kaolin-group clay minerals. The precipitation of quartz and calcite cements may also occur in non-marine conditions. Deep burial (mesogenesis) Compaction As sediments are buried deeper, load pressures become greater, resulting in tight grain packing and bed thinning. This causes increased pressure between grains, thus increasing their solubility. As a result, the partial dissolution of silicate grains occurs. This is called pressure solution. Chemically speaking, increases in temperature can also cause chemical reaction rates to increase. This increases the solubility of most common minerals (aside from evaporites). Furthermore, beds thin and porosity decreases, allowing cementation to occur by the precipitation of silica or carbonate cements into remaining pore space. In this process minerals crystallize from watery solutions that percolate through the pores between grains of sediment. The cement that is produced may or may not have the same chemical composition as the sediment. In sandstones, framework grains are often cemented by silica or carbonate. The extent of cementation is dependent on the composition of the sediment. For example, in lithic sandstones, cementation is less extensive because pore space between framework grains is filled with a muddy matrix that leaves little space for precipitation to occur. This is often the case for mudrocks as well. As a result of compaction, the clayey sediments comprising mudrocks are relatively impermeable. Dissolution Dissolution of framework silicate grains and previously formed carbonate cement may occur during deep burial. Conditions that encourage this are essentially the opposite of those required for cementation. Rock fragments and silicate minerals of low stability, such as plagioclase feldspar, pyroxenes, and amphiboles, may dissolve as a result of increasing burial temperatures and the presence of organic acids in pore waters. The dissolution of framework grains and cements increases porosity, particularly in sandstones. Mineral replacement This refers to the process whereby one mineral is dissolved and a new mineral fills the space via precipitation. Replacement can be partial or complete. Complete replacement destroys the identity of the original minerals or rock fragments, giving a biased view of the original mineralogy of the rock. Porosity can also be affected by this process. For example, clay minerals tend to fill up pore space, thereby reducing porosity. Telogenesis After burial, siliciclastic deposits may subsequently be uplifted as a result of a mountain-building event or erosion. When uplift occurs, it exposes buried deposits to a radically new environment. Because the process brings material to or closer to the surface, sediments that undergo uplift are subjected to lower temperatures and pressures as well as slightly acidic rain water. Under these conditions, framework grains and cement are again subjected to dissolution, which in turn increases porosity. On the other hand, telogenesis can also change framework grains to clays, thus reducing porosity. 
These changes are dependent on the specific conditions to which the rock is exposed, as well as the composition of the rock and pore waters. Specific pore waters can cause the further precipitation of carbonate or silica cements. This process can also encourage the oxidation of a variety of iron-bearing minerals. Sedimentary breccias Sedimentary breccias are a type of clastic sedimentary rock which are composed of angular to subangular, randomly oriented clasts of other sedimentary rocks. They may form either: In submarine debris flows, avalanches, mud flows or mass flows in an aqueous medium. Technically, turbidites are a form of debris flow deposit and are a fine-grained peripheral deposit to a sedimentary breccia flow. As angular, poorly sorted, very immature fragments of rocks in a finer-grained groundmass which are produced by mass wasting. These are, in essence, lithified colluvium. Thick sequences of sedimentary (colluvial) breccias are generally formed next to fault scarps in grabens. In the field, it may at times be difficult to distinguish between a debris flow sedimentary breccia and a colluvial breccia, especially if one is working entirely from drilling information. Sedimentary breccias are an integral host rock for many sedimentary exhalative deposits. Igneous clastic rocks Clastic igneous rocks include pyroclastic volcanic rocks such as tuff, agglomerate and intrusive breccias, as well as some marginal eutaxitic and taxitic intrusive morphologies. Igneous clastic rocks are broken by flow, injection or explosive disruption of solid or semi-solid igneous rocks or lavas. Igneous clastic rocks can be divided into two classes: Broken, fragmental rocks produced by intrusive processes, usually associated with plutons or porphyry stocks; and broken, fragmental rocks associated with volcanic eruptions, both of lava and pyroclastic type. Metamorphic clastic rocks Clastic metamorphic rocks include breccias formed in faults, as well as some protomylonite and pseudotachylite. Occasionally, metamorphic rocks can be brecciated via hydrothermal fluids, forming a hydrofracture breccia. Hydrothermal clastic rocks Hydrothermal clastic rocks are generally restricted to those formed by hydrofracture, the process by which hydrothermal circulation cracks and brecciates the wall rocks and fills them in with veins. This is particularly prominent in epithermal ore deposits and is associated with alteration zones around many intrusive rocks, especially granites. Many skarn and greisen deposits are associated with hydrothermal breccias. Impact breccias A fairly rare form of clastic rock may form during meteorite impact. This is composed primarily of ejecta: clasts of country rock, melted rock fragments, tektites (glass ejected from the impact crater) and exotic fragments, including fragments derived from the impactor itself. Identifying a clastic rock as an impact breccia requires recognizing shatter cones, tektites, spherulites, and the morphology of an impact crater, as well as potentially recognizing particular chemical and trace element signatures, especially osmiridium.
Physical sciences
Sedimentary rocks
Earth science
16079692
https://en.wikipedia.org/wiki/Sewage%20treatment
Sewage treatment
Sewage treatment (or domestic wastewater treatment, municipal wastewater treatment) is a type of wastewater treatment which aims to remove contaminants from sewage to produce an effluent that is suitable to discharge to the surrounding environment or an intended reuse application, thereby preventing water pollution from raw sewage discharges. Sewage contains wastewater from households and businesses and possibly pre-treated industrial wastewater. There are many sewage treatment processes to choose from. These can range from decentralized systems (including on-site treatment systems) to large centralized systems involving a network of pipes and pump stations (called sewerage) which convey the sewage to a treatment plant. For cities that have a combined sewer, the sewers will also carry urban runoff (stormwater) to the sewage treatment plant. Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes and nutrient removal. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic biological processes. A so-called quaternary treatment step (sometimes referred to as advanced treatment) can also be added for the removal of organic micropollutants, such as pharmaceuticals. This has been implemented at full scale, for example, in Sweden. A large number of sewage treatment technologies have been developed, mostly using biological treatment processes. Design engineers and decision makers need to take into account technical and economic criteria of each alternative when choosing a suitable technology. Often, the main criteria for selection are: desired effluent quality, expected construction and operating costs, availability of land, energy requirements and sustainability aspects. In developing countries and in rural areas with low population densities, sewage is often treated by various on-site sanitation systems and not conveyed in sewers. These systems include septic tanks connected to drain fields, on-site sewage systems (OSS), vermifilter systems and many more. On the other hand, advanced and relatively expensive sewage treatment plants may include tertiary treatment with disinfection and possibly even a fourth treatment stage to remove micropollutants. At the global level, an estimated 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. The treatment of sewage is part of the field of sanitation. Sanitation also includes the management of human waste and solid waste as well as stormwater (drainage) management. The term sewage treatment plant is often used interchangeably with the term wastewater treatment plant. Terminology The term sewage treatment plant (STP) (or sewage treatment works) is nowadays often replaced with the term wastewater treatment plant (WWTP). Strictly speaking, the latter is a broader term that can also refer to industrial wastewater treatment. The terms water recycling center and water reclamation plant are also in use as synonyms. Purposes and overview The overall aim of treating sewage is to produce an effluent that can be discharged to the environment while causing as little water pollution as possible, or to produce an effluent that can be reused in a useful manner. 
This is achieved by removing contaminants from the sewage. It is a form of waste management. With regards to biological treatment of sewage, the treatment objectives can include various degrees of the following: to transform or remove organic matter, nutrients (nitrogen and phosphorus), pathogenic organisms, and specific trace organic constituents (micropollutants). Some types of sewage treatment produce sewage sludge which can be treated before safe disposal or reuse. Under certain circumstances, the treated sewage sludge might be termed biosolids and can be used as a fertilizer. Sewage characteristics Collection Types of treatment processes Sewage can be treated close to where the sewage is created, which may be called a decentralized system or even an on-site system (on-site sewage facility, septic tanks, etc.). Alternatively, sewage can be collected and transported by a network of pipes and pump stations to a municipal treatment plant. This is called a centralized system (see also sewerage and pipes and infrastructure). A large number of sewage treatment technologies have been developed, mostly using biological treatment processes (see list of wastewater treatment technologies). Very broadly, they can be grouped into high tech (high cost) versus low tech (low cost) options, although some technologies might fall into either category. Other grouping classifications are intensive or mechanized systems (more compact, and frequently employing high tech options) versus extensive or natural or nature-based systems (usually using natural treatment processes and occupying larger areas) systems. This classification may be sometimes oversimplified, because a treatment plant may involve a combination of processes, and the interpretation of the concepts of high tech and low tech, intensive and extensive, mechanized and natural processes may vary from place to place. Low tech, extensive or nature-based processes Examples for more low-tech, often less expensive sewage treatment systems are shown below. They often use little or no energy. Some of these systems do not provide a high level of treatment, or only treat part of the sewage (for example only the toilet wastewater), or they only provide pre-treatment, like septic tanks. On the other hand, some systems are capable of providing a good performance, satisfactory for several applications. Many of these systems are based on natural treatment processes, requiring large areas, while others are more compact. In most cases, they are used in rural areas or in small to medium-sized communities. For example, waste stabilization ponds are a low cost treatment option with practically no energy requirements but they require a lot of land. Due to their technical simplicity, most of the savings (compared with high tech systems) are in terms of operation and maintenance costs. Anaerobic digester types and anaerobic digestion, for example: Upflow anaerobic sludge blanket reactor Septic tank Imhoff tank Constructed wetland (see also biofilters) Decentralized wastewater system Nature-based solutions On-site sewage facility Sand filter Vermi filter Waste stabilization pond with sub-types: e.g. 
Facultative ponds, high rate ponds, maturation ponds Examples of systems that can provide full or partial treatment for toilet wastewater only: Composting toilet (see also dry toilets in general) Urine-diverting dry toilet Vermifilter toilet High tech, intensive or mechanized processes Examples of more high-tech, intensive or mechanized, often relatively expensive sewage treatment systems are listed below. Some of them are energy intensive as well. Many of them provide a very high level of treatment. For example, broadly speaking, the activated sludge process achieves a high effluent quality but is relatively expensive and energy intensive. Activated sludge systems Aerobic treatment system Enhanced biological phosphorus removal Expanded granular sludge bed digestion Filtration Membrane bioreactor Moving bed biofilm reactor Rotating biological contactor Trickling filter Ultraviolet disinfection Disposal or treatment options There are other process options which may be classified as disposal options, although they can also be understood as basic treatment options. These include: Application of sludge, irrigation, soak pit, leach field, fish pond, floating plant pond, water disposal/groundwater recharge, surface disposal and storage. The application of sewage to land is both a type of treatment and a type of final disposal. It leads to groundwater recharge and/or to evapotranspiration. Land application includes slow-rate systems, rapid infiltration, subsurface infiltration and overland flow. It is done by flooding, furrows, sprinklers and dripping. It is a treatment/disposal system that requires a large amount of land per person. Design aspects Population equivalent The per-person organic matter load is a parameter used in the design of sewage treatment plants. This concept is known as population equivalent (PE). The base value used for PE can vary from one country to another. Commonly used definitions worldwide are: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. This concept is also used as a comparison parameter to express the strength of industrial wastewater compared to sewage. 
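As a worked illustration of how these base values are used, the short Python sketch below converts the organic and hydraulic load of a discharge into population equivalents and checks the raw-sewage strength implied by the two definitions. The brewery figures are invented for the example and do not describe any real plant.

```python
# Population-equivalent arithmetic based on the definitions above:
# 1 PE = 60 g BOD per day and 200 liters of sewage per day.
BOD_PER_PE_G = 60.0
FLOW_PER_PE_L = 200.0

def pe_from_bod(bod_load_kg_per_day: float) -> float:
    """PE of a discharge based on its organic (BOD) load."""
    return bod_load_kg_per_day * 1000.0 / BOD_PER_PE_G

def pe_from_flow(flow_m3_per_day: float) -> float:
    """PE of a discharge based on its hydraulic load."""
    return flow_m3_per_day * 1000.0 / FLOW_PER_PE_L

# Implied strength of raw sewage from the two base values: 60 g / 200 L = 300 mg/L BOD.
print(BOD_PER_PE_G * 1000.0 / FLOW_PER_PE_L, "mg/L BOD")

# Hypothetical brewery discharging 30 kg BOD/day in 120 m3/day of effluent:
print(pe_from_bod(30.0))    # 500 PE in terms of organic load
print(pe_from_flow(120.0))  # 600 PE in terms of hydraulic load
```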
Process selection When choosing a suitable sewage treatment process, decision makers need to take into account technical and economic criteria. Therefore, each analysis is site-specific. A life cycle assessment (LCA) can be used, and criteria or weightings are attributed to the various aspects. This makes the final decision subjective to some extent. A range of publications exists to help with technology selection. In industrialized countries, the most important parameters in process selection are typically efficiency, reliability, and space requirements. In developing countries, they might be different and the focus might be more on construction and operating costs as well as process simplicity. Choosing the most suitable treatment process is complicated and requires expert inputs, often in the form of feasibility studies. This is because the important factors to be considered when evaluating and selecting sewage treatment processes are numerous. They include: process applicability, applicable flow, acceptable flow variation, influent characteristics, inhibiting or refractory compounds, climatic aspects, process kinetics and reactor hydraulics, performance, treatment residuals, sludge processing, environmental constraints, requirements for chemical products, energy and other resources; requirements for personnel, operating and maintenance; ancillary processes, reliability, complexity, compatibility, area availability. With regard to the environmental impacts of sewage treatment plants, the following aspects are included in the selection process: odors, vector attraction, sludge transportation, sanitary risks, air contamination, soil and subsoil contamination, surface water pollution or groundwater contamination, devaluation of nearby areas, inconvenience to the nearby population. Odor control Odors emitted by sewage treatment are typically an indication of an anaerobic or septic condition. Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odors with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the noxious gases. Other methods of odor control exist, including addition of iron salts, hydrogen peroxide, calcium nitrate, etc. to manage hydrogen sulfide levels. Energy requirements The energy requirements vary with the type of treatment process as well as sewage strength. For example, constructed wetlands and stabilization ponds have low energy requirements. In comparison, the activated sludge process has a high energy consumption because it includes an aeration step. Some sewage treatment plants produce biogas from their sewage sludge treatment process by using a process called anaerobic digestion. This process can produce enough energy to meet most of the energy needs of the sewage treatment plant itself. For activated sludge treatment plants in the United States, around 30 percent of the annual operating costs are usually required for energy. Most of this electricity is used for aeration, pumping systems and equipment for the dewatering and drying of sewage sludge. Advanced sewage treatment plants, e.g. for nutrient removal, require more energy than plants that only achieve primary or secondary treatment. Small rural plants using trickling filters may operate with no net energy requirements, the whole process being driven by gravitational flow, including tipping bucket flow distribution and the desludging of settlement tanks to drying beds. This is usually only practical in hilly terrain and in areas where the treatment plant is relatively remote from housing because of the difficulty in managing odors. Co-treatment of industrial effluent In highly regulated developed countries, industrial wastewater usually receives at least pretreatment if not full treatment at the factories themselves to reduce the pollutant load, before discharge to the sewer. This pretreatment has two main aims: firstly, to prevent toxic or inhibitory compounds from entering the biological stage of the sewage treatment plant and reducing its efficiency; and secondly, to avoid toxic compounds accumulating in the produced sewage sludge, which would reduce its beneficial reuse options. Some industrial wastewater may contain pollutants which cannot be removed by sewage treatment plants. 
Also, variable flow of industrial waste associated with production cycles may upset the population dynamics of biological treatment units. Design aspects of secondary treatment processes Non-sewered areas Urban residents in many parts of the world rely on on-site sanitation systems without sewers, such as septic tanks and pit latrines, and fecal sludge management in these cities is an enormous challenge. For sewage treatment, the use of septic tanks and other on-site sewage facilities (OSSF) is widespread in some rural areas, for example serving up to 20 percent of the homes in the U.S. Available process steps Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes. Different types of sewage treatment may utilize some or all of the process steps listed below. Preliminary treatment Preliminary treatment (sometimes called pretreatment) removes coarse materials that can be easily collected from the raw sewage before they damage or clog the pumps and sewage lines of primary treatment clarifiers. Screening The influent sewage passes through a bar screen to remove all large objects like cans, rags, sticks, plastic packets, etc. carried in the sewage stream. This is most commonly done with an automated mechanically raked bar screen in modern plants serving large populations, while in smaller or less modern plants, a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or flow rate. The solids are collected and later disposed of in a landfill or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant, and can cause substantial damage and inefficiency in the process. Grit removal Grit consists of sand, gravel, rocks, and other heavy materials. Preliminary treatment may include a sand or grit removal channel or chamber, where the velocity of the incoming sewage is reduced to allow the settlement of grit. Grit removal is necessary to (1) reduce the formation of deposits in primary sedimentation tanks, aeration tanks, anaerobic digesters, pipes, channels, etc.; (2) reduce the frequency of tank cleaning caused by excessive accumulation of grit; and (3) protect moving mechanical equipment from abrasion and accompanying abnormal wear. The removal of grit is essential for equipment with closely machined metal surfaces such as comminutors, fine screens, centrifuges, heat exchangers, and high-pressure diaphragm pumps. Grit chambers come in three types: horizontal grit chambers, aerated grit chambers, and vortex grit chambers. Vortex grit chambers include mechanically induced vortex, hydraulically induced vortex, and multi-tray vortex separators. Given that grit removal systems have traditionally been designed to remove clean inorganic particles greater than , most of the finer grit passes through the grit removal stage under normal flow conditions. During periods of high flow, deposited grit is resuspended and the quantity of grit reaching the treatment plant increases substantially. Flow equalization Equalization basins can be used to achieve flow equalization. This is especially useful for combined sewer systems which produce peak dry-weather flows or peak wet-weather flows that are much higher than the average flows. 
Such basins can improve the performance of the biological treatment processes and the secondary clarifiers. Disadvantages include the basins' capital cost and space requirements. Basins can also provide a place to temporarily hold, dilute and distribute batch discharges of toxic or high-strength wastewater which might otherwise inhibit biological secondary treatment (such as wastewater from portable toilets or fecal sludge that is brought to the sewage treatment plant in vacuum trucks). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators and odor control. Fat and grease removal In some larger plants, fat and grease are removed by passing the sewage through a small tank where skimmers collect the fat floating on the surface. Air blowers in the base of the tank may also be used to help recover the fat as a froth. Many plants, however, use primary clarifiers with mechanical surface skimmers for fat and grease removal. Primary treatment Primary treatment is the "removal of a portion of the suspended solids and organic matter from the sewage". It consists of allowing sewage to pass slowly through a basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface and are skimmed off. These basins are called primary sedimentation tanks or primary clarifiers and typically have a hydraulic retention time (HRT) of 1.5 to 2.5 hours. The settled and floating materials are removed and the remaining liquid may be discharged or subjected to secondary treatment. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank, from where it is pumped to sludge treatment facilities. Sewage treatment plants that are connected to a combined sewer system sometimes have a bypass arrangement after the primary treatment unit. This means that during very heavy rainfall events, the secondary and tertiary treatment systems can be bypassed to protect them from hydraulic overloading, and the mixture of sewage and storm-water receives primary treatment only. Primary sedimentation tanks remove about 50–70% of the suspended solids, and 25–40% of the biological oxygen demand (BOD). 
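To make these primary-treatment figures concrete, the following Python sketch estimates the settling volume required for an assumed plant flow at a 2-hour hydraulic retention time, and the effluent quality after primary settling using mid-range removal fractions taken from the percentages quoted above. The influent flow and concentrations are illustrative assumptions only, not design values.

```python
# Rough primary-clarifier arithmetic using the figures quoted above:
# hydraulic retention time of 1.5-2.5 h, removal of 50-70% of suspended
# solids and 25-40% of BOD. Influent values below are illustrative only.

def clarifier_volume_m3(flow_m3_per_day: float, hrt_hours: float) -> float:
    """Required settling volume: V = Q * HRT."""
    return flow_m3_per_day * hrt_hours / 24.0

def after_primary(concentration_mg_l: float, removal_fraction: float) -> float:
    """Concentration remaining after primary settling."""
    return concentration_mg_l * (1.0 - removal_fraction)

flow = 10_000.0  # m3/day, assumed plant inflow
print(round(clarifier_volume_m3(flow, hrt_hours=2.0)))   # ~833 m3 of settling volume

tss_in, bod_in = 250.0, 300.0                             # mg/L, assumed raw sewage
print(after_primary(tss_in, removal_fraction=0.60))       # ~100 mg/L TSS remaining
print(after_primary(bod_in, removal_fraction=0.33))       # ~200 mg/L BOD remaining
```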
Secondary treatment The main processes involved in secondary sewage treatment are designed to remove as much of the solid material as possible. They use biological processes to digest and remove the remaining soluble material, especially the organic fraction. This can be done with either suspended-growth or biofilm processes. The microorganisms that feed on the organic matter present in the sewage grow and multiply, constituting the biological solids, or biomass. These grow and group together in the form of flocs or biofilms and, in some specific processes, as granules. The biological floc or biofilm and remaining fine solids form a sludge which can be settled and separated. After separation, a liquid remains that is almost free of solids, and with a greatly reduced concentration of pollutants. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic processes. The organisms involved in these processes are sensitive to the presence of toxic materials, although these are not expected to be present at high concentrations in typical municipal sewage. Tertiary treatment Advanced sewage treatment generally involves three main stages, called primary, secondary and tertiary treatment, but may also include intermediate stages and final polishing processes. The purpose of tertiary treatment (also called advanced treatment) is to provide a final treatment stage to further improve the effluent quality before it is discharged to the receiving water body or reused. More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called effluent polishing. Tertiary treatment may include biological nutrient removal (alternatively, this can be classified as secondary treatment), disinfection and partial removal of micropollutants, such as environmental persistent pharmaceutical pollutants. Tertiary treatment is sometimes defined as anything more than primary and secondary treatment in order to allow discharge into a highly sensitive or fragile ecosystem such as estuaries, low-flow rivers or coral reefs. Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes. Sand filtration removes much of the residual suspended matter. Filtration over activated carbon, also called carbon adsorption, removes residual toxins. Microfiltration or synthetic membranes are used in membrane bioreactors and can also remove pathogens. Settlement and further biological improvement of treated sewage may be achieved through storage in large human-made ponds or lagoons. These lagoons are highly aerobic, and colonization by native macrophytes, especially reeds, is often encouraged. Disinfection Disinfection of treated sewage aims to kill pathogens (disease-causing microorganisms) prior to disposal. It is increasingly effective after more elements of the foregoing treatment sequence have been completed. The purpose of disinfection in the treatment of sewage is to substantially reduce the number of pathogens in the water to be discharged back into the environment or to be reused. The target level of reduction of biological contaminants like pathogens is often regulated by the presiding governmental authority. The effectiveness of disinfection depends on the quality of the water being treated (e.g. turbidity, pH, etc.), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Water with high turbidity will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light, and more so if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite. Monochloramine, which is used for drinking water, is not used in the treatment of sewage because of its persistence. Chlorination remains the most common form of treated sewage disinfection in many countries due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated-organic compounds that may be carcinogenic or harmful to the environment. 
Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment. Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In many countries, UV light is becoming the most common means of disinfection because of the concerns about the impacts of chlorine in chlorinating residual organics in the treated sewage and in chlorinating organics in the receiving water. As with UV treatment, heat sterilization also does not add chemicals to the water being treated. However, unlike UV, heat can penetrate liquids that are not transparent. Heat disinfection can also penetrate solid materials within wastewater, sterilizing their contents. Thermal effluent decontamination systems provide low-resource, low-maintenance effluent decontamination once installed. Ozone (O3) is generated by passing oxygen (O2) through a high-voltage potential, resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated on-site as needed from the oxygen in the ambient air. Ozonation also produces fewer disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators. Ozone sewage treatment requires the use of an ozone generator, which decontaminates the water as ozone bubbles percolate through the tank. Membranes can also be effective disinfectants, because they act as barriers, preventing the passage of microorganisms. As a result, the final effluent may be devoid of pathogenic organisms, depending on the type of membrane used. This principle is applied in membrane bioreactors. Biological nutrient removal Sewage may contain high levels of the nutrients nitrogen and phosphorus. Typical values for nutrient loads per person and nutrient concentrations in raw sewage in developing countries have been published as follows: 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L). The typical ranges for these values are: 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L). Excessive release to the environment can lead to nutrient pollution, which can manifest itself in eutrophication. This process can lead to algal blooms: a rapid growth, and later decay, of the algal population. 
In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies. Ammonia nitrogen, in the form of free ammonia (NH3), is toxic to fish. Ammonia nitrogen, when converted to nitrite and further to nitrate in a water body in the process of nitrification, is associated with the consumption of dissolved oxygen. Nitrite and nitrate may also have public health significance if concentrations are high in drinking water, because of a disease called methemoglobinemia. Phosphorus removal is important as phosphorus is a limiting nutrient for algae growth in many fresh water systems. Therefore, an excess of phosphorus can lead to eutrophication. It is also particularly important for water reuse systems where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis. A range of treatment processes are available to remove nitrogen and phosphorus. Biological nutrient removal (BNR) is regarded by some as a type of secondary treatment process, and by others as a tertiary (or advanced) treatment process. Nitrogen removal Nitrogen is removed through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water. Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonia (NH4+) to nitrite (NO2−) is most often facilitated by bacteria such as Nitrosomonas spp. (nitroso refers to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. (nitro referring to the formation of a nitro functional group), is now known to be facilitated in the environment predominantly by Nitrospira spp. Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. Anoxic conditions refer to a situation where oxygen is absent but nitrate is present. Denitrification is facilitated by a wide diversity of bacteria. The activated sludge process, sand filters, waste stabilization ponds, constructed wetlands and other processes can all be used to reduce nitrogen. Since denitrification is the reduction of nitrate to dinitrogen (molecular nitrogen) gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from the sewage itself), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (a mixture of recirculated mixed liquor, return activated sludge, and raw influent), e.g. by using submersible mixers, in order to achieve the desired denitrification. Over time, different treatment configurations for activated sludge processes have evolved to achieve high levels of nitrogen removal. An initial scheme was called the Ludzack–Ettinger Process. It could not achieve a high level of denitrification. The Modified Ludzack–Ettinger Process (MLE) came later and was an improvement on the original concept. It recycles mixed liquor from the discharge end of the aeration tank to the head of the anoxic tank. This provides nitrate for the facultative bacteria. There are other process configurations, such as variations of the Bardenpho process. They might differ in the placement of anoxic tanks, e.g. before and after the aeration tanks.
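For the Modified Ludzack–Ettinger layout just described, a common textbook first estimate (a rule of thumb, not a figure from the source) is that the share of nitrate that can be denitrified is capped by how much nitrified flow is recycled back to the anoxic zone. The sketch below assumes complete nitrification and complete denitrification of whatever nitrate reaches the anoxic tank; a real design would use a full nitrogen mass balance.

```python
# Illustrative design estimate for a Modified Ludzack-Ettinger (MLE) process.
# Assumes complete nitrification in the aerobic zone and complete
# denitrification of the nitrate returned to the anoxic zone.

def mle_max_nitrate_removal(internal_recycle_ratio, ras_ratio):
    """Upper-bound fraction of nitrate that can be denitrified.

    Both ratios are expressed relative to the influent flow Q
    (e.g. internal_recycle_ratio = 3 means the internal recycle is 3*Q).
    """
    r = internal_recycle_ratio + ras_ratio
    return r / (1.0 + r)

# Example: internal recycle of 3Q and return activated sludge of 0.75Q
print(f"{mle_max_nitrate_removal(3.0, 0.75):.0%}")  # about 79% of the nitrate
```

The relation shows why pushing the internal recycle much beyond about four to five times the influent flow yields diminishing returns in nitrogen removal.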
Phosphorus removal Studies of United States sewage in the late 1960s estimated mean per capita contributions of in urine and feces, in synthetic detergents, and lesser variable amounts used as corrosion and scale control chemicals in water supplies. Source control via alternative detergent formulations has subsequently reduced the largest contribution, but naturally the phosphorus content of urine and feces remained unchanged. Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass). Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride) or aluminum (e.g. alum), or lime. This may lead to higher sludge production as hydroxides precipitate, and the added chemicals can be expensive. Chemical phosphorus removal requires a significantly smaller equipment footprint than biological removal, is easier to operate and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite or zeolite. Some systems use both biological phosphorus removal and chemical phosphorus removal. The chemical phosphorus removal in those systems may be used as a backup system, for use when the biological phosphorus removal is not removing enough phosphorus, or may be used continuously. In either case, using both biological and chemical phosphorus removal has the advantage of not increasing sludge production as much as chemical phosphorus removal on its own, with the disadvantage of the increased initial cost associated with installing two different systems. Once removed, phosphorus, in the form of a phosphate-rich sewage sludge, may be sent to landfill or used as fertilizer in admixture with other digested sewage sludges. In the latter case, the treated sewage sludge is also sometimes referred to as biosolids. 22% of the world's phosphorus needs could be satisfied by recycling residential wastewater. Fourth treatment stage Micropollutants such as pharmaceuticals, ingredients of household chemicals, chemicals used in small businesses or industries, environmental persistent pharmaceutical pollutants (EPPP) or pesticides may not be eliminated in the commonly used sewage treatment processes (primary, secondary and tertiary treatment) and therefore lead to water pollution. Although concentrations of those substances and their decomposition products are quite low, there is still a chance of harming aquatic organisms. For pharmaceuticals, the following substances have been identified as toxicologically relevant: substances with endocrine-disrupting effects, genotoxic substances and substances that enhance the development of bacterial resistance. They mainly belong to the group of EPPP. Techniques for elimination of micropollutants via a fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland it has been enshrined in law since 2016. Since 1 January 2025, there has been a recast of the Urban Waste Water Treatment Directive in the European Union.
Due to the large number of amendments that have now been made, the directive was rewritten on November 27, 2024, as Directive (EU) 2024/3019, published in the EU Official Journal on December 12, and entered into force on January 1, 2025. The member states now have 31 months, i.e. until July 31, 2027, to adapt their national legislation to the new directive ("implementation of the directive"). The amendment stipulates that, in addition to stricter discharge values for nitrogen and phosphorus, persistent trace substances must be at least partially removed. The target, similar to Switzerland, is that 80% of 6 key substances out of 12 must be removed between the inflow to the sewage treatment plant and the discharge into the water body. At least 80% of the investment and operating costs for the fourth treatment stage will be passed on to the pharmaceutical and cosmetics industry according to the polluter pays principle, in order to relieve the population financially and provide an incentive for the development of more environmentally friendly products. In addition, the municipal wastewater treatment sector is to be energy neutral by 2045, and the emission of microplastics and PFAS is to be monitored. The implementation of the framework guidelines is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them. The adjustments are staggered at national level: 20% of the plants by 31 December 2033, 60% of the plants by 31 December 2039, and 100% of the plants by 31 December 2045. Wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or sensitive waters are staggered at national level: 10% of the plants by 31 December 2033, 30% of the plants by 31 December 2036, 60% of the plants by 31 December 2039, and 100% of the plants by 31 December 2045. The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained, coastal waters, and waters used as bathing waters or for mussel farming. Member States will be given the option not to apply fourth treatment in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment. Such process steps mainly consist of activated carbon filters that adsorb the micropollutants. The combination of advanced oxidation with ozone followed by granular activated carbon (GAC) has been suggested as a cost-effective treatment combination for pharmaceutical residues. For a full reduction of microplastics, the combination of ultrafiltration followed by GAC has been suggested. The use of enzymes such as laccase, which is secreted by fungi, is also under investigation. Microbial fuel cells are being investigated for their ability to treat organic matter in sewage. To reduce pharmaceuticals in water bodies, source control measures are also under investigation, such as innovations in drug development or more responsible handling of drugs. In the US, the National Take Back Initiative is a voluntary program that encourages the general public to return excess or expired drugs rather than flushing them into the sewage system.
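The 80% target described above is assessed as a relative removal of selected indicator substances between the inflow and the outflow of the treatment plant. The sketch below uses invented substance names and concentrations (none come from the source or the directive) purely to show the arithmetic of such a compliance check:

```python
# Hypothetical example: check whether the average removal of selected
# indicator micropollutants meets an 80% target. Substance names and
# concentrations are invented for illustration only.

samples = {
    # substance: (influent_ug_per_l, effluent_ug_per_l)
    "substance_A": (1.20, 0.15),
    "substance_B": (0.80, 0.10),
    "substance_C": (2.50, 0.60),
}

def removal_fraction(influent, effluent):
    """Relative removal between plant inflow and outflow."""
    return 1.0 - effluent / influent

removals = [removal_fraction(c_in, c_out) for c_in, c_out in samples.values()]
mean_removal = sum(removals) / len(removals)

print(f"mean removal: {mean_removal:.0%}")
print("meets 80% target" if mean_removal >= 0.80 else "below 80% target")
```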
Sludge treatment and disposal Environmental impacts Sewage treatment plants can have significant effects on the biotic status of receiving waters and can cause some water pollution, especially if the treatment process used is only basic. For example, for sewage treatment plants without nutrient removal, eutrophication of receiving water bodies can be a problem. In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. The report outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by improving the water quality of waterways such as rivers and lakes. After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be of high importance and a "public health priority". He compared it to the eradication of cholera in the country in the 19th century following improvements to the sewage network. The study also identified that sewage concentrations in rivers were high during periods of low flow, as well as during flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather. Whitty's comments came after the study revealed that the UK was experiencing a growth in the number of people using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open water swimming or other water sports. Despite this growth in recreation, poor water quality meant some were becoming unwell during events. Most notably, the 2024 Paris Olympics had to delay numerous swimming-focused events, such as the triathlon, due to high levels of sewage in the River Seine. Reuse Irrigation Increasingly, people use treated or even untreated sewage for irrigation to produce crops. Cities provide lucrative markets for fresh produce, so they are attractive to farmers. Because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with sewage directly to water their crops. There can be significant health hazards related to using water loaded with pathogens in this way. The World Health Organization developed guidelines for safe use of wastewater in 2006. They advocate a 'multiple-barrier' approach to wastewater use, where farmers are encouraged to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight, applying water carefully so it does not contaminate leaves likely to be eaten raw, cleaning vegetables with disinfectant, and allowing fecal sludge used in farming to dry before being used as manure. Reclaimed water Global situation Before the 20th century in Europe, sewers usually discharged into a body of water such as a river, lake, or ocean. There was no treatment, so the breakdown of the human waste was left to the ecosystem. This could lead to satisfactory results if the assimilative capacity of the ecosystem is sufficient, which is now often not the case due to increasing population density.
Today, the situation in urban areas of industrialized countries is usually that sewers route their contents to a sewage treatment plant rather than directly to a body of water. In many developing countries, however, the bulk of municipal and industrial wastewater is discharged to rivers and the ocean without any treatment, or after preliminary treatment or primary treatment only. Doing so can lead to water pollution. Few reliable figures exist on the share of the wastewater collected in sewers that is being treated worldwide. A global estimate by UNDP and UN-Habitat in 2010 was that 90% of all wastewater generated is released into the environment untreated. A more recent study in 2021 estimated that globally, about 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. As of 2022, without sufficient treatment, more than 80% of all wastewater generated globally is released into the environment. High-income nations treat, on average, 70% of the wastewater they produce, according to UN Water. Only 8% of wastewater produced in low-income nations receives any sort of treatment. The Joint Monitoring Programme (JMP) for Water Supply and Sanitation by WHO and UNICEF reported in 2021 that 82% of people with sewer connections are connected to sewage treatment plants providing at least secondary treatment. However, this value varies widely between regions. For example, in Europe, North America, Northern Africa and Western Asia, a total of 31 countries had universal (>99%) wastewater treatment. However, in Albania, Bermuda, North Macedonia and Serbia "less than 50% of sewered wastewater received secondary or better treatment", and in Algeria, Lebanon and Libya less than 20% of sewered wastewater was being treated. The report also found that "globally, 594 million people have sewer connections that don't receive sufficient treatment. Many more are connected to wastewater treatment plants that do not provide effective treatment or comply with effluent requirements." Global targets Sustainable Development Goal 6 has a Target 6.3, which is formulated as follows: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally." The corresponding Indicator 6.3.1 is the "proportion of wastewater safely treated". It is anticipated that wastewater production will rise by 24% by 2030 and by 51% by 2050. Data in 2020 showed that there is still too much uncollected household wastewater: only 66% of all household wastewater flows were collected at treatment facilities in 2020 (determined from data from 128 countries). Based on data from 42 countries in 2015, the report stated that "32 per cent of all wastewater flows generated from point sources received at least some treatment". For sewage that has indeed been collected at centralized sewage treatment plants, about 79% went on to be safely treated in 2020.
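The collection and treatment figures quoted above can be combined into an overall share of household wastewater that ends up safely treated. The product of the two 2020 figures lands close to the roughly 52% global estimate from the 2021 study mentioned earlier, although the numbers come from different sources, so this is only a rough consistency check:

```python
# Rough consistency check combining the 2020 SDG indicator figures quoted above.
collected_share = 0.66              # share of household wastewater collected at treatment facilities
safely_treated_of_collected = 0.79  # share of the collected wastewater that was safely treated

overall_safely_treated = collected_share * safely_treated_of_collected
print(f"{overall_safely_treated:.0%} of household wastewater safely treated overall")
# ~52%, in the same range as the 2021 global estimate mentioned above.
```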
History The history of sewage treatment had the following developments: it began with land application (sewage farms) in the 1840s in England, followed by chemical treatment and sedimentation of sewage in tanks, and then biological treatment in the late 19th century, which led to the development of the activated sludge process starting in 1912. Regulations In most countries, sewage collection and treatment are subject to local and national regulations and standards. Country Examples Overview Europe In the European Union, 0.8% of total energy consumption goes to wastewater treatment facilities. The European Union needs to make extra investments of €90 billion in the water and waste sector to meet its 2030 climate and energy goals. In October 2021, British Members of Parliament voted to continue allowing untreated sewage from combined sewer overflows to be released into waterways. Asia India The 'Delhi Jal Board' (DJB) is currently overseeing the construction of the largest sewage treatment plant in India. It will be operational by the end of 2022 with an estimated capacity of 564 MLD. It is intended to address the existing situation, in which untreated sewage is discharged directly into the river 'Yamuna'. Japan Africa Libya Americas United States More information Decentralized wastewater system List of largest wastewater treatment plants List of water supply and sanitation by country Nutrient Recovery and Reuse: producing agricultural nutrients from sewage Organisms involved in water purification Sanitary engineering Waste disposal
Technology
Food, water and health
null
593367
https://en.wikipedia.org/wiki/De%20Havilland%20Canada%20Dash%208
De Havilland Canada Dash 8
The De Havilland Canada DHC-8, commonly known as the Dash 8, is a series of turboprop-powered regional airliners, introduced by de Havilland Canada (DHC) in 1984. DHC was bought by Boeing in 1986, then by Bombardier in 1992, then by Longview Aviation Capital in 2019; Longview revived the De Havilland Canada brand. Powered by two Pratt & Whitney Canada PW100-series engines, it was developed from the Dash 7 with improved cruise performance and lower operational costs, but without STOL performance. The Dash 8 was offered in four sizes: the initial Series 100 (1984–2005), the more powerful Series 200 (1995–2009) with 37–40 seats, the Series 300 (1989–2009) with 50–56 seats, and the Series 400 (1999–2022) with 68–90 seats. The Q Series (Q for quiet) are post-1997 variants fitted with active noise control systems. Per a property transaction made by Bombardier before the 2019 sale to DHC, DHC had to vacate its Downsview, Toronto, manufacturing facility in August 2022, and is planning to restart Dash 8 production in Wheatland County, Alberta, by 2033. At the July 2024 Farnborough International Air Show, DHC announced orders for seven Series 400 aircraft, an order for a newly introduced quick-change combi aircraft conversion kit, and a new factory refurbishment programme. Development Initial development In the 1970s, de Havilland Canada had invested heavily in its Dash 7 project, concentrating on STOL and short-field performance, the company's traditional area of expertise. Using four medium-power engines with large, four-bladed propellers resulted in comparatively lower noise levels, which, combined with its excellent STOL characteristics, made the Dash 7 suitable for operating from small in-city airports, a market DHC felt would be compelling. However, only a handful of air carriers employed the Dash 7, as most regional airlines were more concerned about the operational costs (fuel and maintenance) of four engines than the benefits of short-field performance. In 1980, de Havilland responded by dropping the short-field performance requirement and adapting the basic Dash 7 layout to use only two, more powerful, engines. Its favoured engine supplier, Pratt & Whitney Canada, developed the new PW100 series engines for the role, more than doubling the power from its PT6. Originally designated the PT7A-2R engine, it later became the PW120. When the Dash 8 rolled out on April 19, 1983, more than 3,800 hours of testing had been accumulated over two years on five PW100 series test engines. The Dash 8 first flight was on June 20, 1983. Certification of the PW120 followed on December 16, 1983. The airliner entered service in 1984 with NorOntair, and Piedmont Airlines, formerly Henson Airlines, was the first US customer the same year. DHC resale In 1986, Boeing bought the company in a bid to improve production at DHC's Downsview Airport plants, believing the shared production in Canada would further strengthen their bargaining position with the Canadian government for a new Air Canada order for large intercontinental airliners. Air Canada was a crown corporation at the time, and both Boeing and Airbus were competing heavily via political channels for the contract. It was eventually won by Airbus, which received an order for 34 A320 aircraft. Allegations of secret commissions paid to Prime Minister of Canada Brian Mulroney are today known as the Airbus affair. Following its failure in the competition, Boeing immediately put de Havilland Canada up for sale. The company was eventually purchased by Bombardier in 1992.
Q-Series, -400 The market for new aircraft to replace existing turboprops once again grew in the mid-1990s, and DHC responded with the improved "Series 400" design. All Dash 8s delivered from the second quarter of 1996 (including all Series 400s) include the Active Noise and Vibration System designed to reduce cabin noise and vibration levels to nearly those of jet airliners. To emphasize their quietness, Bombardier renamed the go-forward production of Dash 8 models as the "Q"-Series turboprops (Q200, Q300, and Q400). The last Dash 8-100, a -102, was built in 2005. In April 2008, Bombardier announced that production of the remaining classic versions (Series Q200 and Q300) would be ended, leaving the Series Q400 as the only Dash 8 still in production. Production of the Q200 and Q300 was to cease in May 2009. A total of 672 Dash 8 classics were produced; the last one was delivered to the Japan Coast Guard in August 2008. Continuing with the Q400, the 1,000th Dash 8 was delivered in November 2010. Production Bombardier aimed to produce the Q400 more economically. A deal with its machinists union in June 2017 allowed the assembly of the wings and cockpit section outside Canada, and searches for potential partners commenced. Bombardier expected to produce the cockpit section in its plant in Queretaro, Mexico, outsourcing the wings to China's Shenyang Aircraft Corp, which already builds the Q400's centre fuselage. The Q400 components are chemically milled while older variants are assembled from bonded panels and skins. The production of the Dash 8 Series 100 stopped in 2005, and that of the Series 200 and 300 in 2009. Proposed Q400X stretch Bombardier proposed the development of a Q400 stretch with two plug-in segments, called the Q400X project, in 2007. It would compete in the 90-seat market range. In response to this project, ATR was studying a 90-seat stretch. In June 2009, Bombardier commercial aircraft president Gary Scott indicated that the Q400X would be "definitely part of our future" for possible introduction in 2013–14, although he did not detail the size of the proposed version or commit to an introduction date. In July 2010, Bombardier's vice president, Philippe Poutissou, made comments explaining that the company was still studying the prospects of designing the Q400X and talking with potential customers. At the time, Bombardier was not as committed to the Q400X as it had been previously. In May 2011, Bombardier was still strongly committed to the stretch but envisioned it more likely as a 2015 or later launch. The launch date was complicated by new powerplants from GE and PWC to be introduced in 2016. In February 2012, Bombardier was still studying the issue; at least a three-year delay was envisioned. In October 2012, a joint development deal with a government-led South Korean consortium was revealed, to develop a 90-seater turboprop regional airliner, targeting a 2019 launch date. The consortium was to have included Korea Aerospace Industries and Korean Air Lines. High-density, 90-seat Q400 At the February 2016 Singapore Airshow, Bombardier announced a high-density, 90-seat layout of the Q400, which would enter service in 2018; keeping the seat pitch of Nok Air's 86-seat layout, an extra row of seats is made possible by changing the configuration of the front right door and moving back the aft pressure bulkhead. The payload is increased by and the aircraft maintenance check intervals are increased: 800 hours from 600 for an A-check and 8,000 hours from 6,000 for a C-check.
By August 2018, the 90-seat variant was certified, ahead of its delivery to launch customer SpiceJet later in the same year. In March 2021, EASA certified the 90-seat variant for European operations; DHC believed that there were opportunities with prospective European customers. Sale to Longview, reviving the De Havilland Canada name On November 8, 2018, Canadian company Longview Aviation Capital Corporation, through its subsidiary Viking Air, acquired the entire Dash 8 program and the de Havilland brand from Bombardier, in a deal that would close by the second half of 2019. Viking had already acquired the discontinued de Havilland Canada aircraft model type certificates in 2006. By November 2018, the sales of the higher-performance Q400 were slower than those of the cheaper aircraft from ATR. Bombardier announced the sale was for $300 million and expected $250 million net. The sale was projected by Bombardier to result in $250 million annual savings. In January 2019, Longview announced that it would establish a new company in Ontario, reviving the de Havilland Aircraft Company of Canada name, to continue production of the Q400 and support the Dash 8 range. By February, the program sale was expected to close at the end of September. On June 3, 2019, the sale was closed with the newly formed De Havilland Canada (DHC) taking control of the Dash 8 program, including servicing the previous -100, -200, and -300 series. Production of the Q400 was planned to continue at the Downsview, Toronto, production facility under DHC's management. De Havilland was considering a 50-seat shrink, as North American airlines operate 870 ageing 50-seaters, mostly CRJs and Embraer ERJs. There were 17 Dash 8s scheduled for delivery in 2021, and De Havilland planned to pause production after those, as the factory lease expired in 2023. On February 17, 2021, DHC announced a pause in production, planned for the second half of 2021, due to a lack of Dash 8 orders from airlines. The manufacturer planned to vacate its Downsview, Toronto, facility and lay off 500 employees in the process. The lay-off notice resulted in Unifor, the union representing the workers, demanding a government bail-out. The company planned to restart production after the pandemic at a new location. In July 2022, DHC announced that it would review the Dash 8 programme and supply chain later in the year, and could restart production in the middle of the decade if conditions allowed. The Calgary site, where the company produced DHC-6 Twin Otters, was originally envisioned as the venue for Dash 8 production. Per a property transaction made by Bombardier prior to the 2019 sale to DHC, DHC decommissioned its Downsview, Toronto, manufacturing facility in August 2022, and in 2023 confirmed its plans to restart Dash 8 production in Wheatland County, Alberta, outside of Calgary, by 2033. At the Farnborough International Airshow in July 2024, DHC announced orders for seven Series 400 aircraft, including one for Skyward Express, two for Widerøe, and one for the Tanzania Government Flight Agency. The company also announced the launch of a factory refurbishment programme, for which 28 aircraft had been purchased, along with new freighter and combi aircraft conversion kits; one of the latter had been ordered by Advantage Air, DHC said.
Hydrogen-electric powertrain In December 2021, DHC entered into a partnership with ZeroAvia with a view to offering the ZA-2000 hydrogen-electric propulsion as an option for the DHC-8, as a line-fit option for new aircraft and as an approved retrofit for existing aircraft. In May 2023, ZeroAvia unveiled a DHC-8 Q400 donated by Alaska Airlines for use as a testbed aircraft. Design Distinguishing features of the Dash 8 design are the large T-tail intended to keep the tail free of prop wash during takeoff, a very high aspect ratio wing, the elongated engine nacelles also holding the rearward-folding landing gear, and the pointed nose profile. The Dash 8 design has better cruise performance than the Dash 7, is less expensive to operate, and is much less expensive to maintain, due largely to having only two engines. It is a little noisier than the Dash 7 and cannot match the STOL performance of its earlier DHC forebears, although it is still able to operate from small airports with runways long, compared to the required by a fully laden Dash 7. Regional jet competition The introduction of the regional jet altered the sales picture. Although more expensive than turboprops, regional jets allow airlines to operate passenger services on routes not suitable for turboprops. Turboprop aircraft have lower fuel consumption and can operate from shorter runways than regional jets, but have higher engine maintenance costs, shorter ranges, and lower cruising speeds. When world oil prices drove up short-haul airfares in 2006, an increasing number of airlines that had bought regional jets began to reassess turboprop regional airliners, which use about 30–60% less fuel than regional jets. Although the market was not as robust as in the 1980s when the first Dash 8s were introduced, 2007 had increased sales of the only two 40+ seat regional turboprops still in western production, Bombardier's Q400 and its competitor, the ATR series of 50– to 70-seat turboprops. The Q400 has a cruising speed close to that of most regional jets, and its mature engines and systems require less frequent maintenance, reducing its disadvantage. Variants The aircraft has been delivered in four series. The Series 100 has a maximum capacity of 39, the Series 200 has the same capacity but offers more powerful engines, the Series 300 is a stretched, 50-seat version, and the Series 400 is further stretched to a maximum of 90 passengers. Models delivered after 1997 have cabin noise suppression and are designated with the prefix "Q". Production of the Series 100 ceased in 2005, followed by the 200 and 300 in 2009, leaving the Q400 as the only series still in production. Series 100 The Series 100 was the original 37-39 passenger version of the Dash 8 that entered service in 1984. The original engine was the Pratt & Whitney Canada PW120 and later units used the PW121. Rated engine power is 1,800 shp (1,340 kW). DHC-8-101 1984 variant powered by either two PW120 or PW120A engines and a 33,000 lb (15,000 kg) takeoff weight. 1986 variant powered by either two PW120A or PW121 engines and a 34,500 lb (15,650 kg) takeoff weight. 1987 variant powered by two PW121 engines and a 34,500 lb (15,650 kg) takeoff weight (can be modified for a 35,200 lb [15,950 kg] take-off weight). DHC-8-102A 1990 variant powered by two PW120A engines with revised Heath Tecna interior. 1992 variant powered by two PW121 engines and a 36,300 lb (16,450 kg) takeoff weight. DHC-8-100PF DHC-8-100 converted to a freighter by Voyageur Aviation, with a cargo capacity. 
DHC-8M-100 Two aircraft for Maritime Pollution Surveillance, operated by Transport Canada, equipped with the MSS 6000 Surveillance system. CC-142 Military transport version for the Canadian Forces in Europe. CT-142 Military navigation training version for the Canadian Forces, used to train Canadian and allied nations' ACSOs and AESOPs. E-9A Widget A United States Air Force range control aircraft that ensures that the overwater military ranges in the Gulf of Mexico are clear of civilian boats and aircraft during live fire tests of air-launched missiles and other hazardous military activities. The E-9A Widget is equipped with AN/APS-143(V)-1 radar that can detect an object in the water as small as a person in a life raft, from up to away. Aircraft operate out of Tyndall Air Force Base, Florida, with two aircraft assigned to the 82nd Aerial Targets Squadron for the support of training missions. Series 200 The Series 200 aircraft maintained the same 37–39 passenger airframe as the original Series 100, but were re-engined for improved performance. The Series 200 used the more powerful Pratt & Whitney Canada PW123 engines rated at 2,150 shp (1,600 kW). DHC-8-201 1995 variant powered by two PW123C engines. 1995 variant powered by two PW123D engines. Q200 Version of the DHC-8-200 with the ANVS (Active Noise and Vibration Suppression) system. In 2000, its unit cost was US$12 million. Series 300 The Series 300 introduced a longer airframe that was stretched over the Series 100/200 and has a passenger capacity of 50–56. The Series 300 also used the Pratt & Whitney Canada PW123 engines. Rated engine power is between 2,380 shp (1,774 kW) and 2,500 shp (1,864 kW). Design service life is 80,000 flight cycles. Under an extended service program launched in 2017, the service life of the Dash 8-300 is extended by 50 percent, or approximately 15 years, to 120,000 flight cycles. 1989 variant powered by two PW123 engines. 1990 variant powered by two PW123A engines with revised Heath Tecna interior. In addition, the landing gear design changed to a slightly swept-back design intended to prevent tail strikes. 1992 variant powered by two PW123B engines. 1995 variant powered by two PW123E engines. DHC-8-300A Version of the DHC-8-300 with increased payload. Q300 Version of the DHC-8-300 with the ANVS (Active Noise and Vibration Suppression) system. DHC-8-300 MSA Upgraded variant developed with L-3 as a maritime surveillance platform. RO-6A United States military designation for the DHC-8-315 used by the United States Army as a reconnaissance platform. C-147A United States military designation for the DHC-8-315 used by the United States Army as a jump platform. In 2000, its unit cost was US$14.3 million. Series 400 The Series 400 introduced an even longer airframe that was stretched over the Series 300 ( over the Series 100/200), had slightly more wing span due to a larger wing section inboard of the engines, a stouter T-tail, and had a passenger capacity of 68–90. The Series 400 uses Pratt & Whitney Canada PW150A engines rated at 4,850 shp (3,620 kW). The aircraft has a cruise speed of 360 knots (667 km/h), which is 60–90 knots (111–166 km/h) higher than its predecessors. The maximum operating altitude is 25,000 ft (7,600 m) for the standard version, although a version with drop-down oxygen masks is offered, which increases the maximum operating altitude to 27,000 ft (8,200 m).
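As a rough illustration of what the higher cruise speed means over a typical regional sector (the 250 NM sector length and the 285-knot figure for the earlier series are assumptions for the example, and climb, descent and winds are ignored):

```python
# Illustrative cruise-time comparison over a regional sector, ignoring
# climb, descent and winds. The 250 NM sector length and the 285 kt speed
# for the earlier series are assumptions, not figures from the source.

sector_nm = 250.0

def cruise_time_min(speed_knots):
    """Still-air time in minutes to cover the sector at the given speed."""
    return sector_nm / speed_knots * 60.0

q400_speed = 360.0         # knots, quoted for the Series 400
older_dash8_speed = 285.0  # knots, assumed mid-range for the earlier series

saving = cruise_time_min(older_dash8_speed) - cruise_time_min(q400_speed)
print(f"Series 400: {cruise_time_min(q400_speed):.0f} min, "
      f"earlier series: {cruise_time_min(older_dash8_speed):.0f} min, "
      f"saving ~{saving:.0f} min")
```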
Between its service entry in 2000 and the 2018 sale to Longview/Viking, 585 had been delivered at a rate of 30–35 per year, leaving a backlog of 65 at the time of the 2018 sale. DHC-8-400 1999 variant with a maximum of 68 passengers. DHC-8-401 1999 variant with a maximum of 70 passengers. 1999 variant with a maximum of 78 passengers. Q400 Stretched and improved 70–78 passenger version that entered service in 2000. All Q400s include the ANVS (Active Noise and Vibration Suppression) system. Version of the Q400 with updated cabins, lighting, windows, overhead bins, landing gear, as well as reduced fuel and maintenance costs. In 2013, an Extra Capacity variant was introduced, capable of carrying a maximum of 86 passengers. The Extra Capacity variant was updated in 2016 with more closely spaced seats to carry up to 90 passengers. The first 90-seat aircraft was delivered to launch customer SpiceJet in September 2018. (now Q400AT) Over sixteen Q400 aircraft have been adapted to the aerial firefighting role as an airtanker. This aircraft is also called the Dash 8-400AT (airtanker only) or Dash 8-400MRE (multi-role airtanker). The French Sécurité Civile operate eight multi-role airtankers, while Conair Group is currently operating a fleet of airtanker-only variants in Canada, the US, Australia and France. Conair manufactures the airtanker-only variant from their hangars in Abbotsford, Canada. This tanker can carry 2642 US gallons or 10,000 litres of retardant, foam or water and travel at . DHC-8 MPA-D8 2007 converted for use as a maritime patrol aircraft. PAL Aerospace partnered to offer this variant as DHC-8 MPA P4. DHC-8-402PF 2008 converted pallet freighter variant with a payload of . Q400CC Cargo combi. Seats 50 passengers plus of payload. First delivered to launch customer Ryukyu Air Commuter in 2015. In 2017, its unit cost was US$32.2 million. Operators By 2017, the Q400 aircraft had logged 7 million flight hours with 60 operators and transported over 400 million passengers with a dispatch reliability over 99.5%. By July 2018, Dash 8s were in airline service: 143 Series 100 with 35 operators, 42 Series 200 with 16 operators, 151 Series 300 with 32 operators and 508 Q400s. By then, 56 orders were in backlog. Orders and deliveries Accidents and incidents The DHC-8 has been involved in 80 aviation accidents and incidents including 31 hull losses. Those resulted in fatalities. Accidents with fatalities Hull losses April 15, 1988: Horizon Air Flight 2658, operated by DHC-8-102 N819PH suffered an engine fire on climb-out from Seattle/Tacoma International Airport. An emergency landing was made but the aircraft struck equipment on the ground before crashing into two jetways. N819PH was destroyed by fire; there were no fatalities. November 23, 2009: a DHC-8-200, being operated on behalf of United States Africa Command, made an emergency landing at Tarakigné, Mali, and was substantially damaged when the undercarriage collapsed and the starboard wing was ripped off. The accident was caused by the aircraft running out of fuel 29 seconds before the crash. The captain had opted not to refuel at the previous departure airport. April 9, 2012: Air Tanzania Dash 8 5H-MWG was written off at Kigoma Airport, Tanzania, in an aborted take off. All 39 people on board survived. September 30, 2015: Luxair Flight 9562 experienced an aborted takeoff accident at Saarbrücken Airport in Germany. The Bombardier Q400 LX-LGH was damaged beyond repair when it settled back onto the runway after the gear was raised prematurely. 
The aircraft slid 2,400 feet and came to a stop with more than 1,100 feet remaining of the 6,562-foot paved runway. None of the 20 occupants were injured. May 8, 2019: Biman Bangladesh Airlines Flight 60, a Dash-8 Q400 slid off Runway 21 at Yangon International Airport, Burma, and broke into three pieces as it performed a go-around on landing. The flight originated in Dhaka, Bangladesh. Poor weather was cited as a contributing factor. At least 17 people were injured. Major landing gear accidents Accidents in 2007 In September 2007, two separate accidents of similar landing gear failures occurred within four days of each other on Scandinavian Airlines (SAS) Dash 8-Q400 aircraft. A third accident occurred in October 2007, leading to the withdrawal of the type from the airline's fleet. On September 9, 2007, the crew of SAS Flight 1209, en route from Copenhagen to Aalborg, reported problems with the locking mechanism of the right side landing gear, and Aalborg Airport was prepared for an emergency landing. Shortly after touchdown, the right main gear collapsed and the airliner skidded off the runway while fragments of the right propeller shot against the cabin, and the right engine caught fire. Of 69 passengers and four crew on board, 11 were sent to hospital, five with minor injuries. The accident was filmed by a local news channel (TV2-Nord) and broadcast live on national television. Three days later, on September 12, 2007, Scandinavian Airlines Flight 2748 from Copenhagen to Palanga had a similar problem with the landing gear, forcing the aircraft to land in Vilnius International Airport (Lithuania). No passengers or crew were injured. Immediately after this accident SAS grounded all 33 Q400 airliners in its fleet and, a few hours later, Bombardier recommended that all Q400s with more than 10,000 flights be grounded until further notice. This affected about 60 aircraft, out of 140 Q400s then in service. On October 27, 2007, Scandinavian Airlines Flight 2867 en route from Bergen to Copenhagen had severe problems with the landing gear during landing in Kastrup Airport. The right wing gear did not deploy properly (or partially), and the aircraft skidded off the runway in a controlled emergency landing. The Q400 was carrying 38 passengers, two infants, and four crew members on board. No injuries were reported. The next day, SAS permanently removed its entire Dash 8 Q400 fleet from service. In a press release on October 28, 2007, the company's president said: "Confidence in the Q400 has diminished considerably and our customers are becoming increasingly doubtful about flying in this type of aircraft. Accordingly, with the Board of Directors' approval, I have decided to immediately remove Dash 8 Q400 aircraft from service." The preliminary Danish investigation determined the latest Q400 incident was unrelated to the airline's earlier corrosion problems, in this particular case caused by a misplaced O-ring found blocking the orifice in the restrictor valve. In all, eight Q400s had landing gear failures while landing during 2007: four in Denmark, one in Germany, one in Japan, one in Lithuania, and one in South Korea. In November 2007, it was revealed that the Swedish Civil Aviation Administration had begun an investigation and found Scandinavian Airlines System culpable of cutting corners in its maintenance department. The airline reportedly made 2,300 flights in which safety equipment was not up to standard. 
On March 10, 2008, SAS ordered 27 more aircraft from Bombardier in a compensation deal: 14 Q400 NextGen turboprops and 13 CRJ900 jets. Other landing gear accidents On February 23, 2017, a Flybe Q400 suffered a right-hand gear collapse while landing at Amsterdam Schiphol Airport. There were no injuries. The cause was identified as a deformed right-hand main landing gear brace, which had been installed the night before. It is not known when the deformation had occurred. On November 10, 2017, Flybe flight BE331, operated by a Q400, was scheduled to fly from George Best Belfast City Airport to Inverness Airport. The plane reported a technical problem shortly after takeoff and was diverted to Belfast International Airport, where it landed on its nose with the front gear retracted. One minor injury was reported. On August 19, 2018, a Dash 8-200 of LC Perú on a flight from Lima to Ayacucho had to return to Lima airport and make an emergency landing due to a nose gear that could not be lowered. The aircraft landed without the nose gear down. On November 15, 2018, a Dash 8-315 belonging to PAL Airlines was unable to lower its nose gear while trying to land at Deer Lake, Newfoundland, and diverted to Stephenville, Newfoundland, performing a nose-gear-up landing. On January 18, 2024, the left main landing gear on a Q400 belonging to Ethiopian Airlines collapsed on landing at Mekelle Airport. On December 28, 2024, a PAL Airlines Q400 operating as Air Canada Express Flight 2259 from St. John's, Newfoundland, to Halifax, Nova Scotia, suffered a collapse of its left main landing gear and caught fire while landing at Halifax International Airport. Propeller overspeed incidents On October 13, 2011, Airlines PNG Flight 1600, Dash 8-103 P2-MCJ, was on approach to Madang Airport when the first officer accidentally pulled the power levers through the flight idle setting and into the beta setting while trying to reduce airspeed. In beta, which is intended for ground operations and slowing the aircraft after landing, the variable-pitch props transition to flat pitch. High-speed airflow through the improperly configured props caused them to overspeed and drive the engines, which caused engine damage and a total loss of engine power. The aircraft crashed during the ensuing off-airport forced landing attempt; both pilots, the flight attendant, and a single passenger survived with injuries, while the other 28 passengers died. The Papua New Guinea Accident Investigation Commission (AIC) found that the pilots made several other serious errors, including failing to lower the landing gear and flaps, which would have slowed the aircraft and reduced the severity of the crash; however, the AIC primarily attributed the accident to the fact that it was possible to actuate beta in flight, coupled with inadequate training for flight crews to recognize and correct this situation. A beta lockout feature was available as an option for the Dash 8, but it had not been installed in P2-MCJ, and the beta warning horn had been inoperative. On December 6, 2011, QantasLink Dash 8-315 VH-SBV, on a scheduled passenger flight to Weipa Airport, encountered turbulence while the first officer's hand was resting on the power levers. The first officer inadvertently placed the levers in beta and the propellers began to overspeed.
The beta warning horn sounded, initially confusing the pilots, but the horn together with the audible increase in propeller speed prompted them to quickly diagnose the problem and place the power levers back in flight idle before engine damage occurred. The flight landed safely without further incident. While investigating these events, the AIC and the Australian Transport Safety Bureau (ATSB) determined that a number of similar inadvertent in-flight beta actuations had occurred in the Dash 8, and recommended steps be taken to prevent it, including more thorough pilot training. In 2012, in cooperation with the AIC and ATSB, Transport Canada issued an airworthiness directive (AD) mandating the installation of beta lockout on all Dash 8 aircraft that did not have it, a second AD mandating more frequent testing of the beta warning horn, and a service bulletin requiring a cockpit placard to warn pilots not to move the power levers below the flight idle setting while airborne. Specifications
Technology
Specific aircraft_2
null
593693
https://en.wikipedia.org/wiki/Point%20%28geometry%29
Point (geometry)
In geometry, a point is an abstract idealization of an exact position, without size, in physical space, or its generalization to other kinds of mathematical spaces. As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves, two-dimensional surfaces, and higher-dimensional objects consist; conversely, a point can be determined by the intersection of two curves or three surfaces, called a vertex or corner. In classical Euclidean geometry, a point is a primitive notion, defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms, that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points". As physical diagrams, geometric figures are made with tools such as a compass, scriber, or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve. Since the advent of analytic geometry, points are often defined or represented in terms of numerical coordinates. In modern mathematics, a space of points is typically treated as a set, a point set. An isolated point is an element of some subset of points which has some neighborhood containing no other points of the subset. Points in Euclidean geometry Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuplet of n terms, \((a_1, a_2, \dots, a_n)\), where n is the dimension of the space in which the point is located. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form \(L = \{(a_1, a_2, \dots, a_n) \mid a_1 c_1 + a_2 c_2 + \cdots + a_n c_n = d\}\), where \(c_1\) through \(c_n\) and \(d\) are constants and \(n\) is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions. Dimension of a point There are several inequivalent definitions of dimension in mathematics.
In all of the common definitions, a point is 0-dimensional. Vector space dimension The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no linearly independent subset. The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: \(1 \cdot \mathbf{0} = \mathbf{0}\). Topological dimension The topological dimension of a topological space \(X\) is defined to be the minimum value of \(n\) such that every finite open cover \(\mathcal{A}\) of \(X\) admits a finite open cover \(\mathcal{B}\) of \(X\) which refines \(\mathcal{A}\) and in which no point is included in more than \(n+1\) elements. If no such minimal \(n\) exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. Hausdorff dimension Let X be a metric space. If \(S \subset X\) and \(d \in [0, \infty)\), the d-dimensional Hausdorff content of S, denoted \(C_H^d(S)\), is the infimum of the set of numbers \(\delta \ge 0\) such that there is some (indexed) collection of balls \(\{B(x_i, r_i) : i \in I\}\) covering S with \(r_i > 0\) for each \(i \in I\) that satisfies \(\sum_{i \in I} r_i^d < \delta\). The Hausdorff dimension of X is defined by \(\dim_{\mathrm H}(X) := \inf\{d \ge 0 : C_H^d(X) = 0\}\). A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. Geometry without points Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology. A "pointless" or "pointfree" space is defined not as a set, but via some structure (algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of inclusion or connection. Point masses and the Dirac delta function Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism, where electrons are idealized as points with non-zero charge). The Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1.
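A small numerical sketch (illustrative only, not part of the source) of the two deltas mentioned above: the Kronecker delta on a discrete domain, and a narrow Gaussian, one common approximation to the Dirac delta, whose integral stays close to one as it is made narrower:

```python
import math

def kronecker_delta(i, j):
    """Discrete analog: 1 if the indices coincide, 0 otherwise."""
    return 1 if i == j else 0

def gaussian(x, eps):
    """Narrow Gaussian of width eps, a standard approximation to the Dirac delta."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=20000):
    """Trapezoid-rule integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

print(kronecker_delta(3, 3), kronecker_delta(3, 4))  # 1 0
for eps in (0.1, 0.01):
    # The integral stays near 1 as eps shrinks, mimicking the delta's unit integral.
    print(eps, round(integrate(lambda x: gaussian(x, eps), -1.0, 1.0), 6))
```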
Mathematics
Geometry
null
593703
https://en.wikipedia.org/wiki/Abdominal%20pain
Abdominal pain
Abdominal pain, also known as a stomach ache, is a symptom associated with both non-serious and serious medical issues. Since the abdomen contains most of the body's vital organs, it can be an indicator of a wide variety of diseases. Because of this, a careful approach to examining the person and planning the differential diagnosis is extremely important. Common causes of pain in the abdomen include gastroenteritis and irritable bowel syndrome. About 15% of people have a more serious underlying condition such as appendicitis, leaking or ruptured abdominal aortic aneurysm, diverticulitis, or ectopic pregnancy. In a third of cases, the exact cause is unclear. Signs and symptoms The onset of abdominal pain can be abrupt, rapid, or gradual. Sudden onset pain happens in a split second. Rapid onset pain starts mild and gets worse over the next few minutes. Pain that gradually intensifies only after several hours or even days have passed is referred to as gradual onset pain. One can describe abdominal pain as either continuous or sporadic and as cramping, dull, or aching. The characteristic of cramping abdominal pain is that it comes in brief waves, builds to a peak, and then abruptly stops for a period during which there is no more pain. The pain flares up and subsides periodically. The most common cause of persistent dull or aching abdominal pain is edema or distention of the wall of a hollow viscus. A dull or aching pain may also be felt due to a stretch in the liver and spleen capsules. Causes The most frequent reasons for abdominal pain are gastroenteritis (13%), irritable bowel syndrome (8%), urinary tract problems (5%), inflammation of the stomach (5%) and constipation (5%). In about 30% of cases, the cause is not determined. About 10% of cases have a more serious cause including gallbladder (gallstones or biliary dyskinesia) or pancreas problems (4%), diverticulitis (3%), appendicitis (2%) and cancer (1%). More common in those who are older, ischemic colitis, mesenteric ischemia, and abdominal aortic aneurysms are other serious causes. Acute abdomen Acute abdomen is a condition where there is a sudden onset of severe abdominal pain requiring immediate recognition and management of the underlying cause. The underlying cause may involve infection, inflammation, vascular occlusion or bowel obstruction. The pain may be accompanied by nausea and vomiting, abdominal distention, fever and signs of shock. A common condition associated with acute abdominal pain is appendicitis.
Here is a list of acute abdomen causes: By system A more extensive list includes the following: Gastrointestinal GI tract Inflammatory: gastroenteritis, appendicitis, gastritis, esophagitis, diverticulitis, Crohn's disease, ulcerative colitis, microscopic colitis Obstruction: hernia, intussusception, volvulus, post-surgical adhesions, tumors, severe constipation, hemorrhoids Vascular: embolism, thrombosis, hemorrhage, sickle cell disease, abdominal angina, blood vessel compression (such as celiac artery compression syndrome), superior mesenteric artery syndrome, postural orthostatic tachycardia syndrome Digestive: peptic ulcer, lactose intolerance, celiac disease, food allergies, indigestion Glands Bile system Inflammatory: cholecystitis, cholangitis Obstruction: cholelithiasis Liver Inflammatory: hepatitis, liver abscess Pancreatic Inflammatory: pancreatitis Renal and urological Inflammation: pyelonephritis, bladder infection Obstruction: kidney stones, urolithiasis, urinary retention Vascular: left renal vein entrapment Gynaecological or obstetric Inflammatory: pelvic inflammatory disease Mechanical: ovarian torsion Endocrinological: menstruation, Mittelschmerz Tumors: endometriosis, fibroids, ovarian cyst, ovarian cancer Pregnancy: ruptured ectopic pregnancy, threatened abortion Abdominal wall muscle strain or trauma muscular infection neurogenic pain: herpes zoster, radiculitis in Lyme disease, abdominal cutaneous nerve entrapment syndrome (ACNES), tabes dorsalis Referred pain from the thorax: pneumonia, pulmonary embolism, ischemic heart disease, pericarditis from the spine: radiculitis from the genitals: testicular torsion Metabolic disturbance uremia, diabetic ketoacidosis, porphyria, C1-esterase inhibitor deficiency, adrenal insufficiency, lead poisoning, black widow spider bite, narcotic withdrawal Blood vessels aortic dissection, abdominal aortic aneurysm Immune system sarcoidosis vasculitis familial Mediterranean fever Idiopathic irritable bowel syndrome (IBS) (affecting up to 20% of the population, IBS is the most common cause of recurrent and intermittent abdominal pain) By location The location of abdominal pain can provide information about what may be causing the pain. The abdomen can be divided into four regions called quadrants. 
Locations and associated conditions include:
Diffuse
- Peritonitis
- Vascular: mesenteric ischemia, ischemic colitis, Henoch-Schonlein purpura, sickle cell disease, systemic lupus erythematosus, polyarteritis nodosa
- Small bowel obstruction
- Irritable bowel syndrome
- Metabolic disorders: ketoacidosis, porphyria, familial Mediterranean fever, adrenal crisis
Epigastric
- Heart: myocardial infarction, pericarditis
- Stomach: gastritis, stomach ulcer, stomach cancer
- Pancreas: pancreatitis, pancreatic cancer
- Intestinal: duodenal ulcer, diverticulitis, appendicitis
Right upper quadrant
- Liver: hepatomegaly, fatty liver, hepatitis, liver cancer, abscess
- Gallbladder and biliary tract: inflammation, gallstones, worm infection, cholangitis
- Colon: bowel obstruction, functional disorders, gas accumulation, spasm, inflammation, colon cancer
- Other: pneumonia, Fitz-Hugh-Curtis syndrome
Left upper quadrant
- Splenomegaly
- Colon: bowel obstruction, functional disorders, gas accumulation, spasm, inflammation, colon cancer
Peri-umbilical (the area around the umbilicus, i.e., the belly button)
- Appendicitis
- Pancreatitis
- Inferior myocardial infarction
- Peptic ulcer
- Diabetic ketoacidosis
- Vascular: aortic dissection, aortic rupture
- Bowel: mesenteric ischemia, celiac disease, inflammation, intestinal spasm, functional disorders, small bowel obstruction
Lower abdominal pain
- Diarrhea
- Colitis
- Crohn's disease
- Dysentery
- Hernia
Right lower quadrant
- Colon: intussusception, bowel obstruction, appendicitis (McBurney's point)
- Renal: kidney stone (nephrolithiasis), pyelonephritis
- Pelvic: cystitis, bladder stone, bladder cancer, pelvic inflammatory disease, pelvic pain syndrome
- Gynecologic: endometriosis, intrauterine pregnancy, ectopic pregnancy, ovarian cyst, ovarian torsion, fibroid (leiomyoma), abscess, ovarian cancer, endometrial cancer
Left lower quadrant
- Bowel: diverticulitis, sigmoid colon volvulus, bowel obstruction, gas accumulation, toxic megacolon
Right low back pain
- Liver: hepatomegaly
- Kidney: kidney stone (nephrolithiasis), complicated urinary tract infection
Left low back pain
- Spleen
- Kidney: kidney stone (nephrolithiasis), complicated urinary tract infection
Low back pain
- Kidney pain (kidney stone, kidney cancer, hydronephrosis)
- Ureteral stone pain
Mechanism Abdominal pain can be referred to as visceral pain or peritoneal pain. The contents of the abdomen can be divided into the foregut, midgut, and hindgut. The foregut contains the pharynx, lower respiratory tract, portions of the esophagus, stomach, portions of the duodenum (proximal), liver, biliary tract (including the gallbladder and bile ducts), and the pancreas. The midgut contains portions of the duodenum (distal), cecum, appendix, ascending colon, and first half of the transverse colon. The hindgut contains the distal half of the transverse colon, descending colon, sigmoid colon, rectum, and superior anal canal. Each subsection of the gut has an associated visceral afferent nerve that transmits sensory information from the viscera to the spinal cord. The visceral sensory information from the gut traveling to the spinal cord, termed the visceral afferent, is non-specific and overlaps with the somatic afferent nerves, which are very specific. Therefore, visceral afferent information traveling to the spinal cord can present in the distribution of the somatic afferent nerve; this is why appendicitis initially presents with T10 periumbilical pain and later becomes T12 pain as the abdominal wall peritoneum (which is rich in somatic afferent nerves) becomes involved.
Diagnosis A thorough patient history and physical examination are used to better understand the underlying cause of abdominal pain. The process of gathering a history may include:
- Identifying more information about the chief complaint by eliciting a history of present illness, i.e. a narrative of the current symptoms such as the onset, location, duration, character, aggravating or relieving factors, and temporal nature of the pain.
- Identifying other factors that may aid in the diagnosis of the underlying cause of abdominal pain, such as recent travel, recent contact with other ill individuals, and, for females, a thorough gynecologic history.
- Learning about the patient's past medical history, focusing on any prior issues or surgical procedures.
- Clarifying the patient's current medication regimen, including prescriptions, over-the-counter medications, and supplements.
- Confirming the patient's drug and food allergies.
- Discussing with the patient any family history of disease processes, focusing on conditions that might resemble the patient's current presentation.
- Discussing with the patient any health-related behaviors (e.g. tobacco use, alcohol consumption, drug use, and sexual activity) that might make certain diagnoses more likely.
- Reviewing the presence of non-abdominal symptoms (e.g., fever, chills, chest pain, shortness of breath, vaginal bleeding) that can further clarify the diagnostic picture.
- Using Carnett's sign to differentiate between visceral pain and pain originating in the muscles of the abdominal wall.
After gathering a thorough history, one should perform a physical exam in order to identify important physical signs that might clarify the diagnosis, including a cardiovascular exam, lung exam, thorough abdominal exam, and, for females, a genitourinary exam. Additional investigations that can aid diagnosis include:
- Blood tests including complete blood count, basic metabolic panel, electrolytes, liver function tests, amylase, lipase, troponin I, and, for females, a serum pregnancy test
- Urinalysis
- Imaging including chest and abdominal X-rays
- Electrocardiogram
If diagnosis remains unclear after history, examination, and basic investigations as above, then more advanced investigations may reveal a diagnosis. Such tests include:
- Computed tomography of the abdomen/pelvis
- Abdominal or pelvic ultrasound
- Endoscopy or colonoscopy
Management The management of abdominal pain depends on many factors, including the etiology of the pain. Some behavioural changes implemented to prevent pain include resting after a meal, chewing food completely and slowly, and avoiding stressful and high-excitement situations after a meal. Such at-home strategies may reduce the need to seek professional assistance by preventing future abdominal pain. In the emergency department, a person presenting with abdominal pain may initially require IV fluids due to decreased intake secondary to abdominal pain and possible emesis (vomiting). Treatment for abdominal pain includes analgesia, such as non-opioid (ketorolac) and opioid medications (morphine, fentanyl). The choice of analgesia depends on the cause of the pain, as ketorolac can worsen some intra-abdominal processes. Patients presenting to the emergency department with abdominal pain may receive a "GI cocktail" that includes an antacid (examples include omeprazole, ranitidine, magnesium hydroxide, and calcium chloride) and lidocaine. After addressing pain, there may be a role for antimicrobial treatment in some cases of abdominal pain.
Butylscopolamine (Buscopan) is used to treat cramping abdominal pain with some success. Surgical management for causes of abdominal pain includes but is not limited to cholecystectomy, appendectomy, and exploratory laparotomy. Emergencies Some causes of abdominal pain, such as those producing an acute abdomen, are medical emergencies requiring immediate recognition and treatment. Outlook One well-known feature of primary health care is the low prevalence of potentially dangerous causes of abdominal pain. Patients with abdominal pain have a higher percentage of unexplained complaints (category "no diagnosis") than patients with other symptoms (such as dyspnea or chest pain). Most people who suffer from stomach pain have a benign issue, like dyspepsia. Overall, about 20% to 25% of patients with abdominal pain are found to have a serious condition that necessitates admission to an acute care hospital. Epidemiology Abdominal pain is the reason about 3% of adults see their family physician. Rates of emergency department (ED) visits in the United States for abdominal pain increased 18% from 2006 through 2011. This was the largest increase out of 20 common conditions seen in the ED. The rate of ED use for nausea and vomiting also increased 18%. Special populations Geriatrics More time and resources are used on older patients with abdominal pain than on any other patient presentation in the emergency department (ED). Compared to younger patients with the same complaint, their length of stay is 20% longer, they are admitted almost half the time, and they need surgery a third of the time. Age does not reduce the total number of T cells, but it does reduce their functionality, so the elderly person's ability to fight infection is weakened. Additionally, the strength and integrity of their skin and mucous membranes, which are physical barriers to infection, are diminished. It is well known that older patients experience altered pain perception. The challenge of obtaining a sufficient history from an elderly patient can be attributed to multiple factors; reduced memory or hearing can make the issue worse. It is common to encounter stoicism combined with a fear of losing one's independence if a serious condition is discovered. Changes in mental status, whether acute or chronic, are common. Pregnancy Unique clinical challenges arise when pregnant women experience abdominal pain. First, there are many possible causes of abdominal pain during pregnancy; these include intraabdominal diseases that arise incidentally during pregnancy as well as obstetric or gynecologic disorders associated with pregnancy. Second, pregnancy modifies the natural history and clinical manifestation of numerous abdominal disorders. Third, pregnancy modifies and limits the diagnostic assessment; for instance, concerns about fetal safety are raised by invasive exams and radiologic testing. Fourth, while a pregnant woman is receiving therapy, the interests of both the mother and the fetus need to be taken into account.
Biology and health sciences
Symptoms and signs
Health
593758
https://en.wikipedia.org/wiki/Dichromacy
Dichromacy
Dichromacy (from Greek di, meaning "two" and chromo, meaning "color") is the state of having two types of functioning photoreceptors, called cone cells, in the eyes. Organisms with dichromacy are called dichromats. Dichromats require only two primary colors to be able to represent their visible gamut. By comparison, trichromats need three primary colors, and tetrachromats need four. Likewise, every color in a dichromat's gamut can be evoked with monochromatic light. By comparison, every color in a trichromat's gamut can be evoked with a combination of monochromatic light and white light. Dichromacy in humans is a color vision deficiency in which one of the three cone cells is absent or not functioning and color is thereby reduced to two dimensions. Perception Dichromatic color vision is enabled by two types of cone cells with different spectral sensitivities, together with the neural framework to compare the excitation of the different cone cells. The resulting color vision is simpler than typical human trichromatic color vision, and much simpler than the tetrachromatic color vision typical of birds and fish. A dichromatic color space can be defined by only two primary colors. When these primary colors are also the unique hues, the color space contains the individual's entire gamut. In dichromacy, the unique hues can be evoked by exciting only a single cone at a time, e.g. with monochromatic light near the extremes of the visible spectrum. A dichromatic color space can also be defined by non-unique hues, but the color space will then not contain the individual's entire gamut. For comparison, a trichromatic color space requires three primary colors to be defined. However, even when choosing three pure spectral colors as the primaries, the resulting color space will never encompass a trichromatic individual's entire gamut. The color vision of dichromats can be represented in a 2-dimensional plane, where one coordinate represents brightness and the other represents hue. However, the perception of hue is not directly analogous to trichromatic hue, but rather a spectrum diverging from white (neutral) in the middle to two unique hues at the extremes, e.g. blue and yellow. Unlike in trichromats, white (experienced when both cone types are equally excited) can be evoked by monochromatic light, which means that dichromats see white in the rainbow. Humans Dichromacy in humans is a form of color blindness (color vision deficiency). Normal human color vision is trichromatic, so dichromacy results from losing the functionality of one of the three cone cells. The classification of human dichromacy depends on which cone is missing: Protanopia is a severe form of red-green color blindness, in which the L-cone is absent. It is sex-linked and affects about 1% of males. Colors of confusion include blue/purple and green/yellow. Deuteranopia is a severe form of red-green color blindness, in which the M-cone is absent. It is sex-linked and affects about 1% of males. Color vision is very similar to that in protanopia. Tritanopia is a severe form of blue-yellow color blindness, in which the S-cone is absent. It is much rarer than the other types, occurring in about 1 in 100,000, but it is not sex-linked, so it affects females and males at similar rates. Tritanopes tend to confuse greens and blues, and yellow can appear pink. Diagnosis The three determining elements of a dichromatic opponent-color space are the missing color, the null-luminance plane, and the null-chrominance plane.
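The two-coordinate (brightness and hue) description given in the Perception section above can be illustrated with a toy numerical sketch. The following Python snippet is an assumed, highly simplified opponent-style model made up for illustration, not the actual physiology or any standard colorimetric transform: it maps the excitations of a dichromat's two cone classes to a brightness value and a hue coordinate that runs from one unique hue through white (neutral) to the other unique hue.

```python
# Toy illustrative model (an assumption, not the physiological mechanism):
# map S- and L-cone excitations to a (brightness, hue) pair for a dichromat.
def dichromat_coordinates(s_excitation: float, l_excitation: float):
    """Return (brightness, hue) for cone excitations in the range [0, 1]."""
    brightness = s_excitation + l_excitation
    if brightness == 0:
        return 0.0, 0.0  # darkness: hue undefined, treated here as neutral
    # Opponent-style hue coordinate: -1 = only the S-cone excited (one unique
    # hue), +1 = only the L-cone excited (the other unique hue), 0 = white.
    hue = (l_excitation - s_excitation) / brightness
    return brightness, hue

print(dichromat_coordinates(0.8, 0.8))  # equal excitation -> hue 0.0 (white)
print(dichromat_coordinates(0.0, 0.9))  # only L excited   -> hue +1.0 (a unique hue)
```

In this toy model, equal excitation of both cone classes lands at the neutral point, matching the statement above that white is experienced when both cone types are equally excited.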
The description of the phenomenon itself does not indicate which color is impaired for the dichromat; however, it does provide enough information to identify the fundamental color space, i.e. the colors that are seen by the dichromat. This is based on testing both the null-chrominance plane and the null-luminance plane, which intersect at the missing color. Colors in the color space that excite the remaining cones are visible to the dichromat, and those that do not excite them are the missing colors. Color detecting abilities of dichromats According to color vision researchers at the Medical College of Wisconsin (including Jay Neitz), each of the three standard color-detecting cones in the retina of trichromats (blue, green and red) can pick up about 100 different gradations of color. If each detector is independent of the others, the total number of colors discernible by an average human is their product (100 × 100 × 100), i.e. about 1 million; nevertheless, other researchers have put the number at upwards of 2.3 million. The same calculation suggests that a dichromat (such as a human with red-green color blindness) would be able to distinguish about 100 × 100 = 10,000 different colors, but no such calculation has been verified by psychophysical testing. Furthermore, dichromats have a significantly higher threshold than trichromats for colored stimuli flickering at low (1 Hz) frequencies. At higher (10 or 16 Hz) frequencies, dichromats perform as well as or better than trichromats. This means that such dichromatic animals would still perceive the flicker, rather than a temporally fused visual perception, as occurs when humans watch movies at a high enough frame rate. Mammals Until the 1960s, popular belief held that most mammals outside of primates were monochromats. In the last half-century, however, a focus on behavioral and genetic testing of mammals has accumulated extensive evidence of dichromatic color vision in a number of mammalian orders. Mammals are now usually assumed to be dichromats (possessing S- and L-cones), with monochromats viewed as the exceptions. The common vertebrate ancestor, extant during the Cambrian, was tetrachromatic, possessing four distinct opsin classes. Early mammalian evolution saw the loss of two of these four opsins, due to the nocturnal bottleneck, as dichromacy may improve an animal's ability to distinguish colors in dim light. Placental mammals are therefore, as a rule, dichromatic. The exceptions to this rule of dichromatic vision in placental mammals are Old World monkeys and apes, which re-evolved trichromacy, and marine mammals (both pinnipeds and cetaceans), which are cone monochromats. New World monkeys are a partial exception: in most species, males are dichromats, and about 60% of females are trichromats, but the owl monkeys are cone monochromats, and both sexes of howler monkeys are trichromats. Trichromacy has been retained or re-evolved in marsupials, where trichromatic vision is widespread. Recent genetic and behavioral evidence suggests the South American marsupial Didelphis albiventris is dichromatic, with only two classes of cone opsins having been found within the genus Didelphis.
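To make the back-of-the-envelope arithmetic quoted above concrete, here is a minimal illustrative Python sketch (not from the source, and not a vision model): it simply raises the assumed 100 gradations per cone class to the power of the number of cone classes, which is the product rule described in the paragraph, and reproduces the roughly 10,000-color estimate for dichromats and 1-million-color estimate for trichromats.

```python
# Illustrative sketch of the rough estimate quoted above: if each cone class
# registers about 100 gradations independently, the number of distinguishable
# colors scales as 100 ** (number of cone classes).
GRADATIONS_PER_CONE = 100  # approximate figure cited in the text

def estimated_colors(num_cone_classes: int) -> int:
    """Rough product-rule estimate of distinguishable colors."""
    return GRADATIONS_PER_CONE ** num_cone_classes

for name, cones in [("monochromat", 1), ("dichromat", 2), ("trichromat", 3)]:
    print(f"{name}: about {estimated_colors(cones):,} distinguishable colors")
# dichromat -> about 10,000; trichromat -> about 1,000,000
```

As the text notes, this is only a heuristic; it assumes the cone signals are independent and has not been verified psychophysically.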
Biology and health sciences
Visual system
Biology
593765
https://en.wikipedia.org/wiki/Auto%20rickshaw
Auto rickshaw
An auto rickshaw is a motorized version of the pulled rickshaw or cycle rickshaw. Most have three wheels and do not tilt. They are known by many terms in various countries, including 3wheel, Adaidaita Sahu, Keke-napep, Maruwa, auto rickshaw, auto, baby taxi, bajaj, bao-bao, chand gari, CNG, easy bike, jonnybee, lapa, lapa-lapa, mototaxi, pigeon, pragya, tuk-tuk, tukxi, tum-tum and tempo. The auto rickshaw is a common form of transport around the world, both as a vehicle for hire and for private use. They are especially common in countries with tropical or subtropical climates, since they are usually not fully enclosed, and they are found in many developing countries because they are relatively inexpensive to own and operate. There are many different auto rickshaw designs. The most common type is characterized by a sheet-metal body or open frame resting on three wheels; a canvas roof with drop-down side curtains; a small cabin at the front for the driver operating handlebar controls; and a cargo, passenger, or dual-purpose space at the rear. Another type is a motorcycle that has an expanded sidecar or, less often, pushes or pulls a passenger compartment. As of 2019, Bajaj Auto of India is the world's largest auto rickshaw manufacturer, selling 780,000 during the 2019 fiscal year. Origin In the 1930s, Japan, which was the most industrialized country in Asia at the time, encouraged the development of motorized vehicles, including less expensive three-wheeled vehicles based on motorcycles. The Mazda-Go, a 3-wheel open "truck" released in 1931, is often considered the first of what became auto rickshaws. Later that decade the Japanese Ministry of Posts and Telecommunications distributed about 20,000 used three-wheelers to Southeast Asia as part of efforts to expand Japan's influence in the region. They became popular in some areas, especially Thailand, which developed local manufacturing and design after the Japanese government abolished the three-wheeler license in Japan in 1965. Production in Southeast Asia started from the knockdown production of the Daihatsu Midget, which was introduced in 1959. An exception is the indigenously modified Philippine tricycle, which originates from the Rikuo Type 97 motorcycle with a sidecar, introduced to the islands in 1941 by the Imperial Japanese Army during World War II. In Europe, Corradino D'Ascanio, aircraft designer at Piaggio and inventor of the Vespa, came up with the idea of building a light three-wheeled commercial vehicle to power Italy's post-war economic reconstruction. The Piaggio Ape followed in 1948. Regional variations Africa and the Middle East Egypt Locally named the "tuktuk", the rickshaw is used as a means of transportation in most parts of Egypt. It is generally rare in affluent and newer parts of cities such as New Cairo and Heliopolis, and on highways, due to police control and enforcement. Gaza Together with the recent boom of recreational facilities for local residents in Gaza, donkey carts had by 2010 been all but displaced by tuk-tuks. Due to the ban by Egypt and Israel on the import of most motorised vehicles, the tuk-tuks have had to be smuggled in parts through the tunnel network connecting Gaza with Egypt.
Iraq Due to extreme congestion in Baghdad and other Iraqi cities, combined with the high cost of vehicles and frequent violence, rickshaws have been imported from India in large numbers to provide taxi service and serve other purposes. This is in stark contrast to attitudes before the 2003 U.S. invasion, when rickshaws were disdained and sedans were held in high regard as a status symbol. Rickshaws have also been noted for being instrumental in political protests. Madagascar In Madagascar, man-powered rickshaws are a common form of transportation in a number of cities, especially Antsirabe. They are known as "posy" from pousse-pousse, meaning push-push. Cycle rickshaws have taken off since 2006 in a number of flat cities like Toamasina, replacing most of the posy, and are now themselves threatened by auto rickshaws, introduced in 2009. Provincial capitals like Toamasina, Mahajanga, Toliara, and Antsiranana are taking to them rapidly. They are known as "bajaji" in the north and "tuk-tuk" or "tik-tik" in the east, and are now licensed to operate as taxis. They are not yet allowed an operating licence in the congested and more polluted national capital, Antananarivo. Nigeria The auto rickshaw is used to provide transportation in cities all over Nigeria. Popularity and use vary across the country. In Lagos, for example, the "keke" (Hausa for bicycle) is regulated and transportation on the state's highways is prohibited, while in Kano it is popularly known as "Adaidaita Sahu". South Africa Tuk-tuks, introduced in Durban in the late 1980s, have enjoyed growing popularity in recent years, particularly in Gauteng. In Cape Town they are used to deliver groceries and, more recently, to transport tourists. Sudan Rickshaws, known as "Raksha" in Sudan, are the most common means of transportation, followed by the bus, in the capital Khartoum. Tanzania Locally known as "bajaji", they are a common mode of transportation in Dar es Salaam and many other cities and villages. Uganda In 2020, a local delivery company called Sokowatch began a pilot project using electric tuk-tuks to cut pollution. Zimbabwe Hende Moto EV & Taxi company was founded in 2019 by Devine Mafa, an American-Zimbabwean businessman. Hende Moto taxis, the first vehicles manufactured by the Zimbabwean three-wheeler manufacturing company Hende Moto Pvt Ltd, were first introduced in Kwekwe in August 2019, and later that year in Victoria Falls City and then Harare. Hende Moto is also the manufacturer of the first Zimbabwean-made electric passenger three-wheeled vehicle. It operates on a lithium-ion battery that has a range of 70 miles on a 6-hour charge. South Asia Afghanistan Auto rickshaws are very common in the eastern Afghan city of Jalalabad, where they are popularly decorated with art and colors. They are also popular in the northern city of Kunduz. Bangladesh Auto rickshaws (locally called "baby taxis" and more recently "CNGs" due to their fuel source, compressed natural gas) are one of the more popular modes of transport in Bangladesh, mainly due to their size and speed. They are best suited to narrow, crowded streets, and are thus the principal means of covering longer distances within urban areas. Two-stroke engines had been identified as one of the leading sources of air pollution in Dhaka.
Thus, since January 2003, traditional auto rickshaws have been banned from the capital; only the new natural gas-powered models (CNGs) are permitted to operate within the city limits. All CNGs are painted green to signify that the vehicles are eco-friendly, and each one has a built-in meter. India Most cities offer auto rickshaw service; cycle rickshaws and hand-pulled rickshaws are also available, but only rarely and in certain remote areas, as cities elsewhere have switched to auto rickshaws. Many state governments have launched women-friendly rickshaw services, called Pink Rickshaws, that are driven by women. The drivers are known as rickshaw-wallahs, auto-wallahs or tuktuk-wallahs, or auto-kaara in places like Kerala. Auto-rickshaws are also known as tempos in some parts of India. Auto rickshaws are used in cities and towns for short distances; they are less suited to long distances because they are slow and the carriages are open to air pollution. Auto rickshaws (often called "autos") provide cheap and efficient transportation. Modern auto rickshaws run on electricity (as the government pushes for e-mobility through its FAME-II scheme), compressed natural gas (CNG), or liquefied petroleum gas (LPG) as a result of government regulations, and are environmentally friendly compared to full-sized cars. To keep traffic moving quickly, auto rickshaws are not allowed in the southern part of Mumbai. India is the location of the annual Rickshaw Run. There are two types of auto rickshaws in India: in older versions the engines were below the driver's seat, while in newer versions the engines are in the rear. They normally run on petrol, CNG, or diesel. The seating capacity of a normal rickshaw is four, including the driver's seat. Six-seater rickshaws exist in different parts of the country, but the model was officially banned in the city of Pune on 10 January 2003 by the Regional Transport Authority (RTA). Apart from this, modern electric auto rickshaws, which run on electric motors and have high torque and loading capacity with better speed, are also gaining popularity in India. Many auto drivers have moved to electric three-wheelers, as the prices of CNG and diesel are very high and that type of auto rickshaw is much costlier than the electric auto rickshaw. The government is also taking action to convert current CNG and diesel rickshaws to electric rickshaws. CNG autos in many cities (e.g. Delhi, Agra) are distinguishable from the earlier petrol-powered autos by a green and yellow livery, as opposed to the earlier black and yellow appearance. In other cities (such as Mumbai) the only distinguishing feature is the 'CNG' print found on the back or side of the auto. Some local governments are considering four-stroke engines instead of two-stroke versions. Notable auto rickshaw manufacturers in India include Bajaj Auto, Mahindra & Mahindra, Piaggio Ape, Atul Auto, Kerala Automobiles Limited, TVS Motors and Force Motors. In Delhi there also used to be a variant powered by a Harley-Davidson engine, called the phat-phati because of the sound it made. The story goes that shortly after Independence a stock of Harley-Davidson motorbikes was found that had been used by British troops during World War II and left behind in a military storage house in Delhi. Drivers purchased these bikes, added on a gearbox (probably from a Willys jeep), welded on a passenger compartment that was good for four to six passengers, and put the unconventional vehicles onto the roads.
A 1998 ruling of the Supreme Court against the use of polluting vehicles finally signed the death warrant of Delhi's phat-phatis. India has about 2.4 million battery-powered, three-wheeled rickshaws on its roads. Some 11,000 new ones hit the streets each month, creating a US$3.1 billion market. Manufacturers include Mahindra & Mahindra Ltd. and Kinetic Engineering. A prerequisite for the adoption of electric vehicles is the availability of charging stations; as of early 2024, India had 12,146 public EV charging stations operational across the country. Generally, rickshaw fares are controlled by the government; however, auto (and taxi) driver unions frequently go on strike demanding fare hikes. They have also gone on strike multiple times in Delhi to protest against the government and High Court's 2012 order to install GPS systems, and even though GPS installation in public transport was made mandatory in 2015, compliance remained very low as of 2017. The 200 cc variant of the Bajaj Auto auto rickshaw was used in the 2022 Rickshaw Run to set the record for the world's highest auto rickshaw drive, over the Umling La Pass. Nepal Auto rickshaws were a popular mode of transport in Nepal during the 1980s and 1990s, until the government banned the movement of 600 such vehicles in the early 2000s. The earliest auto rickshaws running in Kathmandu were manufactured by Bajaj Auto. Nepal has been a popular destination for the Rickshaw Run; the 2009 Fall Run started in Goa, India and ended in Pokhara, Nepal. Pakistan Auto rickshaws are a popular mode of transport in Pakistani towns and are mainly used for travelling short distances within cities. One of the major manufacturers of auto rickshaws is Piaggio. The government took measures to convert all gasoline-powered auto rickshaws in the major cities of Pakistan to cleaner CNG rickshaws by 2015, issuing easy loans through commercial banks. Environment Canada is implementing pilot projects in Lahore, Karachi, and Quetta with engine technology developed in Mississauga, Ontario, Canada, that uses CNG instead of gasoline in the two-stroke engines, in an effort to combat environmental pollution and noise levels. In many cities in Pakistan, there are also motorcycle rickshaws, usually called "chand gari" (moon car) or "chingchi", after the Chinese company Jinan Qingqi Motorcycle Co. Ltd, which first introduced these to the market. There are many rickshaw manufacturers in Pakistan. Lahore is the hub of CNG auto rickshaw manufacturing. Manufacturers include: New Asia Automobile Pvt Ltd; AECO Export Company; STAHLCO Motors; Global Sources; Parhiyar Automobiles; Global Ledsys Technologies; Siwa Industries; Prime Punjab Automobiles; Murshid Farm Industries; Sazgar Automobiles; NTN Enterprises; and Imperial Engineering Company. Sri Lanka Auto rickshaws, commonly known as three-wheelers, tuk-tuks, autos, or trishaws, can be found on all roads in Sri Lanka transporting people or freight. Sri Lankan three-wheelers are of the light Phnom Penh style. Most of the three-wheelers in Sri Lanka are a slightly modified Indian Bajaj model imported from India, though a few are manufactured locally, and there are increasingly imports from other countries in the region and of other brands of three-wheelers such as the Piaggio Ape. Three-wheelers were first introduced to Sri Lanka around 1979 by Richard Pieris & Company. A new gasoline-powered tuk-tuk typically costs around , while a newly introduced Chinese electric model costs around .
Since 2008, the Sri Lankan government has banned the import of all 2-stroke gasoline engines due to environmental concerns. Those imported to the island now are four-stroke engines. Most three-wheelers are available as hired vehicles, with a few being used to haul goods or as private company or advertising vehicles. Bajaj enjoys a virtual monopoly on the island, with its agent being David Pieris Motor Co. Ltd. A few three-wheelers in Sri Lanka have distance meters, and in the capital city they are becoming more and more common. The vast majority of fares are negotiated between the passenger and driver. There are 1.2 million trishaws in Sri Lanka and most are on financial loans. In Sri Lanka, tourists are able to drive a tuk-tuk: through the Automobile Association of Ceylon, tourists can get a temporary Recognition Permit which allows them to drive a three-wheeler legally. Southeast Asia Cambodia In Cambodia, a passenger-carrying three-wheeled vehicle is known by a name derived from the French remorque. It is a widely used form of transportation in the capital of Phnom Penh and for visitors touring the Angkor temples in Siem Reap. Some have four wheels and are composed of a motorcycle (which leans) and a trailer (which does not). Cambodian cities have a much lower volume of automobile traffic than Thai cities, and tuk-tuks are still the most common form of urban transport. There are more than 6,000 tuk-tuks in Phnom Penh, according to the Independent Democracy of Informal Economy Association (IDEA), a union that represents tuk-tuk drivers among other members. Indonesia In Indonesia, auto rickshaws are popular in Jakarta (as bajay), in Medan and Gorontalo (as bentor), and in other parts of Java, some parts of Sulawesi, and other places in the country. In Jakarta, the auto rickshaws are called bajay or bajaj; they are the same as the ones in India but are colored blue (for the ones which use compressed natural gas) or orange (for normal gasoline fuel). The blue ones are imported from India under the Bajaj and TVS brands, while the orange ones are the old design from 1977. The orange ones use two-stroke engines as their prime mover, while the blue ones use four-stroke engines. The orange bajaj has been banned since 2017 due to emission regulations. The bajaj is one of the most popular modes of transportation in the city. Outside of Jakarta, the bentor-style auto rickshaw is ubiquitous, with the passenger cabin mounted to a motorcycle as a sidecar (as in Medan) or in front (as in some parts of Sulawesi). Philippines In the Philippines, a similar mode of public transport is the "tricycle" (Filipino: traysikel; Cebuano: traysikol). Unlike auto rickshaws, however, it has a motorcycle-with-sidecar configuration and a different origin. The exact date of its appearance in the Philippines is unknown, but it started appearing after World War II, roughly at the same time as the jeepney. It is most likely derived from the Rikuo Type 97 military motorcycle used by the Imperial Japanese Army in the Philippines starting in 1941. The motorcycle was essentially a licensed copy of a Harley-Davidson with a sidecar. There is also another hypothesis which traces the origin of the tricycle to the similarly built "trisikad", a human-powered cycle rickshaw built in the same configuration as the tricycle; however, the provenance of the trisikad is also unknown.
Prior to the tricycle and trisikad, the most common means of mass public transport in the Philippines was a carriage pulled by horses or carabaos, known as the kalesa (calesa or carromata in Philippine Spanish). The pulled rickshaw never gained acceptance in the Philippines. Americans tried to introduce it in the early 20th century, but it was strongly opposed by local Filipinos who viewed it as an undignified mode of transport that turned humans into "beasts". The design and configuration of tricycles vary widely from place to place, but tend towards rough standardization within each municipality. The usual design is a passenger or cargo sidecar fitted to a motorbike, usually on the right of the motorbike. It is rare to find one with a left sidecar. A larger variant of the tricycle, with the motorcycle in the center enclosed by a passenger cab with two side benches, is known as a "motorela". It is found on the islands of Mindanao, Camiguin, and Bohol. Another notable variant is the tricycle of the Batanes Islands, which has a cab made from wood and roofed with thatched cogon grass. In Pagadian City, tricycles are also uniquely built with the passenger cab slanting upwards, due to the city's streets that run along steep hills. Tricycles can carry three passengers or more in the sidecar, one or two pillion passengers behind the driver, and even a few on the roof of the sidecar. Tricycles are one of the main contributors to air pollution in the Philippines, accounting for 45% of all volatile organic compound emissions, since the majority of them employ two-stroke engines. However, some local governments are working towards phasing out two-stroke tricycles for ones with cleaner four-stroke engines. Tuk-tuks have now been accepted by the Land Transportation Office (Philippines) as three-wheeled vehicles distinct from tricycles and are now seen on Philippine streets. Electric versions are now seen especially in the city of Manila, where they are called e-trikes. Combustion-engine tuk-tuks are locally distributed by TVS Motors and Bajaj Auto through dealerships. Thailand The auto rickshaw, called tuk-tuk in Thailand, is a widely used form of urban transport in Bangkok and other Thai cities. The name is onomatopoeic, mimicking the sound of a small (often two-cycle) engine. It is particularly popular where traffic congestion is a major problem, such as in Bangkok and Nakhon Ratchasima. In Bangkok in the 1960s, these were called samlaws, and they are still popularly called that today. Bangkok and other cities in Thailand have many tuk-tuks, which are a more open variation on the Indian auto rickshaw. About 20,000 tuk-tuks were registered as taxis in Thailand in 2017, and Bangkok alone is reported to have 9,000. Tuk-tuk hua kob (ตุ๊ก ๆ หัวกบ, literally "frog-headed tuk-tuk") is a unique tuk-tuk with a cab that looks like a frog's head. Only Phra Nakhon Si Ayutthaya and Trang have vehicles like this. In 2018, MuvMi, an electric tuk-tuk ride-hailing service, launched in Bangkok. East Asia China Various types of auto rickshaw are used around China, where they are called sān lún chē (三轮车) and sometimes sān bèng zǐ (三蹦子), meaning three-wheeler or tricycle. They may be used to transport cargo or passengers in the more rural areas. In many urban areas, however, the auto rickshaws for passengers are often operated illegally, as they are considered unsafe and an eyesore, though they are permitted in some towns and cities.
The Southeast Asian word tuk tuk is transliterated as dū dū chē (嘟嘟车, or beep beep car) in Chinese. Europe France A number of tuk-tuks (250 in 2013 according to the Paris Prefecture) are used as an alternative tourist transport system in Paris, some of them being pedal-operated with electric motor assist. They are not yet fully licensed to operate and await customers on the streets. Vélotaxis were common during the Occupation years in Paris due to fuel restrictions. Italy Auto rickshaws have been commonly used in Italy since the late 1940s, providing a low-cost means of transportation in the post–World War II years when the country was short of economic resources. The Piaggio Ape (Tukxi), designed by Vespa creator Corradino D'Ascanio and first manufactured in 1948 by the Italian company Piaggio, though primarily designed for carrying freight, has also been widely used as an auto rickshaw. It is still extremely popular throughout the country, being particularly useful in the narrow streets found in the center of many little towns in central and southern Italy. Though it no longer has a key role in transportation, the Piaggio Ape is still used as a minitaxi in some areas such as the islands of Ischia and Stromboli (on Stromboli no cars are allowed). It has recently been re-launched as a trendy, ecological means of transportation or, relying on the role the Ape played in the history of Italian design, as a promotional tool. Portugal Tuk-tuks are used in the main tourist cities and regions of the country, especially in Lisbon and the sunny Algarve region, as a novel form of transport for visitors during the tourist season. United Kingdom In 2006 the British travel writer Antonia Bolingbroke-Kent and her friend Jo Huxster travelled by auto rickshaw from Bangkok to Brighton. With this 98-day trip they set a Guinness World Record for the longest journey ever made by auto rickshaw. In October 2022, Gwent police spent £40,000 on four tuk-tuk vehicles in order to help fight crime. Montenegro Tuk Tuk Montenegro implemented tours with electric tuk-tuks in Kotor, Montenegro, in 2018. The Americas El Salvador The mototaxi or moto is the Salvadoran version of the auto rickshaw. These are most commonly made from the front end and engine of a motorcycle attached to a two-wheeled passenger area in back. Commercially produced models, such as the Indian Bajaj brand, are also employed. Guatemala In Guatemala tuk-tuks operate, both as taxis and as private vehicles, in Guatemala City, around the island town of Flores, Peten, in the mountain city of Antigua Guatemala, and in many small towns in the mountains. In 2005 tuk-tuks were prevalent in the Lago de Atitlán towns of Panajachel and Santiago Atitlán. While tuk-tuks continue to serve as a prevalent form of transportation in Antigua Guatemala, their use throughout the country as a whole has declined. United States In the 1950s and 1960s, the United States Post Office (replaced in 1971 by the United States Postal Service) used the WestCoaster Mailster, a close relative of the tuk-tuk. Similar vehicles remain in limited use for parking enforcement, mall security, and other niche applications. After a short time on the market in the United States (mid-2000s to 2008), the vehicles failed to gain popularity and, as a result, are no longer available. The manufacturer Bajaj cited the three-wheelers' manual transmissions as the reason for poor sales.
As a result of modifications that made the machines EPA and DOT compliant, the vehicles that were sold are still street-legal. Auto rickshaws are rarely seen in the United States; however, there are companies that operate them as taxis, affordable transportation services, or rentals, usually in urban areas. Examples include Tuk Tuk Chicago in Chicago, Capital Tuk-Tuk in Sacramento, eTuk Ride Denver in Denver, the Boston rickshaw company in Boston, and several more. The New York Police Department (NYPD) operates auto rickshaws that it calls "three-wheel patrol scooters". The patrol scooters are used for parking and traffic enforcement on city streets and to patrol places that most cars cannot, such as the narrow paths in Central Park. The NYPD patrol scooters started being replaced in 2016 with Smart Fortwos. The NYPD believes that the Smart Fortwos are safer, more comfortable, and more affordable than the three-wheel patrol scooters, since they come with features that the patrol scooters lack, such as air conditioning and airbags, while also costing about $6,000 less. The Smart Fortwos can also be driven on highways if needed, and they are said to be more "approachable" and "friendlier looking", which helps with public relations. Cuba In Cuba, the autorickshaws are small and look like a coconut, hence the name Cocotaxi. Peru In Peru, a version of this vehicle is called a motocar or mototaxi. Mexico Some auto rickshaws have been and are still used in Mexico, such as in Mexico City. Australia and Oceania Australia Ikea ran a trial using electric auto rickshaws in Sydney, Australia, to deliver packages to customers from May to August 2023. A company called Just Tuk'n Around carries tourists around Airlie Beach using both pedal-powered rickshaws and electric auto rickshaws. New Zealand Mt Cook Alpine Salmon uses auto rickshaws on its farms to move equipment and people around. An auto rickshaw company called Tuk Tuk Taxi operates in Wanaka, South Island; Tuk Tuk NZ used to operate in Wellington, and Kiwi Tuk Tuk used to operate in Auckland. Fuel efficiency and pollution In July 1998, the Supreme Court of India ordered the government of Delhi to implement CNG or LPG (Autogas) fuel for all autos and for the entire bus fleet in and around the city. Delhi's air quality has improved with the switch to CNG. Initially, auto rickshaw drivers in Delhi had to wait in long queues for CNG refueling, but the situation improved following an increase in the number of CNG stations. Gradually, many state governments passed similar laws, thus shifting to CNG or LPG vehicles in most large cities to improve air quality and reduce pollution. Certain local governments are pushing for four-stroke engines instead of two-stroke ones. Typical mileage for an Indian-made auto rickshaw is around of petrol. Pakistan has passed a similar law prohibiting auto rickshaws in certain areas. CNG auto rickshaws have started to appear in huge numbers in many Pakistani cities. In January 2007 the Sri Lankan government also banned two-stroke trishaws to reduce air pollution. In the Philippines there are projects to convert carbureted two-stroke engines to direct injection via Envirofit technology. Research has shown that LPG or CNG direct injection can be retrofitted to existing engines, in a similar fashion to the Envirofit system.
In Vigan City, the majority of tricycles for hire as of 2008 were powered by motorcycles with four-stroke engines, as tricycles with two-stroke motorcycles are prevented from receiving operating permits. Direct injection is standard equipment on new machines in India. In March 2009 an international consortium coordinated by the International Centre for Hydrogen Energy Technologies initiated a two-year public-private partnership of local and international stakeholders aimed at operating a fleet of 15 hydrogen-fueled three-wheeled vehicles in New Delhi's Pragati Maidan complex. As of January 2011, the project was nearing completion. The use of hydrogen internal combustion vehicles (HICVs) as three-wheelers has only recently begun to be explored, mainly by developing countries, as a way to decrease local pollution at an affordable cost. Bajaj Auto at one point made an HICV auto rickshaw together with the company Energy Conversion Devices. Their report on it, "Clean Hydrogen Technology for 3-Wheel Transportation in India", stated that the performance was comparable with that of CNG autos. In 2012, Mahindra & Mahindra showcased their first HICV auto rickshaw, called the Mahindra HyAlfa. The development of the hydrogen-powered rickshaw was supported by the International Centre for Hydrogen Energy Technologies. World records On 16 September 2022, at 11:04 a.m. (Indian Standard Time), a Canadian team (Greg Harris and Priya Singh) and a Swiss team (Michele Daryanani and Nevena Lazarevic) set the world record for the highest altitude at which an auto rickshaw has ever been driven. The world record was officially recognized and certified by Guinness World Records on October 10, 2024. The two teams set the record by driving to the summit of Umling La Pass at an altitude of . The two teams were participating in the Rickshaw Run (Himalayan Edition), an event promoted by The Adventurists, where teams drive auto rickshaws from the Thar desert town of Jaisalmer in Rajasthan to the Himalayan town of Leh in Ladakh. Rickshaw Run teams are given the start and finish lines, but are otherwise unsupported and left to their own navigational choices in completing the approximately 2,300 km journey. The road at Umling La Pass was constructed by India's Border Roads Organization and completed in 2017. Guinness World Records certified the road as the highest motorable road in the world.
Technology
Motorized road transport
null
593784
https://en.wikipedia.org/wiki/Monochromacy
Monochromacy
Monochromacy (from Greek mono, meaning "one" and chromo, meaning "color") is the ability of organisms to perceive only light intensity without respect to spectral composition. Organisms with monochromacy lack color vision and can only see in shades of grey ranging from black to white. Organisms with monochromacy are called monochromats. Many mammals, such as cetaceans, the owl monkey and the Australian sea lion, are monochromats. In humans, monochromacy is one among several symptoms of severe inherited or acquired diseases, including achromatopsia and blue cone monochromacy, which together affect about 1 in 30,000 people. Humans Human vision relies on a duplex retina, comprising two types of photoreceptor cells. Rods are primarily responsible for dim-light scotopic vision and cones are primarily responsible for day-light photopic vision. For all known vertebrates, scotopic vision is monochromatic, since there is typically only one class of rod cell. However, the presence of multiple cone classes contributing to photopic vision enables color vision during daytime conditions. Most humans have three classes of cones, each with a different class of opsin. These three opsins have different spectral sensitivities, which is a prerequisite for trichromacy. An alteration of any of these three cone opsins can lead to colorblindness:
- Anomalous trichromacy, when all three cones are functional but one or more is altered in its spectral sensitivity.
- Dichromacy, when one of the cones is non-functional and one of the red-green or blue-yellow opponent channels is fully disabled.
- Cone monochromacy, when two of the cones are non-functional and both chromatic opponent channels are disabled. Vision is reduced to blacks, whites, and greys.
- Rod monochromacy (achromatopsia), when all three of the cones are non-functional, so that photopic vision (and therefore color vision) is disabled.
Monochromacy of photopic vision is a symptom of both cone monochromacy and rod monochromacy, so these two conditions are typically referred to collectively as monochromacy. Rod monochromacy Rod monochromacy (RM), also called congenital complete achromatopsia or total color blindness, is a rare and extremely severe autosomal recessively inherited retinal disorder resulting in severe visual handicap. People with RM have reduced visual acuity (usually about 0.1 or 20/200), total color blindness, photo-aversion and nystagmus. The nystagmus and photo-aversion are usually present during the first months of life, and the prevalence of the disease is estimated to be 1 in 30,000 worldwide. Since patients with RM have no cone function, they lack photopic vision, relying entirely on their rods and scotopic vision, which is necessarily monochromatic. They therefore cannot see any color, but only shades of grey. Cone monochromacy Cone monochromacy (CM) is a condition defined by the exhibition of only one class of functioning cones. A cone monochromat can have good pattern vision at normal daylight levels, but will not be able to distinguish hues. As humans typically exhibit three classes of cones, cone monochromats can hypothetically derive their photopic vision from any one of them, leading to three categories of cone monochromats: Blue cone monochromacy (BCM), also known as S-cone monochromacy, is an X-linked cone disease. It is a rare congenital stationary cone dysfunction syndrome, affecting less than 1 in 100,000 individuals, and is characterized by the absence of L- and M-cone function.
BCM results from mutations in a single red or red–green hybrid opsin gene, mutations in both the red and the green opsin genes, or deletions within the adjacent LCR (locus control region) on the X chromosome. Green cone monochromacy (GCM), also known as M-cone monochromacy, is a condition where the blue and red cones are absent in the fovea. The prevalence of this type of monochromacy is estimated to be less than 1 in 1 million. Red cone monochromacy (RCM), also known as L-cone monochromacy, is a condition where the blue and green cones are absent in the fovea. Like GCM, the prevalence of RCM is also estimated at less than 1 in 1 million. Cone monochromats with normal rod function can sometimes exhibit mild color vision due to conditional dichromacy: in mesopic conditions, both rods and cones are active, and opponent interactions between the cones and rods can afford slight color vision. According to Jay Neitz, a color vision researcher at the University of Washington, each of the three standard color-detecting cones in the retina of trichromats can detect approximately 100 gradations of color. The brain can process the combinations of these three values so that the average human can distinguish about one million colors. Therefore, a monochromat would be able to distinguish about 100 colors. Mammals Until the 1960s, popular belief held that most mammals outside of primates were monochromats. In the last half-century, however, a focus on behavioral and genetic testing of mammals has accumulated extensive evidence of at least dichromatic color vision in a number of mammalian orders. Mammals are now usually assumed to be dichromats (possessing S- and L-cones), with monochromats viewed as the exceptions. Two mammalian orders containing marine mammals exhibit monochromatic vision: pinnipeds (including seals, sea lions and walruses) and cetaceans (including dolphins and whales). Unlike most primates, which exhibit trichromacy, owl monkeys (genus Aotus) are monochromats. Several members of the family Procyonidae (the raccoon, crab-eating raccoon and kinkajou) and a few rodents have been shown to be cone monochromats, having lost functionality of the S-cone (while retaining the L-cone). The light available in an animal's habitat is a significant determinant of a mammal's color vision. Marine, nocturnal or burrowing mammals, which experience less light, are under less evolutionary pressure to preserve dichromacy, so they often evolve monochromacy. A recent study using PCR analysis of the genes OPN1SW, OPN1LW, and PDE6C determined that all mammals in the cohort Xenarthra (representing sloths, anteaters and armadillos) developed rod monochromacy through a stem ancestor.
Biology and health sciences
Visual system
Biology
594043
https://en.wikipedia.org/wiki/Barrel%20%28unit%29
Barrel (unit)
A barrel is one of several units of volume applied in various contexts; there are dry barrels, fluid barrels (such as the U.K. beer barrel and U.S. beer barrel), oil barrels, and so forth. For historical reasons the volumes of some barrel units are roughly double the volumes of others; volumes in common use range approximately from . In many connections the term is used almost interchangeably with barrel. Since medieval times the term as a unit of measure has had various meanings throughout Europe, ranging from about 100 litres to about 1,000 litres. The name was derived in medieval times from the French , of unknown origin, but still in use, both in French and as derivations in many other languages such as Italian, Polish, and Spanish. In most countries such usage is obsolescent, increasingly superseded by SI units. As a result, the meaning of corresponding words and related concepts (vat, cask, keg etc.) in other languages often refers to a physical container rather than a known measure. In the international oil market context, however, prices in United States dollars per barrel are commonly used, and the term is variously translated, often to derivations of the Latin / Teutonic root fat (for example vat or Fass). In other commercial connections, barrel sizes such as beer keg volumes are also standardised in many countries. Dry goods in the US US dry barrel: Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; representing as nearly as possible 7,056 cubic inches; and the thickness of staves not greater than (diameter ≈ ). Any barrel that is 7,056 cubic inches is recognized as equivalent. This is exactly equal to . US barrel for cranberries: Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; and the thickness of staves not greater than (diameter ≈ ). No equivalent in cubic inches is given in the statute, but later regulations specify it as 5,826 cubic inches. Some products have a standard weight or volume that constitutes a barrel; these include cornmeal, cement (including Portland cement), sugar, wheat or rye flour, lime (mineral, in large-barrel and small-barrel sizes), butter and cheese in the UK, and salt. Fluid barrel in the US and UK Fluid barrels vary depending on what is being measured and where. In the UK a beer barrel is . In the US most fluid barrels (apart from oil) are (half a hogshead), but a beer barrel is . The size of beer kegs in the US is based loosely on fractions of the US beer barrel. When referring to beer barrels or kegs in many countries, the term may be used for the commercial package units independent of actual volume, where the common range for professional use is 20–60 L, typically a DIN or Euro keg of 50 L. History Richard III, King of England from 1483 until 1485, had defined the wine puncheon as a cask holding 84 wine gallons and a wine tierce as holding 42 wine gallons. Custom had made the 42 gallon watertight tierce a standard container for shipping eel, salmon, herring, molasses, wine, whale oil, and many other commodities in the English colonies by 1700. After the American Revolution in 1776, American merchants continued to use the same size barrels. Oil barrel Definitions In the oil industry, one barrel (unit symbol bbl) is a unit of volume used for measuring oil, defined as exactly 42 US gallons (approximately 159 liters).
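As a quick numerical check of the exact gallon and cubic-inch figures quoted above, here is a small illustrative Python sketch (not part of the source text). It relies only on the exact customary definitions 1 US gallon = 3.785411784 L and 1 inch = 2.54 cm, and converts the 42-gallon oil barrel and the 7,056-cubic-inch dry barrel to litres.

```python
# Illustrative sketch: converting the two US barrel definitions quoted above
# into litres, using the exact US customary definitions of the gallon and inch.
LITRES_PER_US_GALLON = 3.785411784
LITRES_PER_CUBIC_INCH = 2.54 ** 3 / 1000  # 0.016387064 L per cubic inch

oil_barrel_litres = 42 * LITRES_PER_US_GALLON      # about 158.99 L
dry_barrel_litres = 7056 * LITRES_PER_CUBIC_INCH   # about 115.63 L

print(f"42 US gal oil barrel   = {oil_barrel_litres:.2f} L")
print(f"7,056 cu in dry barrel = {dry_barrel_litres:.2f} L")
```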
According to the American Petroleum Institute (API), a standard barrel of oil is the amount of oil that would occupy a volume of exactly at reference temperature and pressure conditions of and (or 1 atm). This standard barrel of oil will occupy a different volume at different pressures and temperatures. A standard barrel in this context is thus not simply a measure of volume, but of volume under specific conditions. Unit multiples Oil companies that are publicly listed in the United States typically report their production using the unit multiples Mbbl (one thousand barrels) and MMbbl (one million barrels), derived from the Latin word "mille" and the Roman numeral M, meaning "thousand". Due to the risk of confusion, the Society of Petroleum Engineers recommends in its style guide that the abbreviations or prefixes M and MM not be used for barrels of oil or barrels of oil equivalent, but rather that thousands, millions or billions be spelled out. Using M for thousand and MM for million conflicts with the SI convention, in which the prefix "M" stands for "mega", representing million, from the Greek for "large". Some oil companies, particularly those based in Europe, use kb (kilobarrels, one thousand barrels), mb (megabarrels, one million barrels), and gb (gigabarrels, one billion barrels). The lower case m is used to avoid confusion with the capital M used for thousand. For the same reason, the unit kbbl (one thousand barrels) is also sometimes used. Etymology The first "b" in "bbl" may have been doubled originally to indicate the plural (1 bl, 2 bbl), or possibly it was doubled to eliminate any confusion with bl as a symbol for the bale. Some sources assert that "bbl" originated as a symbol for "blue barrels" delivered by Standard Oil in its early days. However, while Ida Tarbell's 1904 Standard Oil Company history acknowledged the "holy blue barrel", the abbreviation "bbl" had been in use well before the 1859 birth of the U.S. petroleum industry. Flow rate Oil wells recover not just oil from the ground, but also natural gas and water. The term barrels of liquid per day (BLPD) refers to the total volume of liquid that is recovered. Similarly, barrels of oil equivalent or BOE is a value that accounts for both oil and natural gas while ignoring any water that is recovered. Other terms are used when discussing only oil. These terms can refer to either the production of crude oil at an oil well, the conversion of crude oil to other products at an oil refinery, or the overall consumption of oil by a region or country. One common term is barrels per day (BPD, BOPD, bbl/d, bpd, bd, or b/d), where 1 BPD is equivalent to 0.0292 US gallons per minute. One BPD also corresponds to about 49.8 tonnes per year. At an oil refinery, production is sometimes reported as barrels per calendar day (b/cd or bcd), which is total production in a year divided by the days in that year. Likewise, barrels per stream day (BSD or BPSD) is the quantity of oil product produced by a single refining unit during continuous operation for 24 hours. Burning one tonne of light, synthetic, or heavy crude yields 38.51, 39.40, or 40.90 GJ (thermal) respectively (10.70, 10.94, or 11.36 MW·h), so 1 tonne per day of synthetic crude is about 456 kW of thermal power and 1 bpd of synthetic crude is about 378 kW (slightly less for light crude, slightly more for heavy crude).
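The gallons-per-minute and tonnes-per-year equivalences for 1 BPD quoted above can be reproduced with a short illustrative Python sketch. This is only a back-of-the-envelope check, not an industry calculation: the crude density used for the tonnes-per-year figure is an assumed value (about 858 kg per cubic metre, not given in the source) chosen so the result matches the quoted 49.8 tonnes per year.

```python
# Illustrative sketch of the 1 BPD equivalences quoted above.
US_GALLONS_PER_BARREL = 42
LITRES_PER_US_GALLON = 3.785411784
ASSUMED_DENSITY_KG_PER_M3 = 858  # assumed typical crude density, not from the source

barrels_per_day = 1
gallons_per_minute = barrels_per_day * US_GALLONS_PER_BARREL / (24 * 60)

cubic_metres_per_day = (barrels_per_day * US_GALLONS_PER_BARREL
                        * LITRES_PER_US_GALLON / 1000)
tonnes_per_year = cubic_metres_per_day * ASSUMED_DENSITY_KG_PER_M3 * 365.25 / 1000

print(f"1 BPD ≈ {gallons_per_minute:.4f} US gal/min")  # ≈ 0.0292
print(f"1 BPD ≈ {tonnes_per_year:.1f} tonnes/year")    # ≈ 49.8 at the assumed density
```

With a different crude density the tonnes-per-year figure would shift, which is one reason mass-based and volume-based production figures are not interchangeable.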
Conversion The task of converting a standard barrel of oil to a standard cubic metre of oil is complicated by the fact that the latter is defined by the API to mean the amount of oil that, at different reference conditions (101.325 kPa and ), occupies 1 cubic metre. Because the reference conditions are not exactly the same, an exact conversion is impossible unless the exact expansion coefficient of the crude is known, and this will vary from one crude oil to another. For a light oil with a density of 850 kilograms per cubic metre (API gravity of 35), warming the oil from to might increase its volume by about 0.047%. Conversely, a heavy oil with a density of 934 kg/m3 (API gravity of 20) might only increase in volume by 0.039%. If physically measuring the density at a new temperature is not possible, tables of empirical data can be used to predict the change in density accurately. In turn, this allows maximum accuracy when converting between the standard barrel and the standard cubic metre. The same logic implies that barrel measurements are subject to a comparable level of uncertainty if there is an error in measuring the temperature at the time the volume is measured. For ease of trading, communication and financial accounting, international commodity exchanges often set a conversion factor for benchmark crude oils. For instance, the conversion factor set by the New York Mercantile Exchange (NYMEX) for Western Canadian Select (WCS) crude oil traded at Hardisty, Alberta, Canada is 6.29287 U.S. barrels per standard cubic metre, despite the uncertainty in converting the volume for crude oil. Regulatory authorities in producing countries set standards for measurement accuracy of produced hydrocarbons, where such measurements affect taxes or royalties to the government. In the United Kingdom, for instance, the measurement accuracy required is ±0.25%. Qualifiers A barrel can technically be used to specify any volume. Since the actual nature of the fluids being measured varies along the stream, qualifiers are sometimes used to clarify what is being specified. In the oil field, it is often important to differentiate between rates of production of fluids, which may be a mix of oil and water, and rates of production of the oil itself. If a well is producing 10 MBD (thousand barrels per day) of fluids with a 20% water cut, then the well would also be said to be producing 8,000 barrels of oil a day (bod). In other circumstances, it can be important to include gas in production and consumption figures. Normally, the gas amount is measured in standard cubic feet or standard cubic metres (for volume at STP), as well as in kg or Btu (which do not depend on pressure or temperature). But when necessary, such a volume is converted to a volume of oil of equivalent enthalpy of combustion. Production and consumption using this analogue are stated in barrels of oil equivalent per day (boed). In the case of water-injection wells, in the United States it is common to refer to the injectivity rate in barrels of water per day (bwd). In Canada, it is measured in cubic metres per day (m3/d). In general, water injection rates will be stated in the same units as oil production rates, since the usual objective is to replace the volume of oil produced with a similar volume of water to maintain reservoir pressure. Related kinds of quantity Outside the United States, volumes of oil are usually reported in cubic metres (m3) instead of oil barrels. The cubic metre is the standard unit of volume in the International System of Units. 
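A minimal sketch of the two approaches discussed above: applying an exchange-set factor such as the NYMEX figure for WCS, versus correcting an observed volume to the reference temperature before converting. The thermal-expansion coefficient and temperatures in the sketch are illustrative assumptions, not published values for any particular crude.

```python
# Illustrative only: the expansion coefficient below is an assumed placeholder,
# not a published value; real corrections use empirical density tables.

NYMEX_WCS_BBL_PER_M3 = 6.29287   # exchange-set factor for WCS at Hardisty

def to_reference_volume(volume_m3: float, measured_temp_c: float,
                        reference_temp_c: float,
                        expansion_per_deg_c: float = 0.0008) -> float:
    """Shift an observed volume to the reference temperature with a simple
    linear thermal-expansion model (coefficient is an assumed placeholder)."""
    return volume_m3 * (1.0 - expansion_per_deg_c * (measured_temp_c - reference_temp_c))

# Convert 1,000 m^3 of WCS (already at reference conditions) to barrels:
print(round(1000 * NYMEX_WCS_BBL_PER_M3, 2))        # 6292.87 bbl

# Correct a volume measured 5 degrees above reference, then convert:
corrected = to_reference_volume(1000.0, measured_temp_c=20.0, reference_temp_c=15.0)
print(round(corrected * NYMEX_WCS_BBL_PER_M3, 2))   # ~6267.7 bbl
```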
In Canada, oil companies measure oil in cubic metres, but convert to barrels on export, since most of Canada's oil production is exported to the US. The nominal conversion factor is 1 cubic metre = 6.2898 oil barrels, but conversion is generally done by custody transfer meters on the border, since the volumes are specified at different temperatures, and the exact conversion factor depends on both density and temperature. Canadian companies operate internally and report to Canadian governments in cubic metres, but often convert to US barrels for the benefit of American investors and oil marketers. They generally quote prices in Canadian dollars per cubic metre to other Canadian companies, but use US dollars per barrel in financial reports and press statements, making it appear to the outside world that they operate in barrels. Companies on the European stock exchanges report the mass of oil in tonnes. Since different varieties of petroleum have different densities, however, there is not a single conversion between mass and volume. For example, one tonne of heavy distillates might occupy a volume of . In contrast, one tonne of crude oil might occupy , and one tonne of gasoline will require . Overall, the conversion is usually between per tonne. History The measurement of an "oil barrel" originated in the early Pennsylvania oil fields. The Drake Well, the first oil well in the US, was drilled in Pennsylvania in 1859, and an oil boom followed in the 1860s. When oil production began, there was no standard container for oil, so oil and petroleum products were stored and transported in barrels of different shapes and sizes. Some of these barrels would originally have been used for other products, such as beer, fish, molasses, or turpentine. Both the barrels (based on the old English wine measure), the tierce (159 litres) and the whiskey barrels were used. Also, barrels were in common use. The 40 gallon whiskey barrel was the most common size used by early oil producers, since they were readily available at the time. Around 1866, early oil producers in Pennsylvania concluded that shipping oil in a variety of different containers was causing buyer distrust. They decided they needed a standard unit of measure to convince buyers that they were getting a fair volume for their money, and settled on the standard wine tierce, which was two gallons larger than the standard whisky barrel. The Weekly Register, an Oil City, Pennsylvania newspaper, stated on August 31, 1866 that "the oil producers have issued the following circular": And by that means, King Richard III's English wine tierce became the American standard oil barrel. By 1872, the standard oil barrel was firmly established as 42 US-gallons. The 42 gallon standard oil barrel was officially adopted by the Petroleum Producers Association in 1872 and by the U.S. Geological Survey and the U.S. Bureau of Mines in 1882. In modern times, many different types of oil, chemicals, and other products are transported in steel drums. In the United States, these commonly have a capacity of and are referred to as such. They are called 200 litre or 200 kg drums outside the United States. In the United Kingdom and its former dependencies, a drum was used, even though all those countries now officially use the metric system and the drums are filled to 200 litres. In the United States, the 42 US-gallon size as a unit of measure is largely confined to the oil industry, while different sizes of barrel are used in other industries. 
Nearly all other countries use the metric system. Thus, the 42 US-gallon oil barrel is a unit of measure rather than a physical container used to transport crude oil.
Physical sciences
Volume
Basics and measurement
594050
https://en.wikipedia.org/wiki/Epidote
Epidote
Epidote is a calcium aluminium iron sorosilicate mineral. Description Well-developed crystals of epidote, Ca2Al2(Fe3+;Al)(SiO4)(Si2O7)O(OH), crystallizing in the monoclinic system, are of frequent occurrence: they are commonly prismatic in habit, the direction of elongation being perpendicular to the single plane of symmetry. The name, given by Haüy, is derived from the Greek word "epidosis" (ἐπίδοσις), meaning "increase" or "addition", in allusion to the crystal characteristic of one side of the ideal prism being longer than the other. The faces are often deeply striated and crystals are often twinned. Many of the characters of the mineral vary with the amount of iron present; for instance, the color, the optical constants, and the specific gravity. The color is green, grey, brown or nearly black, but usually a characteristic shade of yellowish-green or pistachio-green. It displays strong pleochroism, the pleochroic colors being usually green, yellow and brown. Clinozoisite is a green, white or pale rose-red species of the group containing very little iron, thus having the same chemical composition as the orthorhombic mineral zoisite. Epidote is an abundant rock-forming mineral, but one of secondary origin. It occurs in marble and schistose rocks of metamorphic origin. It is also a product of hydrothermal alteration of various minerals (feldspars, micas, pyroxenes, amphiboles, garnets, and others) composing igneous rocks. A rock composed of quartz and epidote is known as epidosite. Well-developed crystals are found at many localities: Knappenwand, near the Großvenediger in the Untersulzbachthal in Salzburg, as magnificent, dark green crystals of long prismatic habit in cavities in epidote schist, with asbestos, adularia, calcite, and apatite; the Ala valley and Traversella in Piedmont; Arendal in Norway; Le Bourg-d'Oisans in Dauphiné; Haddam in Connecticut; and Prince of Wales Island in Alaska, here as large, dark green, tabular crystals with copper ores in metamorphosed limestone. The perfectly transparent, dark green crystals from the Knappenwand and from Brazil have occasionally been cut as gemstones. The green part of several mixed-rock ornamental stones is composed of epidote; these include unakite and Australian dragon bloodstone. Related species Belonging to the same isomorphous group as epidote are the REE-rich allanite (containing primarily lanthanum, cerium, and yttrium) and the manganese-rich piemontite. Piemontite occurs as small, reddish-black, monoclinic crystals in the manganese mines at San Marcel, near Ivrea in Piedmont, and in crystalline schists at several places in Japan. The purple color of the Egyptian porfido rosso antico is due to the presence of this mineral. Allanite and dollaseite-(Ce) have the same general epidote formula and contain metals of the cerium group. In external appearance allanite differs widely from epidote, being black or dark brown in color, pitchy in lustre, and opaque in the mass; further, there is little or no cleavage, and well-developed crystals are rare. The crystallographic and optical characters are similar to those of epidote; the pleochroism is strong, with reddish-, yellowish-, and greenish-brown colors. Although not a common mineral, allanite is of fairly wide distribution as a primary accessory constituent of many crystalline rocks: gneiss, granite, syenite, rhyolite, andesite, and others. 
It was first found in the granite of east Greenland and described by Thomas Allan in 1808, after whom the species was named. Allanite is a mineral readily altered by hydration, becoming optically isotropic and amorphous; for this reason several varieties have been distinguished and many different names applied. Orthite was the name given by Jöns Berzelius in 1818 to a hydrated form found as slender prismatic crystals, sometimes a foot in length, at Finbo, near Falun in Sweden. Dollaseite is less common; it is known especially from the Ostanmossa mine in the Norberg district of Sweden. 
Physical sciences
Silicate minerals
Earth science
594086
https://en.wikipedia.org/wiki/Podiatry
Podiatry
Podiatry ( ), or podiatric medicine and surgery ( ), is a branch of medicine devoted to the study, diagnosis, and treatment of disorders of the foot, ankle and lower limb. The healthcare professional is known as a podiatrist. The US podiatric medical school curriculum includes lower extremity anatomy, general human anatomy, physiology, general medicine, physical assessment, biochemistry, neurobiology, pathophysiology, genetics and embryology, microbiology, histology, pharmacology, women's health, physical rehabilitation, sports medicine, research, ethics and jurisprudence, biomechanics, general principles of orthopedic surgery, plastic surgery, and foot and ankle surgery. Podiatry is practiced as a specialty in many countries. In Australia, graduates of recognised academic programs can register through the Podiatry Board of Australia as a "podiatrist", and those with additional recognised training may also receive endorsement to prescribe or administer restricted medications and/or seek specialist registration as a "podiatric surgeon". Medical Group Management Association (MGMA) data shows that a general podiatrist with a single specialty earns a median salary of $230,357, while one with a multi-specialty practice type earns $270,263. However, a podiatry surgeon makes more with a single specialty, with the median at $304,474 compared to the multispecialty of $286,201. First-year salaries around $150,000 with performance and productivity incentives are common. Private practice revenues for solo podiatrists vary widely, with the majority of solo practices grossing between $200,000 and $600,000 before overhead. History The professional care of feet existed in ancient Egypt, as depicted by bas-relief carvings at the entrance to Ankmahor's tomb from about 2400 BC. Hippocrates described the treatment of corns and calluses by physically reducing the hard skin and removing the cause. The skin scrapers which he invented for this purpose were the original scalpels. Until the turn of the 20th century, podiatrists were independently licensed physicians, separate from the rest of organized medicine. Lewis Durlacher, appointed as surgeon-podiatrist to the British royal household in 1823, called for podiatry to be a protected profession. Prominent figures including Napoleon and French kings employed personal podiatrists. President Abraham Lincoln sent his personal podiatrist, Isachar Zachriel, on confidential missions to confer with leaders of the Confederacy during the U.S. Civil War. The first podiatric society was established in New York in 1895, and still operates there today as NYSPMA. The first podiatric school opened in 1911. One year later, the British established a podiatric society at the London Foot Hospital; a school was added in 1919. The first American podiatric journal appeared in 1907, followed in 1912 by a UK journal. In Australia, professional podiatric associations were organized as early as 1924, followed by a podiatric training center and professional podiatric journal in 1939. Specific country practices Australia In Australia, podiatry is considered an allied health profession and is practised by individuals licensed by the Podiatry Board of Australia. Australia recognizes two levels of professional accreditation (General Podiatrist and Podiatric Surgeon), with ongoing lobbying for the recognition of other subspecialties. Some Commonwealth countries recognize Australian qualifications, allowing Australian podiatrists to practise abroad. 
Registration and regulation Australian podiatrists must register with the Podiatry Board of Australia, which regulates podiatrists and podiatric surgeons. The board also assesses foreign-trained registrants in conjunction with the Australian & New Zealand Podiatry Accreditation Council (ANZPAC). It recognizes three pathways to attain specialist registration as a podiatric surgeon: Fellowship of the Australasian College of Podiatric Surgeons Doctor of Podiatric Surgery, University of Western Australia Eligibility for Fellowship of the Australasian College of Podiatric Surgeons Until 21 November 2019, ANZPAC approved the Doctor of Podiatric Surgery program of study offered by the University of Western Australia as providing a qualification for the purpose of specialist registration as a podiatric surgeon. Education and training To enter an undergraduate Podiatric Medicine program, applicants must have completed a Year 12 Certificate with an Australian Tertiary Admission Rank (ATAR). Cut-off scores from the Universities Admissions Centre (UAC) generally range from 70.00 to 95.00; prospective students who are 21 or older can instead apply directly to the university. The UWA DPM program has admission requirements of: completion of a UWA bachelor's degree or equivalent, a minimum GPA of 5.0 from the most recent three years (FTE) of valid study, suitable GAMSAT score, and English language competency. There is no interview requirement for the DPM at UWA (applications are handled via the university). Australian podiatrists complete an undergraduate degree ranging from 3 to 4 years of education. The first 2 years of this program are generally focused on various biomedical science subjects, including functional anatomy, microbiology, biochemistry, physiology, pathophysiology, pharmacology, evidence-based medicine, sociology, and patient psychology, similar to the medical curriculum. The following year focuses on podiatry-specific areas such as podiatric anatomy & biomechanics, human gait, podiatric orthopaedics (the non-surgical management of foot abnormalities), podopaediatrics, sports medicine, rheumatology, diabetes, vascular medicine, mental health, wound care, neuroscience & neurology, pharmacology, general medicine, general pathology, local and general anaesthesia, minor and major podiatric surgical procedural techniques such as partial and total nail avulsions, matricectomy, cryotherapy, wound debridement, enucleation, suturing, other cutaneous and electro-surgical procedures and theoretical understanding of procedures performed by orthopaedic and podiatric surgeons. Australian podiatric surgeons are specialist podiatrists with further advanced training in medicine and pharmacology, and training in foot surgery. Podiatrists wishing to pursue specialisation in podiatric surgery must meet the requirements for Fellowship with the Australasian College of Podiatric Surgeons. They must complete a 4-year degree, including 2 years of didactic study and 2 years of clinical experience, followed by a master's degree with a focus on biomechanics, medicine, surgery, general surgery, advanced pharmacology, advanced medical imaging, and clinical pathology. They then qualify for the status of Registrar with the Australasian College of Podiatric Surgeons. Following surgical training with a podiatric surgeon (3–5 years), rotations within other medical and surgeons' disciplines, overseas clinical rotations, and passing oral and written exams, Registrars may qualify for Fellowship status. 
Fellows are then given Commonwealth accreditation under the Health Insurance Act, recognising them as providers of professional attention for the purposes of health insurance rebates. Australian podiatric medical schools The following podiatric teaching centres are accredited by the Australian and New Zealand Podiatry Accreditation Council (ANZPAC): University of Western Australia Charles Sturt University La Trobe University University of Western Sydney University of South Australia University of Newcastle (Australia) Queensland University of Technology Central Queensland University Southern Cross University Auckland University of Technology (in New Zealand) Some, including Charles Sturt University and University of Western Sydney, offer the degree Bachelor of Podiatric Medicine; others offer postgraduate degrees, such as the University of Western Australia's Doctor of Podiatric Medicine, and La Trobe University's Master of Podiatric Practice. Two more podiatric schools are being developed, at the Australian Catholic University and the University of Ballarat. Prescribing of scheduled medicines and referral rights The prescribing rights of Australian podiatrists vary by state. All states allow registered podiatrists to use local anaesthesia for minor surgeries. In Victoria, Western Australia, Queensland, South Australia, New South Wales: registered podiatrists and podiatric surgeons with an endorsement of scheduled medicines may prescribe relevant schedule 4 poisons. In Western Australia and South Australia, podiatrists with Master's degrees in Podiatry and extensive training in pharmacology are authorised to prescribe Schedule 2, 3, 4, or 8 medicines (Australian Health Practitioner Regulation Agency). In Queensland, Fellows of the Australasian College of Podiatric Surgeons are authorised to prescribe a range of Schedule 4 drugs and one Schedule 8 drug. Prescriptions written by podiatrists do not qualify for the Pharmaceutical Benefits Scheme, despite lobbying to change this. Some referrals from podiatrists (plain x-rays of the foot, leg, knee, and femur, and ultrasound examination of soft tissue conditions of the foot) are rebated by Medicare, while others (CTs, MRIs, bone scans, pathology testing, and other specialist medical practitioners) are not eligible for Medicare rebates. Canada In Canada, the definition and scope of the practice of podiatry varies by province. A number of provinces, including British Columbia, Alberta, and Quebec, accept the qualification of Doctor of Podiatric Medicine (DPM); in Quebec, other academic designations may also register. In 2004, Université du Québec à Trois-Rivières started the first and only program of Podiatric Medicine in Canada based on the American definition of podiatry. This program enlists 25 students yearly across Canada and leads to a DPM upon obtaining 195 credits. The province of Ontario has been registering chiropodists since 1944, with 701 chiropodists and 54 podiatrists registered by the College of Chiropodists of Ontario as of December 31, 2019. Ontario makes a distinction between podiatrists and chiropodists. Podiatrists are required to have a DPM, whereas chiropodists need only have a post-secondary diploma in chiropody. Podiatrists, unlike chiropodists, may bill OHIP, "communicate a diagnosis" to their patients, and perform surgical procedures on the bones of the forefoot. Registered podiatrists who relocate to Ontario are required to register with the province and practice as a chiropodist. 
Ontario legislation in 1991 imposed a cap on Ontario-trained chiropodists becoming podiatrists, while grandfathering in already-practising podiatrists. Iran There are no podiatric medical schools in Iran. The Ministry of Health and Medical Education (MoHME) reviews the dossier of podiatric applicants for medical registration according to the "Regulations on the Evaluation of the Educational Credentials of Foreign Graduates". Applicants with podiatric degrees from the United States qualify for registration in Iran if they meet the following criteria: possession of a bachelor's degree passing score on the MCAT completion of the podiatric curriculum of an accredited school, thereby obtaining the degree of Doctor of Podiatric Medicine (DPM) completion of a one-year postgraduate training (if required by the home jurisdiction) passing score on relevant board examinations New Zealand New Zealand established Chiropody (shortly thereafter renamed to Podiatry) as a registered profession in 1969, requiring all applicants to take a recognized three-year course of training. The New Zealand School of Podiatry was established at Petone in 1970, under the direction of John Gallocher. Later, the school moved to the Central Institute of Technology, Upper Hutt, Wellington. Today, Auckland University of Technology is the only provider of podiatry training in New Zealand. In 1976, podiatrists in New Zealand gained the legal right to use a local anaesthetic, and began to include minor surgical procedures on ingrown toenails in their scope of practice. They received the right to refer patients to radiologists for X-rays in 1984, and (with suitable training) to acquire licensing to take their own X-rays in 1989. Diagnostic radiographic training is now incorporated into the podiatric degree syllabus, and on successful completion of the course, graduates register with the New Zealand National Radiation Laboratory. United Kingdom The scope of practice of podiatrists in the UK varies depending on their education and training, but may include simple skin care, the use of prescription-only medicines, injection therapy, and non-invasive surgery such as nail resection and removal. Podiatrists also interface between patients and multidisciplinary teams, recognising systemic disease as it manifests in the foot and referring on to the appropriate health care professionals. To qualify as a podiatric surgeon, a podiatrist in the UK must undertake extensive postgraduate education and training, usually taking a minimum of 10 years to complete. Appropriately qualified podiatric surgeons may perform invasive bone and joint surgery. Legislation in the UK protects the professional titles 'chiropodist' and 'podiatrist', but does not distinguish between the two. Those using protected titles must be registered with the Health and Care Professions Council (HCPC). Registration is normally only granted to those holding a bachelor's degree from one of 13 recognized schools of podiatry in the UK. Professional bodies recognised by the HCPC are: The Society of Chiropodists and Podiatrists The Alliance of Private Sector Practitioners The Institute of Chiropodists and Podiatrists The British Chiropody and Podiatry Association In 1979, the Royal Commission on the National Health Service reported that about six and a half million NHS chiropody treatments were provided to just over one and a half million people in Great Britain in 1977, an increase of 19% over the number from three years before. 
Over 90% of patients receiving these treatments were aged 65 or over. At that time there were about 5,000 state registered chiropodists, but only about two-thirds worked for the NHS. The Commission agreed with the suggestion of the Association of Chief Chiropody Officers that more foot hygienists should be introduced, who could undertake, under the direction of a registered chiropodist, "nail cutting and such simple foot-care and hygiene as a fit person should normally carry out for himself." United States In the United States, medical and surgical care of the foot and ankle is mainly provided by two groups: podiatrists (with a Doctor of Podiatric Medicine degree) and orthopedic surgeons (with a Doctor of Medicine or Doctor of Osteopathic Medicine degree). In most states, their scope of practice is limited to the foot and ankle; however, some states include the leg, hand, or both. In order to be considered for admission to podiatric medical school, an applicant must first complete a minimum of 90 semester hours at the university level, or (more commonly), complete a bachelor's degree with an appropriate emphasis. In addition, potential students are required to take the Medical College Admission Test (MCAT). In 2019, the average MCAT for matriculants was 500 and 3.5 average undergraduate cGPA. The DPM degree itself takes a minimum of four years to complete. The first two years of podiatric medical school are similar to training that M.D. and D.O. students receive, but with greater emphasis on the foot and ankle. The four-year podiatric medical school is followed by a surgical residency to provide hands-on training. As of July 2013, all residency programs in podiatry were required to transition to a minimum of three years of post-doctoral training. This upgrading of training was spearheaded in California by the state Board of Podiatric Medicine (BPM) and its California Liaison Committee (CLC). BPM’s Executive Officer James H. Rathlesberger included it in the Federation of Podiatric Medical Boards’ Model Law, which he wrote before becoming FPMB president in 2000. Podiatric residents rotate through core areas of medicine and surgery. They work in such rotations as emergency medicine, internal medicine, infectious disease, behavioral medicine, physical medicine and rehabilitation, vascular surgery, general surgery, orthopedic surgery, plastic surgery, dermatology, and podiatric surgery and medicine. Fellowship training is available after residency in such fields such as geriatrics, foot and ankle traumatology, and infectious disease. Upon completion of their residency, podiatrist candidates are eligible to sit for examinations for certification by one of two specialty boards accredited by the Council on Podiatric Medical Education (CPME), which itself is overseen and approved by the Department of Education. These are the American Board of Podiatric Medicine (ABPM) and the American Board of Foot and Ankle Surgery (ABFAS). ABPM certification leads to fellowship in either the American Society of Podiatric Surgeons (ASPS) or the American College of Podiatric Medicine (ACPM). ABFAS certification leads to fellowship in the ASPS or the American College of Foot and Ankle Surgeons (ACFAS). ABPM is recognized by CPME as certification in primary podiatric medicine and orthopaedics and the ABFAS as certification in podiatric surgery. However, hospital credentialing committees often do not distinguish between the two. 
There are two surgical certifications under ABFAS: foot surgery, and reconstructive rearfoot/ankle (RRA) surgery. In order to be board-certified in RRA, the sitting candidate has to have already achieved board certification in foot surgery. To receive ABFAS certification, the candidate must pass the written examination, submit surgical logs indicating experience and variety, pass an oral examination, and complete a computer-based clinical simulation. Practice characteristics Podiatric physicians practice in a variety of different settings. Some practice solo in a private practice setting; some belong to larger group practices. There are podiatrists in larger multi-specialty practices as well (such as orthopedic groups or groups for the treatment of diabetes) or clinic practices (such as the Indian Health Service (IHS), the Rural Health Centers (RHC), or the Community Health Center (FQHC)). Some work for government organizations, such as for Veterans Affairs hospitals and clinics. Some podiatrists have primarily surgical practices. They may complete additional fellowship training in reconstruction of the foot and ankle from the effects of diabetes or physical trauma, or practice minimally invasive percutaneous surgery for cosmetic correction of hammer toes and bunions. Colleges and education There are 11 schools of podiatric medicine in the United States. These are governed by the American Association of Colleges of Podiatric Medicine (AACPM) and accredited by the Council on Podiatric Medical Education. Arizona School of Podiatric Medicine at Midwestern University Barry University School of Podiatric Medicine California School of Podiatric Medicine Des Moines University College of Podiatric Medicine and Surgery New York College of Podiatric Medicine Kent State University College of Podiatric Medicine Lake Erie College of Osteopathic Medicine School of Podiatric Medicine Dr. William M. Scholl College of Podiatric Medicine at Rosalind Franklin University of Medicine and Science Temple University School of Podiatric Medicine University of Texas Rio Grande Valley School of Podiatric Medicine College of Podiatric Medicine at Western University of Health Sciences Podiatric subspecialties Podiatrists treat a wide variety of foot and lower-extremity conditions through both nonsurgical and surgical approaches. While the terminology of subspecialties differ around the world, they generally fall into these categories: Reconstructive foot and ankle surgery Podiatric sports medicine (chronic overuse injuries and mechanical performance enhancement) Podiatric dermatology Lower extremity plastic and reconstructive surgery, limb salvage, and wound care Podopediatrics (podiatry in children) Forensic podiatry (the study of footprints, footwear, shoeprints and feet associated with crime scene investigations) Podiatric assistants work as a part of a podiatric medical team in a variety of clinical and non-clinical settings. Worldwide, there are common professional accreditation pathways to be a podiatric assistant; for instance, in Australia, the qualification is a Certificate IV in Allied Health Assistance specialising in podiatry. 
Podiatric assistants may specialize in many different fields, such as: Podiatric nurse Podiatric surgical nurse Foot carer Podiatric support worker Podiatric technician Podiatric hygienist Foot hygienist Podiatric medical assistant Professional societies and organizations Academy of Ambulatory Foot and Ankle Surgery (AAFAS) Alberta Podiatry Association (APA) Alpha Gamma Kappa fraternity Alliance of Private Sector Practitioners American Podiatric Medical Association (APMA) American Society of Podiatric Surgeons (ASPS) American Society of Forensic Podiatry American College of Foot and Ankle Surgeons (ACFAS) American Board of Foot and Ankle Surgery (ABFAS) American College of Podiatric Medicine (ACPM) American Board of Podiatric Medicine (ABPM American Board of Multiple Specialties in Podiatric Medicine American Board of Multiple Specialties in Podiatric Surgery American Academy of Podiatric Sports Medicine (AAPSM) American Society of Podiatric Dermatology (ASPD) Australian Podiatry Association (APODA) Association Belge des Podologues Canadian Podiatric Medical Association (CPMA) American Academy of Podiatric Practice Management (AAPPM) International Federation of Podiatrists – Fédération Internationale des Podologues (FIP-IFP) International Foot and Ankle Biomechanics Community (i-FAB) Student National Podiatric Medical Association (SNPMA) American Podiatric Medical Students' Association (APMSA) Australian College of Podiatric Surgeons (ACPS) Australian Podiatry Association (APodA) Australian Podiatry Council (APodC) Australasian Academy of Podiatric Sports Medicine (AAPSM) Australasian Podiatric Rheumatology Specialist Interest Group (APRSIG) Federation of Podiatric Medical Boards (FPMB) Institute of Chiropodists and Podiatrists (IOCP) Canadian Federation of Podiatric Medicine Royal College of Podiatry (RCoP)
Biology and health sciences
Fields of medicine
Health
594615
https://en.wikipedia.org/wiki/Ammonia%20solution
Ammonia solution
Ammonia solution, also known as ammonia water, ammonium hydroxide, ammoniacal liquor, ammonia liquor, aqua ammonia, aqueous ammonia, or (inaccurately) ammonia, is a solution of ammonia in water. It can be denoted by the symbol NH3(aq). Although the name ammonium hydroxide suggests a salt with the composition [NH4+][OH−], it is impossible to isolate samples of NH4OH. The NH4+ and OH− ions do not account for a significant fraction of the total amount of ammonia except in extremely dilute solutions. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% ammonia by weight) being the typical high-concentration commercial product. Basicity of ammonia in water In aqueous solution, ammonia deprotonates a small fraction of the water to give ammonium and hydroxide according to the following equilibrium: NH3 + H2O ⇌ NH4+ + OH−. In a 1 M ammonia solution, about 0.42% of the ammonia is converted to ammonium, equivalent to pH = 11.62, because [NH4+] = 0.0042 M, [OH−] = 0.0042 M, [NH3] = 0.9958 M, and pH = 14 + log10[OH−] = 11.62. The base ionization constant is Kb = [NH4+][OH−]/[NH3] = 1.77 × 10−5. Saturated solutions Like other gases, ammonia exhibits decreasing solubility in solvent liquids as the temperature of the solvent increases. Ammonia solutions decrease in density as the concentration of dissolved ammonia increases. At , the density of a saturated solution is 0.88 g/ml; it contains 35.6% ammonia by mass, 308 grams of ammonia per litre of solution, and has a molarity of approximately 18 mol/L. At higher temperatures, the molarity of the saturated solution decreases and the density increases. Upon warming of saturated solutions, ammonia gas is released. Applications In contrast to anhydrous ammonia, aqueous ammonia finds few non-niche uses outside of cleaning agents. Cleaning products Ammonia solutions are used as cleaning products for many surfaces and applications. Ammonia in water is sold as a cleaning agent by itself, usually labeled simply "ammonia", as well as in cleaning products combined with other ingredients. It may be sold plain, lemon-scented (and typically colored yellow), or pine-scented (green). Commonly available ammonia with soap added is known as "cloudy ammonia". Household ammonia ranges in concentration from 5% to 10% ammonia by weight. Because aqueous ammonia is a gas dissolved in water, as the water evaporates from a surface, the gas evaporates also, leaving the surface streak-free. Its most common uses are to clean glass, porcelain, and stainless steel. It is good at removing grease and is found in products for cleaning ovens and for soaking items to loosen baked-on grime. Experts also warn not to use ammonia-based cleaners on car touchscreens, because of the risk of damage to the screen's anti-glare and anti-fingerprint coatings. More concentrated solutions (higher than 10%) are used in professional and industrial cleaning products. US manufacturers of cleaning products are required to provide the product's material safety data sheet, which lists the concentration used. Solutions of ammonia can be dangerous: they are irritating to the eyes and mucous membranes (respiratory and digestive tracts) and, to a lesser extent, the skin. Experts advise that the chemical never be mixed with chlorine-containing products or strong oxidants, such as household bleach, because of the danger of generating toxic chloramine fumes. 
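As a numerical check of the basicity figures quoted earlier in this entry (a 1 M solution, Kb = 1.77 × 10−5), the short sketch below solves the equilibrium directly; it assumes the usual Kw = 10−14 at 25 °C, as the pH expression above does.

```python
import math

# Check of the equilibrium figures quoted in the "Basicity" paragraph:
# NH3 + H2O <=> NH4+ + OH-, with Kb = [NH4+][OH-]/[NH3] = 1.77e-5.
KB = 1.77e-5
C0 = 1.0   # total ammonia concentration, mol/L

# Solve x^2 / (C0 - x) = Kb for x = [OH-] = [NH4+] via the quadratic formula.
x = (-KB + math.sqrt(KB**2 + 4 * KB * C0)) / 2

fraction_ionized = x / C0
pH = 14 + math.log10(x)   # assumes Kw = 1e-14 (25 deg C)

print(f"{fraction_ionized:.2%}")   # ~0.42%
print(f"{pH:.2f}")                 # ~11.62
```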
Alkyl amine precursor In industry, aqueous ammonia can be used as a precursor to some alkyl amines, although anhydrous ammonia is usually preferred. Hexamethylenetetramine forms readily from aqueous ammonia and formaldehyde. Ethylenediamine forms from 1,2-dichloroethane and aqueous ammonia. Absorption refrigeration In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems was popular and widely used, but after the development of the vapor compression cycle it lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Both the Electrolux refrigerator and the Einstein refrigerator are well known examples of this application of the ammonia solution. Water treatment Ammonia is used to produce chloramine, which may be utilised as a disinfectant. In drinking water, chloramine is preferred over direct chlorination for its ability to remain active in stagnant water pipes longer, thereby reducing the risk of waterborne infections. Ammonia is used by aquarists for the purposes of setting up a new fish tank using an ammonia process called fishless cycling. This application requires that the ammonia contain no additives. Food production Baking ammonia (ammonium carbonate and ammonium bicarbonate) was one of the original chemical leavening agents. It was obtained from deer antlers. It is useful as a leavening agent, because ammonium carbonate is heat activated. This characteristic allows bakers to avoid both yeast's long proofing time and the quick CO2 dissipation of baking soda in making breads and cookies rise. It is still used to make ammonia cookies and other crisp baked goods, but its popularity has waned because of ammonia's off-putting smell and concerns over its use as a food ingredient compared to modern-day baking powder formulations. It has been assigned E number E527 for use as a food additive in the European Union. Aqueous ammonia is used as an acidity regulator to bring down the acid levels in food. It is classified in the United States by the Food and Drug Administration as generally recognized as safe (GRAS) when using the food grade version. Its pH control abilities make it an effective antimicrobial agent. Furniture darkening In furniture-making, ammonia fuming was traditionally used to darken or stain wood containing tannic acid. After being sealed inside a container with the wood, fumes from the ammonia solution react with the tannic acid and iron salts naturally found in wood, creating a rich, dark stained look to the wood. This technique was commonly used during the Arts and Crafts movement in furniture – a furniture style which was primarily constructed of oak and stained using these methods. Treatment of straw for cattle Ammonia solution is used to treat straw, producing "ammoniated straw" making it more edible for cattle. Laboratory use Aqueous ammonia is used in traditional qualitative inorganic analysis as a complexant and base. Like many amines, it gives a deep blue coloration with copper(II) solutions. Ammonia solution can dissolve silver oxide residues, such as those formed from Tollens' reagent. It is often found in solutions used to clean gold, silver, and platinum jewelry, but may have adverse effects on porous gem stones like opals and pearls.
Physical sciences
Specific bases
Chemistry
594964
https://en.wikipedia.org/wiki/Periodical%20cicadas
Periodical cicadas
The term periodical cicada is commonly used to refer to any of the seven species of the genus Magicicada of eastern North America, the 13- and 17-year cicadas. They are called periodical because nearly all individuals in a local population are developmentally synchronized and emerge in the same year. Although they are sometimes called "locusts", this is a misnomer, as cicadas belong to the taxonomic order Hemiptera (true bugs), suborder Auchenorrhyncha, while locusts are grasshoppers belonging to the order Orthoptera. Magicicada belongs to the cicada tribe Lamotialnini, a group of genera with representatives in Australia, Africa, and Asia, as well as the Americas. Magicicada species spend around 99.5% of their long lives underground in an immature state called a nymph. While underground, the nymphs feed on xylem fluids from the roots of broadleaf forest trees in the eastern United States. In the spring of their 13th or 17th year, mature cicada nymphs emerge between late April and early June (depending on latitude), synchronously and in tremendous numbers. The adults are active for only about four to six weeks after the unusually prolonged developmental phase. The males aggregate in chorus centers and call there to attract mates. Mated females lay eggs in the stems of woody plants. Within two months of the original emergence, the life cycle is complete and the adult cicadas die. Later in that same summer, the eggs hatch and the new nymphs burrow underground to develop for the next 13 or 17 years. Periodical emergences are also reported for the "World Cup cicada" Chremistica ribhoi (every 4 years) in northeast India and for a cicada species from Fiji, Raiateana knowlesi (every 8 years). Description The winged imago (adult) periodical cicada has two red compound eyes, three small ocelli, and a black dorsal thorax. The wings are translucent with orange veins. The underside of the abdomen may be black, orange, or striped with orange and black, depending on the species. Adults are typically , depending on species, generally about 75% the size of most of the annual cicada species found in the same region. Mature females are slightly larger than males. Magicicada males typically form large aggregations that sing in chorus to attract receptive females. Different species have different characteristic calling songs. The call of decim periodical cicadas is said to resemble someone calling "weeeee-whoa" or "Pharaoh". The cassini and decula periodical cicadas (including M. tredecula) have songs that intersperse buzzing and ticking sounds. Cicadas cannot sting and do not normally bite. Like other Auchenorrhyncha (true) bugs, they have mouthparts used to pierce plants and suck their sap. These mouthparts are used during the nymph stage to tap underground roots for water, minerals and carbohydrates and in the adult stage to acquire nutrients and water from plant stems. An adult cicada's proboscis can pierce human skin when it is handled, which is painful but in no other way harmful. Cicadas are neither venomous nor poisonous and there is no evidence that they or their bites can transmit diseases. Oviposition by female periodical cicadas damages pencil-sized twigs of woody vegetation. Mature trees rarely suffer lasting damage, although peripheral twig die-off or "flagging" may result. Planting young trees or shrubs is best postponed until after an expected emergence of the periodical cicadas. 
Existing young trees or shrubs can be covered with cheesecloth or other mesh netting with holes that are in diameter or smaller to prevent damage during the oviposition period, which begins about a week after the first adults emerge and lasts until all females have died. Life cycle Nearly all cicadas spend years underground as juveniles, before emerging above ground for a short adult stage of several weeks to a few months. The seven periodical cicada species are so named because, in any one location, all members of the population are developmentally synchronized—they emerge as adults all at once in the same year. This periodicity is especially remarkable because their life cycles are so long—13 or 17 years. In contrast, for nonperiodical species, some adults mature each summer and emerge while the rest of the population continues to develop underground. Many people refer to these nonperiodical species as annual cicadas because some are seen every summer. This may lead some to conclude that the non-periodic cicadas have life cycles of 1 year. This is incorrect. The few known life cycles of "annual" species range from two to 10 years, although some could be longer. The nymphs of the periodical cicadas live underground, usually within of the surface, feeding on the juices of plant roots. The nymphs of the periodical cicada undergo five instar stages in their development underground. The difference in the 13- and 17-year life cycle is said to be the time needed for the second instar to mature. When underground the nymphs move deeper below ground, detecting and then feeding on larger roots as they mature. The nymphs seem to track the number of years by detecting the changes in the xylem caused by abscission of the tree. This was supported experimentally by inducing a grove of trees to go through two cycles of losing and re-growing leaves in one calendar year. Cicadas feeding on those trees emerged after 16 years instead of 17. In late April to early June of the emergence year, mature fifth-instar nymphs construct tunnels to the surface and wait for the soil temperature to reach a critical value. In some situations, nymphs extend mud turrets up to several inches above the soil surface. The function of these turrets is not known, but the phenomenon has been observed in some nonperiodical cicadas, as well as other tunneling insects. The nymphs first emerge on a spring evening when the soil temperature at around of depth is above . The crepuscular emergence is thought to be related to the fact that maximum soil temperatures lag behind maximum insolation by several hours, conveniently providing some protection for the flightless nymphs against diurnal sight predators such as birds. For the rest of their lives the mature periodical cicadas will be strongly diurnal, with song often nearly ceasing at night. During most years in the United States this emergence cue translates to late April or early May in the far south, and late May to early June in the far north. Emerging nymphs may molt in the grass or climb from a few centimeters to more than 100 feet (30 m) to find a suitable vertical surface to complete their transformation into adults. After securing themselves to tree trunks, the walls of buildings, telephone poles, fenceposts, hanging foliage, and even stationary automobile tires, the nymphs undergo a final molt and then spend about six days in the trees to await the complete hardening of their wings and exoskeletons. 
Just after emerging from this final molt the teneral adults are off-white, but darken within an hour. Adult periodical cicadas live for only a few weeks; by mid-July, all have died. Their ephemeral adult forms are adapted for one purpose: reproduction. Like other cicadas the males produce a very loud species-specific mating song using their tymbals. Singing males of the same Magicicada species tend to form aggregations called choruses whose collective songs are attractive to females. Males in these choruses alternate bouts of singing with short flights from tree to tree in search of receptive females. Most matings occur in so-called chorus trees. Receptive females respond to the calls of conspecific males with timed wing-flicks (visual signaling is apparently a necessity in the midst of the males' song) which attract the males for mating. The sound of a chorus can be literally deafening and depending on the number of males composing it, may reach 100 dB in the immediate vicinity. In addition to their "calling" or "congregating" songs, males produce a distinctive courtship song when approaching an individual female. Both males and females can mate multiple times, although most females seem to mate only once . After mating, the female cuts V-shaped slits in the bark of young twigs and lays about 20 eggs in each, for a total clutch of 600 or more. After about 6–10 weeks, the eggs hatch and the nymphs drop to the ground, where they burrow and begin another 13- or 17-year cycle. Predator satiation survival strategy The nymphs emerge in very large numbers at nearly the same time, sometimes more than 1.5 million individuals per acre (>370/m2). Their mass emergence is, among other things, a survival trait called predator satiation. The details of this strategy are simple: for the first week after emergence the periodical cicadas are easy prey for reptiles, birds, squirrels, cats, dogs and other small and large mammals. In their present range the periodical cicadas have no effective predators, and all other animals feeding on them after emergence quickly become irrelevant with respect to their impact on total cicada populations. Early entomologists maintained that the cicadas' overall survival mechanism was simply to overwhelm predators by their sheer numbers, ensuring the survival of most of the individuals. Later, the fact that the developmental periods were each a prime number of years (13 and 17) was hypothesized to be a predator avoidance strategy, one adopted to eliminate the possibility of potential predators receiving periodic population boosts by synchronizing their own generations to divisors of the cicada emergence period. On this prime number hypothesis, a predator with a three-year reproductive cycle, which happened to coincide with a brood emergence in a given year, will have gone through either four cycles plus one year (12 + 1) or five cycles plus two years (15 + 2) by the next time that brood emerges. In this way prime-numbered broods exhibit a strategy to ensure that they nearly always emerge when some portion of the predators they will confront are sexually immature and therefore incapable of taking maximum advantage of the momentarily limitless food supply. Another viewpoint turns this hypothesis back onto the cicada broods themselves. It posits that the prime-numbered developmental times represent an adaptation to prevent hybridization between broods. 
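The predator-cycle arithmetic sketched above (a three-year predator meeting a 13- or 17-year brood only after 12 + 1 or 15 + 2 years) generalizes to a simple least-common-multiple argument. The snippet below is an illustration of that reasoning only, not a model drawn from the cited literature; the predator cycle lengths are arbitrary examples.

```python
from math import gcd

# Illustration of the prime-cycle argument: how many years pass before a brood
# with an N-year cycle again coincides with a predator population peaking every
# k years. Prime cycle lengths make such coincidences as infrequent as possible.

def years_between_coincidences(brood_cycle: int, predator_cycle: int) -> int:
    """Least common multiple of the two cycle lengths."""
    return brood_cycle * predator_cycle // gcd(brood_cycle, predator_cycle)

for brood in (12, 13, 16, 17):
    print(brood, [years_between_coincidences(brood, k) for k in (2, 3, 4, 5)])

# 12 [12, 12, 12, 60]   <- a composite cycle meets 2-, 3- and 4-year predators every emergence
# 13 [26, 39, 52, 65]   <- a prime cycle pushes every coincidence out to k * 13 years
# 16 [16, 48, 16, 80]
# 17 [34, 51, 68, 85]
```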
It is hypothesized that this unusual method of sequestering different populations in time arose when conditions were extremely harsh. Under those conditions the mutation producing extremely long development times became so valuable that cicadas which possessed it found it beneficial to protect themselves from mating with cicadas that lacked the long-development trait. In this way, the long-developing cicadas retained a trait allowing them to survive the period of heavy selection pressure (i.e., harsh conditions) brought on by isolated and lowered populations during the period immediately following the retreat of glaciers (in the case of periodical cicadas, the North American Pleistocene glacial stadia). When seen in this light, their mass emergence and the predator satiation strategy that follows from this serves only to maintain the much longer-term survival strategy of protecting their long-development trait from hybridizations that might dilute it. This hybridization hypothesis was subsequently supported through a series of mathematical models and remains the most widely-accepted explanation for the unusually lengthy and mathematically sophisticated survival strategy of these insects. The length of the cycle was hypothesized to be controlled by a single gene locus, with the 13-year cycle dominant to the 17-year one, but this interpretation remains controversial and unsubstantiated at the level of DNA. Impact on other populations Cycles in cicada populations are significant enough to affect other animal and plant populations. For example, tree growth has been observed to decline the year before the emergence of a brood because of the increased feeding on roots by the growing nymphs. Moles, which feed on nymphs, have been observed to do well during the year before an emergence, but suffer population declines the following year because of the reduced food source. Wild turkey populations respond favorably to increased nutrition in their food supply from gorging on cicada adults on the ground at the end of their life cycles. Uneaten carcasses of periodical cicadas decompose on the ground, providing a resource pulse of nutrients to the forest community. Cicada broods may also have a negative impact. Eastern gray squirrel populations have been negatively affected, because the egg-laying activity of female cicadas damaged upcoming mast crops. Broods Periodical cicadas are grouped into geographic broods based on the calendar year when they emerge. For example, in 2014, the 13-year Brood XXII emerged in Louisiana and the 17-year Brood III emerged in western Illinois and eastern Iowa. In 1907, entomologist Charles Lester Marlatt assigned Roman numerals to 30 different broods of periodical cicadas: 17 distinct broods with a 17-year life cycle, to which he assigned brood numbers I through XVII (with emerging years 1893 through 1909); plus 13 broods with a 13-year cycle, to which he assigned brood numbers XVIII through XXX (1893 through 1905). Marlatt noted that the 17-year broods are generally more northerly than are the 13-year broods. Many of these hypothetical 30 broods have not been observed. Marlatt noted that some cicada populations (especially Brood XI in the valley of the Connecticut River in Massachusetts and Connecticut) were disappearing, a fact that he attributed to the reduction in forests and the introduction and proliferation of insect-eating "English sparrows" (House sparrows, Passer domesticus) that had followed the European settlement of North America. 
Two of the broods that Marlatt named (Broods XI and XXI) have become extinct. His numbering scheme has been retained for convenience (and because it clearly separates 13- and 17-year life cycles), although only 15 broods are known to survive. Periodical cicadas that emerge outside the expected time frame are called stragglers. Although they can emerge at any time, they usually do so one or four years before or after most other members of their broods emerge. Stragglers with a 17-year life cycle typically emerge four years early. Those with a 13-year cycle typically emerge four years late. The emergence of stragglers may in theory be indicative of a brood shifting from a 17-year cycle to a 13-year one. Brood XIII of the 17-year cicada, which reputably has the largest emergence of cicadas by size known anywhere, and Brood XIX of the 13-year cicada, arguably the largest (by geographic extent) of all periodical cicada broods, were expected to emerge together in 2024 for the first time since 1803. However, the two broods were not expected to overlap except potentially in a thin area in central and eastern Illinois (Macon, Sangamon, Livingston, and Logan counties). The next such dual emergence of these two particular broods will occur in 2245, 221 years after 2024. Many other 13-year and 17-year broods emerge during the same years, but the broods are not geographically close. Map of brood locations Taxonomy Phylogeny Magicicada is a member of the cicada tribe Lamotialnini, which is distributed globally aside from South America. Despite Magicicada being only found in eastern North America, its closest relatives are thought to be the genera Tryella and Aleeta from Australia, with Magicicada being sister to the clade containing Tryella and Aleeta. Within the Americas, its closest relative is thought to be the genus Chrysolasia from Guatemala. Species Seven recognized species are placed within Magicicada—three 17-year species and four 13-year species. These seven species are also sometimes grouped differently into three subgroups, the so-called Decim species group, Cassini species group, and Decula species group, reflecting strong similarities of each 17-year species with one or more species with a 13-year cycle. Evolution and speciation Not only are the periodical cicada life cycles curious for their use of the prime numbers 13 or 17, but their evolution is also intricately tied to one- and four-year changes in their life cycles. One-year changes are less common than four-year changes and are probably tied to variation in local climatic conditions. Four-year early and late emergences are common and involve a much larger proportion of the population than one-year changes. The different species are well-understood to have originated from a process of allochronic speciation, in which species subpopulations that are isolated from one another in time eventually become reproductively isolated as well. Research suggests that in extant periodical cicadas, the 13- and 17-year life cycles evolved at least eight different times in the last 4 million years and that different species with identical life cycles developed their overlapping geographic distribution by synchronizing their life cycles to the existing dominant populations. The same study estimates that the Decim species group split from the common ancestor of the Decula plus Cassini species groups around 4 million years ago (Mya). At around 2.5 Mya, the Cassini and Decula groups split from each other. The Sota et al. 
(2013) paper also calculates that the first separation of extant 13-year cicadas from 17-year cicadas took place in the Decim group about 530,000 years ago when the southern M. tredecim split from the northern M. septendecim. The second noteworthy event took place about 320,000 years ago with the split of the western Cassini group from its conspecifics to the east. The Decim and the Decula clades experienced similar western splits, but these are estimated to have taken place 270,000 and 230,000 years ago, respectively. The 13- and 17-year splits in Cassini and Decula took place after these events. The 17-year cicadas largely occupy formerly glaciated territory, and as a result their phylogeographic relationships reflect the effects of repeated contraction into glacial refugia (small islands of suitable habitat) and subsequent re-expansion during multiple interglacial periods. In each species group, Decim, Cassini, and Decula, the signature of the glacial periods is manifested in three phylogeographic genetic subdivisions: one subgroup east of the Appalachians, one midwestern, and one on the far western edge of their range. The Sota et al. data suggest that the founders of the southern 13-year cicada populations originated from the Decim group. These were later joined by Cassini originating from the western Cassini clade and Decula originating from eastern, middle, and western Decula clades. As Cassini and Decula invaded the south, they became synchronized with the resident M. tredecim. These Cassini and Decula are known as M. tredecassini and M. tredecula. More data is needed to lend support to this hypothesis and others hypotheses related to more recent 13- and 17-year splits involving M. neotredecim and M. tredecim. Distribution The 17-year periodical cicadas are distributed from the Eastern states, across the Ohio Valley, to the Great Plains states and north to the edges of the Upper Midwest, while the 13-year cicadas occur in the Southern and Mississippi Valley states, with some slight overlap of the two groups. For example, broods IV (17-year cycle) and XIX (13-year cycle) overlap in western Missouri and eastern Oklahoma. Their emergences should again coincide in 2219, 2440, 2661, etc., as they did in 1998 (although distributions change slightly from generation to generation and older distribution maps can be unreliable). An effort sponsored by the National Geographic Society is underway as of April 2021 at the University of Connecticut to generate new distribution maps of all periodical cicada broods. The effort uses crowdsourced data and records that entomologists and volunteers collect. Parasites, pests and pathogens Although it usually feeds on oak leaf gall midge (Polystepha pilulae) larvae and other insects, the oak leaf gall mite ("itch mite") (Pyemotes herfsi) becomes an ectoparasite of periodical cicada eggs when these are available. After cicadas deposit their eggs in the branches of trees, feeding mites reproduce and their numbers increase. After cicada emergences have ended, many people have therefore developed rashes, pustules, intense itching and other mite bite sequelae on their upper torso, head, neck and arms. Rashes and itching peaked after several days, but lasted as long as two weeks. Anti-itch treatments, including calamine lotion and topical steroid creams, did not relieve the itching. Massospora cicadina is a pathogenic fungus that infects only 13 and 17 year periodical cicadas. 
Infection results in a "plug" of spores that replaces the end of the cicada's abdomen while it is still alive, leading to infertility, disease transmission, and eventual death of the cicada. Symbiosis Magicicada are unable to obtain all of the essential amino acids from the dilute xylem fluid that they feed upon, and instead rely upon endosymbiotic bacteria that provide essential vitamins and nutrients for growth. Bacteria in the genus Hodgkinia live inside periodical cicadas, and grow and divide for years before punctuated cicada reproduction events impose natural selection on these bacteria to maintain a mutually beneficial relationship. As a result, the genome of Hodgkinia has fractionated into three independent bacterial species each containing only a subset of genes essential for this symbiosis. The host requires all three subgroups of symbionts, as only the complete complement of all three subgroups provides the host with all its essential nutrients. The Hodgkinia–Magicicada symbiosis is a powerful example of how bacterial endosymbionts drive the evolution of their hosts. History The first known account of a large emergence of cicadas appeared in a 1633 report by William Bradford, the governor of the Plymouth Colony, which had been established in 1620 within the future state of Massachusetts. After describing a "pestilent fever" that had swept through the colony and neighboring Indians, the report stated: It is to be observed that, the spring before this sickness, there was a numerous company of Flies which were like for bigness unto wasps or Bumble-Bees; they came out of little holes in the ground, and did eat up the green things, and made such a constant yelling noise as made the woods ring of them, and ready to deafen the hearers; they were not any seen or heard by the English in this country before this time; but the Indians told them that sickness would follow, and so it did, very hot, in the months of June, July, and August of that summer. (Elaborating on an observation that Marlatt reported in 1907, Gene Kritsky has suggested that Bradford's report is misdated, as Broods XI and XIV would have emerged in Plymouth in 1631 and 1634, respectively, while no presently known brood would have emerged there in 1633.) Historical accounts cite reports of 15- to 17-year recurrences of enormous numbers of noisy emergent cicadas ("locusts") written as early as 1733. John Bartram, a noted Philadelphia botanist and horticulturist, was among the early writers that described the insect's life cycle, appearance and characteristics. On May 9, 1715, Andreas Sandel, the pastor of Philadelphia's "Gloria Dei" Swedish Lutheran Church, described in his journal an emergence of Brood X. Pehr Kalm, a Finnish naturalist visiting Pennsylvania and New Jersey in 1749 on behalf of the Royal Swedish Academy of Sciences, observed in late May another emergence of that brood. When reporting the event in a paper that a Swedish academic journal published in 1756, Kalm wrote: Kalm then described Sandel's report and one that he had obtained from Benjamin Franklin that had recorded in Philadelphia the emergence from the ground of large numbers of cicadas during early May 1732. He noted that the people who had prepared these documents had made no such reports in other years. Kalm further noted that others had informed him that they had seen cicadas only occasionally before the insects emerged from the ground in Pennsylvania in large swarms on May 22, 1749. 
He additionally stated that he had not heard any cicadas in Pennsylvania and New Jersey in 1750 in the same months and areas in which he had heard many in 1749. The 1715 and 1732 reports, when coupled with his own 1749 and 1750 observations, supported the previous "general opinion" that he had cited. Kalm summarized his findings in a book translated into English and published in London in 1771, stating: Based on Kalm's account and a specimen that Kalm had provided, in 1758 Carl Linnaeus named the insect Cicada septendecim in the tenth edition of his Systema Naturae. Moses Bartram, a son of John Bartram, described the next appearance of the brood (Brood X) that Kalm had observed in 1749 in an article entitled Observations on the cicada, or locust of America, which appears periodically once in 16 or 17 years that he wrote in 1766. Bartram's article, which a London journal published in 1768, noted that upon hatching from eggs deposited in the twigs of trees, the young insects ran down to the earth and "entered the first opening that they could find". He reported that he had been able to discover them below the surface, but that others had reportedly found them deep. In 1775, Thomas Jefferson recorded in his "Garden Book" Brood II's 17-year periodicity, writing that an acquaintance remembered "great locust years" in 1724 and 1741, that he and others recalled another such year in 1758 and that the insects had again emerged from the ground at Monticello in 1775. He noted that the females lay their eggs in the small twigs of trees while above ground. The 1780 emergence of the Brood VII cicadas (also known as the Onondaga brood) during the American Revolutionary War, coincided with the aftermath of the military operation known as the Sullivan Expedition which devastated the indigenous Onondagan communities and destroyed their crops. The sudden arrival of such a substantial quantity of the cicadas provided a source of sustenance for the Onondaga people who were experiencing severe food insecurity following the Sullivan campaigns and the subsequent brutal winter. The seemingly miraculous arrival of the cicadas is commemorated by the Onondaga as though it were an intervention by the Creator to ensure their survival after such a traumatizing, catastrophic event. In April 1800, Benjamin Banneker, who lived near Ellicott's Mills, Maryland, wrote in his record book that he recalled a "great locust year" in 1749, a second in 1766 during which the insects appeared to be "full as numerous as the first", and a third in 1783. He predicted that the insects (Brood X) "may be expected again in they year 1800 which is Seventeen Since their third appearance to me". Describing an effect that the pathogenic fungus, Massospora cicadina, has on its host, Banneker's record book stated that the insects:... begin to Sing or make a noise from first they come out of the Earth till they die. The hindermost part rots off, and it does not appear to be any pain to them, for they still continue on Singing till they die. In 1845, D.L. Pharas of Woodville, Mississippi, announced the 13-year periodicity of the southern cicada broods in a local newspaper, the Woodville Republican. In 1858, Pharas placed the title Cicada tredecim in a subsequent article that the newspaper published on the subject. Ten years later, the American Entomologist published in December 1868 a paper that Benjamin Dann Walsh and Charles Valentine Riley had written that also reported the 13-year periodicity of the southern cicada broods. 
Walsh's and Riley's paper, which Scientific American reprinted in January 1869, illustrated the interior and exterior characteristics of the nymphs' emergence holes and raised turrets. Their article, which did not cite Pharas' reports, was the first description of the southern cicadas' 13-year periodicity to receive widespread attention. Riley later acknowledged Pharas' work in an 1885 publication on periodical cicadas that he authored. In 1998, a brood of 17-year cicadas (Brood IV) emerged in western Missouri and a brood of 13-year cicadas (Brood XIX) emerged over much of the rest of the state. Each brood is the state's largest of its type. The territories of the two broods overlap (converge) in some areas, and this joint emergence was the state's first such convergence since 1777. In 2007 and 2008, Edmond Zaborski, a research scientist with the Illinois Natural History Survey, reported that the oak leaf gall mite ("itch mite") (Pyemotes herfsi) is an ectoparasite of periodical cicada eggs. While investigating, with the help of others, the mysterious itchy welts and rashes that people were developing in Chicago's suburbs after the end of a 2007 Brood XIII emergence, he attributed the outbreak to bites by mites whose populations had quickly increased while parasitizing those eggs. Similar events occurred in Cincinnati after a Brood XIV emergence ended in 2008, in Cleveland and elsewhere in northern and eastern Ohio after a Brood V emergence ended in 2016, in the Washington, D.C., area after a Brood X emergence ended in 2021, and again in the Chicago area after the next Brood XIII emergence ended in 2024. Use as human food Magicicada species are edible when cooked, at least for people who lack allergies to similar foods. A number of recipes are available for this purpose. Some recommend collecting the insects shortly after molting, while they are still soft. Others prefer emergent nymphs or hardened adults. The insects have historically been eaten by Native Americans, who fried them or roasted them in hot ovens, stirring them until they were well browned. Marlatt wrote in 1907:
Biology and health sciences
Hemiptera (true bugs)
Animals
595183
https://en.wikipedia.org/wiki/Calcium%20sulfate
Calcium sulfate
Calcium sulfate (or calcium sulphate) is the inorganic compound with the formula CaSO4 and related hydrates. In the form of γ-anhydrite (the anhydrous form), it is used as a desiccant. One particular hydrate is better known as plaster of Paris, and another occurs naturally as the mineral gypsum. It has many uses in industry. All forms are white solids that are poorly soluble in water. Calcium sulfate causes permanent hardness in water. Hydration states and crystallographic structures The compound exists in three levels of hydration corresponding to different crystallographic structures and to minerals: CaSO4 (anhydrite): anhydrous state. The structure is related to that of zirconium orthosilicate (zircon): Ca2+ is 8-coordinate, SO42− is tetrahedral, and O is 3-coordinate. CaSO4·2H2O (gypsum and selenite (mineral)): dihydrate. CaSO4·½H2O (bassanite): hemihydrate, also known as plaster of Paris. Specific hemihydrates are sometimes distinguished: α-hemihydrate and β-hemihydrate. Uses The main use of calcium sulfate is to produce plaster of Paris and stucco. These applications exploit the fact that calcium sulfate that has been powdered and calcined forms a moldable paste upon hydration and hardens as crystalline calcium sulfate dihydrate. It is also convenient that calcium sulfate is poorly soluble in water and does not readily dissolve in contact with water after its solidification. Hydration and dehydration reactions With judicious heating, gypsum converts to the partially dehydrated mineral called bassanite or plaster of Paris. This material has the formula CaSO4·(nH2O), where 0.5 ≤ n ≤ 0.8. Temperatures between are required to drive off the water within its structure. The details of the temperature and time depend on ambient humidity. Temperatures as high as are used in industrial calcination, but at these temperatures γ-anhydrite begins to form. The heat energy delivered to the gypsum at this time (the heat of hydration) tends to go into driving off water (as water vapor) rather than increasing the temperature of the mineral, which rises slowly until the water is gone, then increases more rapidly. The equation for the partial dehydration is: CaSO4 · 2 H2O   →   CaSO4 · ½ H2O + 1½ H2O↑ The endothermic property of this reaction is relevant to the performance of drywall, conferring fire resistance to residential and other structures. In a fire, the structure behind a sheet of drywall will remain relatively cool as water is lost from the gypsum, thus preventing (or substantially retarding) damage to the framing (through combustion of wood members or loss of strength of steel at high temperatures) and consequent structural collapse. But at higher temperatures, calcium sulfate will release oxygen and act as an oxidizing agent. This property is used in aluminothermy. In contrast to most minerals, which when rehydrated simply form liquid or semi-liquid pastes, or remain powdery, calcined gypsum has an unusual property: when mixed with water at normal (ambient) temperatures, it quickly reverts chemically to the preferred dihydrate form, while physically "setting" to form a rigid and relatively strong gypsum crystal lattice: CaSO4 · ½ H2O + 1½ H2O   →   CaSO4 · 2 H2O This reaction is exothermic and is responsible for the ease with which gypsum can be cast into various shapes including sheets (for drywall), sticks (for blackboard chalk), and molds (to immobilize broken bones, or for metal casting). Mixed with polymers, it has been used as a bone repair cement.
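As a rough numerical check on the partial dehydration above (a back-of-the-envelope calculation using standard atomic masses, not a figure from this article):

M(\mathrm{CaSO_4\cdot 2H_2O}) \approx 172.2\ \mathrm{g\,mol^{-1}}, \qquad M(\mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O}) \approx 145.1\ \mathrm{g\,mol^{-1}}, \qquad \frac{1.5 \times 18.0}{172.2} \approx 0.16,

so calcining gypsum to plaster of Paris removes roughly 16% of the starting mass as steam.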
Small amounts of calcined gypsum are added to earth to create strong structures directly from cast earth, an alternative to adobe (which loses its strength when wet). The conditions of dehydration can be changed to adjust the porosity of the hemihydrate, resulting in the so-called α- and β-hemihydrates (which are more or less chemically identical). On heating to , the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05) is produced. γ-Anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C, the completely anhydrous form called β-anhydrite or "natural" anhydrite is formed. Natural anhydrite does not react with water, even over geological timescales, unless very finely ground. The variable composition of the hemihydrate and γ-anhydrite, and their easy inter-conversion, is due to their nearly identical crystal structures containing "channels" that can accommodate variable amounts of water, or other small molecules such as methanol. Food industry The calcium sulfate hydrates are used as a coagulant in products such as tofu. For the FDA, it is permitted in cheese and related cheese products; cereal flours; bakery products; frozen desserts; artificial sweeteners for jelly & preserves; condiment vegetables; and condiment tomatoes and some candies. It is known in the E number series as E516, and the UN's FAO knows it as a firming agent, a flour treatment agent, a sequestrant, and a leavening agent. Dentistry Calcium sulfate has a long history of use in dentistry. It has been used in bone regeneration as a graft material and graft binder (or extender) and as a barrier in guided bone tissue regeneration. It is a biocompatible material and is completely resorbed following implantation. It does not evoke a significant host response and creates a calcium-rich milieu in the area of implantation. Desiccant When sold at the anhydrous state as a desiccant with a color-indicating agent under the name Drierite, it appears blue (anhydrous) or pink (hydrated) due to impregnation with cobalt(II) chloride, which functions as a moisture indicator. Sulfuric acid production Up to the 1970s, commercial quantities of sulfuric acid were produced from anhydrous calcium sulfate. Upon being mixed with shale or marl, and roasted at 1400°C, the sulfate liberates sulfur dioxide gas, a precursor to sulfuric acid. The reaction also produces calcium silicate, used in cement clinker production. Some component reactions pertaining to calcium sulfate: Production and occurrence The main sources of calcium sulfate are naturally occurring gypsum and anhydrite, which occur at many locations worldwide as evaporites. These may be extracted by open-cast quarrying or by deep mining. World production of natural gypsum is around 127 million tonnes per annum. In addition to natural sources, calcium sulfate is produced as a by-product in a number of processes: In flue-gas desulfurization, exhaust gases from fossil-fuel power stations and other processes (e.g. cement manufacture) are scrubbed to reduce their sulfur dioxide content, by injecting finely ground limestone: Related sulfur-trapping methods use lime and some produces an impure calcium sulfite, which oxidizes on storage to calcium sulfate. In the production of phosphoric acid from phosphate rock, calcium phosphate is treated with sulfuric acid and calcium sulfate precipitates. The product, called phosphogypsum is often contaminated with impurities making its use uneconomic. 
In the production of hydrogen fluoride, calcium fluoride is treated with sulfuric acid, precipitating calcium sulfate. In the refining of zinc, solutions of zinc sulfate are treated with hydrated lime to co-precipitate heavy metals such as barium. Calcium sulfate can also be recovered and re-used from scrap drywall at construction sites. These precipitation processes tend to concentrate radioactive elements in the calcium sulfate product. This issue is particular with the phosphate by-product, since phosphate ores naturally contain uranium and its decay products such as radium-226, lead-210 and polonium-210. Extraction of uranium from phosphorus ores can be economical on its own depending on prices on the uranium market or the separation of uranium can be mandated by environmental legislation and its sale is used to recover part of the cost of the process. Calcium sulfate is also a common component of fouling deposits in industrial heat exchangers, because its solubility decreases with increasing temperature (see the specific section on the retrograde solubility). Solubility The solubility of calcium sulfate decreases as temperature increases. This behaviour ("retrograde solubility") is uncommon: dissolution of most of the salts is endothermic and their solubility increases with temperature. The retrograde solubility of calcium sulfate is also responsible for its precipitation in the hottest zone of heating systems and for its contribution to the formation of scale in boilers along with the precipitation of calcium carbonate whose solubility also decreases when CO2 degasses from hot water or can escape out of the system.
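Returning to the by-product routes described under "Production and occurrence" above, they can be summarized with generic textbook equations; these are written from general chemistry rather than taken from this article, and real feedstocks (for example, fluorapatite rather than pure calcium phosphate) are more complicated:

\mathrm{CaCO_3 + SO_2 + \tfrac{1}{2}O_2 + 2\,H_2O \longrightarrow CaSO_4\cdot 2H_2O + CO_2} (flue-gas desulfurization with limestone)
\mathrm{Ca_3(PO_4)_2 + 3\,H_2SO_4 + 6\,H_2O \longrightarrow 2\,H_3PO_4 + 3\,CaSO_4\cdot 2H_2O} (phosphogypsum from phosphoric acid production, simplified)
\mathrm{CaF_2 + H_2SO_4 \longrightarrow 2\,HF + CaSO_4} (hydrogen fluoride production)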
Physical sciences
Sulfuric oxyanions
Chemistry
595716
https://en.wikipedia.org/wiki/Stilbite
Stilbite
Stilbite is the name of a series of tectosilicate minerals of the zeolite group. Prior to 1997, stilbite was recognized as a mineral species, but a reclassification in 1997 by the International Mineralogical Association changed it to a series name, with the mineral species being named: Stilbite-Ca Stilbite-Na, sometimes also called stiblite Stilbite-Ca, by far the more common of the two, is a hydrous calcium sodium and aluminium silicate, NaCa4(Si27Al9)O72·28(H2O). In the case of stilbite-Na, sodium dominates over calcium. The species are visually indistinguishable, and the series name stilbite is still used whenever testing has not been performed. History At one time heulandite and stilbite were considered to be identical minerals. After they were found to be two separate species, in 1818, the name desmine ("a bundle") was proposed for stilbite, and this name is still employed in Germany. The English name "stilbite" is from the Greek stilbein = to shine, because of the pearly luster of the {010} faces. Chemistry and related species Stilbite shows a wide variation in exchangeable cations: silicon and aluminium ions occupy equivalent sites and can substitute for each other. Since silicon and aluminium have a different charge (Si4+ and Al3+) the ions occupying the sodium/calcium site have to adjust to maintain charge balance. There is a continuous series between stellerite, whose formula can be written as Ca4(Si28Al8)O72·28(H2O), and stilbite, and another continuous series between stilbite and barrerite, Na8(Si28Al8)O72·26(H2O). Epistilbite is a distinct zeolite species unrelated to stilbite. Crystal class Stilbite is usually monoclinic 2/m, meaning that it has one twofold axis of rotational symmetry perpendicular to a mirror plane. The twofold axis is the crystal axis b, and the a and c crystal axes lie in the mirror plane. For a monoclinic crystal a and c are inclined to each other at an angle β which is not a right angle. For stilbite β is nearly 130°. Stilbite crystals, however, appear to be almost orthorhombic, and a larger unit cell can be chosen, containing two formula units (Z = 2) such that resembles an orthorhombic cell, with all three crystal axes very nearly mutually perpendicular. The mineral is said to be pseudo-orthorhombic. Non-endmember forms of stilbite may be triclinic or even truly orthorhombic, indeed the framework can have symmetry ranging from orthorhombic to triclinic in a single crystal. Habit Crystals are typically thin tabular, flattened parallel to the dominant cleavage and elongated along the a axis. Aggregates may be sheaf-like or in bow-ties, also fibrous and globular. Twinning, cruciform and penetration, is extremely common on {001}. Physical and optical properties The color is usually colorless or white, also yellow, brown, pink, salmon, orange, red, green, blue or black. The luster is generally vitreous, and on the perfect cleavage parallel to the plane of symmetry it is markedly pearly. The streak is white and crystals are transparent to translucent. The hardness is to 4 and the specific gravity 2.12 to 2.22. Cleavage is perfect on {010}, poor on {001}. The mineral is brittle, with a conchoidal or uneven fracture. It is not radioactive. Stilbite is biaxial (-) with refractive indices: Nx = 1.479 to 1.492, Ny = 1.485 to 1.500, Nz = 1.489 to 1.505 Nx = 1.484 to 1.500, Ny = 1.492 to 1.507, Nz = 1.494 to 1.513 Unit cell and structure Where sources give cell parameters for stilbite-Na, they are the same as those for stilbite-Ca. 
The unit cell can be considered as a monoclinic cell with β close to 130° and one formula unit per unit cell (Z = 1), or as a larger pseudo-orthorhombic cell with β close to 90° and Z = 2. Cell Parameters for the monoclinic cell: a = 13.595 to 13.69 Å, b = 18.197 to 18.31 Å, c = 11.265 to 11.30 Å, β = 127.94 to 128.1° a = 13.63 Å, b = 18.17 Å, c = 11.31 Å, β = 129.166° a = 13.60 to 13.69 Å, b = 18.20 to 18.31 Å, c = 11.27 Å, β = 128° Cell parameters for the pseudo-orthorhombic cell: a = 13.595 to 13.69 Å, b = 18.197 to 18.31 Å, c = 17.775 to 17.86 Å, β = 90.00 to 90.91° a = 13.595 to 13.657 Å, b = 18.197 to 18.309 Å, c = 17.775 to 17.842 Å, β = 90:05 to 90.91° (Z is doubled to Z = 4 because the formula unit halved to NaCa2Al5Si13O36.14H2O) a=13.69 Å, b=18.25 Å, c=11.31 Å, β =128.2° a = 13.60 to 13.69 Å, b = 18.20 to 18.31 Å, c = 17.78 to 17.86 Å, β = 90.0 to 90.91° The framework of stilbite is pseudo-orthorhombic with the open channels typical of zeolites. It has 10-member rings and 8-member rings forming channels parallel to a and pseudo-orthorhombic c respectively. Uses The open channels in the stilbite structure act like a molecular sieve, enabling it to separate hydrocarbons in the process of petroleum refining. Environment Stilbite is a low-temperature secondary hydrothermal mineral. It occurs in the amygdaloidal cavities of basaltic volcanic rocks, in andesites, gneiss and hydrothermal veins. It also forms in hot springs deposits, and as a cementing agent in some sandstones and conglomerates. Stilbite has not been found in sedimentary tuff deposits or deep-sea deposits. Associated minerals are other zeolites, prehnite, calcite and quartz. Localities Stilbite is abundant in the volcanic rocks of Iceland, Faroe Islands, Isle of Skye, Bay of Fundy, Nova Scotia (where it is the provincial mineral), northern New Jersey and North Carolina. Salmon-pink crystals occur with pale green apophyllite in the Deccan Traps near Mumbai (Bombay) and Pune, India; white sheaf-like groups encrust the calcite (Iceland-spar) of Berufjord near Djupivogr in Iceland; brown sheafs are found near Paterson, New Jersey in the United States; and crystals of a brick-red color are found at Old Kilpatrick, Scotland. Iceland is generally considered to be the type locality for stilbite-Ca. It is presumed to be the Helgusta Iceland Spar Mine, along Reydarfjordur. Excellent white bow ties of stilbite are found here on calcite and quartz, associated with heulandite and laumontite in cavities. The type locality for stilbite-Na is Cape Pula, Pula, Cagliari Province, Sardinia, Italy. Small, lustrous, white or pink, pointed blades of stilbite-Na, and formless masses, up to 5 cm in diameter, have been found there, covering a thin crust of reddish heulandite in large fractures and cavities in the highly weathered volcanic andesite or rhyolite. The Tertiary Deccan basalts of western India are the most prolific sources of stilbite in the world. Stilbite is the most abundant zeolite in the tholeiitic basalt plateaux near Nasik and Pune and decreases in abundance toward the coast at Mumbai. Photo gallery
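Returning to the charge balance described in the chemistry section above, the formal charges in the stilbite-Ca formula NaCa4(Si27Al9)O72·28(H2O) can be tallied directly (simple oxidation-state bookkeeping, not a result reported in this article):

(+1)_{\mathrm{Na}} + 4\times(+2)_{\mathrm{Ca}} = +9, \qquad 27\times(+4)_{\mathrm{Si}} + 9\times(+3)_{\mathrm{Al}} + 72\times(-2)_{\mathrm{O}} = -9.

Each Al3+ that replaces Si4+ in the framework leaves a deficit of one positive charge, so the nine aluminium atoms per formula unit require exactly nine positive charges from the extra-framework sodium and calcium. The related end members stellerite, Ca4(Si28Al8)O72·28(H2O), and barrerite, Na8(Si28Al8)O72·26(H2O), balance in the same way (+8 against −8).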
Physical sciences
Silicate minerals
Earth science
595793
https://en.wikipedia.org/wiki/Pyrus%20pyrifolia
Pyrus pyrifolia
Pyrus pyrifolia is a species of pear tree native to southern China and northern Indochina that has been introduced to Korea, Japan and other parts of the world. The tree's edible fruit is known by many names, including Asian pear, Persian pear, Japanese pear, Chinese pear, Korean pear, Taiwanese pear, apple pear, zodiac pear, three-halves pear, papple, naspati and sand pear. Along with cultivars of P. × bretschneideri and Pyrus ussuriensis, the fruit is also called the nashi pear. Cultivars derived from Pyrus pyrifolia are grown throughout East Asia, and in other countries such Pakistan, Nepal, Australia, New Zealand, and America. Traditionally in East Asia the tree's flowers are a popular symbol of early spring, and it is a common sight in gardens and the countryside. The fruits are not generally baked in pies or made into jams because they have a high water content and a crisp, grainy texture, very different from the European varieties. They are commonly served raw and peeled. The fruit tends to be quite large and fragrant. When carefully wrapped, having a tendency to bruise because of its juiciness, it can last for several weeks (or more) in a cold, dry place. Culture Due to their relatively high price and the large size of the fruit of cultivars, the pears tend to be served to guests, given as gifts, or eaten together in a family setting. In cooking, ground pears are used in vinegar- or soy sauce-based sauces as a sweetener, instead of sugar. They are also used when marinating meat, especially beef, with a notable example being in the Korean dish bulgogi, due to the presence of enzymes to tenderize the proteins in the meat. In Australia, these pears were first introduced into commercial production beginning in 1980. In Japan, fruit is harvested in Chiba, Ibaraki, Tottori, Fukushima, Tochigi, Nagano, Niigata, Saitama and other prefectures, except Okinawa. Nashi () may be used as a late Autumn kigo, or "season word", when writing haiku. Nashi no hana (, pear flower) is also used as a kigo of spring. At least one city (Kamagaya-Shi, Chiba Prefecture) has the flowers of this tree as an official city flower. In Nepal (Nepali: Naspati नस्पाती) and the Himalayan states of India, they are cultivated as a cash crop in the Middle Hills between about in elevation, where the climate is suitable. The fruit are carried to nearby markets by human porters or, increasingly, by truck, but not for long distances because they bruise easily. In Taiwan, pears harvested in Japan have become luxurious presents since 1997 and their consumption has jumped. In China, the term "sharing a pear" () is a homophone of "separate" (). As a result, sharing a pear with a loved one can be read as a desire to separate from them. In Korea, the fruit is known as (), and it is grown and consumed in great quantity. In the South Korean city of Naju, there is a museum called The Naju Pear Museum and Pear Orchard for Tourists (). In Cyprus, the pears were introduced in 2010 after initially being investigated as a new fruit crop for the island in the early 1990s. They are currently grown in Kyperounta. Cultivars Cultivars are classified in two groups. Most of the cultivars belong to the Akanashi ('Russet pears') group, and have yellowish-brown rinds. The Aonashi ('Green pears') have yellow-green rinds. Important cultivars include: 'Chojuro' (, Japan, 1893?) 
('Russet pears') 'Kosui' (, Japan, 1959; the most important cultivar in Japan) ('Russet pears') 'Hosui' (, Japan, 1972) ('Russet pears') 'Imamuraaki' (, Japan, native) ('Russet pears') 'Nijisseiki' (, Japan, 1898; name means "20th century", also spelled 'Nijusseiki') ('Green pears') 'Niitaka' (, Japan, 1927) ('Russet pears') 'Okusankichi' (, Japan, native) ('Russet pears') 'Raja' (new) ('Russet pears') 'Shinko' (, Japan, pre-1941) ('Russet pears') ('Russet pears') 'Hwangkeum' (, , Korea, 1984, 'Niitaka' × 'Nijisseiki') 'Huanghuali' (not to be confused with the wood of Dalbergia odorifera, also called Huanghuali) Pyrus pyrifolia var. culta Pyrus pyrifolia var. culta is the cultivated variety of the species grown in Japan, also known as the nashi tree and sometimes called the sand pear. Yamanashi Prefecture is named after the fruit. Kanji The fruit is represented in Japanese by a Chinese character (kanji). It is one of the kyōiku kanji, the kanji taught in elementary school in Japan, and is one of the 20 kanji added to that list because they appear in the names of Japanese prefectures. The same character is also the generic word for pears in Chinese. Gallery
Biology and health sciences
Pomes
Plants
595823
https://en.wikipedia.org/wiki/Pyrus%20%C3%97%20bretschneideri
Pyrus × bretschneideri
Pyrus × bretschneideri (or Pyrus bretschneideri), the ya pear or pearple or Chinese white pear (), is an interspecific hybrid species of pear native to North China, where it is widely grown for its edible fruit. Recent molecular genetic evidence confirms some relationship to the Siberian pear (Pyrus ussuriensis), but it can also be classified as a subspecies of the Chinese pear Pyrus pyrifolia. Along with cultivars of P. pyrifolia and P. ussuriensis, the fruit is also called the nashi pear. These very juicy, white to light yellow pears, unlike the round Nashi pears (P. pyrifolia) that are also grown in eastern Asia, are shaped more like the European pear (Pyrus communis), narrow towards the stem end. The “Ya Li” (), literally "duck pear" due to its mallard-like shape, is one cultivar widely grown in China and exported around the world. Ya pears taste similar to a mild Bosc pear, but are crisp, with a higher water content and lower sugar content. Further hybridization Breeding programs have created cultivars that are the products of further hybridizing P. ×bretschneideri with P. pyrifolia. Under the International Code of Nomenclature for algae, fungi, and plants, such backcross hybrids are named within the species P. ×bretschneideri. Cultivar 'PremP109', also called 'Prem 109', is such a hybrid, marketed under the trademark Papple.
Biology and health sciences
Pomes
Plants
1558567
https://en.wikipedia.org/wiki/Oral%20rehydration%20therapy
Oral rehydration therapy
Oral rehydration therapy (ORT) is a type of fluid replacement used to prevent and treat dehydration, especially due to diarrhea. It involves drinking water with modest amounts of sugar and salts, specifically sodium and potassium. Oral rehydration therapy can also be given by a nasogastric tube. Therapy can include the use of zinc supplements to reduce the duration of diarrhea in infants and children under the age of 5. Use of oral rehydration therapy has been estimated to decrease the risk of death from diarrhea by up to 93%. Side effects may include vomiting, high blood sodium, or high blood potassium. If vomiting occurs, it is recommended that use be paused for 10 minutes and then gradually restarted. The recommended formulation includes sodium chloride, sodium citrate, potassium chloride, and glucose. Glucose may be replaced by sucrose and sodium citrate may be replaced by sodium bicarbonate, if not available, although the resulting mixture is not shelf stable in high-humidity environments. It works as glucose increases the uptake of sodium and thus water by the intestines, and the potassium chloride and sodium citrate help prevent hypokalemia and acidosis, respectively, which are both common side effects of diarrhea. A number of other formulations are also available including versions that can be made at home. However, the use of homemade solutions has not been well studied. Oral rehydration therapy was developed in the 1940s using electrolyte solutions with or without glucose on an empirical basis chiefly for mild or convalescent patients, but did not come into common use for rehydration and maintenance therapy until after the discovery that glucose promoted sodium and water absorption during cholera in the 1960s. It is on the World Health Organization's List of Essential Medicines. Globally, , oral rehydration therapy is used by 41% of children with diarrhea. This use has played an important role in reducing the number of deaths in children under the age of five. Medical uses ORT is less invasive than the other strategies for fluid replacement, specifically intravenous (IV) fluid replacement. Mild to moderate dehydration in children seen in an emergency department is best treated with ORT. Persons taking ORT should eat within six hours and return to their full diet within 24–48 hours. Oral rehydration therapy may also be used as a treatment for the symptoms of dehydration and rehydration in burns in resource-limited settings. Efficacy ORT may lower the mortality rate of diarrhea by as much as 93%. Case studies in four developing countries also have demonstrated an association between increased use of ORS and reduction in mortality. ORT using the original ORS formula has no effect on the duration of the diarrheic episode or the volume of fluid loss, although reduced osmolarity solutions have been shown to reduce stool volume. Treatment algorithm The degree of dehydration should be assessed before initiating ORT. ORT is suitable for people who are not dehydrated and those who show signs and symptoms of mild to moderate dehydration. People who have severe dehydration should seek professional medical help immediately and receive intravenous rehydration as soon as possible to rapidly replenish fluid volume in the body. 
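To make the formulation described above concrete, the short sketch below tallies the osmolarity of a typical reduced-osmolarity oral rehydration salts recipe. The per-litre concentrations are the commonly cited WHO/UNICEF values and are stated here as an assumption rather than taken from this article; the total matches the 245 mOsm/L figure discussed under "Reduced-osmolarity" below.

# Illustrative arithmetic only, not clinical guidance.
# Assumed per-litre composition of reduced-osmolarity ORS (mmol/L).
ors_mmol_per_litre = {
    "glucose": 75,    # anhydrous glucose
    "sodium": 75,
    "chloride": 65,
    "potassium": 20,
    "citrate": 10,
}
total_mosm_per_litre = sum(ors_mmol_per_litre.values())
print(total_mosm_per_litre)  # 245, i.e. 245 mOsm/L
print(ors_mmol_per_litre["glucose"] / ors_mmol_per_litre["sodium"])  # 1.0, the 1:1 glucose-to-sodium molar ratio

Counting each dissolved species once is a simplification, but it is enough to show how lowering the glucose and sodium chloride concentrations reduces the total osmolarity relative to the original formulation.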
Contraindications ORT should be discontinued and fluids replaced intravenously when vomiting is protracted despite proper administration of ORT; or signs of dehydration worsen despite giving ORT; or the person is unable to drink due to a decreased level of consciousness; or there is evidence of intestinal blockage or ileus. ORT might also be contraindicated in people who are in hemodynamic shock due to impaired airway protective reflexes. Short-term vomiting is not a contraindication to receiving oral rehydration therapy. In persons who are vomiting, drinking oral rehydration solution at a slow and continuous pace will help resolve vomiting. Preparation WHO and UNICEF have jointly developed official guidelines for the manufacture of oral rehydration solution and the oral rehydration salts used to make it (both often abbreviated ORS). They also describe other acceptable solutions, depending on material availability. Commercial preparations are available as prepared fluids and as packets of powder ready to mix with water. A basic oral rehydration therapy solution can also be prepared when packets of oral rehydration salts are not available. The molar ratio of sugar to salt should be 1:1 and the solution should not be hyperosmolar. The Rehydration Project states, "Making the mixture a little diluted (with more than 1 litre of clean water) is not harmful." The optimal fluid for preparing oral rehydration solution is clean water. However, if this is not available, the usually available water should be used. Oral rehydration solution should not be withheld simply because the available water is potentially unsafe; rehydration takes precedence. When oral rehydration salts packets and suitable teaspoons for measuring sugar and salt are not available, the WHO has recommended that homemade gruels, soups, etc., may be considered to help maintain hydration. A Lancet review in 2013 emphasized the need for more research on appropriate home made fluids to prevent dehydration. Sports drinks are not optimal oral rehydration solutions, but they can be used if optimal choices are not available. They should not be withheld for lack of better options; again, rehydration takes precedence. But they are not replacements for oral rehydration solutions in nonemergency situations. Reduced-osmolarity In 2003, WHO and UNICEF recommended that the osmolarity of oral rehydration solution be reduced from 311 to 245 mOsm/L. These guidelines were also updated in 2006. This recommendation was based on multiple clinical trials showing that the reduced osmolarity solution reduces stool volume in children with diarrhea by about twenty-five percent and the need for IV therapy by about thirty percent when compared to standard oral rehydration solution. The incidence of vomiting is also reduced. The reduced osmolarity oral rehydration solution has lower concentrations of glucose and sodium chloride than the original solution, but the concentrations of potassium and citrate are unchanged. The reduced osmolarity solution has been criticized by some for not providing enough sodium for adults with cholera. Clinical trials have, however, shown reduced osmolarity solution to be effective for adults and children with cholera. They seem to be safe but some caution is warranted according to the Cochrane review. Administration ORT is based on evidence that water continues to be absorbed from the gastrointestinal tract even while fluid is lost through diarrhea or vomiting. 
The World Health Organization specify indications, preparations and procedures for ORT. WHO/UNICEF guidelines suggest ORT should begin at the first sign of diarrhea in order to prevent dehydration. Babies may be given ORS with a dropper or a syringe. Infants under two may be given a teaspoon of ORS fluid every one to two minutes. Older children and adults should take frequent sips from a cup, with a recommended intake of 200–400 mL of solution after every loose movement. The WHO recommends giving children under two a quarter- to a half-cup of fluid following each loose bowel movement and older children a half- to a full cup. If the person vomits, the caregiver should wait 5–10 minutes and then resume giving ORS. ORS may be given by aid workers or health care workers in refugee camps, health clinics and hospital settings. Mothers should remain with their children and be taught how to give ORS. This will help to prepare them to give ORT at home in the future. Breastfeeding should be continued throughout ORT. Associated therapies Zinc As part of oral rehydration therapy, the WHO recommends supplemental zinc (10 to 20 mg daily) for ten to fourteen days, to reduce the severity and duration of the illness and make recurrent illness in the following two to three months less likely. Preparations are available as a zinc sulfate solution for adults, a modified solution for children and in tablet form. Feeding After severe dehydration is corrected and appetite returns, feeding the person speeds the recovery of normal intestinal function, minimizes weight loss and supports continued growth in children. Small frequent meals are best tolerated (offering the child food every three to four hours). Mothers should continue to breastfeed. A child with watery diarrhea typically regains their appetite as soon as dehydration is corrected, whereas a child with bloody diarrhea often eats poorly until the illness resolves. Such children should be encouraged to resume normal feeding as soon as possible. Once diarrhea is corrected, the WHO recommends giving the child an extra meal each day for two weeks, and longer if the child is malnourished. Children with malnutrition Dehydration may be overestimated in wasted children and underestimated in edematous children. Care of these children must also include careful management of their malnutrition and treatment of other infections. Useful signs of dehydration include an eagerness to drink, lethargy, cool and moist extremities, weak or absent radial pulse (wrist), and reduced or absent urine flow. In children with severe malnutrition, it is often impossible to reliably distinguish between moderate and severe dehydration. A severely malnourished child who has signs of severe dehydration but who does not have a history of watery diarrhea should be treated for septic shock. The original ORS (90 mmol sodium/L) and the current standard reduced-osmolarity ORS (75 mmol sodium/L) both contain too much sodium and too little potassium for severely malnourished children with dehydration due to diarrhea. ReSoMal (Rehydration Solution for Malnutrition) is recommended for such children. It contains less sodium (45 mmol/L) and more potassium (40 mmol/L) than reduced osmolarity ORS. It can be obtained in packets produced by UNICEF or other manufacturers. An exception is if the severely malnourished child also has severe diarrhea (in which case ReSoMal may not provide enough sodium), in which case standard reduced-osmolarity ORS (75 mmol sodium/L) is recommended. 
Malnourished children should be rehydrated slowly. The WHO recommends 10 milliliters of ReSoMal per kilogram body weight for each of the first two hours (for example, a 9-kilogram child should be given 90 mL of ReSoMal over the course of the first hour, and another 90 mL for the second hour), and then continuing at this same rate or slower, based on the child's thirst and ongoing stool losses, keeping in mind that a severely dehydrated child may be lethargic. If the child drinks poorly, a nasogastric tube should be used. The IV route should not be used for rehydration except in cases of shock and then only with care, infusing slowly to avoid flooding the circulation and overloading the heart. Feeding should usually resume within 2–3 hours after starting rehydration and should continue every 2–3 hours, day and night. For an initial cereal diet before a child regains his or her full appetite, the WHO recommends combining 25 grams of skimmed milk powder, 20 grams of vegetable oil, 60 grams of sugar, and 60 grams of rice powder or other cereal into 1,000 milliliters of water and boiling gently for five minutes. Give 130 mL per kilogram of body weight per 24 hours. A child who cannot or will not eat this minimum amount should be given the diet by nasogastric tube divided into six equal feedings. Later on, the child should be given cereal made with a greater amount of skimmed milk product and vegetable oil and slightly less sugar. As appetite fully returns, the child should be eating 200 mL per kilogram of body weight per day. Zinc, potassium, vitamin A, and other vitamins and minerals should be added to both recommended cereal products, or to the oral rehydration solution itself. Children who are breastfed should continue breastfeeding. Antibiotics The WHO recommends that all severely malnourished children admitted to hospital should receive broad-spectrum antibiotics (for example, gentamicin and ampicillin). In addition, hospitalized children should be checked daily for other specific infections. If cholera is suspected, give an antibiotic to which V. cholerae is susceptible. This reduces the volume loss due to diarrhea by 50% and shortens the duration of diarrhea to about 48 hours. Physiological basis Fluid from the body enters the intestinal lumen during digestion. This fluid is isosmotic with the blood and contains a large amount of sodium, about 142 mEq/L. A healthy individual secretes 2000–3000 milligrams of sodium per day into the intestinal lumen. Nearly all of this is reabsorbed so that sodium levels in the body remain constant. In a diarrheal illness, sodium-rich intestinal secretions are lost before they can be reabsorbed. This can lead to life-threatening dehydration or electrolyte imbalances within hours when fluid loss is severe. The objective of therapy is the replenishment of sodium and water losses by ORT or intravenous infusion. Sodium absorption occurs in two stages. The first is via intestinal epithelial cells (enterocytes). Sodium passes into these cells by co-transport with glucose, via the SGLT1 protein. From the intestinal epithelial cells, sodium is pumped by active transport via the sodium-potassium pump through the basolateral cell membrane into the extracellular space. The sodium–potassium ATPase pump at the basolateral cell membrane moves three sodium ions into the extracellular space, while pulling two potassium ions into the enterocyte. This creates a "downhill" sodium gradient within the cell.
SGLT proteins use energy from this downhill sodium gradient to transport glucose across the apical membrane of the cell against the glucose gradient. The co-transporters are examples of secondary active transport. The GLUT uniporters then transport glucose across the basolateral membrane. Both SGLT1 and SGLT2 are known as symporters, since both sodium and glucose are transported in the same direction across the membrane. The co-transport of glucose into epithelial cells via the SGLT1 protein requires sodium. Two sodium ions and one molecule of glucose (or galactose) are transported together across the cell membrane via the SGLT1 protein. Without glucose, intestinal sodium is not absorbed. This is why oral rehydration salts include both sodium and glucose. For each cycle of the transport, hundreds of water molecules move into the epithelial cell to maintain osmotic equilibrium. The resultant absorption of sodium and water can achieve rehydration even while diarrhea continues. History Definition In the early 1980s, "oral rehydration therapy" meant only the preparation prescribed by the World Health Organization (WHO) and UNICEF. In 1988, the definition was changed to include recommended home-made solutions, because the official preparation was not always available. The definition was also amended in 1988, to include continued feeding as associated therapy. In 1991, the definition became "an increase in administered hydrational fluids"; in 1993, "an increase in administered fluids and continued feeding". Development Dehydration was a major cause of death during the 1829 cholera pandemic in Russia and Western Europe. In 1831, William Brooke O'Shaughnessy noted the changes in blood composition and loss of water and salt in the stool of people with cholera and prescribed intravenous fluid therapy (IV fluids). The prescribing of hypertonic IV therapy decreased the mortality rate of cholera to 40%, from 70%. In the West, IV therapy became the "gold standard" for the treatment of moderate and severe dehydration. In 1953, Hemendra Nath Chatterjee published in The Lancet the results of using ORT to treat people with mild cholera. He gave the solution orally and rectally, along with Coleus extract, antihistamines, and antiemetics, without controls. The formula of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose, and 1000 mL of water. He did not publish any balance data, and his exclusion of patients with severe dehydration did not lead to any confirming study; his report remained anecdotal. Robert Allan Phillips tried to make an effective ORT solution based on his discovery that, in the presence of glucose, sodium, and chloride could be absorbed in patients with cholera; but he failed because his solution was too hypertonic and he used it to try to stop the diarrhea rather than to rehydrate patients. In the early 1960s, Robert K. Crane described the sodium-glucose co-transport mechanism and its role in intestinal glucose absorption. This, along with evidence that the intestinal mucosa appears undamaged in cholera, suggested that intestinal absorption of glucose and sodium might continue during the illness. This supported the notion that oral rehydration might be possible even during severe diarrhea due to cholera. In 1967–1968, Norbert Hirschhorn and Nathaniel F. Pierce showed that people with severe cholera can absorb glucose, salt, and water and that this can occur in sufficient amounts to maintain hydration. In 1968, David R. Nalin and Richard A. 
Cash, helped by Rafiqul Islam and Majid Molla, reported that giving adults with cholera an oral glucose-electrolyte solution in volumes equal to those of the diarrhea losses reduced the need for IV fluid therapy by eighty percent.[46] In 1971, fighting during the Bangladesh Liberation War displaced millions and an epidemic of cholera ensued among the refugees. When IV fluid ran out in the refugee camps, Dilip Mahalanabis, a physician working with the Johns Hopkins International Center for Medical Research and Training in Calcutta, issued instructions to prepare an oral rehydration solution and to distribute it to family members and caregivers. Over 3,000 people with cholera received ORT in this way. The mortality rate was 3.6% among those given ORT, compared with 30% in those given IV fluid therapy. After Bangladesh won independence, there was a wide campaign to promote the use of saline in the treatment of diarrhea. In 1980, the World Health Organization recognized ORT and began a global program for its dissemination. In the 1970s, Norbert Hirschhorn used oral rehydration therapy on the White River Apache Indian Reservation with a grant from the National Institute of Allergy and Infectious Diseases. He observed that children voluntarily drank as much of the solution as needed to restore hydration, and that rehydration and early re-feeding would protect their nutrition. This led to increased use of ORT for children with diarrhea, especially in developing countries. In 1980, the Bangladeshi nonprofit BRAC created a door-to-door and person-to-person sales force to teach ORT for use by mothers at home. A task force of fourteen women, one cook, and one male supervisor traveled from village to village. After visiting with women in several villages, they hit upon the idea of encouraging the women in the village to make their own oral rehydration fluid. They used available household equipment, starting with a "half a seer" (half a quart) of water and adding a fistful of sugar and a three-finger pinch of salt. Later on, the approach was broadcast over television and radio, and a market for oral rehydration salts packets developed. Three decades later, national surveys have found that almost 90% of children with severe diarrhea in Bangladesh are given oral rehydration fluids at home or in a health facility. ORT is known in Bangladesh as Orosaline or Orsaline. From 2006 to 2011, UNICEF estimated that worldwide about a third of children under 5 who had diarrhea received an oral rehydration solution, with estimates ranging from 30% to 41% depending on the region. ORT is one of the principal elements of the UNICEF "GOBI FFF" program (growth monitoring; ORT; breast feeding; immunization; female education; family spacing and food supplementation). The program aims to increase child survival in developing nations through proven low-cost interventions. Awards Centre for Health and Population Research, Dhaka, Bangladesh, 2001 Gates award for global health. Norbert Hirschhorn, Dilip Mahalanabis, David Nalin, and Nathaniel Pierce, 2002 inaugural Pollin Prize for Pediatric Research. Richard A. Cash, David Nalin, Dilip Mahalanabis and Stanley Schultz, 2006 Prince Mahidol Award.
Biology and health sciences
Treatments
Health