172732
https://en.wikipedia.org/wiki/Glycerol
Glycerol
Glycerol is a simple triol compound. It is a colorless, odorless, viscous liquid that is sweet-tasting and non-toxic. The glycerol backbone is found in lipids known as glycerides. It is also widely used as a sweetener in the food industry and as a humectant in pharmaceutical formulations. Because of its three hydroxyl groups, glycerol is miscible with water and is hygroscopic in nature. Modern use of the word glycerine (alternatively spelled glycerin) refers to commercial preparations of less than 100% purity, typically 95% glycerol.

Structure
Although achiral, glycerol is prochiral with respect to reactions of one of the two primary alcohols. Thus, in substituted derivatives, the stereospecific numbering labels the molecule with a sn- prefix before the stem name of the molecule.

Production
Natural sources
Glycerol is generally obtained from plant and animal sources where it occurs in triglycerides, esters of glycerol with long-chain carboxylic acids. The hydrolysis, saponification, or transesterification of these triglycerides produces glycerol as well as the fatty acid derivative. Triglycerides can be saponified with sodium hydroxide to give glycerol and fatty sodium salt or soap. Typical plant sources include soybeans or palm. Animal-derived tallow is another source. From 2000 to 2004, approximately 950,000 tons per year were produced in the United States and Europe; 350,000 tons of glycerol were produced in the U.S. alone. Since around 2010 there has been a large surplus of glycerol as a byproduct of biofuel production, driven for example by EU directive 2003/30/EC, which required 5.75% of petroleum fuels to be replaced with biofuel sources across all member states. Crude glycerol produced from triglycerides is of variable quality and sells at low prices, as low as US$0.02–0.05 per kilogram in 2011. It can be purified in a rather expensive process by treatment with activated carbon to remove organic impurities, alkali to remove unreacted glycerol esters, and ion exchange to remove salts. High-purity glycerol (greater than 99.5%) is obtained by multi-step distillation; a vacuum chamber is necessary due to its high boiling point (290 °C). Consequently, finding uses for surplus glycerol is more of a challenge than producing it; options include conversion to glycerol carbonate or to synthetic precursors such as acrolein and epichlorohydrin.

Synthetic glycerol
Although usually no longer economical, glycerol can be synthesized by various routes. During World War II, synthetic glycerol processes became a national defense priority because it is a precursor to nitroglycerine. Epichlorohydrin is the most important precursor. Chlorination of propylene gives allyl chloride, which is oxidized with hypochlorite to dichlorohydrin, which reacts with a strong base to give epichlorohydrin. Epichlorohydrin can be hydrolyzed to glycerol. Chlorine-free processes from propylene include the synthesis of glycerol from acrolein and propylene oxide.

Applications
Food industry
In food and beverages, glycerol serves as a humectant, solvent, and sweetener, and may help preserve foods. It is also used as a filler in commercially prepared low-fat foods (e.g., cookies), and as a thickening agent in liqueurs. Glycerol and water are used to preserve certain types of plant leaves. As a sugar substitute, it has approximately 27 kilocalories per teaspoon (sugar has 20) and is 60% as sweet as sucrose. It does not feed the bacteria that form dental plaque and cause dental cavities.
As a food additive, glycerol is labeled as E number E422. It is added to icing (frosting) to prevent it from setting too hard. As used in foods, glycerol is categorized by the U.S. Academy of Nutrition and Dietetics as a carbohydrate. The U.S. Food and Drug Administration (FDA) carbohydrate designation includes all caloric macronutrients excluding protein and fat. Glycerol has a caloric density similar to table sugar, but a lower glycemic index and a different metabolic pathway within the body. It is also recommended as an additive when polyol sweeteners such as erythritol and xylitol are used, as its heating effect in the mouth will counteract these sweeteners' cooling effect.

Medical
Glycerol is used in medical, pharmaceutical and personal care preparations, often as a means of improving smoothness, providing lubrication, and as a humectant. Ichthyosis and xerosis have been relieved by the topical use of glycerin. It is found in allergen immunotherapies, cough syrups, elixirs and expectorants, toothpaste, mouthwashes, skin care products, shaving cream, hair care products, soaps, and water-based personal lubricants. In solid dosage forms like tablets, glycerol is used as a tablet holding agent. For human consumption, glycerol is classified by the FDA among the sugar alcohols as a caloric macronutrient. Glycerol is also used in blood banking to preserve red blood cells prior to freezing.

Taken rectally, glycerol functions as a laxative by irritating the anal mucosa and inducing a hyperosmotic effect, expanding the colon by drawing water into it to induce peristalsis, resulting in evacuation. It may be administered undiluted either as a suppository or as a small-volume (2–10 ml) enema. Alternatively, it may be administered in a dilute solution, such as 5%, as a high-volume enema.

Taken orally (often mixed with fruit juice to reduce its sweet taste), glycerol can cause a rapid, temporary decrease in the internal pressure of the eye. This can be useful for the initial emergency treatment of severely elevated eye pressure.

In 2017, researchers showed that the probiotic bacterium Limosilactobacillus reuteri can be supplemented with glycerol to enhance its production of antimicrobial substances in the human gut. This was confirmed to be as effective as the antibiotic vancomycin at inhibiting Clostridioides difficile infection without having a significant effect on the overall microbial composition of the gut.

Glycerol has also been incorporated as a component of bio-ink formulations in the field of bioprinting. The glycerol content acts to add viscosity to the bio-ink without adding large protein, saccharide, or glycoprotein molecules.

Botanical extracts
When used in "tincture"-method extractions, specifically as a 10% solution, glycerol prevents tannins from precipitating in ethanol extracts of plants (tinctures). It is also used as an "alcohol-free" alternative to ethanol as a solvent in preparing herbal extractions. It is less extractive when used in a standard tincture methodology. Alcohol-based tinctures can also have the alcohol removed and replaced with glycerol for its preserving properties. Such products are not "alcohol-free" in a scientific or FDA regulatory sense, as glycerol contains three hydroxyl groups. Fluid extract manufacturers often extract herbs in hot water before adding glycerol to make glycerites.
When used as a primary "true" alcohol-free botanical extraction solvent in non-tincture-based methodologies, glycerol has been shown to possess a high degree of extractive versatility for botanicals, including removal of numerous constituents and complex compounds, with an extractive power that can rival that of alcohol and water–alcohol solutions. This high extractive power assumes glycerol is used with dynamic (critical) methodologies, as opposed to standard passive "tincturing" methodologies that are better suited to alcohol. Glycerol does not denature or render a botanical's constituents inert as alcohols (ethanol, methanol, and so on) do. Glycerol is a stable preserving agent for botanical extracts that, when used in proper concentrations in an extraction solvent base, does not allow inverting or reduction-oxidation of a finished extract's constituents, even over several years. Both glycerol and ethanol are viable preserving agents: glycerol is bacteriostatic in its action, while ethanol is bactericidal.

Electronic cigarette liquid
Glycerin, along with propylene glycol, is a common component of e-liquid, a solution used with electronic vaporizers (electronic cigarettes). The glycerol is heated with an atomizer (a heating coil often made of Kanthal wire), producing the aerosol that delivers nicotine to the user.

Antifreeze
Like ethylene glycol and propylene glycol, glycerol is a non-ionic kosmotrope that forms strong hydrogen bonds with water molecules, competing with water–water hydrogen bonds. This interaction disrupts the formation of ice. The minimum freezing point, about −38 °C, corresponds to roughly 70% glycerol in water. Glycerol was historically used as an antifreeze for automotive applications before being replaced by ethylene glycol, which has a lower freezing point. While the minimum freezing point of a glycerol–water mixture is higher than that of an ethylene glycol–water mixture, glycerol is not toxic and is being re-examined for use in automotive applications.

In the laboratory, glycerol is a common component of solvents for enzymatic reagents stored at temperatures below 0 °C due to the depression of the freezing temperature. It is also used as a cryoprotectant, where the glycerol is dissolved in water to reduce damage by ice crystals to laboratory organisms that are stored in frozen solutions, such as fungi, bacteria, nematodes, and mammalian embryos. Some organisms, like the moor frog, produce glycerol to survive freezing temperatures during hibernation.

Chemical intermediate
Glycerol is used to produce a variety of useful derivatives. Nitration gives nitroglycerin, an essential ingredient of various explosives such as dynamite, gelignite, and propellants like cordite. Nitroglycerin, under the name glyceryl trinitrate (GTN), is commonly used to relieve angina pectoris, taken in the form of sub-lingual tablets, patches, or as an aerosol spray. Trifunctional polyether polyols are produced from glycerol and propylene oxide. Oxidation of glycerol affords mesoxalic acid. Dehydrating glycerol affords hydroxyacetone. Chlorination of glycerol gives 1-chloropropane-2,3-diol; the same compound can be produced by hydrolysis of epichlorohydrin. Epoxidation by reaction with epichlorohydrin and a Lewis acid yields glycerol triglycidyl ether.

Vibration damping
Glycerol is used as fill for pressure gauges to damp vibration.
External vibrations, from compressors, engines, pumps, etc., produce harmonic vibrations within Bourdon gauges that can cause the needle to move excessively, giving inaccurate readings. The excessive swinging of the needle can also damage internal gears or other components, causing premature wear. Glycerol, when poured into a gauge to replace the air space, reduces the harmonic vibrations that are transmitted to the needle, increasing the lifetime and reliability of the gauge.

Niche uses
Entertainment industry
Glycerol is used by set decorators when filming scenes involving water to prevent an area meant to look wet from drying out too quickly. Glycerine is also used in the generation of theatrical smoke and fog as a component of the fluid used in fog machines, as a replacement for glycol, which has been shown to be an irritant if exposure is prolonged.

Ultrasonic couplant
Glycerol can sometimes be used as a replacement for water in ultrasonic testing, as it has a favourably higher acoustic impedance (2.42 MRayl versus 1.483 MRayl for water) while being relatively safe, non-toxic, non-corrosive, and relatively low in cost.

Internal combustion fuel
Glycerol is also used to power diesel generators supplying electricity for the FIA Formula E series of electric race cars.

Research on additional uses
Research continues into potential value-added products of glycerol obtained from biodiesel production. Examples (aside from combustion of waste glycerol) include:
Hydrogen gas production
Glycerine acetate as a potential fuel additive
Additive for starch thermoplastics
Conversion to various other chemicals: propylene glycol, acrolein, ethanol, and epichlorohydrin (a raw material for epoxy resins)

Metabolism
Glycerol is a precursor for synthesis of triacylglycerols and of phospholipids in the liver and adipose tissue. When the body uses stored fat as a source of energy, glycerol and fatty acids are released into the bloodstream. Glycerol is mainly metabolized in the liver. Glycerol injections can be used as a simple test for liver damage, as its rate of absorption by the liver is considered an accurate measure of liver health. Glycerol metabolism is reduced in both cirrhosis and fatty liver disease.

Blood glycerol levels are highly elevated in diabetes, and this is believed to be a cause of reduced fertility in patients who suffer from diabetes and metabolic syndrome. Blood glycerol levels in diabetic patients average three times higher than in healthy controls. Direct glycerol treatment of testes has been found to cause significant long-term reduction in sperm count. Further testing on this subject was abandoned due to the unexpected results, as this was not the goal of the experiment. Circulating glycerol does not glycate proteins as glucose or fructose do, and does not lead to the formation of advanced glycation endproducts (AGEs).

In some organisms, the glycerol component can enter the glycolysis pathway directly and, thus, provide energy for cellular metabolism (or, potentially, be converted to glucose through gluconeogenesis). Before glycerol can enter the pathway of glycolysis or gluconeogenesis (depending on physiological conditions), it must be converted to the intermediate glyceraldehyde 3-phosphate in the following steps: glycerol is first phosphorylated by glycerol kinase to glycerol 3-phosphate, which is oxidized by glycerol-3-phosphate dehydrogenase to dihydroxyacetone phosphate; triose phosphate isomerase then interconverts dihydroxyacetone phosphate and glyceraldehyde 3-phosphate. The enzyme glycerol kinase is present mainly in the liver and kidneys, but also in other body tissues, including muscle and brain. In adipose tissue, glycerol 3-phosphate is obtained from dihydroxyacetone phosphate with the enzyme glycerol-3-phosphate dehydrogenase.
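Returning to the ultrasonic couplant figures quoted above: the practical benefit of glycerol's higher acoustic impedance can be illustrated with the standard normal-incidence energy reflection formula R = ((Z2 − Z1)/(Z2 + Z1))². A minimal sketch in Python; the impedance of steel (≈45 MRayl) is a typical literature value assumed here, not a figure from the text:

```python
# Normal-incidence energy reflection at a couplant/test-piece interface:
# R = ((Z2 - Z1) / (Z2 + Z1))**2, with impedances in MRayl.

def reflection_coefficient(z1: float, z2: float) -> float:
    """Fraction of incident acoustic energy reflected at the interface."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_WATER = 1.483     # MRayl, from the text
Z_GLYCEROL = 2.42   # MRayl, from the text
Z_STEEL = 45.0      # MRayl, typical literature value (assumption)

for name, z in [("water", Z_WATER), ("glycerol", Z_GLYCEROL)]:
    r = reflection_coefficient(z, Z_STEEL)
    print(f"{name}-steel: {100 * r:.1f}% reflected, {100 * (1 - r):.1f}% transmitted")
```

Under these assumptions roughly 12% of the energy crosses a water–steel interface versus roughly 19% for glycerol–steel, i.e. glycerol couples about half again as much ultrasonic energy into the part.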
Toxicity and safety
Glycerol has very low toxicity when ingested; its oral LD50 is 12,600 mg/kg for rats and 8,700 mg/kg for mice. It does not appear to cause toxicity when inhaled, although changes in cell maturity occurred in small sections of lung in animals under the highest dose measured. A sub-chronic 90-day nose-only inhalation study in Sprague–Dawley (SD) rats exposed to 0.03, 0.16 and 0.66 mg of glycerin per liter of air for 6-hour continuous sessions revealed no treatment-related toxicity other than minimal metaplasia of the epithelium lining at the base of the epiglottis in rats exposed to 0.66 mg/L glycerin.

Glycerol intoxication
Excessive consumption by children can lead to glycerol intoxication. Symptoms of intoxication include hypoglycemia, nausea and a loss of consciousness. While intoxication as a result of excessive glycerol consumption is rare and its symptoms generally mild, occasional reports of hospitalization have occurred. In the United Kingdom in August 2023, manufacturers of syrup used in slush ice drinks were advised by the Food Standards Agency to reduce the amount of glycerol in their formulations to reduce the risk of intoxication. Food Standards Scotland advises that slush ice drinks containing glycerol should not be given to children under the age of 4, owing to the risk of intoxication. It also recommends that businesses not use free refill offers for the drinks in venues where children under the age of 10 are likely to consume them, and that products be appropriately labelled to inform consumers of the presence of glycerol.

Historical cases of contamination with diethylene glycol
On 4 May 2007, the FDA advised all U.S. makers of medicines to test all batches of glycerol for diethylene glycol contamination. This followed an occurrence of hundreds of fatal poisonings in Panama resulting from a falsified import customs declaration by the Panamanian import/export firm Aduanas Javier de Gracia Express, S. A. The cheaper diethylene glycol was relabeled as the more expensive glycerol. Between 1990 and 1998, incidents of DEG poisoning reportedly occurred in Argentina, Bangladesh, India, and Nigeria, and resulted in hundreds of deaths. In 1937, more than one hundred people died in the United States after ingesting DEG-contaminated elixir sulfanilamide, a drug used to treat infections.

Etymology
The gly- and glu- prefixes for glycols and sugars derive from Ancient Greek glukus, meaning "sweet". The name glycérine was coined c. 1811 by Michel Eugène Chevreul to denote what its discoverer, Carl Wilhelm Scheele, had earlier called the "sweet principle of fat". It was borrowed into English c. 1838 and was displaced in the 20th century by the term glycerol, coined in 1872 with the alcohol suffix -ol.

Properties
Table of thermal and physical properties of saturated liquid glycerin:

Temperature (°C) | Density (kg/m³) | Specific heat (kJ/kg·K) | Kinematic viscosity (m²/s) | Conductivity (W/m·K) | Thermal diffusivity (m²/s) | Prandtl number | Bulk modulus (K⁻¹)
0  | 1276.03 | 2.261 | — | 0.282 | — | 84700 | —
10 | 1270.11 | 2.319 | — | 0.284 | — | 31000 | —
20 | 1264.02 | 2.386 | — | 0.286 | — | 12500 | —
30 | 1258.09 | 2.445 | — | 0.286 | — | 5380 | —
40 | 1252.01 | 2.512 | — | 0.286 | — | 2450 | —
50 | 1244.96 | 2.583 | — | 0.287 | — | 1630 | —
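As a worked example of using the property table above, the following sketch linearly interpolates density and specific heat at an intermediate temperature; it assumes nothing beyond the tabulated values:

```python
# Linear interpolation of glycerin properties from the table above.
# Columns: temperature (°C), density (kg/m³), specific heat (kJ/kg·K)
TABLE = [
    (0, 1276.03, 2.261),
    (10, 1270.11, 2.319),
    (20, 1264.02, 2.386),
    (30, 1258.09, 2.445),
    (40, 1252.01, 2.512),
    (50, 1244.96, 2.583),
]

def interpolate(t: float, column: int) -> float:
    """Linearly interpolate column 1 (density) or 2 (specific heat) at t °C."""
    for (t0, *row0), (t1, *row1) in zip(TABLE, TABLE[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return row0[column - 1] + frac * (row1[column - 1] - row0[column - 1])
    raise ValueError("temperature outside table range 0-50 °C")

print(interpolate(25, 1))  # density ≈ 1261.06 kg/m³
print(interpolate(25, 2))  # specific heat ≈ 2.416 kJ/kg·K
```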
Physical sciences
Alcohols
Chemistry
172742
https://en.wikipedia.org/wiki/Shetland%20pony
Shetland pony
The Shetland pony or Sheltie is a Scottish breed of pony originating in the Shetland Islands in the north of Scotland. It may stand up to 107 cm (42 in) at the withers. It has a heavy coat and short legs, is strong for its size, and is used for riding, driving, and pack purposes.

History
Shetland ponies originated in the Shetland Islands, located northeast of mainland Scotland. Small horses have been kept in the Shetland Islands since the Bronze Age. People who lived on the islands probably later crossed the native stock with ponies imported by Norse settlers. Shetland ponies were probably also influenced by the Celtic pony, brought to the islands by settlers between 2000 and 1000 BCE. The harsh climate and scarce food developed the ponies into extremely hardy animals. Shetland ponies were first used for pulling carts, carrying peat and seaweed, and ploughing land. Then, as the Industrial Revolution increased the need for coal in the mid-nineteenth century, thousands of Shetland ponies were taken to mainland Britain to be pit ponies, working underground hauling coal, often for their entire (frequently shortened) lives. Coal mines in the eastern United States also imported some of these animals. The last mine that used Shetland ponies in the United States closed in 1971.

The Shetland Pony Stud-Book Society is the breed society for the traditional Shetland throughout the world. It was started in 1890 to maintain purity and encourage high-quality animals. In 1957, the Shetland Islands Premium Stallion Scheme was formed to subsidise high-quality registered stallions to improve the breeding stock. A number of pony breeds derive from the traditional Shetland. These include the American Shetland Pony and Pony of the Americas in the United States, and the Deutsches Classic Pony in Germany.

Characteristics
The Shetland pony is hardy and strong, in part because it developed in the harsh conditions of the Shetland Islands. It has a small head, widely spaced eyes and small, alert ears. It has a short muscular neck, a compact stocky body, short strong legs and a shorter-than-normal cannon bone in relation to its size. A short broad back and deep girth are universal characteristics, as is a springy stride. It has a long thick mane and tail, and a dense double winter coat to withstand harsh weather. It may be of any known horse coat colour other than spotted. It is not unusual for a Shetland pony to live more than 30 years.

Uses
Today, Shetlands are ridden by children and are shown by both children and adults at horse shows in harness driving classes, as well as for pleasure driving outside of the show ring. Shetlands are ridden by small children at horse shows, in riding schools and stables, as well as for pleasure. They are seen working in commercial settings such as fairs or carnivals to provide short rides for visitors. They are also seen at petting zoos and are sometimes used for therapeutic horseback riding purposes. In the United Kingdom, Shetlands are also featured in the Shetland Pony Grand National, galloping around a racecourse with young jockeys. A few Shetland ponies still fulfil traditional working roles on the islands, and can be seen carrying peat (which is abundant and used as a fuel source in Shetland) cut from the hillsides in large saddlebags. Their strong physique and ability to cross a variety of difficult terrain types mean they are still a viable choice for the job, even in an age of mechanised agriculture.
Junior Harness Racing was founded in Queensland by a group of breeders to give young people aged 6–16 an opportunity to obtain a practical introduction to the harness racing industry. The children have the opportunity to drive Shetland ponies in harness under race conditions. No prize money is payable on pony races, although winners and place-getters receive medallions. Miniature Shetlands have been trained as guide horses to take the same role as guide dogs. This task is also performed by other miniature horse breeds.
Biology and health sciences
Horses
Animals
172761
https://en.wikipedia.org/wiki/Tricycle
Tricycle
A tricycle, sometimes abbreviated to trike, is a human-powered (or gasoline or electric motor powered or assisted, or gravity powered) three-wheeled vehicle. Some tricycles, such as cycle rickshaws (for passenger transport) and freight trikes, are used for commercial purposes, especially in the developing world, particularly Africa and Asia. In the West, adult-sized tricycles are used primarily for recreation, shopping, and exercise. Tricycles are favoured by children, the disabled, and senior adults for their apparent stability versus a bicycle; however, a conventional trike may exhibit poor dynamic lateral stability, and the rider should exercise appropriate caution when cornering (e.g., with regard to speed, rate of turn, slope of surface) and operating technique (e.g., leaning the body 'into' the turn) to avoid tipping the trike over. Designs that place the rider lower relative to the wheel axles, such as recumbents, have a lower centre of gravity; these, and designs with canted wheels (tilted at the top towards the centreline), may be more resistant to lifting inner wheels or tipping during fast sharp turns, but still require operator awareness and technique.

History
A three-wheeled wheelchair was built in 1655 or 1680 by a disabled German man, Stephan Farffler, who wanted to be able to maintain his mobility. A watch-maker, Farffler created a vehicle that was powered by hand cranks. In 1789, two French inventors developed a three-wheeled vehicle, powered by pedals; they called it the tricycle. In 1818, British inventor Denis Johnson patented his approach to designing tricycles. In 1876, James Starley developed the Coventry Lever Tricycle, which used two small wheels on the right side and a large drive wheel on the left side; power was supplied by hand levers. In 1877, Starley developed a new vehicle he called the Coventry Rotary, which was "one of the first rotary chain drive tricycles." Starley's inventions started a tricycling craze in Britain; by 1879, there were "twenty types of tricycles and multi-wheel cycles ... produced in Coventry, England, and by 1884, there were over 120 different models produced by 20 manufacturers." The first front-steering tricycle was manufactured in 1881 by The Leicester Safety Tricycle Company of Leicester, England, and was brought to the market in 1882 costing £18. The company also developed a folding tricycle at the same time. Tricycles were used by riders who did not feel comfortable on the high wheelers, such as women who wore long, flowing dresses (see rational dress). In September 1903, Edmund Payne, the popular comedian, started an attempt to beat the twenty-four-hour unpaced tricycle record. At 100 miles Payne was inside his schedule time, but shortly afterwards had to desist at Wisbech, having encountered five hours of incessant rain.

Associations
In the UK, upright tricycles are sometimes referred to as "barrows". Many trike enthusiasts in the UK belong to the Tricycle Association, formed in 1929. They participate in day rides, tours, time trials, and a criterium (massed start racing) series.

Wheel configurations
Delta
A delta tricycle has one front wheel and two rear wheels.

Tadpole
A tadpole tricycle has two front wheels and one rear wheel. Rear wheel steering is sometimes used, although this increases the turning circle and can affect handling (the geometry is similar to a regular tricycle operating in reverse, but with a steering damper added).
Other
Some early pedal tricycles from the late 19th century used two wheels in tandem on one side and a larger driving wheel on the other. An in-line three-wheeled vehicle has two steered wheels, one at the front and the other in the middle or at the rear.

Types
Upright
Upright trikes resemble a two-wheeled bicycle, traditionally diamond frame or open frame, but with either two widely spaced wheels at the back (called delta) or two wheels at the front (called tadpole). The rider straddles the frame in both delta and tadpole configurations. Steering is through a handlebar directly connected to the front wheel via a conventional bicycle fork in the delta, or via a form of Ackermann steering geometry in the case of the upright tadpole. All non-tilting trikes have stability issues and great care must be used when riding a non-tilting upright trike. The center of gravity is quite high compared to recumbent trikes. Because of this, non-tilting trikes are more prone to tipping over in corners and on uneven or sloping terrain. Conversely, the rider enjoys better visibility than on a recumbent because their head is higher.

Recumbent
Recumbent trikes' advantages (over conventional trikes) include stability (through a low centre of gravity) and low aerodynamic drag. Disadvantages (compared to bicycles) include greater cost, weight, and width. The very low seat may make entry difficult, and on the road they may be less visible to other traffic.

Delta
A recumbent delta is similar to an upright, with two wheels at the back and one at the front, but has a recumbent layout in which the rider is seated in a chair-like seat. One or both rear wheels can be driven, while the front is used for steering (the usual layout). Steering is either through a linkage, with the handlebars under the seat (under-seat steering), or directly to the front wheel with a large handlebar (over-seat steering). Some delta trikes can be stored upright by lifting the front wheel and resting the top of the seat on the ground.

Delta trikes generally have higher seats and a tighter turning radius than tadpole trikes. The tight turning radius is useful if riding on trails with offset barriers, or navigating around closely placed obstacles. The higher seat makes mounting and dismounting easier. Even with the higher seat a delta trike can be quite stable, provided most of the weight (including the rider) is shifted back towards the rear wheels. Many delta trikes place the seat too far forward, which takes weight off the two rear wheels and puts more onto the front wheel, making the trike less stable. The Hase Kettwiesel delta trike has a high seat that is placed to put most of the weight onto the cambered rear wheels, making it more stable. Delta trikes are suitable for use as manual scooters for mobility, rehabilitation, or exercise. The Hase Lepus Comfort is an example of a rehabilitation delta trike designed mainly for comfort and ease of use. It has a lowered front boom, and the seat can be adjusted in height, which aids in mounting and dismounting. It also has rear wheel suspension for comfort. The Lepus can be folded for easier storage and transportation.

The weight of a delta trike can be quite close to the weight of a tadpole trike if they are both of a similar quality and similar materials are used. The Hase Kettwiesel Allround delta trike has an aluminium frame and weighs 39.4 lb (17.9 kg). The Catrike Road tadpole trike has an aluminium frame and weighs 37.5 lb (17 kg).
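The repeated point about centre-of-gravity height and tipping can be made concrete with a simple rigid-body estimate: a non-tilting trike begins to lift its inside wheel when lateral acceleration exceeds roughly g × (track/2) / (CG height). A minimal sketch; the track and seat-height numbers are illustrative assumptions, not measurements of any model named above:

```python
# Rough rollover threshold for a non-tilting trike (rigid body, flat ground):
# the inside wheel lifts when lateral acceleration a > g * (track/2) / h_cg.
G = 9.81  # m/s²

def rollover_accel(track_m: float, cg_height_m: float) -> float:
    """Lateral acceleration (m/s²) at which the inside wheel starts to lift."""
    return G * (track_m / 2) / cg_height_m

# Illustrative geometries (assumed values, not from the text):
upright = rollover_accel(track_m=0.75, cg_height_m=0.90)    # ≈ 4.1 m/s² (0.42 g)
recumbent = rollover_accel(track_m=0.75, cg_height_m=0.30)  # ≈ 12.3 m/s² (1.25 g)

print(f"upright:   {upright:.1f} m/s²")
print(f"recumbent: {recumbent:.1f} m/s²")
# Under these assumptions the recumbent's threshold exceeds typical tire grip
# (~1 g), so it tends to slide before it tips; the upright trike tips first.
```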
Tadpole
The recumbent tadpole or reverse trike is a recumbent design with two steered wheels at the front and one driven wheel at the back, though one model has the front wheels driven while the rear wheel steers. Steering is either through a single handlebar linked with tie rods to the front wheels' stub axle assemblies (indirect) or with two handlebars (rather, two half-handlebars), each bolted to a steerer tube, usually through a bicycle-type headset, and connected to a stub axle assembly (direct). A single tie rod connects the left and right axle assemblies. The tadpole trike is often used by middle-aged or retiree-age former bicyclists who are tired of the pains associated with normal upright bikes. With its extremely low center of gravity, aerodynamic layout and light weight (for trikes), the tadpole is considered the highest-performance trike. Most velomobiles are built in a tadpole configuration, since a wide front and narrow rear offer superior aerodynamics to a delta configuration.

Hand-crank
Hand-crank trikes use a hand-operated crank, either as the sole source of power or as a double drive combining foot power from the pedals with hand power from the crank. Hand-power-only trikes can be used by individuals who do not have the use of their legs due to a disability or an injury. They are made by companies including Greenspeed, Invacare, Quickie and Druzin. In cases of paralysis of the legs, more speed and range can be obtained by adding functional electrical stimulation of the legs. The large leg muscles are activated by electrical impulses synchronized with the hand-cranking movement.

Tandem
Recumbent tandem trikes allow two people to ride in a recumbent position with an extra-strong backbone frame to hold the extra weight. Some allow the "captain" (the rider who steers) and "stoker" (the rider who only pedals) to pedal at different speeds. They are often made with couplers so the frames can be broken down into pieces for easier transport. Manufacturers of recumbent trikes include Greenspeed, WhizWheelz and Inspired Cycle Engineering.

Rickshaw
Most cycle rickshaws, used for carrying passengers for hire, are tricycles with one steering wheel in the front and two wheels in the back supporting a seating area for one or two passengers. Cycle rickshaws often have a parasol or canopy to protect the passengers from sun and rain. These vehicles are widely used in South Asia and Southeast Asia, where rickshaw driving provides essential employment for recent immigrants from rural areas, generally impoverished men. In the 1990s and the first decade of the 21st century, rickshaws became increasingly popular in big cities in Britain, Europe and the United States, where they provide urban transportation, novelty rides, and serve as advertising media. Spidertrike is a recumbent cycle rickshaw that is used in central London and operated by Eco Chariots. Another example, called the SUV (Sensible Utility Vehicle), is produced by the company Organic Engines, which operates in Florida in the United States. It is a front-wheel-drive tricycle, articulated behind the driver's seat, and has hydraulic double disc brakes and internal hub gears. The passenger is protected from rain and sun with a canopy.

Freight
Urban delivery trikes are designed and constructed for transporting large loads. These trikes include a cargo area consisting of a steel tube carrier, an open or enclosed box, a flat platform, or a large, heavy-duty wire basket.
These are usually mounted over one or both wheels, low behind the front wheel, or between parallel wheels at either the front or rear of the vehicle, to keep the center of gravity low. The frame and drivetrain must be constructed to handle loads several times that of an ordinary bicycle; as such, extra-low gears may be added. Other specific design considerations include operator visibility and load suspension. Many, but not all, cycles used for vending goods, such as ice cream cart trikes or hot dog vending trikes, are cargo cycles. Many freight trikes are of the tadpole configuration, with the cargo box (platform, etc.) mounted between the front wheels. India and China are significant strongholds of the rear-loading "delta" carrier trike. Freight trikes are also designed for indoor use in large warehouses or industrial plants. The advantage of using freight trikes rather than a motor vehicle is that there is no exhaust, which means that the trike can be used inside warehouses. While another option is electric golf-cart-style vehicles, freight trikes are human-powered, so they avoid the maintenance required to keep golf-cart batteries charged. Common uses include:
Delivery services in dense urban environments
Food vending in high foot traffic areas (including specialist ice cream bikes)
Transporting trade tools, including around large installations such as power stations and CERN
Airport cargo handling
Recycling collections
Warehouse inventory transportation
Mail
Food collection
Child transport (in Amsterdam, freight trikes are used primarily to carry children)

Children's
A tricycle is a typical toy for children between the ages of eighteen months and five years, before a balance bike. Compared to adult models, children's trikes are simpler, without brakes or gears, and often with crude front-drive. Child trikes can be unstable, particularly if the wheelbase or track is insufficient. Some trikes have a push bar so adults can control the trike. Child trikes have frames made of metal, plastic, or wood. Children's trikes can have pedals directly driving the front wheel, allowing braking with the pedals, or they can use a chain drive to the rear wheels, often without a differential, so one rear wheel spins free. Children's trikes do not always have pneumatic tires, having instead wheels of solid rubber or hollow plastic. While this may add to the weight of the tricycle and reduce its shock-absorbing qualities, it eliminates the possibility of punctures. Pull brakes are rarely fitted to front-drive trikes, but the child can slow the trike down by resisting the forward motion through the pedals.

Drift
Drift trikes are a variety of tricycle with slick rear wheels, enabling them to drift by being countersteered round corners. They are commonly used for gravity-powered descents of paved roads with steep gradients.

Hand and foot
With hand-and-foot trikes, the rider changes the direction of the pair of front wheels by shifting their weight, and moves forward by rotating the rear wheel. The hand-and-foot trike can also be converted into a manual tricycle designed to be driven with both hands and both feet. There are also new hybrids between a handcycle, a recumbent bike and a tricycle; these make it possible to cycle with the legs even after a spinal cord injury.

Tilting
Tricycles have been constructed that tilt in the direction of a turn, as a bicycle does, to avoid rolling over without a wide axle track.
Examples have included upright, recumbent, delta, and tadpole configurations.

Conversion sets
Tricycle conversion sets or kits convert a bicycle to an upright tricycle. A typical kit removes the front wheel and mounts two wheels under the handlebars for a quick and easy conversion. The advantages of a trike conversion set include lower cost compared with new hand-built tricycles and the freedom to choose almost any donor bicycle frame. Tricycle conversion sets tend to be heavier than a high-quality, hand-built, sports, touring or racing tricycle. Conversion sets can give the would-be serious tricyclist a taste of triking before making the final decision to purchase a complete tricycle. Conversion sets can also be supplied ready to be brazed onto a lightweight steel bicycle frame to form a complete trike. Some trike conversion sets can also be used with recumbent bicycles to form recumbent trikes.

Operation
Adults may find upright tricycles difficult to ride because of familiarity with the counter-steering required to balance a bicycle. The variation in the camber of the road is the principal difficulty to be overcome once basic tricycle handling is mastered. Recumbent trikes are less affected by camber and, depending on track width and riding position, are capable of very fast cornering. Some trikes are tilting three-wheelers, which lean into corners much as bicycles do. In the case of delta tricycles, the drive is often to just one of the rear wheels, though in some cases both wheels are driven through a differential. A double freewheel, preferably using no-backlash roller clutches, is considered superior. Trikes with a differential often use an internally geared hub as a gearbox in a 'mid drive' system. A jackshaft drive permits either single or two-wheel drive. Tadpoles generally use a bicycle's rear-wheel drive and for that reason are usually lighter, cheaper and easier to replace and repair.

Braking
Some trikes use a geometry (also called center-point steering) with the kingpin axis intersecting the ground directly ahead of the tire contact point, producing a normal amount of trail. This arrangement, elsewhere called "zero scrub radius", is used to mitigate the effects of one-sided braking on steering. While zero scrub can reduce steering feel and increase wandering, it can also protect novices from spinning out or flipping. Tadpole trikes tend also to use Ackermann steering geometry, perhaps with both front brakes operated by the stronger hand. While the KMX Kart stunt trike with this setup allows the rear brake to be operated separately, letting the rider do "bootlegger turns", the standard setup for most trikes has the front brake for each side operated by each hand. The center of mass of most tadpole trikes is close to the front wheels, making the rear brake less useful. The rear brake may instead be connected to a latching brake lever for use as a parking brake when stopped on a hill. Recumbent trikes often brake one wheel with each hand, allowing the rider to brake one side alone to pull the trike in that direction.

Records
On 1 July 2005, Sudhakar Yadav from India rode a giant tricycle in Hyderabad. The tricycle is exhibited at the Sudha Cars Museum and has been verified by Guinness World Records as the largest tricycle.
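Since the Braking section above mentions Ackermann steering geometry on tadpole trikes, a short sketch of the ideal Ackermann relationship may help: in a turn, the inner front wheel must steer more sharply than the outer one so that both roll about a common centre on the rear-axle line. The dimensions below are illustrative assumptions, not measurements of any trike named above:

```python
import math

# Ideal Ackermann steering angles for a tadpole trike: the turn centre lies
# on the line through the rear wheel, so the inner front wheel steers more
# sharply than the outer one.

def ackermann_angles(wheelbase: float, track: float, radius: float):
    """Return (inner, outer) steer angles in degrees for a centre-line
    turn radius `radius`; all lengths in metres."""
    inner = math.degrees(math.atan(wheelbase / (radius - track / 2)))
    outer = math.degrees(math.atan(wheelbase / (radius + track / 2)))
    return inner, outer

# Illustrative dimensions (assumed):
inner, outer = ackermann_angles(wheelbase=1.0, track=0.8, radius=3.0)
print(f"inner wheel: {inner:.1f}°, outer wheel: {outer:.1f}°")
# ≈ 21.0° inner vs 16.4° outer: the tie-rod geometry must build in this
# difference, or the front tires will scrub in corners.
```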
Technology
Human-powered transport
null
172817
https://en.wikipedia.org/wiki/Local%20anesthesia
Local anesthesia
Local anesthesia is any technique to induce the absence of sensation in a specific part of the body, generally for the aim of inducing local analgesia, i.e. local insensitivity to pain, although other local senses may be affected as well. It allows patients to undergo surgical and dental procedures with reduced pain and distress. In many situations, such as cesarean section, it is safer and therefore superior to general anesthesia. The following terms are often used interchangeably:
Local anesthesia, in a strict sense, is anesthesia of a small part of the body such as a tooth or an area of skin.
Regional anesthesia is aimed at anesthetizing a larger part of the body such as a leg or arm.
Conduction anesthesia encompasses a great variety of local and regional anesthetic techniques.

Medical
A local anesthetic is a drug that causes reversible local anesthesia and a loss of nociception. When it is used on specific nerve pathways (nerve block), effects such as analgesia (loss of pain sensation) and paralysis (loss of muscle power) can be achieved. Clinical local anesthetics belong to one of two classes: aminoamide and aminoester local anesthetics. Synthetic local anesthetics are structurally related to cocaine. They differ from cocaine mainly in that they have no abuse potential and do not act on the sympathoadrenergic system, i.e. they do not produce hypertension or local vasoconstriction, with the exception of ropivacaine and mepivacaine, which do produce weak vasoconstriction. Unlike other forms of anesthesia, a local anesthetic can be used for a minor procedure in a surgeon's office, as it does not put the patient into a state of unconsciousness. However, the physician should have a sterile environment available before doing a procedure in their office.

Local anesthetics vary in their pharmacological properties and are used in various techniques of local anesthesia, such as:
Topical (surface) anesthesia, for example a numbing gel applied before a lidocaine injection
Infiltration
Plexus block

Adverse effects depend on the local anesthetic method and site of administration, discussed in depth in the local anesthetic sub-article, but overall, adverse effects can be:
localized prolonged anesthesia or paresthesia due to infection, hematoma, excessive fluid pressure in a confined cavity, or severing of nerves and support tissue during injection;
systemic reactions such as depressed CNS syndrome, allergic reaction, vasovagal episode, and cyanosis due to local anesthetic toxicity;
lack of anesthetic effect due to infectious pus such as an abscess.

Non-medical local anesthetic techniques
Local pain management that uses techniques other than analgesic medication includes:
Transcutaneous electrical nerve stimulation, which has been found to be ineffective for lower back pain, although it might help with diabetic neuropathy.
Pulsed radiofrequency, neuromodulation, direct introduction of medication, and nerve ablation, which may be used to target either the tissue structures and organ/systems responsible for persistent nociception or the nociceptors from the structures implicated as the source of chronic pain.
Biology and health sciences
Medical procedures: General
Health
172911
https://en.wikipedia.org/wiki/Nuclear%20weapon%20design
Nuclear weapon design
Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three existing basic design types:
Pure fission weapons are the simplest, least technically demanding, were the first nuclear weapons built, and are so far the only type ever used in warfare, by the United States on Japan in World War II.
Boosted fission weapons increase yield beyond that of the implosion design by using small quantities of fusion fuel to enhance the fission chain reaction. Boosting can more than double the weapon's fission energy yield.
Staged thermonuclear weapons are arrangements of two or more "stages", most usually two. The first stage is normally a boosted fission weapon as above (except for the earliest thermonuclear weapons, which used a pure fission weapon instead). Its detonation causes it to shine intensely with X-rays, which illuminate and implode the second stage filled with a large quantity of fusion fuel. This sets in motion a sequence of events which results in a thermonuclear, or fusion, burn. This process affords potential yields up to hundreds of times those of fission weapons.

Pure fission weapons have historically been the first type built by new nuclear powers. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost-effective option once the necessary technical base and industrial infrastructure are built. Most known innovations in nuclear weapon design originated in the United States, though some were later developed independently by other states. In early news accounts, pure fission weapons were called atomic bombs or A-bombs, and weapons involving fusion were called hydrogen bombs or H-bombs. Practitioners of nuclear policy, however, favor the terms nuclear and thermonuclear, respectively.

Nuclear reactions
Nuclear fission separates or splits heavier atoms to form lighter atoms. Nuclear fusion combines lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, a claim a French patent made in May 1939. In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations.

Fission
When a free neutron hits the nucleus of a fissile atom like uranium-235 (235U), the uranium nucleus splits into two smaller nuclei called fission fragments, plus more neutrons (for 235U three about as often as two; an average of just under 2.5 per fission). The fission chain reaction in a supercritical mass of fuel can be self-sustaining because it produces enough surplus neutrons to offset losses of neutrons escaping the supercritical assembly. Most of these have the speed (kinetic energy) required to cause new fissions in neighboring uranium nuclei. The uranium-235 nucleus can split in many ways, provided the atomic numbers add up to 92 and the mass numbers add up to 236 (uranium-235 plus the neutron that caused the split). The following equation shows one possible split, namely into strontium-95 (95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy:

n + 235U → 95Sr + 139Xe + 2 n + energy

The immediate energy release per atom is about 180 million electron volts (MeV); i.e., 74 TJ/kg.
Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second (i.e. 1.2 cm per nanosecond). The charged fragments' high electric charge causes many inelastic coulomb collisions with nearby nuclei, and these fragments remain trapped inside the bomb's fissile pit and tamper until their kinetic energy is converted into heat. Given the speed of the fragments and the mean free path between nuclei in the compressed fuel assembly (for the implosion design), this takes about a millionth of a second (a microsecond), by which time the core and tamper of the bomb have expanded to a ball of plasma several meters in diameter with a temperature of tens of millions of degrees Celsius. This is hot enough to emit black-body radiation in the X-ray spectrum. These X-rays are absorbed by the surrounding air, producing the fireball and blast of a nuclear explosion.

Most fission products have too many neutrons to be stable, so they are radioactive by beta decay, converting neutrons into protons by throwing off beta particles (electrons), neutrinos and gamma rays. Their half-lives range from milliseconds to about 200,000 years. Many decay into isotopes that are themselves radioactive, so from 1 to 6 (average 3) decays may be required to reach stability. In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global.

Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons do not give up their kinetic energy as quickly in collisions with charged nuclei or electrons. The dominant contribution of fission neutrons to the bomb's power is the initiation of subsequent fissions. Over half of the neutrons escape the bomb core, but the rest strike 235U nuclei, causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one atom, the number of fissions can theoretically double a hundred times in a microsecond, which could consume all uranium or plutonium, up to hundreds of tons, by the hundredth link in the chain. Typically, a modern weapon's pit contains a few kilograms of plutonium, of which only a fraction fissions before the explosion disassembles the weapon.

Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: 235U, also known as highly enriched uranium (HEU), "oralloy" meaning "Oak Ridge alloy", or "25" (a combination of the last digit of the atomic number of uranium-235, which is 92, and the last digit of its mass number, which is 235); and 239Pu, also known as plutonium-239, or "49" (from "94" and "239"). Uranium's most common isotope, 238U, is fissionable but not fissile, meaning that it cannot sustain a chain reaction because its daughter fission neutrons are not (on average) energetic enough to cause follow-on 238U fissions. However, the neutrons released by fusion of the heavy hydrogen isotopes deuterium and tritium will fission 238U.
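The figures quoted in the preceding paragraphs can be checked with a few lines of arithmetic: the conversion of 180 MeV per fission to energy per kilogram of 235U, the fragment speed implied by the 93% kinetic-energy share, and the mass consumed by a hundred doublings of the chain. The average fragment mass of about 118 u is an assumption used for the speed estimate:

```python
# Back-of-envelope checks of the fission figures quoted above.
MEV = 1.602e-13          # joules per MeV
U = 1.661e-27            # kg per atomic mass unit
AVOGADRO = 6.022e23

# 180 MeV per fission -> energy per kg of U-235
atoms_per_kg = AVOGADRO * 1000 / 235
e_per_kg = 180 * MEV * atoms_per_kg
print(f"{e_per_kg / 1e12:.0f} TJ/kg")          # ≈ 74 TJ/kg, as quoted

# Fragment speed: 93% of 180 MeV shared by two fragments of ~118 u each
# (118 u is an assumed average fragment mass, about half of 236)
ke_fragment = 0.93 * 180 * MEV / 2
v = (2 * ke_fragment / (118 * U)) ** 0.5
print(f"{v / 1e3:.0f} km/s")                   # ≈ 12,000 km/s, as quoted

# A hundred doublings from one fission: mass of U-235 consumed
mass_kg = 2**100 * 235 / AVOGADRO / 1000
print(f"{mass_kg / 1e3:.0f} tonnes")           # ≈ 500 tonnes, "hundreds of tons"
```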
Fission of 238U in the outer jacket of the secondary assembly of a two-stage thermonuclear bomb produces by far the greatest fraction of the bomb's energy yield, as well as most of its radioactive debris. For national powers engaged in a nuclear arms race, this fact of 238U's ability to fast-fission from thermonuclear neutron bombardment is of central importance. The plenitude and cheapness of both bulk dry fusion fuel (lithium deuteride) and 238U (a byproduct of uranium enrichment) permit the economical production of very large nuclear arsenals, in comparison to pure fission weapons requiring the expensive 235U or 239Pu fuels.

Fusion
Fusion produces neutrons which dissipate energy from the reaction. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium (2D), fuses with hydrogen-3, or tritium (3T), to form helium-4 (4He) plus one neutron (n) and energy:

2D + 3T → 4He + n + energy

The total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is approximately five times as great. In this fusion reaction, 14 of the 17.6 MeV (80% of the energy released in the reaction) show up as the kinetic energy of the neutron, which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate X-rays for blast and fire. The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (of either isotope; 14 MeV is high enough to fission both 235U and 238U) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold. For weapon use, fission is necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy carried by the fusion neutrons. In the case of a neutron bomb (see below), the last-mentioned factor does not apply, since the objective is to facilitate the escape of neutrons, rather than to use them to increase the weapon's raw power.

Tritium production
An essential nuclear reaction is the one that creates tritium, or hydrogen-3. Tritium is employed in two ways. First, pure tritium gas is produced for placement inside the cores of boosted fission devices in order to increase their energy yields. This is especially so for the fission primaries of thermonuclear weapons. The second way is indirect, and takes advantage of the fact that the neutrons emitted by a supercritical fission "spark plug" in the secondary assembly of a two-stage thermonuclear bomb will produce tritium in situ when these neutrons collide with the lithium nuclei in the bomb's lithium deuteride fuel supply. Elemental gaseous tritium for fission primaries is also made by bombarding lithium-6 (6Li) with neutrons (n), only in a nuclear reactor. This neutron bombardment will cause the lithium-6 nucleus to split, producing an alpha particle, or helium-4 (4He), plus a triton (3T) and energy:

6Li + n → 4He + 3T + energy

But as was discovered in the first test of this type of device, Castle Bravo, when lithium-7 is present one also gets some amount of the following two net reactions:

7Li + n → 3T + 4He + n
7Li + 2D → 2 4He + n + 15.123 MeV

Most lithium is 7Li, and this gave Castle Bravo a yield 2.5 times larger than expected.
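A similar arithmetic check for the D-T figures above, comparing energy per unit mass with fission and computing the neutron's share of the 17.6 MeV from momentum conservation (masses rounded to whole mass units):

```python
# D-T fusion vs U-235 fission, per unit mass (rounded mass numbers).
fusion_mev, fusion_amu = 17.6, 5       # D (2 u) + T (3 u) -> He-4 + n
fission_mev, fission_amu = 180.0, 236  # U-235 + n

fusion_density = fusion_mev / fusion_amu    # MeV per amu
fission_density = fission_mev / fission_amu
print(f"ratio: {fusion_density / fission_density:.1f}x")  # ≈ 4.6, "about five times"

# Neutron's share of the D-T energy: by momentum conservation the two
# products split the energy inversely to their masses, so the 1-u neutron
# carries m_He / (m_n + m_He) = 4/5 of the total.
print(f"neutron share: {4 / 5:.0%} of 17.6 MeV ≈ {0.8 * 17.6:.0f} MeV")  # 80%, ≈ 14 MeV
```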
For tritium production, the neutrons are supplied by the nuclear reactor in a way similar to the production of plutonium-239 from 238U feedstock: target rods of the 6Li feedstock are arranged around a uranium-fueled core, and are removed for processing once it has been calculated that most of the lithium nuclei have been transmuted to tritium.

Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.

Pure fission weapons
The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile (weapon-grade) uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by other neighboring fissile nuclei is large enough that each fission event, on average, causes more than one follow-on fission event. Neutrons released by the first fission events induce subsequent fission events at an exponentially accelerating rate. Each follow-on fissioning continues a sequence of these reactions that works its way throughout the supercritical mass of fuel nuclei. This process is conceived and described colloquially as the nuclear chain reaction.

To start the chain reaction in a supercritical assembly, at least one free neutron must be injected and collide with a fissile fuel nucleus. The neutron joins with the nucleus (technically a fusion event) and destabilizes it, so that it explodes into two middleweight nuclear fragments (from the severing of the strong nuclear force holding the mutually repulsive protons together), plus two or three free neutrons. These race away and collide with neighboring fuel nuclei. This process repeats over and over until the fuel assembly goes sub-critical (from thermal expansion), after which the chain reaction shuts down because the daughter neutrons can no longer find new fuel nuclei to hit before escaping the less-dense fuel mass. Each following fission event in the chain approximately doubles the neutron population (net, after losses due to some neutrons escaping the fuel mass, and others that collide with any non-fuel impurity nuclei present).

For the gun assembly method (see below) of supercritical mass formation, the fuel itself can be relied upon to initiate the chain reaction. This is because even the best weapon-grade uranium contains a significant number of 238U nuclei. These are susceptible to spontaneous fission events, which occur randomly (it is a quantum mechanical phenomenon). Because the fissile material in a gun-assembled critical mass is not compressed, the design need only ensure the two sub-critical masses remain close enough to each other long enough that a 238U spontaneous fission will occur while the weapon is in the vicinity of the target. This is not difficult to arrange, as it takes but a second or two in a typical-size fuel mass for this to occur. (Still, many such bombs meant for delivery by air (gravity bomb, artillery shell or rocket) use injected neutrons to gain finer control over the exact detonation altitude, important for the destructive effectiveness of airbursts.)

This condition of spontaneous fission highlights the necessity to assemble the supercritical mass of fuel very rapidly. The time required to accomplish this is called the weapon's critical insertion time. If spontaneous fission were to occur when the supercritical mass was only partially assembled, the chain reaction would begin prematurely.
Neutron losses through the void between the two subcritical masses (gun assembly) or the voids between not-fully-compressed fuel nuclei (implosion assembly) would sap the bomb of the number of fission events needed to attain the full design yield. Additionally, heat resulting from the fissions that do occur would work against the continued assembly of the supercritical mass, through thermal expansion of the fuel. This failure is called predetonation. The resulting explosion would be called a "fizzle" by bomb engineers and weapon users. Plutonium's high rate of spontaneous fission makes uranium fuel a necessity for gun-assembled bombs, with their much greater insertion time and much greater mass of fuel required (because of the lack of fuel compression).

There is another source of free neutrons that can spoil a fission explosion. All uranium and plutonium nuclei have a decay mode that results in energetic alpha particles. If the fuel mass contains impurity elements of low atomic number (Z), these charged alphas can penetrate the coulomb barrier of these impurity nuclei and undergo a reaction that yields a free neutron. The rate of alpha emission of fissile nuclei is one to two million times that of spontaneous fission, so weapon engineers are careful to use fuel of high purity.

Fission weapons used in the vicinity of other nuclear explosions must be protected from the intrusion of free neutrons from outside. Such shielding material will almost always be penetrated, however, if the outside neutron flux is intense enough. When a weapon misfires or fizzles because of the effects of other nuclear detonations, it is called nuclear fratricide.

For the implosion-assembled design, once the critical mass is assembled to maximum density, a burst of neutrons must be supplied to start the chain reaction. Early weapons used a modulated neutron generator, code-named "Urchin", inside the pit, containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushes the neutron generator, mixing the two metals, thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better timing of the first fission events in the chain reaction, which optimally should occur at the point of maximum compression/supercriticality. Timing of the neutron injection is a more important parameter than the number of neutrons injected: the first generations of the chain reaction are vastly more effective due to the exponential function by which neutron multiplication evolves.

The critical mass of an uncompressed sphere of bare metal is about 52 kg for uranium-235 and about 16 kg for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons. To avoid a premature chain reaction during handling, the fissile material in the weapon must be kept subcritical. It may consist of one or more components containing less than one uncompressed critical mass each.
A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality. Another method of reducing criticality risk is to incorporate material with a large cross-section for neutron capture, such as boron (specifically 10B comprising 20% of natural boron). Naturally this neutron absorber must be removed before the weapon is detonated. This is easy for a gun-assembled bomb: the projectile mass simply shoves the absorber out of the void between the two subcritical masses by the force of its motion. The use of plutonium affects weapon design due to its high rate of alpha emission. This results in Pu metal spontaneously producing significant heat; a 5 kilogram mass produces 9.68 watts of thermal power. Such a piece would feel warm to the touch, which is no problem if that heat is dissipated promptly and not allowed to build up the temperature. But this is a problem inside a nuclear bomb. For this reason bombs using Pu fuel use aluminum parts to wick away the excess heat, and this complicates bomb design because Al plays no active role in the explosion processes. A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the thermal expansion of the fissioning fuel mass, keeping it supercritical for longer. Often the same layer serves both as tamper and as neutron reflector. Gun-type assembly Little Boy, the Hiroshima bomb, used of uranium with an average enrichment of around 80%, or of uranium-235, just about the bare-metal critical mass . When assembled inside its tamper/reflector of tungsten carbide, the was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the nuclear explosion. Analysis shows that less than 2% of the uranium mass underwent fission; the remainder, representing most of the entire wartime output of the giant Y-12 factories at Oak Ridge, scattered uselessly. The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the 235U in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design. Implosion-type For both the Trinity device and the Fat Man (Nagasaki) bomb, nearly identical plutonium fission through implosion designs were used. The Fat Man device specifically used , about in volume, of Pu-239, which is only 41% of bare-sphere critical mass . Surrounded by a U-238 reflector/tamper, the Fat Man's pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to increase its density by simultaneous detonation, as with the "Trinity" test detonation three weeks earlier, of the conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. 
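The plutonium decay-heat figure quoted earlier in this section (a 5 kg mass producing roughly 9.7 W) can be roughly reproduced from 239Pu's half-life and alpha-decay energy. This is a back-of-the-envelope check rather than anything from the source: the half-life and decay energy are standard nuclear-data values, and the 5 kg mass is the one quoted in the text.

```python
import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def alpha_decay_heat_watts(mass_kg, molar_mass_g, half_life_years, decay_energy_mev):
    """Thermal power from alpha decay of a pure isotope sample."""
    atoms = mass_kg * 1000.0 / molar_mass_g * AVOGADRO
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    activity = atoms * decay_constant                 # decays per second
    return activity * decay_energy_mev * 1.602e-13    # MeV -> joules per decay

# 5 kg of Pu-239: half-life ~24,100 years, ~5.24 MeV carried by the alpha and recoil.
print(f"{alpha_decay_heat_watts(5.0, 239.05, 24_100, 5.24):.1f} W")   # ≈ 9.7 W
```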
It is estimated that only about 20% of the plutonium underwent fission; the rest, about , was scattered. An implosion shock wave might be of such short duration that only part of the pit is compressed at any instant as the wave passes through it. To prevent this, a pusher shell may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backward, thereby having the effect of lengthening its duration. It is made out of a low-density metal – such as aluminium, beryllium, or an alloy of the two metals (aluminium is easier and safer to shape, and is two orders of magnitude cheaper; beryllium has high neutron-reflective capability). Fat Man used an aluminium pusher. The series of RaLa Experiment tests of implosion-type fission weapon design concepts, carried out from July 1944 through February 1945 at the Los Alamos Laboratory and a remote site east of it in Bayo Canyon, proved the practicality of the implosion design for a fission device, with the February 1945 tests positively determining its usability for the final Trinity/Fat Man plutonium implosion design. The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper. (The natural uranium tamper did not undergo fission from thermal neutrons, but did contribute perhaps 20% of the total yield from fission by fast neutrons.) After the chain reaction started in the plutonium, it continued until the explosion reversed the momentum of the implosion and expanded enough to stop the chain reaction. By holding everything together for a few hundred nanoseconds more, the tamper increased the efficiency. Plutonium pit The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s. Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 3.0–3.5 mol% (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range. When cooling from the molten state it then has only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds are corrosive, so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium. Because plutonium is chemically reactive, it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard. The Gadget used galvanic silver plating; afterward, nickel deposited from nickel tetracarbonyl vapors was used, and since then gold has been the preferred material. Recent designs improve safety by plating pits with vanadium to make the pits more fire-resistant. Levitated-pit implosion The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact.
The pit, supported on a hollow cone inside the tamper cavity, was said to be "levitated". The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man. It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was wide vs for Little Boy. The Pu-239 pit of Fat Man was only in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design. Two-point linear implosion In the two-point linear implosion, the nuclear fuel is cast into a solid shape and placed within the center of a cylinder of high explosive. Detonators are placed at either end of the explosive cylinder, and a plate-like insert, or shaper, is placed in the explosive just inside the detonators. When the detonators are fired, the initial detonation is trapped between the shaper and the end of the cylinder, causing it to travel out to the edges of the shaper where it is diffracted around the edges into the main mass of explosive. This causes the detonation to form into a ring that proceeds inward from the shaper. Due to the lack of a tamper or lenses to shape the progression, the detonation does not reach the pit in a spherical shape. To produce the desired spherical implosion, the fissile material itself is shaped to produce the same effect. Due to the physics of the shock wave propagation within the explosive mass, this requires the pit to be a prolate spheroid, that is, roughly egg shaped. The shock wave first reaches the pit at its tips, driving them inward and causing the mass to become spherical. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes such designs inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes; an example is the W48 artillery shell, the smallest nuclear weapon ever built or deployed. All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (15 and 25 cm). Hollow-pit implosion A more efficient implosion system uses a hollow pit. A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraints, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives. After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminium pusher, and less high explosive. Fusion-boosted fission The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. This would allow the efficient fission of the fuel with less mass in the form of tamper or the fuel itself. 
The key to achieving faster fission would be to introduce more neutrons, and among the many ways to do this, adding a fusion reaction was relatively easy in the case of a hollow pit. The easiest fusion reaction to achieve is found in a 50–50 mixture of tritium and deuterium. For fusion power experiments this mixture must be held at high temperatures for relatively lengthy times in order to have an efficient reaction. For explosive use, however, the goal is not to produce efficient fusion, but simply to provide extra neutrons early in the process. Since a nuclear explosion is supercritical, any extra neutrons will be multiplied by the chain reaction, so even tiny quantities introduced early can have a large effect on the outcome. For this reason, even the relatively low compression pressures and times (in fusion terms) found in the center of a hollow pit warhead are enough to create the desired effect. In the boosted design, the fusion fuel in gas form is pumped into the pit during arming. This will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical. Once the hollow pit is perfected, there is little reason not to boost; deuterium and tritium are easily produced in the small quantities needed, and the technical aspects are trivial. The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons. Boosting reduces diameter in three ways, all the result of faster fission. First, since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a lightweight beryllium shell (to reflect escaping neutrons back into the pit), reducing the diameter. Second, the mass of the pit can be reduced by half without reducing yield, reducing the diameter again. Third, since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further. The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of and a length of . It was first tested standalone and then as the primary of a two-stage thermonuclear device during Operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed. After the success of Swan, its diameter seemed to become the standard for boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere. One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile, which was only a fraction of the size of its Fat Man predecessor. Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to predetonation. It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate problem). This radiation-induced predetonation was a particular problem before effective early-warning radar systems, because a first strike might render retaliatory weapons useless.
Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect. Two-stage thermonuclear Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to add a second independent stage, called a secondary. In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it would not work, even with large amounts of expensive tritium added. The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanislaw Ulam invented radiation implosion – for nearly three decades known publicly only as the Teller-Ulam H-bomb secret. The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons. In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force (much like an inside out rocket engine) causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes a fissile "spark plug" at its center ignites and provides neutrons and heat which enable the lithium deuteride fusion fuel to produce tritium and ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary despite often not being much larger than the primary. For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was in diameter and long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons vs 15 kilotons). Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile. A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. 
A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons. In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example: The optimal way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears. For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. No material is better suited for both of these jobs than ordinary, cheap uranium-238, which also happens to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool; to hold it, inertially, in a highly compressed state; and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. Insiders never used the term "hydrogen bomb". Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield. In the ensuing fifty years, no one has come up with a more efficient way to build a thermonuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. On 3 September 2017 North Korea carried out what it reported as its first "two-stage thermo-nuclear weapon" test. According to Dr. Theodore Taylor, after reviewing leaked photographs of disassembled weapons components taken before 1986, Israel possessed boosted weapons and would require supercomputers of that era to advance further toward full two-stage weapons in the megaton range without nuclear test detonations. The other nuclear-armed nations, India and Pakistan, probably have single-stage weapons, possibly boosted. Interstage In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fusion and fission reactions get a chance to start. There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. 
government document formally released to the public appears to be a caption in a graphic promoting the Reliable Replacement Warhead Program in 2007. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner. There is also some speculation that this interstage material, which may be code-named Fogbank, might be an aerogel, possibly doped with beryllium and/or other substances. The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens", "reflector/neutron gun carriage", and "reflector wrap". The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation. Specific designs While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples are listed below. Alarm Clock/Sloika The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon. The U.S. name, Alarm Clock, came from Teller: he called it that because it might "wake up the world" to the potential of the Super. The Russian name for the same design was more descriptive: Sloika (), a layered pastry cake. A single-stage Soviet Sloika was tested as RDS-6s on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, April 26, 1954, used a two-stage thermonuclear device code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons. Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that the USSR won the H-bomb race, even though the United States had developed and tested the first hydrogen bomb, Ivy Mike. The 1952 U.S. Ivy Mike test used cryogenically cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction. However, the first Soviet test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike. In fact, real work on the implosion scheme in the Soviet Union only commenced in early 1953, several months after the successful testing of Sloika. Clean bombs On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Castle Bravo shot of Operation Castle at Bikini Atoll, delivered a promptly lethal dose of fission-product fallout to more than of Pacific Ocean surface.
Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs. In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages in the Teller-Ulam design, can far exceed the energy released by fusion, as was the case in the Castle Bravo test. Replacing the fissionable material in the tamper with another material is essential to producing a "clean" bomb. In such a device, the tamper no longer contributes energy, so for any given weight, a clean bomb will have less yield. The earliest known instance of a three-stage device being tested, with the third stage, called the tertiary, being ignited by the secondary, was May 27, 1956, in the Bassoon device. This device was tested in the Zuni shot of Operation Redwing. This shot used non-fissionable tampers; an inert substitute material such as tungsten or lead was used. Its yield was 3.5 megatons, 85% fusion and only 15% fission. The Ripple concept, which used ablation to achieve fusion using very little fission, was and still is by far the cleanest design. Unlike previous clean bombs, which were clean simply because fission fuel had been replaced with an inert substance, Ripple was clean by design. Ripple was also extremely efficient; plans for a device with a yield-to-mass ratio of 15 kt/kg were made during Operation Dominic. Shot Androscoggin featured a proof-of-concept Ripple design, resulting in a 63-kiloton fizzle (significantly lower than the predicted 15 megatons). The concept was repeated in shot Housatonic, which featured a 9.96-megaton explosion that was reportedly >99.9% fusion. The publicly known devices that produced the highest proportion of their yield via fusion reactions are the peaceful nuclear explosions of the 1970s. Others include the 10-megaton Dominic Housatonic at over 99.9% fusion, the 50-megaton Tsar Bomba at 97% fusion, the 9.3-megaton Hardtack Poplar test at 95%, and the 4.5-megaton Redwing Navajo test at 95% fusion. The most ambitious peaceful application of nuclear explosions was pursued by the USSR with the aim of creating a long canal between the Pechora river basin and the Kama river basin, about half of which was to be constructed through a series of underground nuclear explosions. It was reported that about 250 nuclear devices might be needed to complete the project. The Taiga test was to demonstrate the feasibility of the project. Three of these "clean" devices of 15 kiloton yield each were placed in separate boreholes spaced about apart at depths of . They were simultaneously detonated on March 23, 1971, catapulting a radioactive plume into the air that was carried eastward by wind. The resulting trench was around long and wide, with an unimpressive depth of just . Despite their "clean" nature, the area still exhibits a noticeably higher (albeit mostly harmless) concentration of fission products. In addition, the intense neutron bombardment of the soil, the devices themselves, and the support structures activated stable elements there, creating a significant amount of man-made radioactive isotopes such as 60Co.
The overall danger posed by the concentration of radioactive elements present at the site created by these three devices is still negligible, but a larger-scale project as envisioned would have had significant consequences, both from the fallout of the radioactive plumes and from the radioactive elements created by the neutron bombardment. On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni shot clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement, the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test, and others, culminated in the eventual deployment of the highest-yielding US nuclear weapon known, and the highest yield-to-weight weapon ever made, a three-stage thermonuclear weapon with a maximum "dirty" yield of 25 megatons, designated the B41 nuclear bomb, which was to be carried by U.S. Air Force bombers until it was decommissioned; this weapon was never fully tested. Third generation First and second generation nuclear weapons release energy as omnidirectional blasts. Third generation nuclear weapons are experimental special-effect warheads and devices that can release energy in a directed manner, some of which were tested during the Cold War but were never deployed. These include: Project Prometheus, also known as "Nuclear Shotgun", which would have used a nuclear explosion to accelerate kinetic penetrators against ICBMs. Project Excalibur, a nuclear-pumped X-ray laser to destroy ballistic missiles. Nuclear shaped charges that focus their energy in particular directions. Project Orion explored the use of nuclear explosives for rocket propulsion. Fourth generation The idea of "4th-generation" nuclear weapons has been proposed as a possible successor to the examples of weapons designs listed above. These methods tend to revolve around using non-nuclear primaries to set off further fission or fusion reactions. For example, if antimatter were usable and controllable in macroscopic quantities, a reaction between a small amount of antimatter and an equivalent amount of matter could release energy comparable to a small fission weapon, and could in turn be used as the first stage of a very compact thermonuclear weapon (a rough sense of the energy scale involved is sketched below). Extremely powerful lasers could also potentially be used this way, if they could be made powerful enough and compact enough to be viable in a weapon. Most of these ideas are versions of pure fusion weapons, and share the common property that they involve hitherto unrealized technologies as their "primary" stages. While many nations have invested significantly in inertial confinement fusion research programs, since the 1970s it has not been considered promising for direct weapons use, but rather as a tool for weapons- and energy-related research that can be used in the absence of full-scale nuclear testing. Whether any nations are aggressively pursuing "4th-generation" weapons is not clear.
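The annihilation energy scale referred to above follows directly from E = mc². The sketch below is purely illustrative; the masses chosen are arbitrary examples, not quantities anyone is known to be able to produce or store.

```python
C = 2.998e8                      # speed of light, m/s
JOULES_PER_KT_TNT = 4.184e12

def annihilation_yield_kt(antimatter_grams):
    """Energy released when this much antimatter annihilates with an equal
    mass of ordinary matter, expressed in kilotons of TNT equivalent."""
    total_mass_kg = 2 * antimatter_grams / 1000.0
    return total_mass_kg * C ** 2 / JOULES_PER_KT_TNT

for grams in (0.001, 0.023, 1.0):
    print(f"{grams} g of antimatter -> {annihilation_yield_kt(grams):.3f} kt")
# 0.001 g -> ~0.043 kt, 0.023 g -> ~1 kt, 1 g -> ~43 kt
```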
In many cases (as with antimatter) the underlying technology is presently thought to be very far from viable; and if it were viable, it would be a powerful weapon in and of itself, outside a nuclear weapons context, without providing any significant advantage over existing nuclear weapon designs. Pure fusion weapons Since the 1950s, the United States and the Soviet Union investigated the possibility of releasing significant amounts of nuclear fusion energy without the use of a fission primary. Such "pure fusion weapons" were primarily imagined as low-yield, tactical nuclear weapons whose advantage would be their ability to be used without producing fallout on the scale of weapons that release fission products. In 1998, the United States Department of Energy declassified information concerning pure fusion weapons research. Red mercury, a likely hoax substance, has been hyped as a catalyst for a pure fusion weapon. Cobalt bombs Made popular by Nevil Shute's 1957 novel On the Beach, and the subsequent 1959 movie, the cobalt bomb is a doomsday weapon: a hydrogen bomb with a jacket of cobalt. The neutron-activated cobalt would have maximized the environmental damage from radioactive fallout. These bombs were popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb; the material added to the bombs is referred to in the film as 'cobalt-thorium G'. Such "salted" weapons were investigated by the U.S. Department of Defense. Fission products are as deadly as neutron-activated cobalt. Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from cobalt-60: 15,000 times more intense at 1 hour; 35 times at 1 week; 5 times at 1 month; and about equal at 6 months. Thereafter fission-product radiation drops off rapidly, so that cobalt-60 fallout is 8 times more intense than the fission products at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the cobalt-60 again after about 75 years. The triple "Taiga" nuclear salvo test, part of the preliminary March 1971 Pechora–Kama Canal project, produced a comparatively small amount of fission products, so neutron-activated case material accounts for most of the residual activity at the site today, chiefly 60Co. Fusion-generated neutron activation was responsible for about half of the gamma dose at the test site. That dose is too small to cause deleterious effects, and normal green vegetation exists all around the lake that was formed. Arbitrarily large multi-staged devices The idea of a device which has an arbitrarily large number of Teller-Ulam stages, with each driving a larger radiation-driven implosion than the preceding stage, is frequently suggested, but technically disputed. There are "well-known sketches and some reasonable-looking calculations in the open literature about two-stage weapons, but no similarly accurate descriptions of true three stage concepts." During the mid-1950s through the early 1960s, scientists working in the weapons laboratories of the United States investigated weapons concepts as large as 1,000 megatons, and Edward Teller announced the design of a 10,000-megaton weapon code-named SUNDIAL at a meeting of the General Advisory Committee of the Atomic Energy Commission. Much of the information about these efforts remains classified, but such "gigaton" range weapons do not appear to have made it beyond theoretical investigations.
While both the US and Soviet Union investigated (and in the case of the Soviets, tested) "very high yield" (e.g. 50 to 100-megaton) weapons designs in the 1950s and early 1960s, these appear to represent the upper-limit of Cold War weapon yields pursued seriously, and were so physically heavy and massive that they could not be carried entirely within the bomb bays of the largest bombers. Cold War warhead development trends from the mid-1960s onward, and especially after the Limited Test Ban Treaty, instead resulted in highly-compact warheads with yields in the range from hundreds of kilotons to the low megatons that gave greater options for deliverability. Following the concern caused by the estimated gigaton scale of the 1994 Comet Shoemaker-Levy 9 impacts on the planet Jupiter, in a 1995 meeting at Lawrence Livermore National Laboratory (LLNL), Edward Teller proposed to a collective of U.S. and Russian ex-Cold War weapons designers that they collaborate on designing a 1,000-megaton nuclear explosive device for diverting extinction-class asteroids (10+ km in diameter), which would be employed in the event that one of these asteroids were on an impact trajectory with Earth. Neutron bombs A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase its overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated). ERWs are more accurately described as suppressed yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, , is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings. Blast pressures of upwards of are survivable, whereas most buildings will collapse with a pressure of only . Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews – tanks giving excellent protection against blast and heat, surviving (relatively) very close to a detonation. Given the Soviets' vast tank forces during the Cold War, this was the perfect weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance that the heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive, temporarily preventing its re-use by a fresh crew. Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralising an incoming warhead at a greater range than heat or blast. 
Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux. ERWs were two-stage thermonuclear weapons with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s by U.S. forces in Europe. The last ones were retired in the 1990s. A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10 to 15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88 (a rough energy-bookkeeping check of this factor is sketched below). Weapon design laboratories All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage. Lawrence Berkeley The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the cyclotron-based production and isolation of plutonium in 1940. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference. By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer (classified at the time, but since fully declassified and widely available online). The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods", the one approach that turned out to be a dead end. Los Alamos At Los Alamos, Emilio Segrè found in April 1944 that the proposed Thin Man gun-assembly bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford Site in Washington. In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.
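The "10 to 15 times" neutron-output factor mentioned in the enhanced-radiation discussion above can be sanity-checked with rough energy bookkeeping: a fission event deposits roughly 180 MeV locally and produces two to three neutrons, most of which are absorbed inside the device, while each D-T fusion releases about 17.6 MeV and one 14 MeV neutron with little material around to capture it. The values below are standard textbook figures, and the neutron escape fraction is an assumed free parameter, so this is only an order-of-magnitude cross-check.

```python
JOULES_PER_KT = 4.184e12
MEV_TO_J = 1.602e-13

def neutrons_per_kt_fission(escape_fraction=0.3):
    """Escaping neutrons per kiloton of pure fission yield: ~180 MeV deposited
    and ~2.5 neutrons produced per fission, with only an assumed fraction of
    those neutrons leaking out of the device."""
    fissions = JOULES_PER_KT / (180 * MEV_TO_J)
    return fissions * 2.5 * escape_fraction

def neutrons_per_kt_dt_fusion():
    """Neutrons per kiloton of D-T fusion yield: one 14 MeV neutron per
    17.6 MeV reaction, assumed to escape largely unhindered."""
    return JOULES_PER_KT / (17.6 * MEV_TO_J)

ratio = neutrons_per_kt_dt_fusion() / neutrons_per_kt_fission()
print(f"fusion emits ~{ratio:.0f}x more neutrons per kiloton")
# -> ~14x, consistent with the "10 to 15 times" figure above
```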
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved. Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do. Lawrence Livermore With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly. Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles. In 1957 and 1958, both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium-235 into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs. Starting with the Nova facility at Livermore in the mid-1980s, nuclear design activity pertaining to radiation-driven implosion was informed by research with indirect drive laser fusion. This work was part of the effort to investigate Inertial Confinement Fusion. Similar work continues at the more powerful National Ignition Facility. The Stockpile Stewardship and Management Program also benefited from research performed at NIF. Explosive testing Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype. 
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate. It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass-produced. It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place. Light pipes An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded by instruments located at the far end of a long, very straight pipe. The picture below shows the Shrimp device, detonated on March 1, 1954, at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are actually diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when X-rays from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel. From the shot cab, the pipes turned horizontally and traveled along a causeway built on the Bikini reef to a remote-controlled data collection bunker on Namu Island. While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary creates a relatively opaque radiation front in the channel filler, which acts like a slow-moving logjam to retard the passage of radiant energy. While the secondary is being compressed via radiation-induced ablation, neutrons from the primary catch up with the x-rays, penetrate into the secondary, and start breeding tritium via the third reaction noted in the first section above. This 6Li + n reaction is exothermic, producing 5 MeV per event. The spark plug has not yet been compressed and thus remains subcritical, so no significant fission or fusion takes place as a result. If enough neutrons arrive before implosion of the secondary is complete, though, the crucial temperature differential between the outer and inner parts of the secondary can be degraded, potentially causing the secondary to fail to ignite. The first Livermore-designed thermonuclear weapon, the Morgenstern device, failed in this manner when it was tested as Castle Koon on April 7, 1954. 
The primary ignited, but the secondary, preheated by the primary's neutron wave, suffered what was termed an inefficient detonation; thus, a weapon with a predicted one-megaton yield produced only 110 kilotons, of which merely 10 kt were attributed to fusion. These timing effects, and any problems they cause, are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications. It is not clear from the public record how successful the Shrimp light pipes were. The unmanned data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times as powerful as expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were farther away, in a bunker that survived intact.) Fallout analysis The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which does not breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high. Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected (the arithmetic behind this factor is sketched below). The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly. As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, Daigo Fukuryū Maru, sailed home with enough fallout on her decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons. Underground testing The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing literally underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas. The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive caverns created by nuclear explosions. After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.
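The factor of two and a half inferred from the Castle Bravo fallout analysis is simple bookkeeping: the secondary was designed around its 40% lithium-6 content, but the 7Li(n,2n) pathway made much of the remaining 60% usable as well. The fractions below are the ones quoted in the text; the 7Li conversion efficiency is the assumed free parameter of this sketch.

```python
def effective_li6_multiplier(li6_fraction, li7_conversion_efficiency):
    """Ratio of effective tritium-breeding lithium to the amount the designers
    counted on (the 6Li alone), if some fraction of the 7Li is converted to
    usable 6Li by the (n, 2n) reaction."""
    li7_fraction = 1.0 - li6_fraction
    return (li6_fraction + li7_fraction * li7_conversion_efficiency) / li6_fraction

# Shrimp secondary: 40% Li-6.  If essentially all of the Li-7 ends up
# contributing, the multiplier is 2.5 -- the factor inferred from fallout.
print(effective_li6_multiplier(0.40, 1.0))   # -> 2.5
```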
Production facilities When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa. Because primaries tend to be bulky, especially in diameter, plutonium, which has a smaller critical mass than uranium, is the fissile material of choice for pits, used with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility. The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite plutonium–oralloy (Pu-Oy) pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries. Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries. The Hanford Site near Richland, Washington, operated plutonium-production reactors and separation facilities during World War II and the Cold War. Nine plutonium-production reactors were built and operated there, the first being the B Reactor, which began operations in September 1944, and the last the N Reactor, which ceased operations in January 1987. The Savannah River Site in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride. Warhead design safety Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognised the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation. Gun-type It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled. If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe. In-flight pit insertion Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern. On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it was feasible to use in-flight pit insertion. The bomber would take off with no fissile material in the bomb.
Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system. In-flight pit insertion will not work with a hollow pit in contact with its tamper. Steel ball safety method One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the pit: by filling the hollow core, they prevented the pit from being imploded symmetrically into a supercritical configuration in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs. Chain safety method Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings as has happened in several weapons accidents, it could not cause a nuclear explosion. One-point safety While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern. In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical. It is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests. Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes. Of particular concern was Livermore's W47, which generated unacceptably high yields in one-point testing. To prevent an accidental detonation, Livermore decided to use mechanical safing on the W47. The wire safety scheme described below was the result. When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing. Wire safety method In the last test before the 1958 moratorium the W47 warhead for the Polaris SLBM was found not to be one-point safe, producing an unacceptably high nuclear yield (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. A solution was devised consisting of a boron-coated wire inserted into the weapon's hollow pit at manufacture.
The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. Once withdrawn, the wire could not be re-inserted. The wire had a tendency to become brittle during storage, and break or get stuck during arming, preventing complete removal and rendering the warhead a dud. It was estimated that 50–75% of warheads would fail. This required a complete rebuild of all W47 primaries. The oil used for lubricating the wire also promoted corrosion of the pit. Strong link/weak link Under the strong link/weak link system, "weak links" are constructed between critical nuclear weapon components (the "hard links"). In the event of an accident the weak links are designed to fail first, in a manner that precludes energy transfer between the critical components. Then, even if a hard link fails in a manner that transfers or releases energy, that energy cannot be passed on to other weapon systems, where it could potentially start a nuclear detonation. Hard links are usually critical weapon components that have been hardened to survive extreme environments, while weak links can be either components deliberately inserted into the system to act as a weak link or critical nuclear components that can fail predictably. An example of a weak link would be an electrical connector that contains electrical wires made from a low melting point alloy. During a fire, those wires would melt, breaking any electrical connection. Permissive action link A permissive action link is an access control device designed to prevent unauthorized use of nuclear weapons. Early PALs were simple electromechanical switches; they have since evolved into complex arming systems that include integrated yield control options, lockout devices and anti-tamper devices.
Technology
Weapon of mass destruction
null
172987
https://en.wikipedia.org/wiki/Solar%20mass
Solar mass
The solar mass () is a standard unit of mass in astronomy, equal to approximately 2 × 10^30 kg (2 nonillion kilograms in US short scale). It is approximately equal to the mass of the Sun. It is often used to indicate the masses of other stars, as well as stellar clusters, nebulae, galaxies and black holes. More precisely, the mass of the Sun is about 1.988 × 10^30 kg. The solar mass is about 333,000 times the mass of Earth, or about 1,047 times the mass of Jupiter. History of measurement The value of the gravitational constant was first derived from measurements that were made by Henry Cavendish in 1798 with a torsion balance. The value he obtained differs by only 1% from the modern value, but was not as precise. The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769, yielding a value of (9 arcseconds, compared to the present value of ). From the value of the diurnal parallax, one can determine the distance to the Sun from the geometry of Earth. The first known estimate of the solar mass was by Isaac Newton. In his work Principia (1687), he estimated that the ratio of the mass of Earth to the Sun was about . Later he determined that his value was based upon a faulty value for the solar parallax, which he had used to estimate the distance to the Sun. He corrected his estimated ratio to in the third edition of the Principia. The current value for the solar parallax is smaller still, yielding an estimated mass ratio of . As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. This is because the relative mass of another planet in the Solar System or the combined mass of two binary stars can be calculated in units of solar mass directly from the orbital radius and orbital period of the planet or stars using Kepler's third law. Calculation The mass of the Sun cannot be measured directly, and is instead calculated from other measurable factors, using the equation for the orbital period of a small body orbiting a central mass. Based on the length of the year, the distance from Earth to the Sun (an astronomical unit or AU), and the gravitational constant (G), the mass of the Sun is given by solving Kepler's third law: M = 4π² × (1 AU)³ / (G × (1 yr)²). The value of G is difficult to measure and is only known with limited accuracy (see Cavendish experiment). The value of G times the mass of an object, called the standard gravitational parameter, is known for the Sun and several planets to a much higher accuracy than G alone. As a result, the solar mass is used as the standard mass in the astronomical system of units. Variation The Sun is losing mass because of fusion reactions occurring within its core, which lead to the emission of electromagnetic energy and neutrinos, and through the ejection of matter with the solar wind. It is expelling about /year. The mass loss rate will increase when the Sun enters the red giant stage, climbing to /year when it reaches the tip of the red-giant branch. This will rise to /year on the asymptotic giant branch, before peaking at a rate of 10^−5 to 10^−4 solar masses per year as the Sun generates a planetary nebula. By the time the Sun becomes a degenerate white dwarf, it will have lost 46% of its starting mass. The mass of the Sun has been decreasing since the time it formed. This occurs through two processes in nearly equal amounts. First, in the Sun's core, hydrogen is converted into helium through nuclear fusion, in particular the p–p chain, and this reaction converts some mass into energy in the form of gamma ray photons. 
Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as the solar wind and coronal mass ejections. The original mass of the Sun at the time it reached the main sequence remains uncertain. The early Sun had much higher mass-loss rates than at present, and it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime. Related units One solar mass, , can be converted to related units: (Lunar mass) (Earth mass) (Jupiter mass) It is also frequently useful in general relativity to express mass in units of length or time. (half the Schwarzschild radius of the Sun) The solar mass parameter (G·), as listed by the IAU Division I Working Group, has the following estimates: (TCG-compatible) (TDB-compatible)
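As a rough numerical check of the Kepler's-third-law relation given in the Calculation section above, the short Python sketch below plugs in rounded textbook values for G, the astronomical unit and the year (these constants are assumptions for illustration, not figures quoted in the article) and recovers a solar mass of roughly 2 × 10^30 kg; it also evaluates GM/c², the half-Schwarzschild-radius length mentioned under Related units.

import math

# Rounded reference constants (assumed for illustration, not from this article)
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11    # astronomical unit, m
T = 3.156e7      # one year, s
c = 2.998e8      # speed of light, m/s

# Kepler's third law solved for the central mass: M = 4*pi^2 * a^3 / (G * T^2)
M_sun = 4 * math.pi**2 * AU**3 / (G * T**2)
print(f"Solar mass ~ {M_sun:.3e} kg")             # about 1.99e30 kg

# GM/c^2 is half the Schwarzschild radius of the Sun
print(f"GM/c^2 ~ {G * M_sun / c**2 / 1e3:.2f} km")  # about 1.5 km

Because the product GM is known far more precisely than G or M separately, ephemeris work uses the standard gravitational parameter directly, which is the point made in the Calculation section above.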
Physical sciences
Mass and weight
Basics and measurement
173072
https://en.wikipedia.org/wiki/Growth%20hormone
Growth hormone
Growth hormone (GH) or somatotropin, also known as human growth hormone (hGH or HGH) in its human form, is a peptide hormone that stimulates growth, cell reproduction, and cell regeneration in humans and other animals. It is thus important in human development. GH also stimulates production of insulin-like growth factor 1 (IGF-1) and increases the concentration of glucose and free fatty acids. It is a type of mitogen which is specific only to the receptors on certain types of cells. GH is a 191-amino acid, single-chain polypeptide that is synthesized, stored and secreted by somatotropic cells within the lateral wings of the anterior pituitary gland. A recombinant form of HGH called somatropin (INN) is used as a prescription drug to treat children's growth disorders and adult growth hormone deficiency. In the United States, it is only available legally from pharmacies by prescription from a licensed health care provider. In recent years in the United States, some health care providers are prescribing growth hormone in the elderly to increase vitality. While legal, the efficacy and safety of this use for HGH has not been tested in a clinical trial. Many of the functions of HGH remain unknown. In its role as an anabolic agent, HGH has been used by competitors in sports since at least 1982 and has been banned by the IOC and NCAA. Traditional urine analysis does not detect doping with HGH, so the ban was not enforced until the early 2000s, when blood tests that could distinguish between natural and artificial HGH were starting to be developed. Blood tests conducted by WADA at the 2004 Olympic Games in Athens, Greece, targeted primarily HGH. Use of the drug for performance enhancement is not currently approved by the FDA. GH has been studied for use in raising livestock more efficiently in industrial agriculture and several efforts have been made to obtain governmental approval to use GH in livestock production. These uses have been controversial. In the United States, the only FDA-approved use of GH for livestock is the use of a cow-specific form of GH called bovine somatotropin for increasing milk production in dairy cows. Retailers are permitted to label containers of milk as produced with or without bovine somatotropin. Nomenclature The names somatotropin (STH) or somatotropic hormone refer to the growth hormone produced naturally in animals and extracted from carcasses. Hormone extracted from human cadavers is abbreviated hGH. The main growth hormone produced by recombinant DNA technology has the approved generic name (INN) somatropin and the brand name Humatrope and is properly abbreviated rhGH in the scientific literature. Since its introduction in 1992, Humatrope has been a banned sports doping agent and in this context is referred to as HGH. The term growth hormone has been incorrectly applied to refer to anabolic sex hormones in the European beef hormone controversy, which initially restricts the use of estradiol, progesterone, testosterone, zeranol, melengestrol acetate and trenbolone acetate. Biology Gene Genes for human growth hormone, known as growth hormone 1 (somatotropin; pituitary growth hormone) and growth hormone 2 (placental growth hormone; growth hormone variant), are localized in the q22-24 region of chromosome 17 and are closely related to human chorionic somatomammotropin (also known as placental lactogen) genes. GH, human chorionic somatomammotropin, and prolactin belong to a group of homologous hormones with growth-promoting and lactogenic activity. 
Structure The major isoform of the human growth hormone is a protein of 191 amino acids and a molecular weight of 22,124 daltons. The structure includes four helices necessary for functional interaction with the GH receptor. It appears that, in structure, GH is evolutionarily homologous to prolactin and chorionic somatomammotropin. Despite marked structural similarities between growth hormone from different species, only human and Old World monkey growth hormones have significant effects on the human growth hormone receptor. Several molecular isoforms of GH exist in the pituitary gland and are released to blood. In particular, a variant of approximately 20 kDa originated by an alternative splicing is present in a rather constant 1:9 ratio, while recently an additional variant of ~ 23-24 kDa has also been reported in post-exercise states at higher proportions. This variant has not been identified, but it has been suggested to coincide with a 22 kDa glycosylated variant of 23 kDa identified in the pituitary gland. Furthermore, these variants circulate partially bound to a protein (growth hormone-binding protein, GHBP), which is the truncated part of the growth hormone receptor, and an acid-labile subunit (ALS). Regulation Secretion of growth hormone (GH) in the pituitary is regulated by the neurosecretory nuclei of the hypothalamus. These cells release the peptides growth hormone-releasing hormone (GHRH or somatocrinin) and growth hormone-inhibiting hormone (GHIH or somatostatin) into the hypophyseal portal venous blood surrounding the pituitary. GH release in the pituitary is primarily determined by the balance of these two peptides, which in turn is affected by many physiological stimulators (e.g., exercise, nutrition, sleep) and inhibitors (e.g., free fatty acids) of GH secretion. Somatotropic cells in the anterior pituitary gland then synthesize and secrete GH in a pulsatile manner, in response to these stimuli by the hypothalamus. The largest and most predictable of these GH peaks occurs about an hour after onset of sleep with plasma levels of 13 to 72 ng/mL. Maximal secretion of GH may occur within minutes of the onset of slow-wave (SW) sleep (stage III or IV). Otherwise there is wide variation between days and individuals. Nearly fifty percent of GH secretion occurs during the third and fourth NREM sleep stages. Surges of secretion during the day occur at 3- to 5-hour intervals. The plasma concentration of GH during these peaks may range from 5 to even 45 ng/mL. Between the peaks, basal GH levels are low, usually less than 5 ng/mL for most of the day and night. Additional analysis of the pulsatile profile of GH described in all cases less than 1 ng/ml for basal levels while maximum peaks were situated around 10-20 ng/mL. A number of factors are known to affect GH secretion, such as age, sex, diet, exercise, stress, and other hormones. Young adolescents secrete GH at the rate of about 700 μg/day, while healthy adults secrete GH at the rate of about 400 μg/day. Sleep deprivation generally suppresses GH release, particularly after early adulthood. 
Stimulators of growth hormone (GH) secretion include: peptide hormones, namely GHRH (somatocrinin), acting through the growth hormone-releasing hormone receptor (GHRHR), and ghrelin, acting through growth hormone secretagogue receptors (GHSR); sex hormones, including the increased androgen secretion during puberty (in males from the testes and in females from the adrenal cortex), testosterone and DHEA, and estrogen; clonidine, moxonidine and L-DOPA, by stimulating GHRH release; α4β2 nicotinic agonists, including nicotine, which also act synergistically with clonidine or moxonidine; hypoglycemia, arginine, pramipexole, ornithine, lysine, tryptophan, γ-aminobutyric acid and propranolol, by inhibiting somatostatin release; deep sleep; glucagon; sodium oxybate (γ-hydroxybutyric acid); niacin as nicotinic acid (vitamin B3); fasting; insulin; and vigorous exercise. Inhibitors of GH secretion include: GHIH (somatostatin) from the periventricular nucleus; circulating concentrations of GH and IGF-1 (negative feedback on the pituitary and hypothalamus); hyperglycemia; glucocorticoids; dihydrotestosterone; and phenothiazines. In addition to control by these endogenous processes and stimuli, a number of foreign compounds (xenobiotics such as drugs and endocrine disruptors) are known to influence GH secretion and function. Function Effects of growth hormone on the tissues of the body can generally be described as anabolic (building up). Like most other peptide hormones, GH acts by interacting with a specific receptor on the surface of cells. Increased height during childhood is the most widely known effect of GH. Height appears to be stimulated by at least two mechanisms. Because polypeptide hormones are not fat-soluble, they cannot penetrate cell membranes. Thus, GH exerts some of its effects by binding to receptors on target cells, where it activates the MAPK/ERK pathway. Through this mechanism GH directly stimulates division and multiplication of chondrocytes of cartilage. GH also stimulates, through the JAK-STAT signaling pathway, the production of insulin-like growth factor 1 (IGF-1, formerly known as somatomedin C), a hormone homologous to proinsulin. The liver is a major target organ of GH for this process and is the principal site of IGF-1 production. IGF-1 has growth-stimulating effects on a wide variety of tissues. Additional IGF-1 is generated within target tissues, making it what appears to be both an endocrine and an autocrine/paracrine hormone. IGF-1 also has stimulatory effects on osteoblast and chondrocyte activity to promote bone growth. In addition to increasing height in children and adolescents, growth hormone has many other effects on the body: it increases calcium retention and strengthens and increases the mineralization of bone; increases muscle mass through sarcomere hypertrophy; promotes lipolysis; increases protein synthesis; stimulates the growth of all internal organs excluding the brain; plays a role in homeostasis; reduces liver uptake of glucose; promotes gluconeogenesis in the liver; contributes to the maintenance and function of pancreatic islets; stimulates the immune system; increases deiodination of T4 to T3; and induces insulin resistance. Biochemistry GH has a short biological half-life of about 10 to 20 minutes. Clinical significance Excess The most common disease of GH excess is a pituitary tumor composed of somatotroph cells of the anterior pituitary. These somatotroph adenomas are benign and grow slowly, gradually producing more and more GH. For years, the principal clinical problems are those of GH excess. 
Eventually, the adenoma may become large enough to cause headaches, impair vision by pressure on the optic nerves, or cause deficiency of other pituitary hormones by displacement. Prolonged GH excess thickens the bones of the jaw, fingers and toes, resulting in heaviness of the jaw and increased size of digits, referred to as acromegaly. Accompanying problems can include sweating, pressure on nerves (e.g., carpal tunnel syndrome), muscle weakness, excess sex hormone-binding globulin (SHBG), insulin resistance or even a rare form of type 2 diabetes, and reduced sexual function. GH-secreting tumors are typically recognized in the fifth decade of life. It is extremely rare for such a tumor to occur in childhood, but, when it does, the excessive GH can cause excessive growth, traditionally referred to as pituitary gigantism. Surgical removal is the usual treatment for GH-producing tumors. In some circumstances, focused radiation or a GH antagonist such as pegvisomant may be employed to shrink the tumor or block function. Other drugs like octreotide (somatostatin agonist) and bromocriptine (dopamine agonist) can be used to block GH secretion because both somatostatin and dopamine negatively inhibit GHRH-mediated GH release from the anterior pituitary. Deficiency The effects of growth hormone (GH) deficiency vary depending on the age at which they occur. Alterations in somatomedin can result in growth hormone deficiency with two known mechanisms; failure of tissues to respond to somatomedin, or failure of the liver to produce somatomedin. Major manifestations of GH deficiency in children are growth failure, the development of a short stature, and delayed sexual maturity. In adults, somatomedin alteration contributes to increased osteoclast activity, resulting in weaker bones that are more prone to pathologic fracture and osteoporosis. However, deficiency is rare in adults, with the most common cause being a pituitary adenoma. Other adult causes include a continuation of a childhood problem, other structural lesions or trauma, and very rarely idiopathic GHD. Adults with GHD "tend to have a relative increase in fat mass and a relative decrease in muscle mass and, in many instances, decreased energy and quality of life". Diagnosis of GH deficiency involves a multiple-step diagnostic process, usually culminating in GH stimulation tests to see if the patient's pituitary gland will release a pulse of GH when provoked by various stimuli. Psychological effects Quality of life Several studies, primarily involving patients with GH deficiency, have suggested a crucial role of GH in both mental and emotional well-being and maintaining a high energy level. Adults with GH deficiency often have higher rates of depression than those without. While GH replacement therapy has been proposed to treat depression as a result of GH deficiency, the long-term effects of such therapy are unknown. Cognitive function GH has also been studied in the context of cognitive function, including learning and memory. GH in humans appears to improve cognitive function and may be useful in the treatment of patients with cognitive impairment that is a result of GH deficiency. Medical uses Replacement therapy GH is used as replacement therapy in adults with GH deficiency of either childhood-onset or adult-onset (usually as a result of an acquired pituitary tumor). 
In these patients, benefits have variably included reduced fat mass, increased lean mass, increased bone density, improved lipid profile, reduced cardiovascular risk factors, and improved psychosocial well-being. Long acting growth hormone (LAGH) analogues are now available for treating growth hormone deficiency both in children and adults. These are once weekly injections as compared to conventional growth hormone which has to be taken as daily injections. LAGH injection 4 times a month has been found to be as safe and effective as daily growth hormone injections. Other approved uses GH can be used to treat conditions that produce short stature but are not related to deficiencies in GH. However, results are not as dramatic when compared to short stature that is solely attributable to deficiency of GH. Examples of other causes of shortness often treated with GH are Turner syndrome, Growth failure secondary to chronic kidney disease in children, Prader–Willi syndrome, intrauterine growth restriction, and severe idiopathic short stature. Higher ("pharmacologic") doses are required to produce significant acceleration of growth in these conditions, producing blood levels well above normal ("physiologic"). One version of rHGH has also been FDA approved for maintaining muscle mass in wasting due to AIDS. Off-label use Off-label prescription of HGH is controversial and may be illegal. Claims for GH as an anti-aging treatment date back to 1990 when the New England Journal of Medicine published a study wherein GH was used to treat 12 men over 60. At the conclusion of the study, all the men showed statistically significant increases in lean body mass and bone mineral density, while the control group did not. The authors of the study noted that these improvements were the opposite of the changes that would normally occur over a 10- to 20-year aging period. Despite the fact the authors at no time claimed that GH had reversed the aging process itself, their results were misinterpreted as indicating that GH is an effective anti-aging agent. This has led to organizations such as the controversial American Academy of Anti-Aging Medicine promoting the use of this hormone as an "anti-aging agent". A Stanford University School of Medicine meta-analysis of clinical studies on the subject published in early 2007 showed that the application of GH on healthy elderly patients increased muscle by about 2 kg and decreased body fat by the same amount. However, these were the only positive effects from taking GH. No other critical factors were affected, such as bone density, cholesterol levels, lipid measurements, maximal oxygen consumption, or any other factor that would indicate increased fitness. Researchers also did not discover any gain in muscle strength, which led them to believe that GH merely let the body store more water in the muscles rather than increase muscle growth. This would explain the increase in lean body mass. GH has also been used experimentally to treat multiple sclerosis, to enhance weight loss in obesity, as well as in fibromyalgia, heart failure, Crohn's disease and ulcerative colitis, and burns. GH has also been used experimentally in patients with short bowel syndrome to lessen the requirement for intravenous total parenteral nutrition. 
In 1990, the US Congress passed an omnibus crime bill, the Crime Control Act of 1990, that amended the Federal Food, Drug, and Cosmetic Act, that classified anabolic steroids as controlled substances and added a new section that stated that a person who "knowingly distributes, or possesses with intent to distribute, human growth hormone for any use in humans other than the treatment of a disease or other recognized medical condition, where such use has been authorized by the Secretary of Health and Human Services" has committed a felony. The Drug Enforcement Administration of the US Department of Justice considers off-label prescribing of HGH to be illegal, and to be a key path for illicit distribution of HGH. This section has also been interpreted by some doctors, most notably the authors of a commentary article published in the Journal of the American Medical Association in 2005, as meaning that prescribing HGH off-label may be considered illegal. And some articles in the popular press, such as those criticizing the pharmaceutical industry for marketing drugs for off-label use (with concern of ethics violations) have made strong statements about whether doctors can prescribe HGH off-label: "Unlike other prescription drugs, HGH may be prescribed only for specific uses. U.S. sales are limited by law to treat a rare growth defect in children and a handful of uncommon conditions like short bowel syndrome or Prader-Willi syndrome, a congenital disease that causes reduced muscle tone and a lack of hormones in sex glands." At the same time, anti-aging clinics where doctors prescribe, administer, and sell HGH to people are big business. In a 2012 article in Vanity Fair, when asked how HGH prescriptions far exceed the number of adult patients estimated to have HGH-deficiency, Dragos Roman, who leads a team at the FDA that reviews drugs in endocrinology, said "The F.D.A. doesn't regulate off-label uses of H.G.H. Sometimes it's used appropriately. Sometimes it's not." Side effects Injection site reactions are common. More rarely, patients can experience joint swelling, joint pain, carpal tunnel syndrome, and an increased risk of diabetes. In some cases, the patient can produce an immune response against GH. GH may also be a risk factor for Hodgkin's lymphoma. One survey of adults that had been treated with replacement cadaver GH (which has not been used anywhere in the world since 1985) during childhood showed a mildly increased incidence of colon cancer and prostate cancer, but linkage with the GH treatment was not established. Performance enhancement The first description of the use of GH as a doping agent was Dan Duchaine's "Underground Steroid handbook" which emerged from California in 1982; it is not known where and when GH was first used this way. Athletes in many sports have used human growth hormone in order to attempt to enhance their athletic performance. Some recent studies have not been able to support claims that human growth hormone can improve the athletic performance of professional male athletes. Many athletic societies ban the use of GH and will issue sanctions against athletes who are caught using it. However, because GH is a potent endogenous protein, it is very difficult to detect GH doping. In the United States, GH is legally available only by prescription from a medical doctor. 
Dietary supplements To capitalize on the idea that GH might be useful to combat aging, companies selling dietary supplements have websites selling products linked to GH in the advertising text, with medical-sounding names described as "HGH Releasers". Typical ingredients include amino acids, minerals, vitamins, and/or herbal extracts, the combination of which are described as causing the body to make more GH with corresponding beneficial effects. In the United States, because these products are marketed as dietary supplements, it is illegal for them to contain GH, which is a drug. Also, under United States law, products sold as dietary supplements cannot have claims that the supplement treats or prevents any disease or condition, and the advertising material must contain a statement that the health claims are not approved by the FDA. The FTC and the FDA do enforce the law when they become aware of violations. Agricultural use In the United States, it is legal to give a bovine GH to dairy cows to increase milk production, and is legal to use GH in raising cows for beef; see article on Bovine somatotropin, cattle feeding, dairy farming and the beef hormone controversy. The use of GH in poultry farming is illegal in the United States. Similarly, no chicken meat for sale in Australia is administered hormones. Several companies have attempted to have a version of GH for use in pigs (porcine somatotropin) approved by the FDA but all applications have been withdrawn. Drug development history Genentech pioneered the use of recombinant human growth hormone for human therapy, which was approved by the FDA in 1985. Prior to its production by recombinant DNA technology, growth hormone used to treat deficiencies was extracted from the pituitary glands of cadavers. Attempts to create a wholly synthetic HGH failed. Limited supplies of HGH resulted in the restriction of HGH therapy to the treatment of idiopathic short stature. Very limited clinical studies of growth hormone derived from an Old World monkey, the rhesus macaque, were conducted by John C. Beck and colleagues in Montreal, in the late 1950s. The study published in 1957, which was conducted on "a 13-year-old male with well-documented hypopituitarism secondary to a crainiophyaryngioma," found that: "Human and monkey growth hormone resulted in a significant enhancement of nitrogen storage ... (and) there was a retention of potassium, phosphorus, calcium, and sodium. ... There was a gain in body weight during both periods. ... There was a significant increase in urinary excretion of aldosterone during both periods of administration of growth hormone. This was most marked with the human growth hormone. ... Impairment of the glucose tolerance curve was evident after 10 days of administration of the human growth hormone. No change in glucose tolerance was demonstrable on the fifth day of administration of monkey growth hormone." The other study, published in 1958, was conducted on six people: the same subject as the Science paper; an 18-year-old male with statural and sexual retardation and a skeletal age of between 13 and 14 years; a 15-year-old female with well-documented hypopituitarism secondary to a craniopharyngioma; a 53-year-old female with carcinoma of the breast and widespread skeletal metastases; a 68-year-old female with advanced postmenopausal osteoporosis; and a healthy 24-year-old medical student without any clinical or laboratory evidence of systemic disease. 
In 1985, unusual cases of Creutzfeldt–Jakob disease were found in individuals that had received cadaver-derived HGH ten to fifteen years previously. Based on the assumption that infectious prions causing the disease were transferred along with the cadaver-derived HGH, cadaver-derived HGH was removed from the market. In 1985, biosynthetic human growth hormone replaced pituitary-derived human growth hormone for therapeutic use in the U.S. and elsewhere. As of 2005, recombinant growth hormones available in the United States (and their manufacturers) included Nutropin (Genentech), Humatrope (Lilly), Genotropin (Pfizer), Norditropin (Novo), and Saizen (Merck Serono). In 2006, the U.S. Food and Drug Administration (FDA) approved a version of rHGH called Omnitrope (Sandoz). A sustained-release form of growth hormone, Nutropin Depot (Genentech and Alkermes) was approved by the FDA in 1999, allowing for fewer injections (every 2 or 4 weeks instead of daily); however, the product was discontinued by Genentech/Alkermes in 2004 for financial reasons (Nutropin Depot required significantly more resources to produce than the rest of the Nutropin line).
Biology and health sciences
Animal hormones
Biology
173088
https://en.wikipedia.org/wiki/Radio%20broadcasting
Radio broadcasting
Radio broadcasting is the broadcasting of audio (sound), sometimes with related metadata, by radio waves to radio receivers belonging to a public audience. In terrestrial radio broadcasting the radio waves are broadcast by a land-based radio station, while in satellite radio the radio waves are broadcast by a satellite in Earth orbit. To receive the content the listener must have a broadcast radio receiver (radio). Stations are often affiliated with a radio network that provides content in a common radio format, either in broadcast syndication or simulcast, or both. The encoding of a radio broadcast depends on whether it uses an analog or digital signal. Analog radio broadcasts use one of two types of radio wave modulation: amplitude modulation for AM radio, or frequency modulation for FM radio. Newer, digital radio stations transmit in several different digital audio standards, such as DAB (Digital Audio Broadcasting), HD radio, or DRM (Digital Radio Mondiale). History The earliest radio stations were radiotelegraphy systems and did not carry audio. For audio broadcasts to be possible, electronic detection and amplification devices had to be incorporated. The thermionic valve, a kind of vacuum tube, was invented in 1904 by the English physicist John Ambrose Fleming. He developed a device that he called an "oscillation valve," because it passes current in only one direction. The heated filament, or cathode, was capable of thermionic emission of electrons that would flow to the plate (or anode) when it was at a higher voltage. Electrons, however, could not pass in the reverse direction because the plate was not heated, and thus not capable of thermionic emission of electrons. Later known as the Fleming valve, it could be used as a rectifier of alternating current, and as a radio wave detector. This greatly improved the crystal set, which rectified the radio signal using an early solid-state diode based on a crystal and a so-called cat's whisker. However, an amplifier was still required. The triode (mercury-vapor filled with a control grid) was created on March 4, 1906, by the Austrian Robert von Lieben; independently, on October 25, 1906, Lee De Forest patented his three-element Audion. It was not put to practical use until 1912 when its amplifying ability became recognized by researchers. By about 1920, valve technology had matured to the point where radio broadcasting was quickly becoming viable. However, an early audio transmission that could be termed a broadcast may have occurred on Christmas Eve in 1906 by Reginald Fessenden, although this is disputed. While many early experimenters attempted to create systems similar to radiotelephone devices by which only two parties were meant to communicate, there were others who intended to transmit to larger audiences. Charles Herrold started broadcasting in California in 1909 and was carrying audio by the next year. (Herrold's station eventually became KCBS). In The Hague, the Netherlands, PCGG started broadcasting on November 6, 1919, making it arguably the first commercial broadcasting station. In 1916, Frank Conrad, an electrical engineer employed at the Westinghouse Electric Corporation, began broadcasting from his Wilkinsburg, Pennsylvania garage with the call letters 8XK. Later, the station was moved to the top of the Westinghouse factory building in East Pittsburgh, Pennsylvania. Westinghouse relaunched the station as KDKA on November 2, 1920, as the first commercially licensed radio station in the United States. 
The commercial broadcasting designation came from the type of broadcast license; advertisements did not air until years later. The first licensed broadcast in the United States came from KDKA itself: the results of the Harding/Cox presidential election. The Montreal station that became CFCF began broadcast programming on May 20, 1920, and the Detroit station that became WWJ began program broadcasts on August 20, 1920, although neither held a license at the time. In 1920, wireless broadcasts for entertainment began in the UK from the Marconi Research Centre 2MT at Writtle near Chelmsford, England. A famous broadcast from Marconi's New Street Works factory in Chelmsford was made by the famous soprano Dame Nellie Melba on June 15, 1920, where she sang two arias and her famous trill. She was the first artist of international renown to participate in direct radio broadcasts. The 2MT station began to broadcast regular entertainment in 1922. The BBC was amalgamated in 1922 and received a Royal Charter in 1926, making it the first national broadcaster in the world, followed by Czechoslovak Radio and other European broadcasters in 1923. Radio Argentina began regularly scheduled transmissions from the Teatro Coliseo in Buenos Aires on August 27, 1920, making its own priority claim. The station got its license on November 19, 1923. The delay was due to the lack of official Argentine licensing procedures before that date. This station continued regular broadcasting of entertainment and cultural fare for several decades. Radio in education soon followed, and colleges across the U.S. began adding radio broadcasting courses to their curricula. Curry College in Milton, Massachusetts, introduced one of the first broadcasting majors in 1932 when the college teamed up with WLOE in Boston to have students broadcast programs. By 1931, a majority of U.S. households owned at least one radio receiver. In line with the ITU Radio Regulations (article 1.61), each broadcasting station is classified by the service in which it operates permanently or temporarily. Types Broadcasting by radio takes several forms. These include AM and FM stations. There are several subtypes, namely commercial broadcasting, non-commercial educational (NCE) public broadcasting and non-profit varieties; community radio, student-run campus radio stations, and hospital radio stations can also be found throughout the world. Many stations broadcast on shortwave bands using AM technology that can be received over thousands of miles (especially at night). For example, the BBC, VOA, VOR, and Deutsche Welle have transmitted via shortwave to Africa and Asia. These broadcasts are very sensitive to atmospheric conditions and solar activity. Nielsen Audio, formerly known as Arbitron, the United States–based company that reports on radio audiences, defines a "radio station" as a government-licensed AM or FM station; an HD Radio (primary or multicast) station; an internet stream of an existing government-licensed station; one of the satellite radio channels from XM Satellite Radio or Sirius Satellite Radio; or, potentially, a station that is not government licensed. AM AM stations were the earliest broadcasting stations to be developed. AM refers to amplitude modulation, a mode of broadcasting radio waves by varying the amplitude of the carrier signal in response to the amplitude of the signal to be transmitted. The medium-wave band is used worldwide for AM broadcasting. Europe also uses the long wave band. 
In response to the growing popularity of FM stereo radio stations in the late 1980s and early 1990s, some North American stations began broadcasting in AM stereo, though this never gained popularity and very few receivers were ever sold. The signal is subject to interference from electrical storms (lightning) and other electromagnetic interference (EMI). One advantage of AM radio signal is that it can be detected (turned into sound) with simple equipment. If a signal is strong enough, not even a power source is needed; building an unpowered crystal radio receiver was a common childhood project in the early decades of AM broadcasting. AM broadcasts occur on North American airwaves in the medium wave frequency range of 525 to 1,705 kHz (known as the "standard broadcast band"). The band was expanded in the 1990s by adding nine channels from 1,605 to 1,705 kHz. Channels are spaced every 10 kHz in the Americas, and generally every 9 kHz everywhere else. AM transmissions cannot be ionospheric propagated during the day due to strong absorption in the D-layer of the ionosphere. In a crowded channel environment, this means that the power of regional channels which share a frequency must be reduced at night or directionally beamed in order to avoid interference, which reduces the potential nighttime audience. Some stations have frequencies unshared with other stations in North America; these are called clear-channel stations. Many of them can be heard across much of the country at night. During the night, absorption largely disappears and permits signals to travel to much more distant locations via ionospheric reflections. However, fading of the signal can be severe at night. AM radio transmitters can transmit audio frequencies up to 15 kHz (now limited to 10 kHz in the US due to FCC rules designed to reduce interference), but most receivers are only capable of reproducing frequencies up to 5 kHz or less. At the time that AM broadcasting began in the 1920s, this provided adequate fidelity for existing microphones, 78 rpm recordings, and loudspeakers. The fidelity of sound equipment subsequently improved considerably, but the receivers did not. Reducing the bandwidth of the receivers reduces the cost of manufacturing and makes them less prone to interference. AM stations are never assigned adjacent channels in the same service area. This prevents the sideband power generated by two stations from interfering with each other. Bob Carver created an AM stereo tuner employing notch filtering that demonstrated that an AM broadcast can meet or exceed the 15 kHz baseband bandwidth allotted to FM stations without objectionable interference. After several years, the tuner was discontinued. Bob Carver had left the company and the Carver Corporation later cut the number of models produced before discontinuing production completely. As well as on the medium wave bands, amplitude modulation (AM) is also used on the shortwave and long wave bands. Shortwave is used largely for national broadcasters, international propaganda, or religious broadcasting organizations. Shortwave transmissions can have international or inter-continental range depending on atmospheric conditions. Long-wave AM broadcasting occurs in Europe, Asia, and Africa. The ground wave propagation at these frequencies is little affected by daily changes in the ionosphere, so broadcasters need not reduce power at night to avoid interference with other transmitters. 
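As a purely illustrative sketch of the amplitude modulation described in this section (the sampling rate, carrier frequency and audio tone below are arbitrary assumptions, not broadcast allocations), the following Python fragment lets a 1 kHz audio tone vary the envelope of a medium-wave-style carrier:

import numpy as np

fs = 5_000_000                  # sampling rate in Hz (assumed for illustration)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal

f_carrier = 600_000             # illustrative carrier near the middle of the AM band
f_audio = 1_000                 # 1 kHz audio tone
m = 0.8                         # modulation index, kept below 1 to avoid overmodulation

audio = np.sin(2 * np.pi * f_audio * t)
carrier = np.cos(2 * np.pi * f_carrier * t)

# Full-carrier, double-sideband AM: the audio rides on the carrier's amplitude
am_signal = (1 + m * audio) * carrier

Recovering the audio only requires rectifying this waveform and low-pass filtering the envelope, which is why, as noted above, an unpowered crystal set with a simple detector is enough to receive AM.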
FM FM refers to frequency modulation, and occurs on VHF airwaves in the frequency range of 88 to 108 MHz everywhere except Japan and Russia. Russia, like the former Soviet Union, uses 65.9 to 74 MHz frequencies in addition to the world standard. Japan uses the 76 to 90 MHz frequency band. Edwin Howard Armstrong invented wide-band FM radio in the early 1930s to overcome the problem of radio-frequency interference (RFI), which plagued AM radio reception. At the same time, greater fidelity was made possible by spacing stations further apart in the radio frequency spectrum. Instead of 10 kHz apart, as on the AM band in the US, FM channels are 200 kHz (0.2 MHz) apart. In other countries, greater spacing is sometimes mandatory, such as in New Zealand, which uses 700 kHz spacing (previously 800 kHz). The improved fidelity made available was far in advance of the audio equipment of the 1940s, but wide interchannel spacing was chosen to take advantage of the noise-suppressing feature of wideband FM. Bandwidth of 200 kHz is not needed to accommodate an audio signal — 20 kHz to 30 kHz is all that is necessary for a narrowband FM signal. The 200 kHz bandwidth allowed room for ±75 kHz signal deviation from the assigned frequency, plus guard bands to reduce or eliminate adjacent channel interference. The larger bandwidth allows for broadcasting a 15 kHz bandwidth audio signal plus a 38 kHz stereo "subcarrier"—a piggyback signal that rides on the main signal. Additional unused capacity is used by some broadcasters to transmit utility functions such as background music for public areas, GPS auxiliary signals, or financial market data. The AM radio problem of interference at night was addressed in a different way. At the time FM was set up, the available frequencies were far higher in the spectrum than those used for AM radio - by a factor of approximately 100. Using these frequencies meant that even at far higher power, the range of a given FM signal was much shorter; thus its market was more local than for AM radio. The reception range at night is the same as in the daytime. All FM broadcast transmissions are line-of-sight, and ionospheric bounce is not viable. The much larger bandwidths, compared to AM and SSB, are more susceptible to phase dispersion. Propagation speeds are fastest in the ionosphere at the lowest sideband frequency. The celerity difference between the highest and lowest sidebands is quite apparent to the listener. Such distortion occurs up to frequencies of approximately 50 MHz. Higher frequencies do not reflect from the ionosphere, nor from storm clouds. Moon reflections have been used in some experiments, but require impractical power levels. The original FM radio service in the U.S. was the Yankee Network, located in New England. Regular FM broadcasting began in 1939 but did not pose a significant threat to the AM broadcasting industry. It required purchase of a special receiver. The frequencies used, 42 to 50 MHz, were not those used today. The change to the current frequencies, 88 to 108 MHz, began after the end of World War II and was to some extent imposed by AM broadcasters as an attempt to cripple what was by now realized to be a potentially serious threat. FM radio on the new band had to begin from the ground floor. As a commercial venture, it remained a little-used audio enthusiasts' medium until the 1960s. The more prosperous AM stations, or their owners, acquired FM licenses and often broadcast the same programming on the FM station as on the AM station ("simulcasting"). 
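For comparison, here is a minimal Python sketch of the frequency modulation described above. It uses the ±75 kHz peak deviation of broadcast FM, but the carrier frequency and sampling rate are deliberately scaled-down assumptions so the example stays small; in FM the audio is integrated into the carrier's phase rather than multiplied onto its amplitude:

import numpy as np

fs = 2_000_000                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)

f_carrier = 250_000             # stand-in carrier; real FM carriers sit between 88 and 108 MHz
f_audio = 1_000                 # 1 kHz audio tone
deviation = 75_000              # +/- 75 kHz peak deviation, as in broadcast FM

audio = np.sin(2 * np.pi * f_audio * t)

# Instantaneous frequency is f_carrier + deviation * audio;
# the phase is the running integral of that frequency.
phase = 2 * np.pi * np.cumsum(f_carrier + deviation * audio) / fs
fm_signal = np.cos(phase)

The envelope of fm_signal stays constant, which is the property that makes FM reception largely immune to the impulsive amplitude noise that troubles AM.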
The FCC limited this simulcasting practice in the 1960s. By the 1980s, since almost all new radios included both AM and FM tuners, FM became the dominant medium, especially in cities. Because of its greater range, AM remained more common in rural environments. Pirate radio Pirate radio is illegal or non-regulated radio transmission. It is most commonly used to describe illegal broadcasting for entertainment or political purposes. Sometimes it is used for illegal two-way radio operation. Its history can be traced back to the unlicensed nature of the transmission, but historically there has been occasional use of sea vessels—fitting the most common perception of a pirate—as broadcasting bases. Rules and regulations vary largely from country to country, but often the term pirate radio describes the unlicensed broadcast of FM radio, AM radio, or shortwave signals over a wide range. In some places, radio stations are legal where the signal is transmitted, but illegal where the signals are received—especially when the signals cross a national boundary. In other cases, a broadcast may be considered "pirate" due to the type of content, its transmission format, or the transmitting power (wattage) of the station, even if the transmission is not technically illegal (such as a webcast or an amateur radio transmission). Pirate radio stations are sometimes referred to as bootleg radio or clandestine stations. Terrestrial digital radio Digital radio broadcasting has emerged, first in Europe (the UK in 1995 and Germany in 1999), and later in the United States, France, the Netherlands, South Africa, and many other countries worldwide. The simplest system is named DAB Digital Radio, for Digital Audio Broadcasting, and uses the public domain EUREKA 147 (Band III) system. DAB is used mainly in the UK and South Africa. Germany and the Netherlands use the DAB and DAB+ systems, and France uses the L-Band system of DAB Digital Radio. The broadcasting regulators of the United States and Canada have chosen to use HD radio, an in-band on-channel system that puts digital broadcasts at frequencies adjacent to the analog broadcast. HD Radio is owned by a consortium of private companies that is called iBiquity. An international non-profit consortium, Digital Radio Mondiale (DRM), has introduced the public domain DRM system, which is used by a relatively small number of broadcasters worldwide. International broadcasting Broadcasters in one country have several reasons to reach out to an audience in other countries. Commercial broadcasters may simply see a business opportunity to sell advertising or subscriptions to a broader audience. This is more efficient than broadcasting to a single country, because domestic entertainment programs and information gathered by domestic news staff can be cheaply repackaged for non-domestic audiences. Governments typically have different motivations for funding international broadcasting. One clear reason is ideological or propaganda purposes. Many government-owned stations portray their nation in a positive, non-threatening way. This could be to encourage business investment in or tourism to the nation. Another reason is to combat a negative image produced by other nations or internal dissidents, or insurgents. Radio RSA, the broadcasting arm of the apartheid South African government, is an example of this. A third reason is to promote the ideology of the broadcaster. For example, a program on Radio Moscow from the 1960s to the 1980s was What is Communism? 
A second reason is to advance a nation's foreign policy interests and agenda by disseminating its views on international affairs or on the events in particular parts of the world. During the Cold War the American Radio Free Europe and Radio Liberty and Indian Radio AIR were founded to broadcast news from "behind the Iron Curtain" that was otherwise being censored and promote dissent and occasionally, to disseminate disinformation. Currently, the US operates similar services aimed at Cuba (Radio y Televisión Martí) and the People's Republic of China, Vietnam, Laos and North Korea (Radio Free Asia). Besides ideological reasons, many stations are run by religious broadcasters and are used to provide religious education, religious music, or worship service programs. For example, Vatican Radio, established in 1931, broadcasts such programs. Another station, such as HCJB or Trans World Radio will carry brokered programming from evangelists. In the case of the Broadcasting Services of the Kingdom of Saudi Arabia, both governmental and religious programming is provided. Extensions Extensions of traditional radio-wave broadcasting for audio broadcasting in general include cable radio, local wire television networks, DTV radio, satellite radio, and Internet radio via streaming media on the Internet. Satellite The enormous entry costs of space-based satellite transmitters and restrictions on available radio spectrum licenses has restricted growth of Satellite radio broadcasts. In the US and Canada, just two services, XM Satellite Radio and Sirius Satellite Radio exist. Both XM and Sirius are owned by Sirius XM Satellite Radio, which was formed by the merger of XM and Sirius on July 29, 2008, whereas in Canada, XM Radio Canada and Sirius Canada remained separate companies until 2010. Worldspace in Africa and Asia, and MobaHO! in Japan and the ROK were two unsuccessful satellite radio operators which have gone out of business. Program formats Radio program formats differ by country, regulation, and markets. For instance, the U.S. Federal Communications Commission designates the 88–92 megahertz band in the U.S. for non-profit or educational programming, with advertising prohibited. In addition, formats change in popularity as time passes and technology improves. Early radio equipment only allowed program material to be broadcast in real time, known as live broadcasting. As technology for sound recording improved, an increasing proportion of broadcast programming used pre-recorded material. A current trend is the automation of radio stations. Some stations now operate without direct human intervention by using entirely pre-recorded material sequenced by computer control. Receiver
Technology
Media and communication
null
173181
https://en.wikipedia.org/wiki/Riemann%20surface
Riemann surface
In mathematics, particularly in complex analysis, a Riemann surface is a connected one-dimensional complex manifold. These surfaces were first studied by and are named after Bernhard Riemann. Riemann surfaces can be thought of as deformed versions of the complex plane: locally near every point they look like patches of the complex plane, but the global topology can be quite different. For example, they can look like a sphere or a torus or several sheets glued together. Examples of Riemann surfaces include graphs of multivalued functions such as √z or log(z), e.g. the subset of pairs with . Every Riemann surface is a surface: a two-dimensional real manifold, but it contains more structure (specifically a complex structure). Conversely, a two-dimensional real manifold can be turned into a Riemann surface (usually in several inequivalent ways) if and only if it is orientable and metrizable. Given this, the sphere and torus admit complex structures but the Möbius strip, Klein bottle and real projective plane do not. Every compact Riemann surface is a complex algebraic curve by Chow's theorem and the Riemann–Roch theorem. Definitions There are several equivalent definitions of a Riemann surface. A Riemann surface X is a connected complex manifold of complex dimension one. This means that X is a connected Hausdorff space that is endowed with an atlas of charts to the open unit disk of the complex plane: for every point there is a neighbourhood of x that is homeomorphic to the open unit disk of the complex plane, and the transition maps between two overlapping charts are required to be holomorphic. A Riemann surface is an oriented manifold of (real) dimension two – a two-sided surface – together with a conformal structure. Again, manifold means that locally at any point x of X, the space is homeomorphic to a subset of the real plane. The supplement "Riemann" signifies that X is endowed with an additional structure that allows angle measurement on the manifold, namely an equivalence class of so-called Riemannian metrics. Two such metrics are considered equivalent if the angles they measure are the same. Choosing an equivalence class of metrics on X is the additional datum of the conformal structure. A complex structure gives rise to a conformal structure by choosing the standard Euclidean metric given on the complex plane and transporting it to X by means of the charts. Showing that a conformal structure determines a complex structure is more difficult. Examples Algebraic curves Further definitions and properties As with any map between complex manifolds, a function between two Riemann surfaces M and N is called holomorphic if for every chart g in the atlas of M and every chart h in the atlas of N, the map is holomorphic (as a function from C to C) wherever it is defined. The composition of two holomorphic maps is holomorphic. The two Riemann surfaces M and N are called biholomorphic (or conformally equivalent to emphasize the conformal point of view) if there exists a bijective holomorphic function from M to N whose inverse is also holomorphic (it turns out that the latter condition is automatic and can therefore be omitted). Two conformally equivalent Riemann surfaces are for all practical purposes identical. Orientability Each Riemann surface, being a complex manifold, is orientable as a real manifold. 
For complex charts f and g with transition function h = f ∘ g⁻¹, h can be considered as a map from an open set of R2 to R2 whose Jacobian in a point z is just the real linear map given by multiplication by the complex number h′(z). However, the real determinant of multiplication by a complex number α equals |α|², so the Jacobian of h has positive determinant. Consequently, the complex atlas is an oriented atlas. Functions Every non-compact Riemann surface admits non-constant holomorphic functions (with values in C). In fact, every non-compact Riemann surface is a Stein manifold. In contrast, on a compact Riemann surface X every holomorphic function with values in C is constant due to the maximum principle. However, there always exist non-constant meromorphic functions (holomorphic functions with values in the Riemann sphere C ∪ {∞}). More precisely, the function field of X is a finite extension of C(t), the function field in one variable, i.e. any two meromorphic functions are algebraically dependent. This statement generalizes to higher dimensions. Meromorphic functions can be given fairly explicitly, in terms of Riemann theta functions and the Abel–Jacobi map of the surface. Algebraicity All compact Riemann surfaces are algebraic curves since they can be embedded into some CPn. This follows from the Kodaira embedding theorem and the fact that there exists a positive line bundle on any complex curve. Analytic vs. algebraic The existence of non-constant meromorphic functions can be used to show that any compact Riemann surface is a projective variety, i.e. can be given by polynomial equations inside a projective space. Actually, it can be shown that every compact Riemann surface can be embedded into complex projective 3-space. This is a surprising theorem: Riemann surfaces are given by locally patching charts. If one global condition, namely compactness, is added, the surface is necessarily algebraic. This feature of Riemann surfaces allows one to study them with either the means of analytic or algebraic geometry. The corresponding statement for higher-dimensional objects is false, i.e. there are compact complex 2-manifolds which are not algebraic. On the other hand, every projective complex manifold is necessarily algebraic, see Chow's theorem. As an example, consider the torus T := C/(Z + τZ). The Weierstrass function ℘τ(z) belonging to the lattice Z + τZ is a meromorphic function on T. This function and its derivative ℘τ′(z) generate the function field of T. There is an equation ℘τ′(z)² = 4℘τ(z)³ − g2℘τ(z) − g3, where the coefficients g2 and g3 depend on τ, thus giving an elliptic curve Eτ in the sense of algebraic geometry. Reversing this is accomplished by the j-invariant j(E), which can be used to determine τ and hence a torus. Classification of Riemann surfaces The set of all Riemann surfaces can be divided into three subsets: hyperbolic, parabolic and elliptic Riemann surfaces. Geometrically, these correspond to surfaces with negative, vanishing or positive constant sectional curvature. That is, every connected Riemann surface X admits a unique complete 2-dimensional real Riemannian metric with constant curvature equal to −1, 0 or 1 that belongs to the conformal class of Riemannian metrics determined by its structure as a Riemann surface. This can be seen as a consequence of the existence of isothermal coordinates. 
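As a supplement to the torus example above (the explicit formulas below are standard but are not quoted in the article itself), the map z ↦ (℘τ(z), ℘τ′(z)) realizes T as the plane cubic Eτ : y² = 4x³ − g2(τ)x − g3(τ), and the j-invariant that reverses the construction can be written j(Eτ) = 1728 g2(τ)³ / (g2(τ)³ − 27 g3(τ)²), a quantity invariant under the modular group that therefore determines the torus up to isomorphism.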
In complex analytic terms, the Poincaré–Koebe uniformization theorem (a generalization of the Riemann mapping theorem) states that every simply connected Riemann surface is conformally equivalent to one of the following: The Riemann sphere C ∪ {∞}, which is isomorphic to P1(C); The complex plane C; The open disk D := {z ∈ C : |z| < 1}, which is isomorphic to the upper half-plane H := {z ∈ C : Im(z) > 0}. A Riemann surface is elliptic, parabolic or hyperbolic according to whether its universal cover is isomorphic to P1(C), C or D. The elements in each class admit a more precise description. Elliptic Riemann surfaces The Riemann sphere P1(C) is the only example, as there is no group acting on it by biholomorphic transformations freely and properly discontinuously and so any Riemann surface whose universal cover is isomorphic to P1(C) must itself be isomorphic to it. Parabolic Riemann surfaces If X is a Riemann surface whose universal cover is isomorphic to the complex plane C then it is isomorphic to one of the following surfaces: C itself; The quotient C/Z; A quotient C/(Z + τZ), where τ ∈ C with Im(τ) > 0. Topologically there are only three types: the plane, the cylinder and the torus. But while in the former two cases the (parabolic) Riemann surface structure is unique, varying the parameter τ in the third case gives non-isomorphic Riemann surfaces. The description by the parameter τ gives the Teichmüller space of "marked" Riemann surfaces (in addition to the Riemann surface structure one adds the topological data of a "marking", which can be seen as a fixed homeomorphism to the torus). To obtain the analytic moduli space (forgetting the marking) one takes the quotient of Teichmüller space by the mapping class group. In this case it is the modular curve. Hyperbolic Riemann surfaces In the remaining cases, X is a hyperbolic Riemann surface, that is, one isomorphic to a quotient of the upper half-plane by a Fuchsian group (this is sometimes called a Fuchsian model for the surface). The topological type of X can be any orientable surface save the torus and sphere. A case of particular interest is when X is compact. Then its topological type is described by its genus g ≥ 2. Its Teichmüller space and moduli space are (6g − 6)-dimensional. A similar classification of Riemann surfaces of finite type (that is, homeomorphic to a closed surface minus a finite number of points) can be given. However, in general the moduli space of Riemann surfaces of infinite topological type is too large to admit such a description. Maps between Riemann surfaces The geometric classification is reflected in maps between Riemann surfaces, as detailed in Liouville's theorem and the Little Picard theorem: maps from hyperbolic to parabolic to elliptic are easy, but maps from elliptic to parabolic or parabolic to hyperbolic are very constrained (indeed, generally constant!). There are inclusions of the disc in the plane in the sphere: D ⊂ C ⊂ C ∪ {∞}, but any holomorphic map from the sphere to the plane is constant, any holomorphic map from the plane into the unit disk is constant (Liouville's theorem), and in fact any holomorphic map from the plane into the plane minus two points is constant (Little Picard theorem)! Punctured spheres These statements are clarified by considering the type of a Riemann sphere with a number of punctures. With no punctures, it is the Riemann sphere, which is elliptic. With one puncture, which can be placed at infinity, it is the complex plane, which is parabolic. With two punctures, it is the punctured plane or alternatively annulus or cylinder, which is parabolic.
With three or more punctures, it is hyperbolic – compare pair of pants. One can map from one puncture to two, via the exponential map (which is entire and has an essential singularity at infinity, so not defined at infinity, and misses zero and infinity), but all maps from zero punctures to one or more, or one or two punctures to three or more are constant. Ramified covering spaces Continuing in this vein, compact Riemann surfaces can map to surfaces of lower genus, but not to higher genus, except as constant maps. This is because holomorphic and meromorphic maps behave locally like z ↦ z^n for an integer n, so non-constant maps are ramified covering maps, and for compact Riemann surfaces these are constrained by the Riemann–Hurwitz formula in algebraic topology, which relates the Euler characteristic of a space and a ramified cover: for an n-sheeted cover f : X → Y ramified at points p with ramification indices e_p, it states χ(X) = n·χ(Y) − Σ (e_p − 1). For example, hyperbolic Riemann surfaces are ramified covering spaces of the sphere (they have non-constant meromorphic functions), but the sphere does not cover or otherwise map to higher genus surfaces, except as a constant. Isometries of Riemann surfaces The isometry group of a uniformized Riemann surface (equivalently, the conformal automorphism group) reflects its geometry: genus 0 – the isometry group of the sphere is the Möbius group of projective transforms of the complex line, the isometry group of the plane is the subgroup fixing infinity, and of the punctured plane is the subgroup leaving invariant the set containing only infinity and zero: either fixing them both, or interchanging them (1/z). the isometry group of the upper half-plane is the real Möbius group; this is conjugate to the automorphism group of the disk. genus 1 – the isometry group of a torus is in general generated by translations (as an Abelian variety) and the rotation by 180°. In special cases there can be additional rotations and reflections. For genus g ≥ 2, the isometry group is finite, and has order at most 84(g − 1), by Hurwitz's automorphisms theorem; surfaces that realize this bound are called Hurwitz surfaces. It is known that every finite group can be realized as the full group of isometries of some Riemann surface. For genus 2 the order is maximized by the Bolza surface, with order 48. For genus 3 the order is maximized by the Klein quartic, with order 168; this is the first Hurwitz surface, and its automorphism group is isomorphic to the unique simple group of order 168, which is the second-smallest non-abelian simple group. This group is isomorphic to both PSL(2, 7) and PSL(3, 2). For genus 4, Bring's surface is a highly symmetric surface. For genus 7 the order is maximized by the Macbeath surface, with order 504; this is the second Hurwitz surface, and its automorphism group is isomorphic to PSL(2, 8), the fourth-smallest non-abelian simple group. Function-theoretic classification The classification scheme above is typically used by geometers. There is a different classification for Riemann surfaces that is typically used by complex analysts. It employs a different definition for "parabolic" and "hyperbolic". In this alternative classification scheme, a Riemann surface is called parabolic if there are no non-constant negative subharmonic functions on the surface and is otherwise called hyperbolic. This class of hyperbolic surfaces is further subdivided into subclasses according to whether function spaces other than the negative subharmonic functions are degenerate, e.g.
Riemann surfaces on which all bounded holomorphic functions are constant, or on which all bounded harmonic functions are constant, or on which all positive harmonic functions are constant, etc. To avoid confusion, call the classification based on metrics of constant curvature the geometric classification, and the one based on degeneracy of function spaces the function-theoretic classification. For example, the Riemann surface consisting of "all complex numbers but 0 and 1" is parabolic in the function-theoretic classification but it is hyperbolic in the geometric classification.
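As a worked check of the Riemann–Hurwitz formula stated earlier (an added illustration in the document's own notation): the Weierstrass function ℘τ realizes the torus T = C/(Z + τZ) as a degree-2 ramified cover of the sphere P1(C), branched at four points, each with ramification index e_p = 2. The formula is then consistent:

χ(T) = n·χ(P1(C)) − Σ (e_p − 1) = 2·2 − 4·(2 − 1) = 0,

which is indeed the Euler characteristic of the torus.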
Mathematics
Calculus and analysis
null
173204
https://en.wikipedia.org/wiki/Drosophila%20melanogaster
Drosophila melanogaster
Drosophila melanogaster is a species of fly (an insect of the order Diptera) in the family Drosophilidae. The species is often referred to as the fruit fly or lesser fruit fly, or less commonly the "vinegar fly", "pomace fly", or "banana fly". In the wild, D. melanogaster are attracted to rotting fruit and fermenting beverages, and are often found in orchards, kitchens and pubs. Starting with Charles W. Woodworth's 1901 proposal of the use of this species as a model organism, D. melanogaster continues to be widely used for biological research in genetics, physiology, microbial pathogenesis, and life history evolution. D. melanogaster was the first animal to be launched into space in 1947. As of 2017, six Nobel Prizes have been awarded to drosophilists for their work using the insect. Drosophila melanogaster is typically used in research owing to its rapid life cycle, relatively simple genetics with only four pairs of chromosomes, and large number of offspring per generation. It was originally an African species, with all non-African lineages having a common origin. Its geographic range includes all continents, including islands. D. melanogaster is a common pest in homes, restaurants, and other places where food is served. Flies belonging to the family Tephritidae are also called "fruit flies". This can cause confusion, especially in the Mediterranean, Australia, and South Africa, where the Mediterranean fruit fly Ceratitis capitata is an economic pest. Etymology The term "Drosophila", meaning "dew-loving", is a modern scientific Latin adaptation from the Greek words δρόσος (drósos, "dew") and φίλος (phílos, "lover"). The term "melanogaster", meaning "black-belly", comes from the Ancient Greek μέλας (mélas, "black") and γαστήρ (gastḗr, "belly"). Physical appearance Unlike in humans, the sex and physical appearance of fruit flies are not influenced by hormones; they are determined only by genetic information. Female fruit flies are substantially larger than male fruit flies, with bodies that are up to 30% larger than those of adult males. Wild type fruit flies are yellow-brown, with brick-red eyes and transverse black rings across the abdomen. The black portions of the abdomen are the inspiration for the species name (melanogaster = "black-bellied"). The brick-red color of the eyes of the wild type fly is due to two pigments: xanthommatin, which is brown and is derived from tryptophan, and drosopterins, which are red and are derived from guanosine triphosphate. They exhibit sexual dimorphism; females are about 2.5 mm long; males are slightly smaller. Furthermore, males have a cluster of spiky hairs (claspers) surrounding the reproductive parts used to attach to the female during mating. Extensive images are found at FlyBase. Drosophila melanogaster can be distinguished from related species by the following combination of features: gena ~1/10 diameter of eye at greatest vertical height; wing hyaline and with costal index 2.4; male protarsus with a single row of ~12 setae forming a sex comb; male epandrial posterior lobe small and nearly triangular; female abdominal tergite 6 with dark band running to its ventral margin; female oviscapt small, pale, without dorsodistal depression and with 12–13 peg-like outer ovisensilla. Drosophila melanogaster flies can sense air currents with the hairs on their backs. Their eyes are sensitive to slight differences in light intensity and will instinctively fly away when a shadow or other movement is detected. Lifecycle and reproduction Under optimal growth conditions at 25 °C, the D.
melanogaster lifespan is about 50 days from egg to death. The developmental period for D. melanogaster varies with temperature, as with many ectothermic species. The shortest development time (egg to adult), seven days, is achieved at 28 °C. Development times increase at higher temperatures (11 days at 30 °C) due to heat stress. Under ideal conditions, the development time at 25 °C is 8.5 days, at 18 °C it takes 19 days, and at 12 °C it takes over 50 days. Under crowded conditions, development time increases, while the emerging flies are smaller. Females lay some 400 eggs (embryos), about five at a time, into rotting fruit or other suitable material such as decaying mushrooms and sap fluxes. Drosophila melanogaster is a holometabolous insect, so it undergoes a full metamorphosis. Their life cycle is broken down into four stages: embryo, larva, pupa, adult. The eggs, which are about 0.5 mm long, hatch after 12–15 hours (at 25 °C). The resulting larvae grow for about four days (at 25 °C) while molting twice (into second- and third-instar larvae), at about 24 and 48 hours after hatching. During this time, they feed on the microorganisms that decompose the fruit, as well as on the sugar of the fruit itself. The mother puts feces on the egg sacs to establish the same microbial composition in the larvae's guts that has worked positively for herself. Then the larvae encapsulate in the puparium and undergo a four-day-long metamorphosis (at 25 °C), after which the adults eclose (emerge). Drosophila melanogaster has been a significant model organism in embryonic development research. Many of its genes that regulate embryonic development, and their mechanisms of action, have been crucial in understanding the fundamental principles of embryonic development regulation in many multicellular organisms, including humans. Some important genes regulating embryonic development in Drosophila melanogaster, and their modes of action, are the following: Maternal genes: These genes are encoded in the female fruit fly and are present in the early stages of embryo development. They determine the embryo's main features and early development. For example, the gene called Bicoid regulates the formation of the embryo's anterior end, and its absence leads to an embryo lacking a head. Zygotic genes: These genes are activated in later stages of embryo development when the fruit fly embryo begins to produce its own genetic products. For example, the hunchback gene regulates the formation of segments in the embryo. Homeotic genes: This gene family regulates segmentation and axial patterning in development. They act as regulatory factors that determine cell fate in embryonic development. For example, the gene called Antennapedia regulates the formation of anterior limbs in the embryo. Morphogens: These are molecules that form gradients in embryonic development and regulate cell fate depending on their position in the gradient. For example, the Hedgehog morphogen regulates the differentiation of segments and segment identity in the fruit fly embryo. These genes and their modes of action form a complex regulatory network that guides the embryonic development of Drosophila melanogaster. They influence cell differentiation, segment formation, and axial patterning in the embryo, ultimately leading to the development of a fully formed adult fruit fly. Males perform a sequence of five behavioral patterns to court females. First, males orient themselves while playing a courtship song by horizontally extending and vibrating their wings.
Soon after, the male positions himself at the rear of the female's abdomen in a low posture to tap and lick the female genitalia. Finally, the male curls his abdomen and attempts copulation. Females can reject males by moving away, kicking, and extruding their ovipositor. Copulation lasts around 15–20 minutes, during which males transfer a few hundred very long (1.76 mm) sperm cells in seminal fluid to the female. Females store the sperm in a tubular receptacle and in two mushroom-shaped spermathecae; sperm from multiple matings compete for fertilization. A last male precedence is believed to exist; the last male to mate with a female sires about 80% of her offspring. This precedence was found to occur through both displacement and incapacitation. The displacement is attributed to sperm handling by the female fly as multiple matings are conducted, and is most significant during the first 1–2 days after copulation. Displacement from the seminal receptacle is more significant than displacement from the spermathecae. Incapacitation of first male sperm by second male sperm becomes significant 2–7 days after copulation. The seminal fluid of the second male is believed to be responsible for this incapacitation mechanism (without removal of first male sperm), which takes effect before fertilization occurs. The delay in effectiveness of the incapacitation mechanism is believed to be a protective mechanism that prevents a male fly from incapacitating his own sperm should he mate with the same female fly repetitively. Sensory neurons in the uterus of female D. melanogaster respond to a male protein, sex peptide, which is found in semen. This protein makes the female reluctant to copulate for about 10 days after insemination. The signal pathway leading to this change in behavior has been determined. The signal is sent to a brain region that is a homolog of the hypothalamus, which then controls sexual behavior and desire. Gonadotropic hormones in Drosophila maintain homeostasis and govern reproductive output via a cyclic interrelationship, not unlike the mammalian estrous cycle. Sex peptide perturbs this homeostasis and dramatically shifts the endocrine state of the female by inciting juvenile hormone synthesis in the corpus allatum. D. melanogaster is often used for life extension studies, such as to identify genes purported to increase lifespan when mutated. D. melanogaster is also used in studies of aging. Werner syndrome is a condition in humans characterized by accelerated aging. It is caused by mutations in the gene WRN that encodes a protein with essential roles in repair of DNA damage. Mutations in the D. melanogaster homolog of WRN also cause increased physiologic signs of aging, such as shorter lifespan, higher tumor incidence, muscle degeneration, reduced climbing ability, altered behavior and reduced locomotor activity. Meiosis Meiotic recombination in D. melanogaster appears to be employed in repairing damage in germ-line DNA, as indicated by the findings that meiotic recombination is induced by the DNA-damaging agents ultraviolet light and mitomycin C. Females Females become receptive to courting males about 8–12 hours after emergence. Specific neuron groups in females have been found to affect copulation behavior and mate choice. One such group in the abdominal nerve cord allows the female fly to pause her body movements to copulate. Activation of these neurons induces the female to cease movement and orient herself towards the male to allow for mounting.
If the group is inactivated, the female remains in motion and does not copulate. Various chemical signals, such as male pheromones, can activate the group. Also, females exhibit mate choice copying. When virgin females are shown other females copulating with a certain type of male, they tend to copulate more with this type of male afterwards than naïve females (which have not observed the copulation of others). This behavior is sensitive to environmental conditions, and females copulate less in bad weather conditions. Males D. melanogaster males exhibit a strong reproductive learning curve. That is, with sexual experience, these flies tend to modify their future mating behavior in multiple ways. These changes include increased selectivity for courting only intraspecifically, as well as decreased courtship times. Sexually naïve D. melanogaster males are known to spend significant time courting interspecifically, such as with D. simulans flies. Naïve D. melanogaster will also attempt to court females that are not yet sexually mature, and other males. D. melanogaster males show little to no preference for D. melanogaster females over females of other species or even other male flies. However, after D. simulans or other flies incapable of copulation have rejected the males' advances, D. melanogaster males are much less likely to spend time courting nonspecifically in the future. This apparent learned behavior modification seems to be evolutionarily significant, as it allows the males to avoid investing energy into futile sexual encounters. In addition, males with previous sexual experience modify their courtship dance when attempting to mate with new females—the experienced males spend less time courting, so have lower mating latencies, meaning that they are able to reproduce more quickly. This decreased mating latency leads to a greater mating efficiency for experienced males over naïve males. This modification also appears to have obvious evolutionary advantages, as increased mating efficiency is extremely important in the eyes of natural selection. Polygamy Both male and female D. melanogaster flies act polygamously (having multiple sexual partners at the same time). In both males and females, polygamy results in a decrease in evening activity compared to virgin flies, more so in males than females. Evening activity consists of the activities flies engage in other than mating and finding partners, such as finding food. The reproductive success of males and females varies, because a female only needs to mate once to reach maximum fertility. Mating with multiple partners provides no advantage over mating with one partner, so females exhibit no difference in evening activity between polygamous and monogamous individuals. For males, however, mating with multiple partners increases their reproductive success by increasing the genetic diversity of their offspring. This benefit of genetic diversity is an evolutionary advantage because it increases the chance that some of the offspring will have traits that increase their fitness in their environment. The difference in evening activity between polygamous and monogamous male flies can be explained with courtship. For polygamous flies, their reproductive success increases by having offspring with multiple partners, and therefore they spend more time and energy on courting multiple females. On the other hand, monogamous flies only court one female, and expend less energy doing so.
While it requires more energy for male flies to court multiple females, the overall reproductive benefits it produces have kept polygamy as the preferred sexual choice. The mechanism that affects courtship behavior in Drosophila is controlled by the oscillator neurons DN1s and LNDs. Oscillation of the DN1 neurons was found to be affected by sociosexual interactions, and is connected to the mating-related decrease of evening activity. Model organism in genetics D. melanogaster remains one of the most studied organisms in biological research, particularly in genetics and developmental biology. It is also employed in studies of environmental mutagenesis. History of use in genetic analysis D. melanogaster was among the first organisms used for genetic analysis, and today it is one of the most widely used and genetically best-known of all eukaryotic organisms. All organisms use common genetic systems; therefore, comprehending processes such as transcription and replication in fruit flies helps in understanding these processes in other eukaryotes, including humans. Thomas Hunt Morgan began using fruit flies in experimental studies of heredity at Columbia University in 1910 in a laboratory known as the Fly Room. The Fly Room was cramped with eight desks, each occupied by students and their experiments. They started off experiments using milk bottles to rear the fruit flies and handheld lenses for observing their traits. The lenses were later replaced by microscopes, which enhanced their observations. Morgan and his students eventually elucidated many basic principles of heredity, including sex-linked inheritance, epistasis, multiple alleles, and gene mapping. D. melanogaster had historically been used in laboratories to study genetics and patterns of inheritance. However, D. melanogaster also has importance in environmental mutagenesis research, allowing researchers to study the effects of specific environmental mutagens. Reasons for use in laboratories There are many reasons the fruit fly is a popular choice as a model organism: Its care and culture require little equipment, space, and expense even when using large cultures. It can be safely and readily anesthetized (usually with ether, carbon dioxide gas, by cooling, or with products such as FlyNap). Its morphology is easy to identify once anesthetized. It has a short generation time (about 10 days at room temperature), so several generations can be studied within a few weeks. It has a high fecundity (females lay up to 100 eggs per day, and perhaps 2000 in a lifetime). Males and females are readily distinguished, and virgin females can be easily identified by their light-colored, translucent abdomen, facilitating genetic crossing. The mature larva has giant chromosomes in the salivary glands called polytene chromosomes, which form "puffs" that indicate regions of transcription and hence gene activity. In these glands, under-replication of rDNA results in only 20% as much rDNA as in the brain (compared with 47% in Sarcophaga barbata ovaries). It has only four pairs of chromosomes – three autosomes, and one pair of sex chromosomes. Males do not show meiotic recombination, facilitating genetic studies. Recessive lethal "balancer chromosomes" carrying visible genetic markers can be used to keep stocks of lethal alleles in a heterozygous state without recombination due to multiple inversions in the balancer. The development of this organism—from fertilized egg to mature adult—is well understood.
Genetic transformation techniques have been available since 1987. One approach of inserting foreign genes into the Drosophila genome involves P elements. The transposable P elements, also known as transposons, are segments of DNA that can be mobilized and inserted into the fly genome. Transgenic flies have already contributed to many scientific advances, e.g., modeling such human diseases as Parkinson's, neoplasia, obesity, and diabetes. Its complete genome was sequenced and first published in 2000. Sexual mosaics can be readily produced, providing an additional tool for studying the development and behavior of these flies. Genetic markers Genetic markers are commonly used in Drosophila research, for example within balancer chromosomes or P-element inserts, and most phenotypes are easily identifiable either with the naked eye or under a microscope. In the list of a few common markers below, the allele symbol is followed by the name of the gene affected and a description of its phenotype. (Note: Recessive alleles are in lower case, while dominant alleles are capitalised.) Cy1: Curly; the wings curve away from the body, flight may be somewhat impaired e1: Ebony; black body and wings (heterozygotes are also visibly darker than wild type) Sb1: Stubble; bristles are shorter and thicker than wild type w1: White; eyes lack pigmentation and appear white bw: Brown; eyes appear brown because the red pteridine pigments cannot be made y1: Yellow; body pigmentation and wings appear yellow, the fly analog of albinism Classic genetic mutations Drosophila genes are traditionally named after the phenotype they cause when mutated. For example, the absence of a particular gene in Drosophila will result in a mutant embryo that does not develop a heart. Scientists have thus called this gene tinman, named after the Oz character of the same name. Likewise changes in the Shavenbaby gene cause the loss of dorsal cuticular hairs in Drosophila sechellia larvae. This system of nomenclature results in a wider range of gene names than in other organisms. b: black- The black mutation was discovered in 1910 by Thomas Hunt Morgan. The black mutation results in a darker colored body, wings, veins, and segments of the fruit fly's leg. This occurs due to the fly's inability to create beta-alanine, a beta amino acid. The phenotypic expression of this mutation varies based on the genotype of the individual; for example, whether the specimen is homozygous or heterozygous results in a darker or less dark appearance. This genetic mutation is autosomal recessive. bw: brown- The brown eye mutation results from inability to produce or synthesize pteridine (red) pigments, due to a point mutation on chromosome II. m: miniature- One of the first records of the miniature mutation of wings was also made by Thomas Hunt Morgan in 1911. He described the wings as having a shape similar to that of the wild-type phenotype. However, their miniature designation refers to the lengths of their wings, which do not stretch beyond their body and, thus, are notably shorter than the wild-type length. He also noted its inheritance is connected to the sex of the fly and could be paired with the inheritance of other sex-determined traits such as white eyes. The wings may also demonstrate other characteristics deviant from the wild-type wing, such as a duller and cloudier color. Miniature wings are about 1.5 times shorter than wild type but are believed to have the same number of cells.
This is due to the lack of complete flattening by these cells, making the overall structure of the wing seem shorter in comparison. Wing expansion is regulated by a signal-receptor pathway, where the neurohormone bursicon interacts with its complementary G protein-coupled receptor; this receptor drives one of the G-protein subunits to signal further enzyme activity and results in development in the wing, such as apoptosis and growth. se: sepia- The eye color of the sepia mutant is sepia, a reddish-brown color. In wild-type flies, ommochromes (brown) and drosopterins (red) give the eyes the typical red color. The drosopterins are made via a pathway that involves a pyrimidodiazepine synthase, which is encoded on chromosome 3L. The gene has a premature stop codon in sepia flies, so that the flies cannot produce the pyrimidodiazepine synthase and thus no red pigment, and the eyes stay sepia. The sepia allele is recessive, and thus offspring from sepia flies and homozygous wild-type flies have red eyes. The sepia phenotype does not depend on the sex of the fly. v: vermilion- The vermilion mutants cannot produce the brown ommochromes, leaving the red drosopterins, so that the eyes are vermilion colored (a radiant red) compared to a wild-type D. melanogaster. The vermilion mutation is sex-linked and recessive. The defective gene lies on the X chromosome. The brown ommochromes are synthesised from kynurenine, which is made from tryptophan. Vermilion flies cannot convert tryptophan into kynurenine and thus cannot make ommochromes, either. Vermilion mutants live longer than wild-type flies. This longer life span may be associated with the reduced amount of tryptophan converted to kynurenine in vermilion flies. vg: vestigial- A spontaneous mutation, discovered in 1919 by Thomas Morgan and Calvin Bridges. Vestigial wings are those not fully developed and that have lost function. Since the discovery of the vestigial gene in Drosophila melanogaster, there have been many discoveries of vestigial genes in vertebrates and of their functions within those animals. The vestigial gene is considered to be one of the most important genes for wing formation, but when it is overexpressed, ectopic wings begin to form. The vestigial gene acts to regulate the expression of the wing imaginal discs in the embryo and acts with other genes to regulate the development of the wings. A mutated vestigial allele removes an essential sequence of the DNA required for correct development of the wings. w: white- Drosophila melanogaster wild type typically expresses a brick red eye color. The white eye mutation in fruit flies is caused by the absence of two pigments associated with red and brown eye colors: pteridines (red) and ommochromes (brown). In January 1910, Thomas Hunt Morgan first discovered the white gene and denoted it as w. The discovery of the white-eye mutation by Morgan brought about the beginnings of genetic experimentation and analysis of Drosophila melanogaster. Morgan eventually discovered that the gene followed a similar pattern of inheritance related to the meiotic segregation of the X chromosome. With this information, he concluded that the gene was located on the X chromosome. This led to the discovery of sex-linked genes and also to the discovery of other mutations in Drosophila melanogaster.
The white-eye mutation leads to several disadvantages in flies, such as a reduced climbing ability, shortened life span, and lowered resistance to stress when compared to wild type flies. Drosophila melanogaster has a series of mating behaviors that enable it to copulate within a given environment and therefore contribute to its fitness. After Morgan's discovery of the white-eye mutation being sex-linked, a study led by Sturtevant (1915) concluded that white-eyed males were less successful than wild-type males in terms of mating with females. It was found that the denser the eye pigmentation, the greater the mating success of the males of Drosophila melanogaster. y: yellow- The yellow mutation is known as Dmel\y within the widely used database FlyBase. This mutation can be easily identified by the atypical yellow pigment observed in the cuticle of the adult flies and the mouthparts of the larva. The y mutation comprises the following phenotypic classes: mutants that show a complete loss of pigmentation from the cuticle (y-type) and other mutants that show a mosaic pigment pattern with some regions of the cuticle remaining wild type (y2-type). The role of the yellow gene is diverse and is responsible for changes in behaviour, sex-specific reproductive maturation, and epigenetic reprogramming. The y gene is an ideal gene to study as it is visibly clear when an organism carries the mutation, making it easier to understand the passage of DNA to offspring. Genome The genome of D. melanogaster (sequenced in 2000, and curated at the FlyBase database) contains four pairs of chromosomes – an X/Y pair, and three autosomes labeled 2, 3, and 4. The fourth chromosome is very small and therefore often ignored, aside from its important eyeless gene. The D. melanogaster sequenced genome of 139.5 million base pairs has been annotated and contains around 15,682 genes according to Ensembl release 73. More than 60% of the genome appears to be functional non-protein-coding DNA involved in gene expression control. Determination of sex in Drosophila occurs by the X:A ratio of X chromosomes to autosomes, not because of the presence of a Y chromosome as in human sex determination. Although the Y chromosome is entirely heterochromatic, it contains at least 16 genes, many of which are thought to have male-related functions. There are three transferrin orthologs, all of which are dramatically divergent from those known in chordate models. Similarity to humans A June 2001 study by the National Human Genome Research Institute comparing the fruit fly and human genome estimated that about 60% of genes are conserved between the two species. About 75% of known human disease genes have a recognizable match in the genome of fruit flies, and 50% of fly protein sequences have mammalian homologs. An online database called Homophila is available to search for human disease gene homologues in flies and vice versa. Drosophila is being used as a genetic model for several human diseases including the neurodegenerative disorders Parkinson's, Huntington's, spinocerebellar ataxia and Alzheimer's disease. The fly is also being used to study mechanisms underlying aging and oxidative stress, immunity, diabetes, and cancer, as well as drug abuse. Development The life cycle of this insect has four stages: fertilized egg, larva, pupa, and adult. Embryogenesis in Drosophila has been extensively studied, as its small size, short generation time, and large brood size make it ideal for genetic studies.
It is also unique among model organisms in that cleavage occurs in a syncytium. During oogenesis, cytoplasmic bridges called "ring canals" connect the forming oocyte to nurse cells. Nutrients and developmental control molecules move from the nurse cells into the oocyte. The forming oocyte is covered by follicular support cells. After fertilization of the oocyte, the early embryo (or syncytial embryo) undergoes rapid DNA replication and 13 nuclear divisions until about 5000 to 6000 nuclei accumulate in the unseparated cytoplasm of the embryo. By the end of the eighth division, most nuclei have migrated to the surface, surrounding the yolk sac (leaving behind only a few nuclei, which will become the yolk nuclei). After the 10th division, the pole cells form at the posterior end of the embryo, segregating the germ line from the syncytium. Finally, after the 13th division, cell membranes slowly invaginate, dividing the syncytium into individual somatic cells. Once this process is completed, gastrulation starts. Nuclear division in the early Drosophila embryo happens so quickly that no proper checkpoints exist, so mistakes may be made in division of the DNA. To get around this problem, the nuclei that have made a mistake detach from their centrosomes and fall into the centre of the embryo (yolk sac), which will not form part of the fly. The gene network (transcriptional and protein interactions) governing the early development of the fruit fly embryo is one of the best understood gene networks to date, especially the patterning along the anteroposterior (AP) and dorsoventral (DV) axes (See under morphogenesis). The embryo undergoes well-characterized morphogenetic movements during gastrulation and early development, including germ-band extension, formation of several furrows, ventral invagination of the mesoderm, and posterior and anterior invagination of endoderm (gut), as well as extensive body segmentation until finally hatching from the surrounding cuticle into a first-instar larva. During larval development, tissues known as imaginal discs grow inside the larva. Imaginal discs develop to form most structures of the adult body, such as the head, legs, wings, thorax, and genitalia. Cells of the imaginal disks are set aside during embryogenesis and continue to grow and divide during the larval stages—unlike most other cells of the larva, which have differentiated to perform specialized functions and grow without further cell division. At metamorphosis, the larva forms a pupa, inside which the larval tissues are reabsorbed and the imaginal tissues undergo extensive morphogenetic movements to form adult structures. Developmental plasticity Biotic and abiotic factors experienced during development will affect developmental resource allocation, leading to phenotypic variation, also referred to as developmental plasticity. As in all insects, environmental factors can influence several aspects of development in Drosophila melanogaster. Fruit flies reared under a hypoxia treatment experience decreased thorax length, while hyperoxia produces smaller flight muscles, suggesting negative developmental effects of extreme oxygen levels. Circadian rhythms are also subject to developmental plasticity. Light conditions during development affect daily activity patterns in Drosophila melanogaster, where flies raised under constant dark or light are less active as adults than those raised under a 12-hour light/dark cycle.
Temperature is one of the most pervasive factors influencing arthropod development. In Drosophila melanogaster, temperature-induced developmental plasticity can be beneficial and/or detrimental. Most often, lower developmental temperatures reduce growth rates, which influences many other physiological factors. For example, development at 25 °C increases walking speed, thermal performance breadth, and territorial success, while development at 18 °C increases body mass and wing size, all of which are tied to fitness. Moreover, developing at certain low temperatures produces proportionally large wings which improve flight and reproductive performance at similarly low temperatures (See acclimation). While certain effects of developmental temperature, like body size, are irreversible in ectotherms, others can be reversible. When Drosophila melanogaster develop at cold temperatures, they will have greater cold tolerance, but if cold-reared flies are maintained at warmer temperatures, their cold tolerance decreases and heat tolerance increases over time. Because insects typically only mate in a specific range of temperatures, their cold/heat tolerance is an important trait in maximizing reproductive output. While the traits described above are expected to manifest similarly across sexes, developmental temperature can also produce sex-specific effects in D. melanogaster adults. Females- Ovariole number is significantly affected by developmental temperature in D. melanogaster. Egg size is also affected by developmental temperature, and the effect is exacerbated when both parents develop at warm temperatures (See Maternal effect). Under stressful temperatures, these structures will develop to smaller ultimate sizes and decrease a female's reproductive output. Early fecundity (total eggs laid in first 10 days post-eclosion) is maximized when reared at 25 °C (versus 17 °C and 29 °C) regardless of adult temperature. Across a wide range of developmental temperatures, females tend to have greater heat tolerance than males. Males- Stressful developmental temperatures will cause sterility in D. melanogaster males; although the upper temperature limit can be increased by maintaining strains at high temperatures (See acclimation). Male sterility can be reversible if adults are returned to an optimal temperature after developing at stressful temperatures. Males reared at 25 °C are smaller than males reared at 18 °C but are more successful at defending food/oviposition sites; thus these smaller males have increased mating success and reproductive output. Sex determination Drosophila flies have both X and Y chromosomes, as well as autosomes. Unlike humans, the Y chromosome does not confer maleness; rather, it encodes genes necessary for making sperm. Sex is instead determined by the ratio of X chromosomes to autosomes. Furthermore, each cell "decides" whether to be male or female independently of the rest of the organism, resulting in the occasional occurrence of gynandromorphs. Three major genes are involved in determination of Drosophila sex. These are sex-lethal, sisterless, and deadpan. Deadpan is an autosomal gene which inhibits sex-lethal, while sisterless is carried on the X chromosome and inhibits the action of deadpan. An AAX cell has twice as much deadpan as sisterless, so sex-lethal will be inhibited, creating a male. However, an AAXX cell will produce enough sisterless to inhibit the action of deadpan, allowing the sex-lethal gene to be transcribed to create a female. A toy model of this dosage logic is sketched in the code below.
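The following minimal Python sketch (an added, deliberately simplified illustration; the function and threshold are hypothetical, not an actual genetic model) captures the X:A dosage logic just described: the sisterless dose scales with X-chromosome count, the deadpan dose with autosome sets, and sex-lethal switches on only when the activator keeps up with the inhibitor.

```python
# Toy sketch of Drosophila's initial X:A sex-determination logic.
# sisterless is X-linked (dose ~ number of X chromosomes); deadpan is
# autosomal (dose ~ number of autosome sets). Sex-lethal is transcribed
# only if the sisterless activator is not outweighed by deadpan.
def initial_sex(x_chromosomes: int, autosome_sets: int) -> str:
    sisterless = x_chromosomes       # activator of sex-lethal
    deadpan = autosome_sets          # inhibitor of sex-lethal
    sxl_on = sisterless >= deadpan   # crude threshold: X:A ratio >= 1
    return "female" if sxl_on else "male"

print(initial_sex(1, 2))  # AAX  -> male   (deadpan wins, sex-lethal off)
print(initial_sex(2, 2))  # AAXX -> female (sisterless wins, sex-lethal on)
```

Because each cell evaluates this ratio independently, a mosaic of cells with different X counts yields the gynandromorphs mentioned above.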
Later, control by deadpan and sisterless disappears and what becomes important is the form of the sex-lethal gene. A secondary promoter causes transcription in both males and females. Analysis of the cDNA has shown that different forms are expressed in males and females. Sex-lethal has been shown to affect the splicing of its own mRNA. In males, the third exon, which encodes a stop codon, is included, causing a truncated form to be produced. In the female version, the presence of sex-lethal causes this exon to be skipped; the remaining exons are translated as a full peptide chain, again giving a difference between males and females. The presence or absence of functional sex-lethal protein now goes on to affect the splicing of the transcript of another gene known as doublesex. In the absence of sex-lethal, doublesex will have the fourth exon removed and be translated up to and including exon 6 (DSX-M[ale]), while in its presence the fourth exon, which encodes a stop codon, will produce a truncated version of the protein (DSX-F[emale]). DSX-F causes transcription of Yolk proteins 1 and 2 in somatic cells, which will be pumped into the oocyte on its production. Immunity The D. melanogaster immune system can be divided into two responses: humoral and cell-mediated. The former is a systemic response mediated in large part through the toll and Imd pathways, which are parallel systems for detecting microbes. Other pathways, including the stress response pathways JAK-STAT and P38, nutritional signalling via FOXO, and JNK cell death signalling, are all involved in key physiological responses to infection. D. melanogaster has an organ called the "fat body", which is analogous to the human liver. The fat body is the primary secretory organ and produces key immune molecules upon infection, such as serine proteases and antimicrobial peptides (AMPs). AMPs are secreted into the hemolymph and bind infectious bacteria and fungi, killing them by forming pores in their cell walls or inhibiting intracellular processes. The cellular immune response instead refers to the direct activity of blood cells (hemocytes) in Drosophila, which are analogous to mammalian monocytes/macrophages. Hemocytes also possess a significant role in mediating humoral immune responses such as the melanization reaction. The immune response to infection can involve up to 2,423 genes, or 13.7% of the genome. Although the fly's transcriptional response to microbial challenge is highly specific to individual pathogens, Drosophila differentially expresses a core group of 252 genes upon infection with most bacteria. This core group of genes is associated with gene ontology categories such as antimicrobial response, stress response, secretion, neuron-like, reproduction, and metabolism among others. Drosophila also possesses several immune mechanisms to both shape the microbiota and prevent excessive immune responses upon detection of microbial stimuli. For instance, secreted PGRPs with amidase activity scavenge and degrade immunostimulatory DAP-type PGN in order to block Imd activation. Unlike mammals, Drosophila have innate immunity but lack an adaptive immune response. However, the core elements of this innate immune response are conserved between humans and fruit flies. As a result, the fruit fly offers a useful model of innate immunity for disentangling genetic interactions of signalling and effector function, as flies do not have to contend with interference of adaptive immune mechanisms that could confuse results.
Various genetic tools, protocols, and assays make Drosophila a classical model for studying the innate immune system, which has even included immune research on the International Space Station. JAK-STAT signalling Multiple elements of the Drosophila JAK-STAT signalling pathway bear direct homology to human JAK-STAT pathway genes. JAK-STAT signalling is induced upon various organismal stresses such as heat stress, dehydration, or infection. JAK-STAT induction leads to the production of a number of stress response proteins including Thioester-containing proteins (TEPs), Turandots, and the putative antimicrobial peptide Listericin. The mechanisms through which many of these proteins act are still under investigation. For instance, the TEPs appear to promote phagocytosis of Gram-positive bacteria and the induction of the toll pathway. As a consequence, flies lacking TEPs are susceptible to infection by toll pathway challenges. The cellular response to infection Circulating hemocytes are key regulators of infection. This has been demonstrated both through genetic tools that generate flies lacking hemocytes, and through injecting microglass beads or lipid droplets that saturate hemocyte ability to phagocytose a secondary infection. Flies treated like this fail to phagocytose bacteria upon infection, and are correspondingly susceptible to infection. These hemocytes derive from two waves of hematopoiesis, one occurring in the early embryo and one occurring during development from larva to adult. However, Drosophila hemocytes do not renew over the adult lifespan, and so the fly has a finite number of hemocytes that declines over the course of its lifespan. Hemocytes are also involved in regulating cell-cycle events and apoptosis of aberrant tissue (e.g. cancerous cells) by producing Eiger, a tumor necrosis factor signalling molecule that promotes JNK signalling and ultimately cell death and apoptosis. Behavioral genetics and neuroscience In 1971, Ron Konopka and Seymour Benzer published "Clock mutants of Drosophila melanogaster", a paper describing the first mutations that affected an animal's behavior. Wild-type flies show an activity rhythm with a period of about a day (24 hours). They found mutants with faster and slower rhythms, as well as broken rhythms—flies that move and rest in random spurts. Work over the following 30 years has shown that these mutations (and others like them) affect a group of genes and their products that form a biochemical or biological clock. This clock is found in a wide range of fly cells, but the clock-bearing cells that control activity are several dozen neurons in the fly's central brain. Since then, Benzer and others have used behavioral screens to isolate genes involved in vision, olfaction, audition, learning/memory, courtship, pain, and other processes, such as longevity. Following the pioneering work of Alfred Henry Sturtevant and others, Benzer and colleagues used sexual mosaics to develop a novel fate mapping technique. This technique made it possible to assign a particular characteristic to a specific anatomical location. For example, this technique showed that male courtship behavior is controlled by the brain. Mosaic fate mapping also provided the first indication of the existence of pheromones in this species. Males distinguish between conspecific males and females and direct persistent courtship preferentially toward females thanks to a female-specific sex pheromone which is mostly produced by the female's tergites.
The first learning and memory mutants (dunce, rutabaga, etc.) were isolated by William "Chip" Quinn while in Benzer's lab, and were eventually shown to encode components of an intracellular signaling pathway involving cyclic AMP, protein kinase A, and a transcription factor known as CREB. These molecules were also shown to be involved in synaptic plasticity in Aplysia and mammals. The Nobel Prize in Physiology or Medicine for 2017 was awarded to Jeffrey C. Hall, Michael Rosbash, and Michael W. Young for their work using fruit flies in understanding the "molecular mechanisms controlling the circadian rhythm". Male flies sing to the females during courtship using their wings to generate sound, and some of the genetics of sexual behavior have been characterized. In particular, the fruitless gene has several different splice forms, and male flies expressing female splice forms have female-like behavior and vice versa. The TRP channels nompC, nanchung, and inactive are expressed in sound-sensitive Johnston's organ neurons and participate in the transduction of sound. Mutating the Genderblind gene, also known as CG6070, alters the sexual behavior of Drosophila, turning the flies bisexual. Flies use a modified version of Bloom filters to detect the novelty of odors, with additional features including the similarity of a novel odor to previously experienced examples and the time elapsed since the same odor was last experienced. Aggression As with most insects, aggressive behaviors between male flies commonly occur in the presence of courting a female and when competing for resources. Such behaviors often involve raising wings and legs towards the opponent and attacking with the whole body. Thus, aggression often causes wing damage, which reduces the flies' fitness by removing their ability to fly and mate. Acoustic communication In order for aggression to occur, male flies produce sounds to communicate their intent. A 2017 study found that songs promoting aggression contain pulses occurring at longer intervals. RNA sequencing from fly mutants displaying over-aggressive behaviors found more than 50 auditory-related genes (important for transient receptor potentials, Ca2+ signaling, and mechanoreceptor potentials) to be upregulated in the AB neurons located in Johnston's organ. In addition, aggression levels were reduced when these genes were knocked out via RNA interference. This signifies the major role of hearing as a sensory modality in communicating aggression. Pheromone signaling Other than hearing, another sensory modality that regulates aggression is pheromone signaling, which operates through either the olfactory system or the gustatory system depending on the pheromone. An example is cVA, an anti-aphrodisiac pheromone used by males to mark females after copulation and to deter other males from mating. This male-specific pheromone causes an increase in male-male aggression when detected by another male's gustatory system. However, upon inserting a mutation that makes the flies unresponsive to cVA, no aggressive behaviors were seen. This shows how there are multiple modalities for promoting aggression in flies. Competition for food When competing for food, aggression occurs based on the amount of food available and is independent of any social interactions between males. Specifically, sucrose was found to stimulate gustatory receptor neurons, which was necessary to stimulate aggression. However, once the amount of food becomes greater than a certain amount, the competition between males lowers.
This is possibly due to an over-abundance of food resources. On a larger scale, food was found to determine the boundaries of a territory, since flies were observed to be more aggressive at the food's physical perimeter. Effect of sleep deprivation Like most behaviors requiring arousal and wakefulness, aggression was found to be impaired by sleep deprivation. Specifically, this occurs through the impairment of octopamine and dopamine signaling, which are important pathways for regulating arousal in insects. Due to reduced aggression, sleep-deprived male flies were found to be disadvantaged at mating compared to normal flies. However, when octopamine agonists were administered to these sleep-deprived flies, aggression levels were seen to be increased and sexual fitness was subsequently restored. Therefore, this finding indicates the importance of sleep in aggression between male flies. Vision The compound eye of the fruit fly contains 760 unit eyes or ommatidia, and is one of the most advanced among insects. Each ommatidium contains eight photoreceptor cells (R1-8), support cells, pigment cells, and a cornea. Wild-type flies have reddish pigment cells, which serve to absorb excess blue light so the fly is not blinded by ambient light. Eye color genes regulate cellular vesicular transport. The enzymes needed for pigment synthesis are then transported to the cell's pigment granule, which holds pigment precursor molecules. Each photoreceptor cell consists of two main sections, the cell body and the rhabdomere. The cell body contains the nucleus, while the 100-μm-long rhabdomere is made up of toothbrush-like stacks of membrane called microvilli. Each microvillus is 1–2 μm in length and about 60 nm in diameter. The membrane of the rhabdomere is packed with about 100 million opsin molecules, the visual protein that absorbs light. The other visual proteins are also tightly packed into the microvilli, leaving little room for cytoplasm. Opsins and spectral sensitivity The genome of Drosophila encodes seven opsins, five of which are expressed in the ommatidia of the eye. The photoreceptor cells R1-R6 express the opsin Rh1, which maximally absorbs blue light (around 480 nm); however, the R1-R6 cells cover a broader range of the spectrum than an opsin would allow due to a sensitising pigment that adds two sensitivity maxima in the UV range (355 and 370 nm). The R7 cells come in two types with yellow and pale rhabdomeres (R7y and R7p). The pale R7p cells express the opsin Rh3, which maximally absorbs UV light (345 nm). The R7p cells are strictly paired with the R8p cells that express Rh5, which maximally absorbs violet light (437 nm). The yellow R7y cells express a blue-absorbing screening pigment and the opsin Rh4, which maximally absorbs UV light (375 nm). The R7y cells are strictly paired with R8y cells that express Rh6, which maximally absorbs green light (508 nm). In a subset of ommatidia, both R7 and R8 cells express the opsin Rh3. However, these absorption maxima of the opsins were measured in white-eyed flies without screening pigments (Rh3-Rh6), or from the isolated opsin directly (Rh1). Those pigments reduce the light that reaches the opsins depending on the wavelength. Thus, in fully pigmented flies, the effective absorption maxima of the opsins differ, and so does the sensitivity of their photoreceptor cells. With screening pigment, the opsin Rh3 is short-wave shifted from 345 nm to 330 nm and Rh4 from 375 nm to 355 nm.
Whether screening pigment is present does not make a practical difference for the opsin Rh5 (435 nm and 437 nm), while the opsin Rh6 is long-wave shifted by 92 nm from 508 nm to 600 nm. In addition to the opsins of the eye, Drosophila has two more opsins: The ocelli express the opsin Rh2, which maximally absorbs violet light (~420 nm), and the opsin Rh7 maximally absorbs UV light (350 nm) with an unusually long wavelength tail up to 500 nm. The long tail disappears if a lysine at position 90 is replaced by glutamic acid. This mutant then absorbs maximally violet light (450 nm). Together with cryptochrome, the opsin Rh7 entrains the circadian rhythm of Drosophila to the day-night cycle in the central pacemaker neurons. Each Drosophila opsin binds the carotenoid chromophore 11-cis-3-hydroxyretinal via a lysine. This lysine is conserved in almost all opsins; only a few opsins have lost it during evolution. Opsins without it are not light sensitive. In particular, the Drosophila opsins Rh1, Rh4, and Rh7 function not only as photoreceptors, but also as chemoreceptors for aristolochic acid. These opsins still have the lysine like other opsins. However, if it is replaced by an arginine in Rh1, then Rh1 loses light sensitivity but still responds to aristolochic acid. Thus, the lysine is not needed for Rh1 to function as a chemoreceptor. Phototransduction As in vertebrate vision, visual transduction in invertebrates occurs via a G protein-coupled pathway. However, in vertebrates, the G protein is transducin, while the G protein in invertebrates is Gq (dgq in Drosophila). When rhodopsin (Rh) absorbs a photon of light, its chromophore, 11-cis-3-hydroxyretinal, is isomerized to all-trans-3-hydroxyretinal. Rh undergoes a conformational change into its active form, metarhodopsin. Metarhodopsin activates Gq, which in turn activates a phospholipase Cβ (PLCβ) known as NorpA. PLCβ hydrolyzes phosphatidylinositol (4,5)-bisphosphate (PIP2), a phospholipid found in the cell membrane, into soluble inositol triphosphate (IP3) and diacylglycerol (DAG), which stays in the cell membrane. DAG, a derivative of DAG, or PIP2 depletion causes a calcium-selective ion channel known as transient receptor potential (TRP) to open, and calcium and sodium flow into the cell. IP3 is thought to bind to IP3 receptors in the subrhabdomeric cisternae, an extension of the endoplasmic reticulum, and cause release of calcium, but this process does not seem to be essential for normal vision. Calcium binds to proteins such as calmodulin (CaM) and an eye-specific protein kinase C (PKC) known as InaC. These proteins interact with other proteins and have been shown to be necessary for shut-off of the light response. In addition, proteins called arrestins bind metarhodopsin and prevent it from activating more Gq. A sodium-calcium exchanger known as CalX pumps the calcium out of the cell. It uses the inward sodium gradient to export calcium at a stoichiometry of 3 Na+/1 Ca2+. TRP, InaC, and PLC form a signaling complex by binding a scaffolding protein called InaD. InaD contains five binding domains called PDZ domains, which specifically bind the C termini of target proteins. Disruption of the complex by mutations in either the PDZ domains or the target proteins reduces the efficiency of signaling. For example, disruption of the interaction between InaC, the protein kinase C, and InaD results in a delay in inactivation of the light response.
Unlike vertebrate metarhodopsin, invertebrate metarhodopsin can be converted back into rhodopsin by absorbing a photon of orange light (580 nm). About two-thirds of the Drosophila brain is dedicated to visual processing. Although the spatial resolution of their vision is significantly worse than that of humans, their temporal resolution is around 10 times better.

Grooming
Drosophila are known to exhibit grooming behaviors that are executed in a predictable manner. Drosophila consistently begin a grooming sequence by using their front legs to clean the eyes, then the head and antennae. Using their hind legs, Drosophila proceed to groom their abdomen, and finally the wings and thorax. Throughout this sequence, Drosophila periodically rub their legs together to get rid of excess dust and debris that accumulates during the grooming process. Grooming behaviors have been shown to be executed in a suppression hierarchy. This means that grooming behaviors that occur at the beginning of the sequence prevent those that come later in the sequence from occurring simultaneously, as the grooming sequence consists of mutually exclusive behaviors. This hierarchy does not prevent Drosophila from returning to grooming behaviors that have already been accessed in the grooming sequence. The order of grooming behaviors in the suppression hierarchy is thought to be related to the priority of cleaning a specific body part. For example, the eyes and antennae are likely cleaned early in the grooming sequence to prevent debris from interfering with the function of D. melanogaster's sensory organs.

Walking
Like many other hexapod insects, Drosophila typically walk using a tripod gait. This means that three of the legs swing together while the other three remain stationary, or in stance. Specifically, the middle leg moves in phase with the contralateral front and hind legs. However, variability around the tripod configuration exists along a continuum, meaning that flies do not exhibit distinct transitions between different gaits. At fast walking speeds, the walking configuration is mostly tripod (three legs in stance), but at slower walking speeds, flies are more likely to have four (tetrapod) or five legs in stance (wave). These transitions may help to optimize static stability. Because flies are so small, inertial forces are negligible compared with the elastic forces of their muscles and joints or the viscous forces of the surrounding air.

Flight
Flies fly via straight sequences of movement interspersed by rapid turns called saccades. During these turns, a fly is able to rotate 90° in less than 50 milliseconds. Characteristics of Drosophila flight may be dominated by the viscosity of the air rather than by the inertia of the fly body, though the opposite case, with inertia as the dominant force, may also occur. However, subsequent work showed that while the viscous effects on the insect body during flight may be negligible, the aerodynamic forces on the wings themselves actually cause fruit flies' turns to be damped viscously.

Connectome
Drosophila is one of the few animals (C. elegans being another) for which detailed neural circuits (a connectome) are available. A high-level connectome, at the level of brain compartments and interconnecting tracts of neurons, exists for the full fly brain. A version of this is available online. Detailed circuit-level connectomes exist for the lamina and a medulla column, both in the visual system of the fruit fly, and for the alpha lobe of the mushroom body.
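At its core, a connectome of the kind discussed here is a directed graph whose nodes are neurons and whose edge weights are synapse counts. A minimal sketch of that representation, using the networkx library; the neuron names and synapse counts below are invented for illustration and do not come from any of the published datasets:

# Represent a (tiny, made-up) wiring diagram as a weighted directed graph.
import networkx as nx

connectome = nx.DiGraph()
# add_edge(presynaptic, postsynaptic, weight=number_of_synapses)
connectome.add_edge("R1_photoreceptor", "L1_lamina", weight=42)
connectome.add_edge("R1_photoreceptor", "L2_lamina", weight=38)
connectome.add_edge("L1_lamina", "Mi1_medulla", weight=17)
connectome.add_edge("L2_lamina", "Tm1_medulla", weight=21)

# Strongest output of a given neuron:
strongest = max(connectome.out_edges("R1_photoreceptor", data="weight"),
                key=lambda e: e[2])
print("strongest R1 output:", strongest)

# Total synapse count in this wiring diagram:
print("total synapses:", sum(w for _, _, w in connectome.edges(data="weight")))

The real datasets described below differ only in scale: the same node-edge-weight structure is filled in for hundreds of thousands of neurons and millions of synapses.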
In May 2017, a paper published in bioRxiv presented an electron microscopy image stack of the whole adult female brain at synaptic resolution. The volume is available for sparse tracing of selected circuits. Since then, multiple datasets have been collected, including a dense connectome of half the central brain of Drosophila in 2020, and a dense connectome of the entire female adult nerve cord in 2021. Generally, these datasets are acquired by sectioning the tissue (e.g., the brain) into thin sections (on the order of tens or hundreds of nanometers). Each section is then imaged using an electron microscope, and these images are stitched and aligned together to create a 3D image volume. Methods for the reconstruction and initial analysis of such datasets followed. Owing to advances in deep learning, automated image segmentation methods have made large-scale reconstruction feasible, providing dense reconstructions of all the neurites within a volume. Furthermore, the resolution of electron microscopy reveals ultrastructural variation between neurons as well as the locations of individual synapses, thereby providing a wiring diagram of synaptic connectivity between all neurites within a given dataset. In 2023, a complete synapse-level map of the Drosophila larval brain, together with an analysis of its architecture, was published. The larval brain consists of 3016 neurons and 548,000 synaptic sites, whereas the adult brain has about 150,000 neurons and 150 million synapses.

Misconceptions
Drosophila is sometimes referred to as a pest due to its tendency to live in human settlements where fermenting fruit is found. Flies may collect in homes, restaurants, stores, and other locations. The name and behavior of this species of fly have led to the misconception that it is a biological security risk in Australia and elsewhere. While other "fruit fly" species do pose a risk, D. melanogaster is attracted to fruit that is already rotting, rather than causing fruit to rot.
Biology and health sciences
Flies (Diptera)
null
173238
https://en.wikipedia.org/wiki/Diels%E2%80%93Alder%20reaction
Diels–Alder reaction
In organic chemistry, the Diels–Alder reaction is a chemical reaction between a conjugated diene and a substituted alkene, commonly termed the dienophile, to form a substituted cyclohexene derivative. It is the prototypical example of a pericyclic reaction with a concerted mechanism. More specifically, it is classified as a thermally allowed [4+2] cycloaddition with Woodward–Hoffmann symbol [π4s + π2s]. It was first described by Otto Diels and Kurt Alder in 1928. For the discovery of this reaction, they were awarded the Nobel Prize in Chemistry in 1950. Through the simultaneous construction of two new carbon–carbon bonds, the Diels–Alder reaction provides a reliable way to form six-membered rings with good control over the regio- and stereochemical outcomes. Consequently, it has served as a powerful and widely applied tool for the introduction of chemical complexity in the synthesis of natural products and new materials. The underlying concept has also been applied to π-systems involving heteroatoms, such as carbonyls and imines, which furnish the corresponding heterocycles; this variant is known as the hetero-Diels–Alder reaction. The reaction has also been generalized to other ring sizes, although none of these generalizations have matched the formation of six-membered rings in terms of scope or versatility. Because of the negative values of ΔH° and ΔS° for a typical Diels–Alder reaction, the microscopic reverse of a Diels–Alder reaction becomes favorable at high temperatures, although this is of synthetic importance for only a limited range of Diels–Alder adducts, generally with some special structural features; this reverse reaction is known as the retro-Diels–Alder reaction.

Mechanism
The reaction is an example of a concerted pericyclic reaction. It is believed to occur via a single, cyclic transition state, with no intermediates generated during the course of the reaction. As such, the Diels–Alder reaction is governed by orbital symmetry considerations: it is classified as a [π4s + π2s] cycloaddition, indicating that it proceeds through the suprafacial/suprafacial interaction of a 4π electron system (the diene structure) with a 2π electron system (the dienophile structure), an interaction that leads to a transition state without an additional orbital symmetry-imposed energetic barrier and allows the Diels–Alder reaction to take place with relative ease. A consideration of the reactants' frontier molecular orbitals (FMO) makes plain why this is so. (The same conclusion can be drawn from an orbital correlation diagram or a Dewar–Zimmerman analysis.) For the more common "normal" electron demand Diels–Alder reaction, the more important of the two HOMO/LUMO interactions is that between the electron-rich diene's ψ2 as the highest occupied molecular orbital (HOMO) with the electron-deficient dienophile's π* as the lowest unoccupied molecular orbital (LUMO). However, the HOMO–LUMO energy gap is close enough that the roles can be reversed by switching electronic effects of the substituents on the two components. In an inverse (reverse) electron-demand Diels–Alder reaction, electron-withdrawing substituents on the diene lower the energy of its empty ψ3 orbital and electron-donating substituents on the dienophile raise the energy of its filled π orbital sufficiently that the interaction between these two orbitals becomes the most energetically significant stabilizing orbital interaction.
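As an illustration of the overall transformation (the connectivity change only, not the orbital-symmetry analysis above), the reaction can be sketched with the RDKit cheminformatics toolkit, assuming it is installed. The mapped reaction SMARTS encodes the [4+2] bond reorganization:

# Diene C1=C2-C3=C4 plus dienophile C5=C6 close to a cyclohexene:
# two new sigma bonds (C1-C6, C4-C5) and one new pi bond (C2=C3).
from rdkit import Chem
from rdkit.Chem import AllChem

diels_alder = AllChem.ReactionFromSmarts(
    "[C:1]=[C:2][C:3]=[C:4].[C:5]=[C:6]>>[C:1]1[C:2]=[C:3][C:4][C:5][C:6]1"
)

diene = Chem.MolFromSmiles("C=CC=C")           # 1,3-butadiene
dienophile = Chem.MolFromSmiles("C=CC(=O)OC")  # methyl acrylate (EWG-bearing)

products = diels_alder.RunReactants((diene, dienophile))
unique = set()
for (product,) in products:
    Chem.SanitizeMol(product)
    unique.add(Chem.MolToSmiles(product))
print(unique)  # the cyclohexene adduct, e.g. {'COC(=O)C1CCC=CC1'}

Note that a pattern like this enumerates all atom mappings and is silent on the regio- and stereoselectivity discussed below; it demonstrates only the ring construction.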
Regardless of which situation pertains, the HOMO and LUMO of the components are in phase and a bonding interaction results, as can be seen in the diagram below. Since the reactants are in their ground state, the reaction is initiated thermally and does not require activation by light. The "prevailing opinion" is that most Diels–Alder reactions proceed through a concerted mechanism; the issue, however, has been thoroughly contested. Despite the fact that the vast majority of Diels–Alder reactions exhibit stereospecific, syn addition of the two components, a diradical intermediate has been postulated (and supported with computational evidence) on the grounds that the observed stereospecificity does not rule out a two-step addition involving an intermediate that collapses to product faster than it can rotate to allow for inversion of stereochemistry. There is a notable rate enhancement when certain Diels–Alder reactions are carried out in polar organic solvents such as dimethylformamide and ethylene glycol, and even in water. The reaction of cyclopentadiene and butenone, for example, is 700 times faster in water relative to 2,2,4-trimethylpentane as solvent. Several explanations for this effect have been proposed, such as an increase in effective concentration due to hydrophobic packing or hydrogen-bond stabilization of the transition state. The geometries of the diene and dienophile components each propagate into the stereochemical details of the product. For intermolecular reactions especially, the preferred positional and stereochemical relationship of the substituents of the two components relative to each other is controlled by electronic effects. However, for intramolecular Diels–Alder cycloaddition reactions, the conformational stability of the structure of the transition state can be an overwhelming influence.

Regioselectivity
Frontier molecular orbital theory has also been used to explain the regioselectivity patterns observed in Diels–Alder reactions of substituted systems. Calculation of the energy and orbital coefficients of the components' frontier orbitals provides a picture that is in good accord with the more straightforward analysis of the substituents' resonance effects, as illustrated below. In general, the regioselectivity found for both normal and inverse electron-demand Diels–Alder reactions follows the ortho-para rule, so named because the cyclohexene product bears substituents in positions that are analogous to the ortho and para positions of disubstituted arenes. For example, in a normal-demand scenario, a diene bearing an electron-donating group (EDG) at C1 has its largest HOMO coefficient at C4, while the dienophile with an electron-withdrawing group (EWG) at C1 has the largest LUMO coefficient at C2. Pairing these two coefficients gives the "ortho" product, as seen in case 1 in the figure below. A diene substituted at C2, as in case 2 below, has the largest HOMO coefficient at C1, giving rise to the "para" product. Similar analyses for the corresponding inverse-demand scenarios give rise to the analogous products, as seen in cases 3 and 4. Examining the canonical mesomeric forms above, it is easy to verify that these results are in accord with expectations based on consideration of electron density and polarization.
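The coefficient-matching argument above is the qualitative face of a second-order perturbation (Klopman–Salem type) expression. In schematic form, with notation chosen here for illustration rather than taken from the source, the stabilization from the dominant frontier-orbital interaction is

$$\Delta E_{\text{stab}} \approx \frac{2\left(\sum_{ab} c_a\, c_b\, \beta_{ab}\right)^{2}}{\varepsilon_{\text{LUMO}} - \varepsilon_{\text{HOMO}}}$$

where c_a and c_b are the frontier-orbital coefficients at the bond-forming atoms, β_ab is the resonance integral between them, and the denominator is the HOMO–LUMO gap of the interacting pair. Pairing the largest coefficients and shrinking the gap both increase the stabilization, which is exactly the reasoning used to predict the "ortho" and "para" products.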
In general, with respect to the energetically most well-matched HOMO–LUMO pair, maximizing the interaction energy by forming bonds between centers with the largest frontier orbital coefficients allows the prediction of the main regioisomer that will result from a given diene-dienophile combination. In a more sophisticated treatment, three types of substituents (Z withdrawing: HOMO and LUMO lowering (CF3, NO2, CN, C(O)CH3); X donating: HOMO and LUMO raising (Me, OMe, NMe2); C conjugating: HOMO raising and LUMO lowering (Ph, vinyl)) are considered, resulting in a total of 18 possible combinations. The maximization of orbital interaction correctly predicts the product in all cases for which experimental data is available. For instance, in uncommon combinations involving X groups on both diene and dienophile, a 1,3-substitution pattern may be favored, an outcome not accounted for by a simplistic resonance structure argument. However, cases where the resonance argument and the matching of largest orbital coefficients disagree are rare.

Stereospecificity and stereoselectivity
Diels–Alder reactions, as concerted cycloadditions, are stereospecific. Stereochemical information of the diene and the dienophile is retained in the product, as a syn addition with respect to each component. For example, substituents in a cis (trans, resp.) relationship on the double bond of the dienophile give rise to substituents that are cis (trans, resp.) on those same carbons with respect to the cyclohexene ring. Likewise, cis,cis- and trans,trans-disubstituted dienes give cis substituents at these carbons of the product, whereas cis,trans-disubstituted dienes give trans substituents. Diels–Alder reactions in which adjacent stereocenters are generated at the two ends of the newly formed single bonds imply two different possible stereochemical outcomes. This is a stereoselective situation based on the relative orientation of the two separate components when they react with each other. In the context of the Diels–Alder reaction, the transition state in which the most significant substituent (an electron-withdrawing and/or conjugating group) on the dienophile is oriented towards the diene π system and slips under it as the reaction takes place is known as the endo transition state. In the alternative exo transition state, it is oriented away from it. (There is a more general usage of the terms endo and exo in stereochemical nomenclature.) In cases where the dienophile has a single electron-withdrawing / conjugating substituent, or two electron-withdrawing / conjugating substituents cis to each other, the outcome can often be predicted. In these "normal demand" Diels–Alder scenarios, the endo transition state is typically preferred, despite often being more sterically congested. This preference is known as the Alder endo rule. As originally stated by Alder, the transition state that is preferred is the one with a "maximum accumulation of double bonds." Endo selectivity is typically higher for rigid dienophiles such as maleic anhydride and benzoquinone; for others, such as acrylates and crotonates, selectivity is not very pronounced. The most widely accepted explanation for the origin of this effect is a favorable interaction between the π systems of the dienophile and the diene, an interaction described as a secondary orbital effect, though dipolar and van der Waals attractions may play a part as well, and solvent can sometimes make a substantial difference in selectivity.
The secondary orbital overlap explanation was first proposed by Woodward and Hoffmann. In this explanation, the orbitals associated with the group in conjugation with the dienophile double bond overlap with the interior orbitals of the diene, a situation that is possible only for the endo transition state. Although the original explanation only invoked the orbital on the atom α to the dienophile double bond, Salem and Houk have subsequently proposed that orbitals on the α and β carbons both participate when molecular geometry allows. Often, as with highly substituted dienes, very bulky dienophiles, or reversible reactions (as in the case of furan as diene), steric effects can override the normal endo selectivity in favor of the exo isomer.

The diene
The diene component of the Diels–Alder reaction can be either open-chain or cyclic, and it can host many different types of substituents. It must, however, be able to exist in the s-cis conformation, since this is the only conformer that can participate in the reaction. Though butadienes are typically more stable in the s-trans conformation, in most cases the energy difference is small (~2–5 kcal/mol). A bulky substituent at the C2 or C3 position can increase the reaction rate by destabilizing the s-trans conformation and forcing the diene into the reactive s-cis conformation. 2-tert-Butyl-buta-1,3-diene, for example, is 27 times more reactive than simple butadiene. Conversely, a diene having bulky substituents at both C2 and C3 is less reactive because the steric interactions between the substituents destabilize the s-cis conformation. Dienes with bulky terminal substituents (C1 and C4) decrease the rate of reaction, presumably by impeding the approach of the diene and dienophile. An especially reactive diene is 1-methoxy-3-trimethylsiloxy-buta-1,3-diene, otherwise known as Danishefsky's diene. It has particular synthetic utility as a means of furnishing α,β–unsaturated cyclohexenone systems by elimination of the 1-methoxy substituent after deprotection of the enol silyl ether. Other synthetically useful derivatives of Danishefsky's diene include 1,3-alkoxy-1-trimethylsiloxy-1,3-butadienes (Brassard dienes) and 1-dialkylamino-3-trimethylsiloxy-1,3-butadienes (Rawal dienes). The increased reactivity of these and similar dienes is a result of synergistic contributions from donor groups at C1 and C3, raising the HOMO significantly above that of a comparable monosubstituted diene. Unstable (and thus highly reactive) dienes can be synthetically useful, e.g., o-quinodimethanes can be generated in situ. In contrast, stable dienes, such as naphthalene, require forcing conditions and/or highly reactive dienophiles, such as N-phenylmaleimide. Anthracene, being less aromatic (and therefore more reactive for Diels–Alder syntheses) in its central ring, can form a 9,10-adduct with maleic anhydride at 80 °C and even with acetylene, a weak dienophile, at 250 °C.

The dienophile
In a normal demand Diels–Alder reaction, the dienophile has an electron-withdrawing group in conjugation with the alkene; in an inverse-demand scenario, the dienophile is conjugated with an electron-donating group. Dienophiles can be chosen to contain a "masked functionality". The dienophile undergoes a Diels–Alder reaction with a diene, introducing such a functionality onto the product molecule. A series of reactions then follow to transform the functionality into a desirable group. The end product cannot be made in a single DA step because the equivalent dienophile is either unreactive or inaccessible.
An example of such an approach is the use of α-chloroacrylonitrile (CH2=CClCN). When reacted with a diene, this dienophile will introduce α-chloronitrile functionality onto the product molecule. This is a "masked functionality" which can then be hydrolyzed to form a ketone. The α-chloroacrylonitrile dienophile is an equivalent of a ketene dienophile (CH2=C=O), which would produce the same product in one DA step. The problem is that ketene itself cannot be used in Diels–Alder reactions because it reacts with dienes in an unwanted manner (by [2+2] cycloaddition), and therefore the "masked functionality" approach has to be used. Other such functionalities are phosphonium substituents (yielding exocyclic double bonds after Wittig reaction), various sulfoxide and sulfonyl functionalities (both are acetylene equivalents), and nitro groups (ketene equivalents).

Variants on the classical Diels–Alder reaction
Hetero-Diels–Alder
Diels–Alder reactions involving at least one heteroatom are also known and are collectively called hetero-Diels–Alder reactions. Carbonyl groups, for example, can successfully react with dienes to yield dihydropyran rings, a reaction known as the oxo-Diels–Alder reaction, and imines can be used, either as the dienophile or at various sites in the diene, to form various N-heterocyclic compounds through the aza-Diels–Alder reaction. Nitroso compounds (R-N=O) can react with dienes to form oxazines. Chlorosulfonyl isocyanate can be utilized as a dienophile to prepare Vince lactam.

Lewis acid activation
Lewis acids, such as zinc chloride, boron trifluoride, tin tetrachloride, or aluminium chloride, can catalyze Diels–Alder reactions by binding to the dienophile. Traditionally, the enhanced Diels–Alder reactivity is ascribed to the ability of the Lewis acid to lower the LUMO of the activated dienophile, which results in a smaller normal electron demand HOMO–LUMO orbital energy gap and hence more stabilizing orbital interactions. Recent studies, however, have shown that this rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect. It is found that Lewis acids accelerate the Diels–Alder reaction by reducing the destabilizing steric Pauli repulsion between the interacting diene and dienophile, not by lowering the energy of the dienophile's LUMO and consequently enhancing the normal electron demand orbital interaction. The Lewis acid binds via a donor-acceptor interaction to the dienophile and via that mechanism polarizes occupied orbital density away from the reactive C=C double bond of the dienophile towards the Lewis acid. This reduced occupied orbital density on the C=C double bond of the dienophile will, in turn, engage in a less repulsive closed-shell–closed-shell orbital interaction with the incoming diene, reducing the destabilizing steric Pauli repulsion and hence lowering the Diels–Alder reaction barrier. In addition, the Lewis acid catalyst also increases the asynchronicity of the Diels–Alder reaction, making the occupied π-orbital located on the C=C double bond of the dienophile asymmetric. As a result, this enhanced asynchronicity leads to an extra reduction of the destabilizing steric Pauli repulsion as well as a diminished pressure on the reactants to deform; in other words, it reduces the destabilizing activation strain (also known as distortion energy). This working catalytic mechanism is known as Pauli-lowering catalysis, which is operative in a variety of organic reactions.
The original rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect because, besides lowering the energy of the dienophile's LUMO, the Lewis acid also lowers the energy of the HOMO of the dienophile and hence increases the inverse electron demand LUMO–HOMO orbital energy gap. Thus, Lewis acid catalysts do indeed strengthen the normal electron demand orbital interaction by lowering the LUMO of the dienophile, but they simultaneously weaken the inverse electron demand orbital interaction by also lowering the energy of the dienophile's HOMO. These two counteracting phenomena effectively cancel each other, resulting in nearly unchanged orbital interactions compared to the corresponding uncatalyzed Diels–Alder reactions, which rules this out as the active mechanism behind Lewis acid-catalyzed Diels–Alder reactions.

Asymmetric Diels–Alder
Many methods have been developed for influencing the stereoselectivity of the Diels–Alder reaction, such as the use of chiral auxiliaries, catalysis by chiral Lewis acids, and small organic molecule catalysts. Evans' oxazolidinones, oxazaborolidines, bis-oxazoline–copper chelates, imidazoline catalysis, and many other methodologies exist for effecting diastereo- and enantioselective Diels–Alder reactions.

Hexadehydro Diels–Alder
In the hexadehydro Diels–Alder reaction, alkynes and diynes are used instead of alkenes and dienes, forming an unstable benzyne intermediate which can then be trapped to form an aromatic product. This reaction allows the formation of heavily functionalized aromatic rings in a single step.

Applications and natural occurrence
The retro-Diels–Alder reaction is used in the industrial production of cyclopentadiene. Cyclopentadiene is a precursor to various norbornenes, which are common monomers. The Diels–Alder reaction is also employed in the production of vitamin B6.

History
The Diels–Alder reaction was the culmination of several intertwined research threads, some near misses, and ultimately, the insightful recognition of a general principle by Otto Diels and Kurt Alder. Their seminal work, detailed in a series of 28 articles published in the Justus Liebigs Annalen der Chemie and Berichte der deutschen chemischen Gesellschaft from 1928 to 1937, established the reaction's wide applicability and its importance in constructing six-membered rings. The first 19 articles were authored by Diels and Alder, while the later articles were authored by Diels and various other coauthors. However, the history of the reaction extends further back, revealing a fascinating narrative of discoveries missed and opportunities overlooked. Several chemists, working independently in the late 19th and early 20th centuries, encountered reactions that, in retrospect, involved the Diels–Alder process but remained unrecognized as such. Theodor Zincke performed a series of experiments between 1892 and 1912 involving tetrachlorocyclopentadienone, a highly reactive diene analogue. In 1910, Sergey Lebedev systematically investigated the thermal polymerization of three conjugated dienes (butadiene, isoprene, and dimethylbutadiene), a process now recognized as a Diels–Alder self-reaction, providing a detailed analysis of the dimerization products and recognizing the importance of the conjugated system in the process. Five years earlier, Carl Harries studied the degradation of natural rubber, leading him to propose a cyclic structure for the polymer.
Hermann Staudinger's work with ketenes, published in 1912, covered both [2+2] cycloadditions, where one molecule of a ketene reacted with an unsaturated compound to form a four-membered ring, and, importantly, [4+2] cycloadditions. In the latter case, two molecules of ketene combined with one molecule of an unsaturated compound (such as a quinone) to yield a six-membered ring. While not a classic Diels–Alder reaction in the typical sense of a conjugated diene and a separate dienophile, Staudinger's observation of this [4+2] process, forming a six-membered ring, foreshadowed the later work of Diels and Alder. However, his focus remained primarily on the more common [2+2] ketene cycloaddition. Hans von Euler-Chelpin and K. O. Josephson, investigating isoprene and butadiene reactions in 1920, both observed products consistent with Diels–Alder cycloadditions, but did not research them further. Perhaps the most striking near miss came from Walter Albrecht in the early 1900s. Working in Johannes Thiele's laboratory, Albrecht investigated the reaction of cyclopentadiene with para-benzoquinone. His 1902 doctoral dissertation clearly describes the formation of the Diels–Alder adduct, even providing (incorrect) structural assignments. However, influenced by Thiele's focus on conjugation and partial valence, Albrecht in his 1906 publication interpreted the reaction as a 1,4-addition followed by a 1,2-addition, completely overlooking the cycloaddition aspect. While these observations hinted at the possibility of a broader class of cycloaddition reactions, they remained isolated incidents, their significance not fully appreciated at the time, with none of the researchers even trying to generalize their findings. It fell to Diels and Alder to synthesize these disparate threads into a coherent whole. Unlike the earlier researchers, they recognized the generality and predictability of the diene and dienophile combining to form a cyclic structure. Through their systematic investigations, exploring various combinations of dienes and dienophiles, they firmly established the "diene synthesis" as a powerful new synthetic method. Their meticulous work not only demonstrated the reaction's scope and versatility but also laid the groundwork for future theoretical developments, including the Woodward–Hoffmann rules, which would provide a deeper understanding of pericyclic reactions, including the Diels–Alder.

Applications in total synthesis
The Diels–Alder reaction was one step in an early preparation of the steroids cortisone and cholesterol. The reaction involved the addition of butadiene to a quinone. Diels–Alder reactions were used in the original synthesis of prostaglandins F2α and E2. The Diels–Alder reaction establishes the relative stereochemistry of three contiguous stereocenters on the prostaglandin cyclopentane core. Activation by Lewis acidic cupric tetrafluoroborate was required. A Diels–Alder reaction was used in the synthesis of disodium prephenate, a biosynthetic precursor of the amino acids phenylalanine and tyrosine. A synthesis of reserpine uses a Diels–Alder reaction to set the cis-decalin framework of the D and E rings. In another synthesis of reserpine, the cis-fused D and E rings were formed by a Diels–Alder reaction. Intramolecular Diels–Alder reaction of the pyranone below, with subsequent extrusion of carbon dioxide via a retro [4+2], afforded the bicyclic lactam.
Epoxidation from the less hindered α-face, followed by epoxide opening at the less hindered C18, afforded the desired stereochemistry at these positions, while the cis-fusion was achieved with hydrogenation, again proceeding primarily from the less hindered face. A pyranone was similarly used as the dienophile in the total synthesis of taxol. The intermolecular reaction of the hydroxy-pyrone and α,β–unsaturated ester shown below suffered from poor yield and regioselectivity; however, when directed by phenylboronic acid, the desired adduct could be obtained in 61% yield after cleavage of the boronate with neopentyl glycol. The stereospecificity of the Diels–Alder reaction in this instance allowed for the definition of four stereocenters that were carried on to the final product. A Diels–Alder reaction is a key step in the synthesis of (-)-furaquinocin C. Tabersonine was prepared by a Diels–Alder reaction to establish the cis relative stereochemistry of the alkaloid core. Conversion of the cis-aldehyde to its corresponding alkene by Wittig olefination and subsequent ring-closing metathesis with a Schrock catalyst gave the second ring of the alkaloid core. The diene in this instance is notable as an example of a 1-amino-3-siloxybutadiene, otherwise known as a Rawal diene. (+)-Sterpurene can be prepared by an asymmetric route that featured a remarkable intramolecular Diels–Alder reaction of an allene. The [2,3]-sigmatropic rearrangement of the thiophenyl group to give the sulfoxide, as shown below, proceeded enantiospecifically due to the predefined stereochemistry of the propargylic alcohol. In this way, the single allene isomer formed could direct the Diels–Alder reaction to occur on only one face of the generated 'diene'. The tetracyclic core of the antibiotic (-)-tetracycline was prepared with a Diels–Alder reaction. Thermally initiated, conrotatory opening of the benzocyclobutene generated the o-quinodimethane, which reacted intermolecularly to give the tetracycline skeleton. The dienophile's free hydroxyl group is integral to the success of the reaction, as hydroxyl-protected variants did not react under several different reaction conditions. Takemura et al. synthesized cantharidin in 1980 by a Diels–Alder reaction utilizing high pressure. Synthetic applications of the Diels–Alder reaction have been reviewed extensively.
Physical sciences
Organic reactions
Chemistry
173272
https://en.wikipedia.org/wiki/Multi-exposure%20HDR%20capture
Multi-exposure HDR capture
In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images (or extended dynamic range images) by taking and combining multiple exposures of the same subject matter captured at different exposure levels. Combining multiple images in this way results in an image with a greater dynamic range than would be possible by taking one single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is used frequently to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures. A single image captured by a camera provides a finite range of luminosity inherent to the medium, whether it is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the dynamic range. In photography, dynamic range is measured in exposure value (EV) differences, also known as stops. The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space; it makes it look only slightly dimmer. For most illumination levels, the response is approximately logarithmic. Human eyes adapt fairly rapidly to changes in light levels. HDR can thus produce images that look more like what a human sees when looking at the subject. This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect. HDR is useful for recording many real-world scenes containing a wider range of brightness than can be captured directly, typically both bright, direct sunlight and deep shadows. Due to the limitations of printing and display contrast, the extended dynamic range of HDR images must be compressed to the range that can be displayed. The method of rendering a high dynamic range image to a standard monitor or printing device is called tone mapping; it reduces the overall contrast of an HDR image to permit display on devices or prints with lower dynamic range.

Benefits
One aim of HDR is to present a similar range of luminance to that experienced through the human visual system. The human eye, through non-linear response, adaptation of the iris, and other methods, adjusts constantly to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions. Most cameras are limited to a much narrower range of exposure values within a single image, due to the dynamic range of the capturing medium. With a limited dynamic range, tonal differences can be captured only within a certain range of brightness. Outside of this range, no details can be distinguished: when the tone being captured exceeds the range in bright areas, these tones appear as pure white, and when the tone being captured does not meet the minimum threshold, these tones appear as pure black. Images captured with non-HDR cameras that have a limited exposure range (low dynamic range, LDR) may lose detail in highlights or shadows.
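For reference, the stop scale mentioned above has a standard quantitative definition (standard photographic practice, not specific to this article): for aperture f-number N and exposure time t in seconds,

$$\mathrm{EV} = \log_2\!\frac{N^2}{t}$$

so a difference of n EV between two exposures corresponds to a factor of 2^n in the amount of light captured.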
Modern CMOS image sensors have improved dynamic range and can often capture a wider range of tones in a single exposure, reducing the need to perform multi-exposure HDR. Color film negatives and slides consist of multiple film layers that respond to light differently. Original film (especially negatives, versus transparencies or slides) features a very high dynamic range (on the order of 8 for negatives and 4 to 4.5 for positive transparencies). Multi-exposure HDR is used in photography and also in extreme dynamic range applications such as welding or automotive work. In security cameras the term "wide dynamic range" is used instead of HDR.

Limitations
A fast-moving subject, or camera movement between the multiple exposures, will generate a "ghost" effect or a staggered-blur strobe effect due to the merged images not being identical. Unless the subject is static and the camera mounted on a tripod, there may be a tradeoff between extended dynamic range and sharpness. Sudden changes in the lighting conditions (strobed LED light) can also interfere with the desired results, by producing one or more HDR layers that do not have the luminosity expected by an automated HDR system, though one might still be able to produce a reasonable HDR image manually in software by rearranging the image layers to merge in order of their actual luminosity. Because of the nonlinearity of some sensors, image artifacts can be common. Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration, and color calibration affect the resulting high-dynamic-range images.

Process
High-dynamic-range photographs are generally composites of multiple standard dynamic range images, often captured using exposure bracketing. Afterwards, photo manipulation software merges the input files into a single HDR image, which is then also tone mapped in accordance with the limitations of the planned output or display.

Capturing multiple images (exposure bracketing)
Any camera that allows manual exposure control can perform multi-exposure HDR image capture, although one equipped with automatic exposure bracketing (AEB) facilitates the process. Some cameras have an AEB feature that spans a far greater dynamic range than others, from ±0.6 EV in simpler cameras to ±18 EV in top professional cameras. The exposure value (EV) refers to the amount of light applied to the light-sensitive detector, whether film or a digital sensor such as a CCD. An increase or decrease of one stop is defined as a doubling or halving of the amount of light captured. Revealing detail in the darkest of shadows requires an increased EV, while preserving detail in very bright situations requires very low EVs. EV is controlled using one of two photographic controls: varying either the size of the aperture or the exposure time. A set of images with multiple EVs intended for HDR processing should be captured only by altering the exposure time; altering the aperture size would also affect the depth of field, and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image. Multi-exposure HDR photography generally is limited to still scenes because any movement between successive images will impede or prevent success in combining them afterward. Also, because the photographer must capture three or more images to obtain the desired luminance range, taking such a full set of images takes extra time.
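As a concrete sketch of time-only bracketing, the following Python fragment derives shutter times for a three-frame bracket from a base exposure; the 1/60 s base and the ±2 EV spacing are arbitrary example values chosen for illustration:

# Compute shutter times for an exposure-bracketed set, varying only exposure
# time (aperture stays fixed, so depth of field is unchanged, as noted above).
base_time = 1 / 60          # seconds, correct exposure for the overall scene
offsets_ev = [-2, 0, +2]    # bracket of three frames, 2 stops apart

for ev in offsets_ev:
    t = base_time * (2 ** ev)   # +1 EV doubles the light, so double the time
    print(f"{ev:+d} EV  ->  {t:.5f} s  (~1/{round(1/t)} s)")
# -2 EV -> 1/240 s (underexposed, preserves highlight detail)
#  0 EV -> 1/60 s
# +2 EV -> 1/15 s  (overexposed, reveals shadow detail)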
Photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is advised to minimize framing differences between exposures.

Merging the images into an HDR image
Tonal information and details from shadow areas can be recovered from images that are deliberately overexposed (i.e., with positive EV compared to the correct scene exposure), while similar tonal information from highlight areas can be recovered from images that are deliberately underexposed (negative EV). The process of selecting and extracting shadow and highlight information from these over/underexposed images and then combining them with image(s) that are exposed correctly for the overall scene is known as exposure fusion. Exposure fusion can be performed manually, relying on the HDR operator's judgment, experience, and training, but usually, fusion is performed automatically by software.

Storing
Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed using mathematical functions such as power laws or logarithms, or stored as linear floating-point values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges. Unlike traditional images, HDR images often do not use fixed ranges per color channel, in order to represent many more colors over a much wider dynamic range. For that purpose, they do not use integer values to represent the single color channels (e.g., 0–255 in an 8-bit-per-channel encoding for red, green, and blue) but instead use a floating point representation. Common values are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10 to 12 bits (1,024 to 4,096 values) for luminance and 8 bits (256 values) for chrominance without introducing any visible quantization artifacts.

Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDR files by the same software package. Tone mapping is often needed because the dynamic range that can be displayed is often lower than the dynamic range of the captured or processed image. HDR displays can receive a higher dynamic range signal than SDR displays, reducing the need for tone mapping.
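A minimal sketch of this merge-then-tone-map pipeline, written against OpenCV's HDR module (assuming the opencv-python package is installed; the file names, exposure times, and gamma value are placeholder example values, not from the text):

# Merge three bracketed exposures into one HDR radiance map, then tone map it
# for display on a standard monitor.
import cv2
import numpy as np

files = ["under.jpg", "normal.jpg", "over.jpg"]           # bracketed set
times = np.array([1/240, 1/60, 1/15], dtype=np.float32)   # exposure times (s)

images = [cv2.imread(f) for f in files]

# Align the frames first to reduce ghosting from small camera movements.
cv2.createAlignMTB().process(images, images)

# Recover the camera response curve and merge into a scene-referred,
# floating-point radiance map (Debevec's method).
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone map the radiance map down to a displayable 8-bit range.
ldr = cv2.createTonemapReinhard(2.2).process(hdr)
cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))

# Alternatively, Mertens exposure fusion skips the radiance map entirely
# and blends the exposures directly:
fusion = cv2.createMergeMertens().process(images)
cv2.imwrite("fusion.jpg", np.clip(fusion * 255, 0, 255).astype(np.uint8))

The two branches mirror the distinction drawn above: the Debevec path produces an intermediate scene-referred radiance map that must then be tone mapped, while exposure fusion goes straight to a display-referred result.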
Types of HDR
HDR can be done via several methods:
DOL: Digital overlap
BME: Binned multiplexed exposure
SME: Spatially multiplexed exposure
QBC: Quad Bayer Coding

Examples
This is an example of four standard dynamic range images that are combined to produce three resulting tone mapped images.
This is an example of a scene with a very wide dynamic range.

Devices
Post-capture software
Several software applications are available on the PC, Mac, and Linux platforms for producing HDR files and tone mapped images. Notable titles include:
Adobe Photoshop
Affinity Photo
Aurora HDR
Dynamic Photo HDR
EasyHDR
GIMP
HDR PhotoStudio
Luminance HDR
Nik Collection HDR Efex Pro
Oloneo PhotoEngine
Photomatix Pro
PTGui
SNS-HDR

Photography
Several camera manufacturers offer built-in multi-exposure HDR features. For example, the Pentax K-7 DSLR has an HDR mode that makes 3 or 5 exposures and outputs (only) a tone mapped HDR image in a JPEG file. The Canon PowerShot G12, Canon PowerShot S95, and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach is called 'Active D-Lighting', which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis being on creating a realistic effect. Some smartphones provide HDR modes for their cameras, and most mobile platforms have apps that provide multi-exposure HDR picture taking. Google released an HDR+ mode for the Nexus 5 and Nexus 6 smartphones in 2014, which automatically captures a series of images and combines them into a single still image, as detailed by Marc Levoy. Unlike traditional HDR, Levoy's implementation of HDR+ uses multiple images underexposed by means of a short shutter speed, which are then aligned and averaged pixel by pixel, improving dynamic range and reducing noise. By selecting the sharpest image as the baseline for alignment, the effect of camera shake is reduced. Some of the sensors on modern phones and cameras may combine two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

Videography
Although not as established as for still photography capture, it is also possible to capture and combine multiple images for each frame of a video in order to increase the dynamic range captured by the camera. This can be done via multiple methods:
Creating a time-lapse of individual images created via the multi-exposure HDR technique.
Taking two differently exposed images consecutively, by cutting the frame rate in half.
Taking two differently exposed images simultaneously, by cutting the resolution in half.
Taking two differently exposed images simultaneously with full resolution and frame rate via a sensor with dual gain architecture, for example Arri Alexa's sensor or Samsung sensors with Smart-ISO Pro.
Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. In 2020, Qualcomm announced Snapdragon 888, a mobile SoC able to do computational multi-exposure HDR video capture in 4K and also to record it in a format compatible with HDR displays. In 2021, the Xiaomi Mi 11 Ultra smartphone became able to do computational multi-exposure HDR for video capture.

Surveillance cameras
HDR capture can be implemented on surveillance cameras, even inexpensive models.
This is usually termed a wide dynamic range (WDR) function. Examples include CarCam Tiny, Prestige DVR-390, and DVR-478.

History
Mid-19th century
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

Mid-20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow. Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System. With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods. Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended-range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for a patent on this concept in 1991, and several related patents in 1992 and 1993. In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera that combined two images captured successively by a sensor, or simultaneously by two sensors of the camera. This process is known as bracketing, here applied to a video stream.
In 1991, the first commercial video camera that performed real-time capture of multiple images with different exposures and produced an HDR video image was introduced by Hymatom, licensee of Georges Cornuéjols. Also in 1991, Georges Cornuéjols introduced the HDR+ image principle of non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal-to-noise ratio. In 1993, another commercial medical camera producing an HDR video image was introduced by the Technion. Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard. On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the space shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum. The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Lab. Mann's method involved a two-step procedure: First, generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods). Second, convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image processing operations.

21st century
In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to accomplish high dynamic range similar to the naked eye. In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras. A few companies such as RED and Arri have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture time-sequential HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with the normal channel in post production software. The Arri Alexa camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras.
Similar methods had been described in the academic literature in 2001 and 2007. In 2005, Adobe Systems introduced several new features in Photoshop CS2, including Merge to HDR, 32-bit floating point image support, and HDR tone mapping. On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.
Technology
Photography
null
173283
https://en.wikipedia.org/wiki/Poly%28methyl%20methacrylate%29
Poly(methyl methacrylate)
Poly(methyl methacrylate) (PMMA) is a synthetic polymer derived from methyl methacrylate. It is a transparent thermoplastic, used as an engineering plastic. PMMA is also known as acrylic, acrylic glass, as well as by the trade names and brands Crylux, Hesalite, Plexiglas, Acrylite, Lucite, and Perspex, among several others (see below). This plastic is often used in sheet form as a lightweight or shatter-resistant alternative to glass. It can also be used as a casting resin, in inks and coatings, and for many other purposes. It is often technically classified as a type of glass, in that it is a non-crystalline vitreous substance—hence its occasional historic designation as acrylic glass.

History
The first acrylic acid was created in 1843. Methacrylic acid, derived from acrylic acid, was formulated in 1865. The reaction between methacrylic acid and methanol results in the ester methyl methacrylate. It was developed in 1928 in several different laboratories by many chemists, such as William R. Conn, Otto Röhm, and Walter Bauer, and first brought to market in 1933 by the German Röhm & Haas AG (as of January 2019, part of Evonik Industries) and its partner and former U.S. affiliate Rohm and Haas Company under the trademark Plexiglas. Polymethyl methacrylate was discovered in the early 1930s by British chemists Rowland Hill and John Crawford at Imperial Chemical Industries (ICI) in the United Kingdom. ICI registered the product under the trademark Perspex. About the same time, chemist and industrialist Otto Röhm of Röhm and Haas AG in Germany attempted to produce safety glass by polymerizing methyl methacrylate between two layers of glass. The polymer separated from the glass as a clear plastic sheet, which Röhm gave the trademarked name Plexiglas in 1933. Both Perspex and Plexiglas were commercialized in the late 1930s. In the United States, E.I. du Pont de Nemours & Company (now DuPont Company) subsequently introduced its own product under the trademark Lucite. In 1936, ICI Acrylics (now Lucite International) began the first commercially viable production of acrylic safety glass. During World War II, both Allied and Axis forces used acrylic glass for submarine periscopes and aircraft windscreens, canopies, and gun turrets. Scraps of acrylic were also used to make clear pistol grips for the M1911A1 pistol and clear handle grips for the M1 bayonet and theater knives, so that soldiers could put small photos of loved ones or pin-up girls' pictures inside; these were called "Sweetheart Grips" or "Pin-up Grips". Civilian applications followed after the war.

Names
Common orthographic stylings include polymethyl methacrylate and polymethylmethacrylate. The full IUPAC chemical name is poly(methyl 2-methylpropenoate); a common mistake is to write "propanoate" ("an") instead of "propenoate" ("en"). Although PMMA is often called simply "acrylic", acrylic can also refer to other polymers or copolymers containing polyacrylonitrile. Notable trade names and brands include Acrylite, Altuglas, Astariglas, Cho Chen, Crystallite, Cyrolite, Hesalite (when used in Omega watches), Lucite, Optix, Oroglas, PerClax, Perspex, Plexiglas, R-Cast, and Sumipex. PMMA is an economical alternative to polycarbonate (PC) when tensile strength, flexural strength, transparency, polishability, and UV tolerance are more important than impact strength, chemical resistance, and heat resistance.
Additionally, PMMA does not contain the potentially harmful bisphenol-A subunits found in polycarbonate and is a far better choice for laser cutting. It is often preferred because of its moderate properties, easy handling and processing, and low cost. Non-modified PMMA behaves in a brittle manner when under load, especially under an impact force, and is more prone to scratching than conventional inorganic glass, but modified PMMA is sometimes able to achieve high scratch and impact resistance.

Properties
PMMA is a strong, tough, and lightweight material. It has a density of 1.17–1.20 g/cm³, approximately half that of glass, whose density generally ranges, depending on composition, from 2.2 to 2.53 g/cm³. It also has good impact strength, higher than both glass and polystyrene, but significantly lower than polycarbonate and some engineered polymers. PMMA ignites at about 460 °C and burns, forming carbon dioxide, water, carbon monoxide, and low-molecular-weight compounds, including formaldehyde. PMMA transmits up to 92% of visible light (3 mm thickness), and gives a reflection of about 4% from each of its surfaces due to its refractive index (1.4905 at 589.3 nm). It filters ultraviolet (UV) light at wavelengths below about 300 nm (similar to ordinary window glass). Some manufacturers add coatings or additives to PMMA to improve absorption in the 300–400 nm range. PMMA passes infrared light of up to 2,800 nm and blocks IR of longer wavelengths up to 25,000 nm. Colored PMMA varieties allow specific IR wavelengths to pass while blocking visible light (for remote control or heat sensor applications, for example). PMMA swells and dissolves in many organic solvents; it also has poor resistance to many other chemicals due to its easily hydrolyzed ester groups. Nevertheless, its environmental stability is superior to most other plastics such as polystyrene and polyethylene, and therefore it is often the material of choice for outdoor applications. PMMA has a maximum water absorption ratio of 0.3–0.4% by weight. Tensile strength decreases with increased water absorption. Its coefficient of thermal expansion is relatively high at (5–10)×10⁻⁵ °C⁻¹. The Futuro house was made of fibreglass-reinforced polyester plastic, polyester-polyurethane, and poly(methyl methacrylate); one of them was found to be degraded by cyanobacteria and archaea. PMMA can be joined using cyanoacrylate cement (commonly known as superglue), with heat (welding), or by using chlorinated solvents such as dichloromethane or trichloromethane (chloroform) to dissolve the plastic at the joint, which then fuses and sets, forming an almost invisible weld. Scratches may easily be removed by polishing or by heating the surface of the material. Laser cutting may be used to form intricate designs from PMMA sheets. PMMA vaporizes to gaseous compounds (including its monomers) upon laser cutting, so a very clean cut is made, and cutting is performed very easily. However, pulsed laser cutting introduces high internal stresses, which on exposure to solvents produce undesirable "stress-crazing" at the cut edge and several millimetres deep. Even ammonia-based glass cleaner and almost everything short of soap-and-water produces similar undesirable crazing, sometimes over the entire surface of the cut parts, at great distances from the stressed edge. Annealing the PMMA sheet/parts is therefore an obligatory post-processing step when intending to chemically bond laser-cut parts together. In the majority of applications, PMMA will not shatter. Rather, it breaks into large dull pieces.
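The per-surface reflection of about 4% and the 92% transmission quoted above are mutually consistent, as a normal-incidence Fresnel estimate (a standard optics relation, applied here only as a check) shows:

$$R = \left(\frac{n-1}{n+1}\right)^{2} = \left(\frac{0.4905}{2.4905}\right)^{2} \approx 0.039, \qquad T \approx (1-R)^{2} \approx 0.92$$

with one factor of (1 − R) for each of the sheet's two surfaces and absorption in the bulk neglected.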
Since PMMA is softer and more easily scratched than glass, scratch-resistant coatings are often added to PMMA sheets to protect them (and possibly to provide other functions). Pure poly(methyl methacrylate) homopolymer is rarely sold as an end product, since it is not optimized for most applications. Rather, modified formulations with varying amounts of other comonomers, additives, and fillers are created for uses where specific properties are required. For example: a small amount of acrylate comonomer is routinely used in PMMA grades destined for heat processing, since this stabilizes the polymer against depolymerization ("unzipping") during processing. Comonomers such as butyl acrylate are often added to improve impact strength. Comonomers such as methacrylic acid can be added to increase the glass transition temperature of the polymer for higher-temperature use, such as in lighting applications. Plasticizers may be added to improve processing properties, lower the glass transition temperature, improve impact properties, and modify mechanical properties such as the elastic modulus. Dyes may be added to give color for decorative applications, or to protect against (or filter) UV light. Fillers may be added to reduce cost. Synthesis and processing PMMA is routinely produced by emulsion polymerization, solution polymerization, and bulk polymerization. Generally, radical initiation is used (including living polymerization methods), but anionic polymerization of PMMA can also be performed. The glass transition temperature (Tg) of atactic PMMA is about 105 °C. The Tg values of commercial grades of PMMA range from roughly 85 to 165 °C; the range is so wide because of the vast number of commercial compositions that are copolymers with comonomers other than methyl methacrylate. PMMA is thus an organic glass at room temperature; i.e., it is below its Tg. The forming temperature starts at the glass transition temperature and goes up from there. All common molding processes may be used, including injection molding, compression molding, and extrusion. The highest quality PMMA sheets are produced by cell casting, in which case the polymerization and molding steps occur concurrently. The strength of such material is higher than that of molding grades owing to its extremely high molecular mass. Rubber toughening has been used to increase the toughness of PMMA to overcome its brittle behavior in response to applied loads. Applications Being transparent and durable, PMMA is a versatile material and has been used in a wide range of fields and applications such as rear lights and instrument clusters for vehicles, appliances, and lenses for glasses. PMMA in the form of sheets affords shatter-resistant panels for building windows, skylights, bulletproof security barriers, signs and displays, sanitary ware (bathtubs), LCD screens, furniture, and many other applications. It is also used in coatings; polymers based on MMA provide outstanding stability against environmental conditions with reduced emission of VOCs. Methacrylate polymers are used extensively in medical and dental applications where purity and stability are critical to performance. Glass substitute PMMA is commonly used for constructing residential and commercial aquariums. Designers started building large aquariums once poly(methyl methacrylate) became available. It is less often used in other building types due to incidents such as the Summerland disaster.
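The effect of comonomers and plasticizers on the glass transition temperature described above can be illustrated with a rough estimate. The Fox equation used below is a standard textbook approximation that the article itself does not mention, and the homopolymer Tg values (about 105 °C for atactic PMMA, about −54 °C for poly(butyl acrylate)) are approximate literature figures used here as assumptions.

```python
# Illustrative sketch only: estimate how a comonomer shifts the glass
# transition temperature of a PMMA-based copolymer using the Fox equation,
#     1/Tg = w1/Tg1 + w2/Tg2,
# where w_i are weight fractions and Tg_i are homopolymer values in kelvin.
# The Tg constants below are approximate literature values (assumptions).

def fox_tg(w1: float, tg1_k: float, tg2_k: float) -> float:
    """Estimated copolymer Tg in kelvin for weight fraction w1 of monomer 1."""
    w2 = 1.0 - w1
    return 1.0 / (w1 / tg1_k + w2 / tg2_k)

TG_PMMA_K = 378.0   # ~105 degC, atactic PMMA
TG_PBA_K = 219.0    # ~-54 degC, poly(butyl acrylate)

for w_mma in (1.00, 0.95, 0.90, 0.80):
    tg_c = fox_tg(w_mma, TG_PMMA_K, TG_PBA_K) - 273.15
    print(f"{w_mma:.0%} MMA / {1 - w_mma:.0%} butyl acrylate -> Tg ~ {tg_c:.0f} degC")
```

Even a few percent of a low-Tg comonomer noticeably lowers the estimated glass transition, which is consistent with the wide Tg range quoted for commercial grades.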
PMMA is used for viewing ports and even complete pressure hulls of submersibles, such as the Alicia submarine's viewing sphere and the window of the bathyscaphe Trieste. PMMA is used in the lenses of exterior lights of automobiles. Spectator protection in ice hockey rinks is made from PMMA. Historically, PMMA was an important improvement in the design of aircraft windows, making possible such designs as the bombardier's transparent nose compartment in the Boeing B-17 Flying Fortress. Modern aircraft transparencies often use stretched acrylic plies. Police vehicles for riot control often have the regular glass replaced with PMMA to protect the occupants from thrown objects. PMMA is an important material in the making of certain lighthouse lenses. PMMA was used for the roofing of the compound in the Olympic Park for the 1972 Summer Olympics in Munich. It enabled a light and translucent construction of the structure. PMMA (under the brand name "Lucite") was used for the ceiling of the Houston Astrodome. Daylight redirection Laser-cut acrylic panels have been used to redirect sunlight into a light pipe or tubular skylight and, from there, to spread it into a room. Their developers Veronica Garcia Hansen, Ken Yeang, and Ian Edmonds were awarded a bronze Far Eastern Economic Review Innovation Award for this technology in 2003. Because attenuation is quite strong over distances of more than one meter (more than 90% intensity loss for a 3000 K source), acrylic broadband light guides are mostly limited to decorative uses. Pairs of acrylic sheets with a layer of microreplicated prisms between the sheets can have reflective and refractive properties that let them redirect part of incoming sunlight depending on its angle of incidence. Such panels act as miniature light shelves. They have been commercialized for purposes of daylighting, to be used as a window or a canopy such that sunlight descending from the sky is directed to the ceiling or into the room rather than to the floor. This can lead to higher illumination of the back part of a room, in particular when combined with a white ceiling, while only slightly affecting the view to the outside compared with normal glazing. Medicine PMMA has a good degree of compatibility with human tissue, and it is used in the manufacture of rigid intraocular lenses which are implanted in the eye when the original lens has been removed in the treatment of cataracts. This compatibility was discovered by the English ophthalmologist Harold Ridley in WWII RAF pilots, whose eyes had been riddled with PMMA splinters from the side windows of their Supermarine Spitfire fighters – the plastic scarcely caused any rejection, compared to glass splinters coming from aircraft such as the Hawker Hurricane. Ridley had a lens manufactured by the Rayner company (Brighton & Hove, East Sussex) made from Perspex polymerised by ICI. On 29 November 1949, Ridley implanted the first intraocular lens at St Thomas' Hospital in London. In particular, acrylic-type lenses are useful for cataract surgery in patients who have recurrent ocular inflammation (uveitis), as acrylic material induces less inflammation. Eyeglass lenses are commonly made from PMMA. Historically, hard contact lenses were frequently made of this material. Soft contact lenses are often made of a related polymer, in which acrylate monomers containing one or more hydroxyl groups make them hydrophilic.
In orthopedic surgery, PMMA bone cement is used to affix implants and to remodel lost bone. It is supplied as a powder with liquid methyl methacrylate (MMA). Although PMMA is biologically compatible, MMA is considered to be an irritant and a possible carcinogen. PMMA has also been linked to cardiopulmonary events in the operating room due to hypotension. Bone cement acts like a grout and not so much like a glue in arthroplasty. Although sticky, it does not bond to either the bone or the implant; rather, it primarily fills the spaces between the prosthesis and the bone, preventing motion. A disadvantage of this bone cement is that it heats up considerably while setting, which may cause thermal necrosis of neighboring tissue. A careful balance of initiators and monomers is needed to reduce the rate of polymerization, and thus the heat generated. In cosmetic surgery, tiny PMMA microspheres suspended in a biological fluid are injected as a soft-tissue filler under the skin to reduce wrinkles or scars permanently. PMMA as a soft-tissue filler was widely used at the beginning of the 21st century to restore volume in patients with HIV-related facial wasting. PMMA is used illegally by some bodybuilders to shape muscles. Plombage is an outdated treatment of tuberculosis in which the pleural space around an infected lung was filled with PMMA balls in order to compress and collapse the affected lung. Emerging biotechnology and biomedical research use PMMA to create microfluidic lab-on-a-chip devices, which require 100 micrometre-wide geometries for routing liquids. These small geometries are amenable to PMMA in a biochip fabrication process, and PMMA offers moderate biocompatibility. Bioprocess chromatography columns use cast acrylic tubes as an alternative to glass and stainless steel. These are pressure-rated and satisfy stringent material requirements for biocompatibility, toxicity, and extractables. Dentistry Due to its aforementioned biocompatibility, poly(methyl methacrylate) is a commonly used material in modern dentistry, particularly in the fabrication of dental prosthetics, artificial teeth, and orthodontic appliances. Acrylic prosthetic construction: pre-polymerized, powdered PMMA spheres are mixed with liquid methyl methacrylate monomer, benzoyl peroxide (initiator), and N,N-dimethyl-p-toluidine (accelerator), and placed under heat and pressure to produce a hardened, polymerized PMMA structure. Through injection molding techniques, wax-based designs with artificial teeth set in predetermined positions, built on gypsum stone models of patients' mouths, can be converted into functional prosthetics used to replace missing dentition. The PMMA polymer and methyl methacrylate monomer mix is then injected into a flask containing a gypsum mold of the previously designed prosthesis and heated to initiate the polymerization process. Pressure is used during the curing process to minimize polymerization shrinkage, ensuring an accurate fit of the prosthesis. Though other methods of polymerizing PMMA for prosthetic fabrication exist, such as chemical and microwave resin activation, the heat-activated resin polymerization technique described above is the most commonly used due to its cost-effectiveness and minimal polymerization shrinkage. Artificial teeth: while denture teeth can be made of several different materials, PMMA is a material of choice for the manufacturing of artificial teeth used in dental prosthetics.
Mechanical properties of the material allow for heightened control of aesthetics, easy surface adjustments, decreased risk of fracture when in function in the oral cavity, and minimal wear against opposing teeth. Additionally, since the bases of dental prosthetics are often constructed using PMMA, adherence of PMMA denture teeth to PMMA denture bases is unparalleled, leading to the construction of a strong and durable prosthetic. Art and aesthetics Acrylic paint essentially consists of PMMA suspended in water; however, since PMMA is hydrophobic, a substance with both hydrophobic and hydrophilic groups needs to be added to facilitate the suspension. Modern furniture makers, especially in the 1960s and 1970s, seeking to give their products a space age aesthetic, incorporated Lucite and other PMMA products into their designs, especially office chairs. Many other products (for example, guitars) are sometimes made with acrylic glass to make commonly opaque objects translucent. Perspex has been used as a surface to paint on, for example by Salvador Dalí. Diasec is a process which uses acrylic glass as a substitute for normal glass in picture frames. This is done for its relatively low cost, light weight, shatter resistance, and aesthetics, and because it can be ordered in larger sizes than standard picture framing glass. As early as 1939, Los Angeles-based Dutch sculptor Jan De Swart experimented with samples of Lucite sent to him by DuPont; De Swart created tools to work the Lucite for sculpture and mixed chemicals to bring about certain effects of color and refraction. From approximately the 1960s onward, sculptors and glass artists such as Jan Kubíček, Leroy Lamis, and Frederick Hart began using acrylics, especially taking advantage of the material's flexibility, light weight, cost, and its capacity to refract and filter light. In the 1950s and 1960s, Lucite was an extremely popular material for jewelry, with several companies specializing in creating high-quality pieces from this material. Lucite beads and ornaments are still sold by jewelry suppliers. Acrylic sheets are produced in dozens of standard colors, most commonly sold using color numbers developed by Rohm & Haas in the 1950s. Methyl methacrylate "synthetic resin" for casting (simply the bulk liquid chemical) may be used in conjunction with a polymerization catalyst such as methyl ethyl ketone peroxide (MEKP) to produce hardened transparent PMMA in any shape, from a mold. Objects like insects or coins, or even dangerous chemicals in breakable quartz ampules, may be embedded in such "cast" blocks for display and safe handling. Other uses PMMA, in the commercial form Technovit 7200, is widely used in the medical field, for example in plastic histology and electron microscopy. PMMA has been used to create ultra-white opaque membranes that are flexible and switch in appearance to transparent when wet. Acrylic is used in tanning beds as the transparent surface that separates the occupant from the tanning bulbs while tanning. The type of acrylic used in tanning beds is most often formulated from a special type of polymethyl methacrylate that allows the passage of ultraviolet rays. Sheets of PMMA are commonly used in the sign industry to make flat cut-out letters in a range of thicknesses. These letters may be used alone to represent a company's name and/or logo, or they may be a component of illuminated channel letters.
Acrylic is also used extensively throughout the sign industry as a component of wall signs, where it may be a backplate painted on the surface or the backside, a faceplate with additional raised lettering or even photographic images printed directly to it, or a spacer to separate sign components. PMMA was used in Laserdisc optical media. (CDs and DVDs use both acrylic and polycarbonate for impact resistance). It is used as a light guide for the backlights in TFT-LCDs. Plastic optical fiber used for short-distance communication is made from PMMA, and perfluorinated PMMA, clad with fluorinated PMMA, in situations where its flexibility and cheaper installation costs outweigh its poor heat tolerance and higher attenuation versus glass fiber. PMMA, in a purified form, is used as the matrix in laser dye-doped organic solid-state gain media for tunable solid-state dye lasers. In semiconductor research and industry, PMMA serves as a resist in the electron beam lithography process. A solution consisting of the polymer in a solvent is used to spin coat silicon and other semiconducting and semi-insulating wafers with a thin film. Patterns on this film can be made by an electron beam (using an electron microscope), deep UV light (shorter wavelength than the standard photolithography process), or X-rays. Exposure to these creates chain scission (or de-cross-linking) within the PMMA, allowing for the selective removal of exposed areas by a chemical developer, making it a positive photoresist. PMMA's advantage is that it allows extremely high resolution patterns to be made. A smooth PMMA surface can be easily nanostructured by treatment in an oxygen radio-frequency plasma, and a nanostructured PMMA surface can be smoothed by vacuum ultraviolet (VUV) irradiation. PMMA is used as a shield to stop beta radiation emitted from radioisotopes. Small strips of PMMA are used as dosimeter devices during gamma irradiation: the optical properties of PMMA change as the gamma dose increases, and the change can be measured with a spectrophotometer. Blacklight-reactive UV tattoos may use tattoo ink made with PMMA microcapsules and fluorescent dyes. In the 1960s, luthier Dan Armstrong developed a line of electric guitars and basses whose bodies were made completely of acrylic. These instruments were marketed under the Ampeg brand. Ibanez and B.C. Rich have also made acrylic guitars. Ludwig-Musser makes a line of acrylic drums called Vistalites, well known as being used by Led Zeppelin drummer John Bonham. Artificial nails of the "acrylic" type often include PMMA powder. Some modern briar, and occasionally meerschaum, tobacco pipes sport stems made of Lucite. PMMA technology is utilized in roofing and waterproofing applications: by incorporating a polyester fleece sandwiched between two layers of catalyst-activated PMMA resin, a fully reinforced liquid membrane is created in situ. PMMA is widely used to create deal toys and financial tombstones. PMMA is used by the Sailor Pen Company of Kure, Japan, in their standard models of gold-nib fountain pens, specifically as the cap and body material.
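Referring back to the dosimetry use mentioned above, the sketch below shows one way such PMMA strips might be read out in practice: a calibration curve relates the measured change in absorbance to absorbed dose. The calibration points and the linear model are illustrative assumptions, not data from the article or from any real dosimetry system.

```python
# Hypothetical sketch: convert a spectrophotometer reading from an irradiated
# PMMA strip into an estimated gamma dose via a linear calibration curve.
# The calibration points below are made up for illustration only.
import numpy as np

doses_kgy = np.array([0.0, 5.0, 10.0, 20.0, 40.0])     # known doses (kGy)
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # measured absorbance

# Fit absorbance = slope * dose + intercept
slope, intercept = np.polyfit(doses_kgy, absorbance, 1)

def dose_from_absorbance(reading: float) -> float:
    """Invert the calibration line to estimate dose in kGy from a reading."""
    return (reading - intercept) / slope

print(f"estimated dose for absorbance 0.30: {dose_from_absorbance(0.30):.1f} kGy")
```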
Physical sciences
Polymers
Chemistry
173285
https://en.wikipedia.org/wiki/Equilateral%20triangle
Equilateral triangle
An equilateral triangle is a triangle in which all three sides have the same length, and all three angles are equal. Because of these properties, the equilateral triangle is a regular polygon, occasionally known as the regular triangle. By the modern definition it is a special case of an isosceles triangle, which gives it additional special properties. The equilateral triangle can be found in various tilings, and in polyhedrons such as the deltahedron and antiprism. It appears in real life in popular culture, in architecture, and in the study of stereochemistry, where it resembles the molecular shape known as the trigonal planar molecular geometry. Properties An equilateral triangle is a triangle that has three equal sides. It is a special case of an isosceles triangle under the modern definition, which states that an isosceles triangle has at least two equal sides. Under this definition, any one of the three sides of an equilateral triangle may be considered its base, and the general isosceles results take a sharper form. For example, since the perimeter of an isosceles triangle is the sum of its two legs and its base, the perimeter of an equilateral triangle with side length a is simply 3a. The internal angles of an equilateral triangle are all equal, 60°. Because of these properties, the equilateral triangle is a regular polygon. The cevians of an equilateral triangle are all equal in length: from each vertex, the median, the angle bisector, and the altitude to the opposite side coincide, whichever side is chosen as the base. When the equilateral triangle is flipped across an altitude or rotated about its center by one-third of a full turn, its appearance is unchanged; it has the symmetry of a dihedral group of order six. Other properties are discussed below. Area The area of an equilateral triangle with edge length a is T = (√3/4)a². The formula may be derived from the formula for an isosceles triangle by the Pythagorean theorem: the altitude of a triangle is the square root of the difference of the squares of a side and half of the base. Since the base and the legs are equal, the height is h = √(a² − (a/2)²) = (√3/2)a. In general, the area of a triangle is half the product of its base and height, so substituting the altitude formula gives the area of an equilateral triangle: T = (1/2) · a · (√3/2)a = (√3/4)a². Another way to obtain the area of an equilateral triangle is by using trigonometry: the area of a triangle is half the product of two sides and the sine of the included angle. Because all of the angles of an equilateral triangle are 60°, this gives T = (1/2)a² sin 60° = (√3/4)a², as desired. A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral. That is, for perimeter p and area T, p² ≥ 12√3 T, with equality holding exactly for the equilateral triangle. Relationship with circles The radius of the circumscribed circle is R = a/√3, and the radius of the inscribed circle is half of the circumradius, r = a/(2√3) = R/2. The theorem of Euler states that the distance d between the circumcenter and the incenter satisfies d² = R(R − 2r). As a corollary of this, the equilateral triangle has the smallest ratio of the circumradius to the inradius of any triangle: R/r ≥ 2 for every triangle, with equality only in the equilateral case. Pompeiu's theorem states that, if P is an arbitrary point in the plane of an equilateral triangle ABC but not on its circumcircle, then there exists a triangle with sides of lengths PA, PB, and PC. That is, PA, PB, and PC satisfy the triangle inequality that the sum of any two of them is greater than the third.
If P is on the circumcircle, then the sum of the two smaller of these lengths equals the largest, and the triangle has degenerated into a line; this case is known as Van Schooten's theorem. A packing problem asks for the smallest equilateral triangle into which a given number of circles can be packed. Optimal solutions are known for small numbers of circles, and conjectured solutions exist for somewhat larger numbers. Other mathematical properties Morley's trisector theorem states that, in any triangle, the three points of intersection of the adjacent angle trisectors form an equilateral triangle. Viviani's theorem states that, for any interior point P of an equilateral triangle, the distances d, e, and f from P to the three sides sum to the altitude h of the triangle, d + e + f = h, independent of the location of P. The equilateral triangle is the only triangle with integer sides and three rational angles as measured in degrees, the only acute triangle that is similar to its orthic triangle (with vertices at the feet of the altitudes), and the only triangle whose Steiner inellipse is a circle (specifically, the incircle). The triangle of the largest area of all those inscribed in a given circle is equilateral, and the triangle of the smallest area of all those circumscribed around a given circle is also equilateral. It is the only regular polygon aside from the square that can be inscribed inside any other regular polygon. Given a point P in the interior of an equilateral triangle, the ratio of the sum of its distances from the vertices to the sum of its distances from the sides is greater than or equal to 2, equality holding when P is the centroid. In no other triangle is there a point for which this ratio is as small as 2. This is the Erdős–Mordell inequality; a stronger variant of it is Barrow's inequality, which replaces the perpendicular distances to the sides with the distances from P to the points where the angle bisectors of the angles at A, B, and C cross the sides (A, B, and C being the vertices). There are numerous other triangle inequalities that hold with equality if and only if the triangle is equilateral. Construction The equilateral triangle can be constructed in different ways using circles; its construction is the first proposition of Book I of Euclid's Elements. Start by drawing a circle with a certain radius, place the point of the compass on the circle, and draw another circle with the same radius; the two circles will intersect in two points. An equilateral triangle can be constructed by taking the two centers of the circles and either one of the points of intersection. The constructibility of the equilateral triangle is also related to Fermat primes. A Fermat prime is a prime number of the form 2^(2^k) + 1, where k denotes a non-negative integer, and there are five known Fermat primes: 3, 5, 17, 257, 65537. A regular polygon is constructible by compass and straightedge if and only if the odd prime factors of its number of sides are distinct Fermat primes; since 3 is itself a Fermat prime, the equilateral triangle is constructible. To construct it geometrically from a given segment, draw a straight line and place the point of the compass on one end of the line, then swing an arc from that point to the other end of the line segment; repeat with the compass on the other end; finally, connect the point where the two arcs intersect with each end of the line segment. If three equilateral triangles are constructed on the sides of an arbitrary triangle, either all outward or inward, by Napoleon's theorem the centers of those equilateral triangles themselves form an equilateral triangle.
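The formulas stated in the Area and Relationship with circles passages above, together with Euclid's compass construction, can be checked numerically. The sketch below is a simple sanity check and not part of the article; the side length 2 is an arbitrary choice.

```python
# Numeric sanity check of several equilateral-triangle formulas stated above.
import math

a = 2.0                                    # side length (arbitrary)
T = math.sqrt(3) / 4 * a ** 2              # area
h = math.sqrt(3) / 2 * a                   # altitude
R = a / math.sqrt(3)                       # circumradius
r = a / (2 * math.sqrt(3))                 # inradius

assert math.isclose(T, 0.5 * a * h)                                 # half base times height
assert math.isclose(T, 0.5 * a ** 2 * math.sin(math.radians(60)))   # trigonometric form
assert math.isclose(R, 2 * r)                                       # R/r = 2
assert math.isclose((3 * a) ** 2, 12 * math.sqrt(3) * T)            # isoperimetric equality

# Euclid's construction (Elements I.1): intersect two circles of radius a
# centred on the endpoints of segment AB to locate the apex C.
A, B = (0.0, 0.0), (a, 0.0)
C = (a / 2, math.sqrt(a ** 2 - (a / 2) ** 2))        # upper intersection point

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
assert math.isclose(dist(A, B), dist(B, C)) and math.isclose(dist(B, C), dist(C, A))
print("all checks passed for side length", a)
```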
Appearances In other related figures Notably, the equilateral triangle tiles the Euclidean plane with six triangles meeting at a vertex; the dual of this tessellation is the hexagonal tiling. The truncated hexagonal tiling, rhombitrihexagonal tiling, trihexagonal tiling, snub square tiling, and snub hexagonal tiling are all semi-regular tessellations constructed with equilateral triangles. Other two-dimensional objects built from equilateral triangles include the Sierpiński triangle (a fractal shape constructed from an equilateral triangle by subdividing recursively into smaller equilateral triangles) and the Reuleaux triangle (a curved triangle with constant width, constructed from an equilateral triangle by rounding each of its sides). Equilateral triangles may also form a polyhedron in three dimensions. A polyhedron whose faces are all equilateral triangles is called a deltahedron. There are eight strictly convex deltahedra: three of the five Platonic solids (regular tetrahedron, regular octahedron, and regular icosahedron) and five of the 92 Johnson solids (triangular bipyramid, pentagonal bipyramid, snub disphenoid, triaugmented triangular prism, and gyroelongated square bipyramid). More generally, all Johnson solids have equilateral triangles among their faces, though most also have other regular polygons. The antiprisms are a family of polyhedra incorporating a band of alternating triangles. When the antiprism is uniform, its bases are regular and all triangular faces are equilateral. As a generalization, the equilateral triangle belongs to the infinite family of n-simplexes as the case n = 2. Applications Equilateral triangles have frequently appeared in man-made constructions and in popular culture. In architecture, an example can be seen in the cross-section of the Gateway Arch and the surface of the Vegreville egg. It appears in the flag of Nicaragua and the flag of the Philippines. It is the shape of a variety of road signs, including the yield sign. The equilateral triangle occurs in the study of stereochemistry. It can be described as the molecular geometry in which one atom in the center connects to three other atoms in a plane, known as the trigonal planar molecular geometry. In the Thomson problem, concerning the minimum-energy configuration of charged particles on a sphere, and for the Tammes problem of constructing a spherical code maximizing the smallest distance among the points, the best solution known for three points places them at the vertices of an equilateral triangle inscribed in the sphere. This configuration is proven optimal for the Tammes problem, but a rigorous solution to this instance of the Thomson problem is unknown.
Mathematics
Two-dimensional space
null
173316
https://en.wikipedia.org/wiki/Strychnine
Strychnine
Strychnine (, , US chiefly ) is a highly toxic, colorless, bitter, crystalline alkaloid used as a pesticide, particularly for killing small vertebrates such as birds and rodents. Strychnine, when inhaled, swallowed, or absorbed through the eyes or mouth, causes poisoning which results in muscular convulsions and eventually death through asphyxia. While it is no longer used medicinally, it was used historically in small doses to strengthen muscle contractions, such as a heart and bowel stimulant and performance-enhancing drug. The most common source is from the seeds of the Strychnos nux-vomica tree. Biosynthesis Strychnine is a terpene indole alkaloid belonging to the Strychnos family of Corynanthe alkaloids, and it is derived from tryptamine and secologanin. The biosynthesis of strychnine was solved in 2022. The enzyme, strictosidine synthase, catalyzes the condensation of tryptamine and secologanin, followed by a Pictet-Spengler reaction to form strictosidine. Many steps have been inferred by isolation of intermediates from Strychnos nux-vomica. The next step is hydrolysis of the acetal, which opens the ring by elimination of glucose (O-Glu) and provides a reactive aldehyde. The nascent aldehyde is then attacked by a secondary amine to afford geissoschizine, a common intermediate of many related compounds in the Strychnos family. A reverse Pictet-Spengler reaction cleaves the C2–C3 bond, while subsequently forming the C3–C7 bond via a 1,2-alkyl migration, an oxidation from a Cytochrome P450 enzyme to a spiro-oxindole, nucleophilic attack from the enol at C16, and elimination of oxygen forms the C2–C16 bond to provide dehydropreakuammicine. Hydrolysis of the methyl ester and decarboxylation leads to norfluorocurarine. Stereospecific reduction of the endocyclic double bond by NADPH and hydroxylation provides the Wieland-Gumlich aldehyde, which was first isolated by Heimberger and Scott in 1973, although previously synthesized by Wieland and Gumlich in 1932. To elongate the appendage by two carbons, acetyl-CoA is added to the aldehyde in an aldol reaction to afford prestrychnine. Strychnine is then formed by a facile addition of the amine with the carboxylic acid or its activated CoA thioester, followed by ring-closure via displacement of an activated alcohol. Chemical synthesis As early researchers noted, the strychnine molecular structure, with its specific array of rings, stereocenters, and nitrogen functional groups, is a complex synthetic target, and has stimulated interest for that reason and for interest in the structure–activity relationships underlying its pharmacologic activities. An early synthetic chemist targeting strychnine, Robert Burns Woodward, quoted the chemist who determined its structure through chemical decomposition and related physical studies as saying that "for its molecular size it is the most complex organic substance known" (attributed to Sir Robert Robinson). The first total synthesis of strychnine was reported by the research group of R. B. Woodward in 1954, and is considered a classic in this field. The Woodward account published in 1954 was very brief (3 pages), but was followed by a 42-page report in 1963. The molecule has since received continuing wide attention in the years since for the challenges to synthetic organic strategy and tactics presented by its complexity; its synthesis has been targeted and its stereocontrolled preparation independently achieved by more than a dozen research groups since the first success. 
Mechanism of action Strychnine is a neurotoxin which acts as an antagonist of glycine and acetylcholine receptors. It primarily affects the motor nerve fibers in the spinal cord which control muscle contraction. An impulse is triggered at one end of a nerve cell by the binding of neurotransmitters to the receptors. In the presence of an inhibitory neurotransmitter, such as glycine, a greater quantity of excitatory neurotransmitters must bind to receptors before an action potential is generated. Glycine acts primarily as an agonist of the glycine receptor, which is a ligand-gated chloride channel in neurons located in the spinal cord and in the brain. This chloride channel allows negatively charged chloride ions into the neuron, causing a hyperpolarization which pushes the membrane potential further from threshold. Strychnine is an antagonist of glycine; it binds noncovalently to the same receptor, preventing the inhibitory effects of glycine on the postsynaptic neuron. Therefore, action potentials are triggered with lower levels of excitatory neurotransmitters. When the inhibitory signals are prevented, the motor neurons are more easily activated and the victim has spastic muscle contractions, resulting in death by asphyxiation. Strychnine binds the Aplysia californica acetylcholine binding protein (a homolog of nicotinic receptors) with high affinity but low specificity, and does so in multiple conformations. Toxicity In high doses, strychnine is very toxic to humans (the minimum lethal oral dose in adults is 30–120 mg) and many other animals (oral LD50 = 16 mg/kg in rats, 2 mg/kg in mice), and poisoning by inhalation, swallowing, or absorption through the eyes or mouth can be fatal. S. nux-vomica seeds are generally effective as a poison only when they are crushed or chewed before swallowing, because the pericarp is quite hard and indigestible; poisoning symptoms may therefore not appear if the seeds are ingested whole. Animal toxicity Strychnine poisoning in animals usually occurs from ingestion of baits designed for use against gophers, rats, squirrels, moles, chipmunks and coyotes. Strychnine is also used as a rodenticide, but is not specific to such unwanted pests and may kill other small animals. In the United States, most baits containing strychnine have been replaced with zinc phosphide baits since 1990. In the European Union, rodenticides with strychnine have been forbidden since 2006. Some animals, such as fruit bats, are immune to strychnine; usually these have evolved resistance to the poisonous Strychnos alkaloids in the fruit they eat. The drugstore beetle has a symbiotic gut yeast that allows it to digest pure strychnine. Strychnine toxicity in rats is dependent on sex: it is more toxic to females than to males when administered via subcutaneous or intraperitoneal injection. The differences are due to higher rates of metabolism by male rat liver microsomes. Dogs and cats are among the more susceptible domestic animals, pigs are believed to be as susceptible as dogs, and horses are able to tolerate relatively large amounts of strychnine. Birds affected by strychnine poisoning exhibit wing droop, salivation, tremors, muscle tenseness, and convulsions. Death occurs as a result of respiratory arrest. The clinical signs of strychnine poisoning relate to its effects on the central nervous system. The first clinical signs of poisoning include nervousness, restlessness, twitching of the muscles, and stiffness of the neck.
As the poisoning progresses, the muscular twitching becomes more pronounced and convulsions suddenly appear in all the skeletal muscles. The limbs are extended and the neck is curved to opisthotonus. The pupils are widely dilated. As death approaches, the convulsions follow one another with increased rapidity, severity, and duration. Death results from asphyxia due to prolonged paralysis of the respiratory muscles. Following the ingestion of strychnine, symptoms of poisoning usually appear within 15 to 60 minutes. Human toxicity After injection, inhalation, or ingestion, the first symptoms to appear are generalized muscle spasms. They appear very quickly after inhalation or injection – within as few as five minutes – and take somewhat longer to manifest after ingestion, typically approximately 15 minutes. With a very high dose, the onset of respiratory failure and brain death can occur in 15 to 30 minutes. If a lower dose is ingested, other symptoms begin to develop, including seizures, cramping, stiffness, hypervigilance, and agitation. Seizures caused by strychnine poisoning can start as early as 15 minutes after exposure and last 12–24 hours. They are often triggered by sights, sounds, or touch and can cause other adverse symptoms, including hyperthermia, rhabdomyolysis, myoglobinuric kidney failure, metabolic acidosis, and respiratory acidosis. During seizures, mydriasis (abnormal dilation), exophthalmos (protrusion of the eyes), and nystagmus (involuntary eye movements) may occur. As strychnine poisoning progresses, tachycardia (rapid heart beat), hypertension (high blood pressure), tachypnea (rapid breathing), cyanosis (blue discoloration), diaphoresis (sweating), water-electrolyte imbalance, leukocytosis (high number of white blood cells), trismus (lockjaw), risus sardonicus (spasm of the facial muscles), and opisthotonus (dramatic spasm of the back muscles, causing arching of the back and neck) can occur. In rare cases, the affected person may experience nausea or vomiting. The proximate cause of death in strychnine poisoning can be cardiac arrest, respiratory failure, multiple organ failure, or brain damage. For occupational exposures to strychnine, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set exposure limits at 0.15 mg/m3 over an 8-hour work day. Because strychnine produces some of the most dramatic and painful symptoms of any known toxic reaction, strychnine poisoning is often portrayed in literature and film including authors Agatha Christie and Arthur Conan Doyle. Treatment There is no antidote for strychnine poisoning. Strychnine poisoning demands aggressive management with early control of muscle spasms, intubation for loss of airway control, toxin removal (decontamination), intravenous hydration and potentially active cooling efforts in the context of hyperthermia as well as hemodialysis in kidney failure (strychnine has not been shown to be removed by hemodialysis). Treatment involves oral administration of activated charcoal, which adsorbs strychnine within the digestive tract; unabsorbed strychnine is removed from the stomach by gastric lavage, along with tannic acid or potassium permanganate solutions to oxidize strychnine. Activated charcoal Activated charcoal is a substance that can bind to certain toxins in the digestive tract and prevent their absorption into the bloodstream. The effectiveness of this treatment, as well as how long it is effective after ingestion, are subject to debate. 
According to one source, activated charcoal is only effective within one hour of poison being ingested, although the source does not regard strychnine specifically. Other sources specific to strychnine state that activated charcoal may be used after one hour of ingestion, depending on dose and type of strychnine-containing product. Therefore, other treatment options are generally favoured over activated charcoal. The use of activated charcoal is considered dangerous in patients with tenuous airways or altered mental states. Other treatments Most other treatment options focus on controlling the convulsions that arise from strychnine poisoning. These treatments involve keeping the patient in a quiet and darkened room, anticonvulsants such as phenobarbital or diazepam, muscle relaxants such as dantrolene, barbiturates and propofol, and chloroform or heavy doses of chloral, bromide, urethane or amyl nitrite. If a poisoned person is able to survive for 6 to 12 hours subsequent to initial dose, they have a good prognosis. The sine qua non of strychnine toxicity is the "awake" seizure, in which tonic-clonic activity occurs but the patient is alert and oriented throughout and afterwards. Accordingly, George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. Pharmacokinetics Absorption Strychnine may be introduced into the body orally, by inhalation, or by injection. It is a potently bitter substance, and in humans has been shown to activate bitter taste receptors TAS2R10 and TAS2R46. Strychnine is rapidly absorbed from the gastrointestinal tract. Distribution Strychnine is transported by plasma and red blood cells. Due to slight protein binding, strychnine leaves the bloodstream quickly and distributes to bodily tissues. Approximately 50% of the ingested dose can enter the tissues in 5 minutes. Also within a few minutes of ingestion, strychnine can be detected in the urine. Little difference was noted between oral and intramuscular administration of strychnine in a 4 mg dose. In persons killed by strychnine, the highest concentrations are found in the blood, liver, kidney and stomach wall. The usual fatal dose is 60–100 mg strychnine and is fatal after a period of 1–2 hours, though lethal doses vary depending on the individual. Metabolism Strychnine is rapidly metabolized by the liver microsomal enzyme system requiring NADPH and O2. Strychnine competes with the inhibitory neurotransmitter glycine resulting in an excitatory state. However, the toxicokinetics after overdose have not been well described. In most severe cases of strychnine poisoning, the patient dies before reaching the hospital. The biological half-life of strychnine is about 10 hours. Excretion A few minutes after ingestion, strychnine is excreted unchanged in the urine, and accounts for about 5 to 15% of a sublethal dose given over 6 hours. Approximately 10 to 20% of the dose will be excreted unchanged in the urine in the first 24 hours. The percentage excreted decreases with the increasing dose. Of the amount excreted by the kidneys, about 70% is excreted in the first 6 hours, and almost 90% in the first 24 hours. Excretion is virtually complete in 48 to 72 hours. History Strychnine was the first alkaloid to be identified in plants of the genus Strychnos, family Loganiaceae. Strychnos, named by Carl Linnaeus in 1753, is a genus of trees and climbing shrubs of the Gentianales order. 
The genus contains 196 species and is distributed throughout the warm regions of Asia (58 species), America (64 species), and Africa (75 species). The seeds and bark of many plants in this genus contain strychnine. The toxic and medicinal effects of Strychnos nux-vomica have been well known since the times of ancient India, although the chemical compound itself was not identified and characterized until the 19th century. The inhabitants of these countries had historical knowledge of the species Strychnos nux-vomica and Saint Ignatius' bean (Strychnos ignatii). Strychnos nux-vomica is a tree native to the tropical forests on the Malabar Coast in Southern India, Sri Lanka and Indonesia, which can grow to a considerable height. The tree has a crooked, short, thick trunk, and the wood is close grained and very durable. The fruit has an orange color and is about the size of a large apple with a hard rind, and contains five seeds, which are covered with a soft wool-like substance. The ripe seeds look like flattened disks, which are very hard. These seeds are the chief commercial source of strychnine and were first imported to and marketed in Europe as a poison to kill rodents and small predators. Strychnos ignatii is a woody climbing shrub of the Philippines. The fruit of the plant, known as Saint Ignatius' bean, contains as many as 25 seeds embedded in the pulp. The seeds contain more strychnine than other commercial alkaloids. The properties of S. nux-vomica and S. ignatii are substantially those of the alkaloid strychnine. Strychnine was first discovered by the French chemists Joseph Bienaimé Caventou and Pierre-Joseph Pelletier in 1818 in the Saint Ignatius' bean. In some Strychnos plants a 9,10-dimethoxy derivative of strychnine, the alkaloid brucine, is also present. Brucine is not as poisonous as strychnine. Historic records indicate that preparations containing strychnine (presumably) had been used to kill dogs, cats, and birds in Europe as far back as 1640. It was allegedly used by the convicted murderer William Palmer to kill his final victim, John Cook. It was also used during World War II by the Dirlewanger Brigade against the civilian population. The structure of strychnine was first determined in 1946 by Sir Robert Robinson, and in 1954 this alkaloid was synthesized in a laboratory by Robert B. Woodward. This is one of the most famous syntheses in the history of organic chemistry. Both chemists won the Nobel Prize (Robinson in 1947 and Woodward in 1965). Strychnine has been used as a plot device in Agatha Christie's murder mysteries.
It's the palaeolithic in a bottle."
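Referring back to the pharmacokinetics figures quoted earlier (a biological half-life of roughly 10 hours, with excretion essentially complete within 48 to 72 hours), the sketch below applies a one-compartment, first-order elimination model. This is a textbook simplification used only for illustration; as noted above, the toxicokinetics of strychnine after overdose are not well described.

```python
# Illustrative sketch: first-order elimination using the ~10 h half-life
# quoted above. This is a simplified one-compartment model, not a clinical
# or validated toxicokinetic model.
import math

HALF_LIFE_H = 10.0
k = math.log(2) / HALF_LIFE_H              # elimination rate constant (1/h)

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the absorbed amount still in the body after t hours."""
    return math.exp(-k * t_hours)

for t in (6, 24, 48, 72):
    print(f"after {t:>2} h: ~{fraction_remaining(t):.0%} remaining")
```

Under this simple model only a few percent remains after 48 hours and well under 1% after 72 hours, broadly in line with the excretion timescale described above.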
Physical sciences
Alkaloids
Chemistry
173351
https://en.wikipedia.org/wiki/Laboratory
Laboratory
A laboratory (; ; colloquially lab) is a facility that provides controlled conditions in which scientific or technological research, experiments, and measurement may be performed. Laboratories are found in a variety of settings such as schools, universities, privately owned research institutions, corporate research and testing facilities, government regulatory and forensic investigation centers, physicians' offices, clinics, hospitals, regional and national referral centers, and even occasionally personal residences. Overview The organisation and contents of laboratories are determined by the differing requirements of the specialists working within. A physics laboratory might contain a particle accelerator or vacuum chamber, while a metallurgy laboratory could have apparatus for casting or refining metals or for testing their strength. A chemist or biologist might use a wet laboratory, while a psychologist's laboratory might be a room with one-way mirrors and hidden cameras in which to observe behavior. In some laboratories, such as those commonly used by computer scientists, computers (sometimes supercomputers) are used for either simulations or the analysis of data. Scientists in other fields will still use other types of laboratories. Engineers use laboratories as well to design, build, and test technological devices. Scientific laboratories can be found as research room and learning spaces in schools and universities, industry, government, or military facilities, and even aboard ships and spacecraft. Despite the underlying notion of the lab as a confined space for experts, the term "laboratory" is also increasingly applied to workshop spaces such as Living Labs, Fab Labs, or Hackerspaces, in which people meet to work on societal problems or make prototypes, working collaboratively or sharing resources. This development is inspired by new, participatory approaches to science and innovation and relies on user-centred design methods and concepts like Open innovation or User innovation,. One distinctive feature of work in Open Labs is the phenomenon of translation, driven by the different backgrounds and levels of expertise of the people involved. History Early instances of "laboratories" recorded in English involved alchemy and the preparation of medicines. The emergence of Big Science during World War II increased the size of laboratories and scientific equipment, introducing particle accelerators and similar devices. The early laboratories The earliest laboratory according to the present evidence is a home laboratory of Pythagoras of Samos, the well-known Greek philosopher and scientist. This laboratory was created when Pythagoras conducted an experiment about tones of sound and vibration of string. In the painting of Louis Pasteur by Albert Edelfelt in 1885, Louis Pasteur is shown comparing a note in his left hand with a bottle filled with a solid in his right hand, and not wearing any personal protective equipment. Researching in teams started in the 19th century, and many new kinds of equipment were developed in the 20th century. A 16th century underground alchemical laboratory was accidentally discovered in the year 2002. Rudolf II, Holy Roman Emperor was believed to be the owner. The laboratory is called Speculum Alchemiae and is preserved as a museum in Prague. 
Techniques Laboratory techniques are the sets of procedures used in the natural sciences, such as chemistry, biology, and physics, to conduct an experiment; some of them involve the use of complex laboratory equipment, from laboratory glassware to electrical devices, while others require more specific or expensive supplies. Equipment and supplies Laboratory equipment refers to the various tools and instruments used by scientists working in a laboratory. Laboratory equipment is generally used either to perform an experiment or to take measurements and gather data. Larger or more sophisticated equipment is generally called a scientific instrument. The classical equipment includes tools such as Bunsen burners and microscopes as well as specialty equipment such as operant conditioning chambers, spectrophotometers, and calorimeters. Chemical laboratories use laboratory glassware such as beakers and reagent bottles, and analytical devices such as HPLC systems or spectrophotometers. Molecular biology and life science laboratories use equipment such as autoclaves, microscopes, centrifuges, shakers and mixers, pipettes, thermal cyclers (PCR), photometers, refrigerators and freezers (including ULT freezers), universal testing machines, incubators, bioreactors, biological safety cabinets, sequencing instruments, fume hoods, environmental chambers, humidifiers, and weighing scales, along with supplies such as reagents, pipette tips, and polymer consumables for small volumes (μL and mL scale), mainly sterile. Specialized types The title of laboratory is also used for certain other facilities where the processes or equipment used are similar to those in scientific laboratories. These notably include film laboratories or darkrooms, clandestine labs for the production of illegal drugs, computer labs, crime labs used to process crime scene evidence, language laboratories, medical laboratories (which involve handling of chemical compounds), public health laboratories, industrial laboratories, and cleanrooms. Safety In many laboratories, hazards are present. Laboratory hazards might include poisons; infectious agents; flammable, explosive, or radioactive materials; moving machinery; extreme temperatures; lasers; strong magnetic fields; or high voltage. Therefore, safety precautions are vitally important. Rules exist to minimize the individual's risk, and safety equipment is used to protect the lab users from injury or to assist in responding to an emergency. The Occupational Safety and Health Administration (OSHA) in the United States, recognizing the unique characteristics of the laboratory workplace, has tailored a standard for occupational exposure to hazardous chemicals in laboratories. This standard is often referred to as the "Laboratory Standard". Under this standard, a laboratory is required to produce a Chemical Hygiene Plan (CHP) which addresses the specific hazards found in its location and its approach to them. In determining the proper Chemical Hygiene Plan for a particular business or laboratory, it is necessary to understand the requirements of the standard, to evaluate the current safety, health, and environmental practices, and to assess the hazards. The CHP must be reviewed annually. Many schools and businesses employ safety, health, and environmental specialists, such as a Chemical Hygiene Officer (CHO), to develop, manage, and evaluate their CHP. Additionally, third-party review is also used to provide an objective "outside view" which provides a fresh look at areas and problems that may be taken for granted or overlooked due to habit.
Inspections and audits should also be conducted on a regular basis to assess hazards due to chemical handling and storage, electrical equipment, biohazards, hazardous waste management, chemical waste, housekeeping and emergency preparedness, radiation safety, and ventilation, as well as respiratory testing and indoor air quality. An important element of such audits is the review of regulatory compliance and the training of individuals who have access to or work in the laboratory. Training is critical to the ongoing safe operation of the laboratory facility. Educators, staff and management must be engaged in working to reduce the likelihood of accidents, injuries and potential litigation. Efforts are made to ensure laboratory safety videos are both relevant and engaging. Sustainability The effects of climate change are becoming more of a concern for organizations, and mitigation strategies are being sought by the research community. While many laboratories are used to perform research to find innovative solutions to this global challenge, sustainable working practices in the labs themselves also contribute towards a greener environment. Many labs are already trying to minimize their environmental impact by reducing energy consumption, recycling, and implementing waste sorting processes to ensure correct disposal. Best practice Research labs featuring energy-intensive equipment use up to three to five times more energy per square meter than office areas. Fume hoods Fume hoods are presumably the major contributor to this high energy consumption. Significant impact can be achieved by keeping the opening height as low as possible when working and keeping the hoods closed when not in use. One way to help with this is to install automatic systems that close the hoods after a certain period of inactivity and turn off the lights as well, so that the airflow can be regulated better and is not unnecessarily kept at a very high level. Freezers Normally, ULT freezers are kept at −80 °C. One such device can consume up to the same amount of energy as a single-family household (25 kWh/day). Increasing the temperature to −70 °C makes it possible to use 40% less energy and still keep most samples safely stored. Air condensers Minimizing the consumption of water can be achieved by changing from water-cooled condensers (Dimroth condenser) to air-cooled condensers (Vigreux column), which take advantage of a large surface area to cool. Laboratory electronics The use of ovens is very helpful for drying glassware, but those installations can consume a lot of energy. Employing timers to regulate their use during nights and weekends can reduce their energy consumption enormously. Waste sorting and disposal The disposal of chemically or biologically contaminated waste requires a lot of energy. Regular waste, however, requires much less energy and can even be recycled to some degree. Not every object in a lab is contaminated, but such objects often end up in the contaminated waste, driving up energy costs for waste disposal. A good sorting and recycling system for non-contaminated lab waste allows lab users to act sustainably and dispose of waste correctly. Networks As of 2021, numerous laboratories are dedicating time and resources to moving towards more sustainable lab practices at their facilities, e.g. MIT and the University of Edinburgh.
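As a back-of-the-envelope illustration of the freezer figures quoted under Best practice above (roughly 25 kWh/day at −80 °C and about 40% savings at −70 °C), the sketch below estimates the yearly effect for a single freezer. The electricity price is an assumed value for illustration only.

```python
# Rough arithmetic based on the figures quoted above; the electricity price
# is an assumption, not a value from the article.
KWH_PER_DAY_AT_MINUS_80 = 25.0      # typical ULT freezer consumption (quoted)
SAVINGS_AT_MINUS_70 = 0.40          # ~40% reduction when run at -70 degC (quoted)
PRICE_PER_KWH = 0.30                # assumed electricity price per kWh

saved_kwh_per_year = KWH_PER_DAY_AT_MINUS_80 * SAVINGS_AT_MINUS_70 * 365
print(f"energy saved per freezer: ~{saved_kwh_per_year:,.0f} kWh/year")
print(f"cost saved per freezer:   ~{saved_kwh_per_year * PRICE_PER_KWH:,.0f} per year")
```

On these assumptions a single freezer saves on the order of 3,650 kWh per year, which helps explain why freezer set points are a common target in greener-lab programmes.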
Furthermore, several networks have emerged such as Green Your Lab, Towards greener research, the UK-based network LEAN, the Max-Planck-Sustainability network, and national platforms such as green labs austria and green labs NL. More university-independent efforts and resources include the Laboratory Efficiency Assessment Framework, the think-tank labos1point5 and the non-profit organisation my green lab. Organization Organization of laboratories is an area of focus in sociology. Scientists consider how their work should be organized, which could be based on themes, teams, projects or fields of expertise. Work is divided, not only between the different jobs of the laboratory such as the researchers, engineers and technicians, but also in terms of autonomy (should the work be individual or in groups). For example, one research group has a schedule where they conduct research on their own topic of interest for one day of the week, but for the rest they work on a given group project. Finance management is yet another organizational issue. The laboratory itself is a historically dated organizational model. It came about due to the observation that the quality of work of researchers who collaborate is overall greater than that of a researcher working in isolation. From the 1950s, the laboratory has evolved from being an educational tool used by teachers to attract the top students into research, into an organizational model allowing a high level of scientific productivity. Some forms of organization in laboratories include: Their size: varies from a handful of researchers to several hundred. The division of labor: "Occurs between designers and operatives; researchers, engineers, and technicians; theoreticians and experimenters; senior researchers, junior researchers and students; those who publish, those who sign the publications and the others; and between specialities." The coordination mechanisms: these include the formalization of objectives and tasks; the standardization of procedures (protocols, project management, quality management, knowledge management), the validation of publications and cross-cutting activities (number and type of seminars). There are three main factors that contribute to the organizational form of a laboratory: the educational background of the researchers and their socialization process; the intellectual process involved in their work, including the type of investigation and equipment they use; and the laboratory's history. Other forms of organization include social organization. Social organization A study by Richard H.R. Harper, involving two laboratories, will help elucidate the concept of social organization in laboratories. The main subject of the study revolved around the relationship between the staff of a laboratory (researchers, administrators, receptionists, technicians, etc.) and their Locator. A Locator is an employee of a laboratory who is in charge of knowing where each member of the laboratory currently is, based on a unique signal emitted from the badge of each staff member. The study describes social relationships among different classes of jobs, such as the relationship between researchers and the Locator. It does not describe the social relationship between employees within a class, such as the relationship between researchers. Through ethnographic studies, one finding is that, among the personnel, each class (researchers, administrators...) has a different degree of entitlement, which varies per laboratory.
Entitlement can be either formal or informal (the latter meaning it is not enforced), but each class is aware of and conforms to its existence. The degree of entitlement, also referred to as a staff member's rights, affects social interaction between staff. By looking at the various interactions among staff members, one can determine their social position in the organization. As an example, administrators in one lab of the study do not have the right to ask the Locator where the researchers currently are, as they are not entitled to such information; researchers, on the other hand, do have access to this type of information. A consequence of this social hierarchy is that the Locator discloses information to varying degrees, depending on the staff member and their rights. The Locator does not want to disclose information that could jeopardize their relationship with the members of staff, and so adheres to the rights of each class. Social hierarchy is also related to attitudes towards technologies. This was inferred from the attitude of various jobs towards their lab badge. Their attitude depended on how each job viewed the badge from the standpoints of utility (how the badge is useful for the job), morality (one's views on privacy as it relates to being tracked by the badge) and relations (how one will be seen by others for refusing to wear the badge). For example, a receptionist would view the badge as useful, as it would help them locate members of staff during the day. Illustrating relations, researchers would also wear their badge due to informal pressures, such as not wanting to look like a spoil-sport or not wanting to draw attention to themselves. Another finding is the resistance to change in a social organization. Staff members feel ill at ease when patterns of entitlement, obligation, respect, and informal and formal hierarchy change. In summary, differences in attitude among members of the laboratory are explained by social organization: a person's attitudes are intimately related to the role they have in an organization. This hierarchy helps in understanding information distribution, control, and attitudes towards technologies in the laboratory.
Physical sciences
Basics
null
173354
https://en.wikipedia.org/wiki/Automation
Automation
Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, and embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships, typically use combinations of all of these techniques. The benefits of automation include labor savings, reduced waste, savings in electricity and material costs, and improvements to quality, accuracy, and precision. Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, heat-treating ovens, switching in telephone networks, and the steering and stabilization of ships, aircraft and other vehicles, all with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. In terms of control complexity, it can range from simple on-off control to multi-variable high-level algorithms. In the simplest type of automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department. It was during this time that industry was rapidly adopting feedback controllers, which had been introduced in the 1930s. The World Bank's World Development Report of 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist, protectionist and populist politics in the US, UK and France, among other countries, since the 2010s. History Early history It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. In Ptolemaic Egypt, about 270 BC, Ctesibius described a float regulator for a water clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism. The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete. The Persian Banū Mūsā brothers, in their Book of Ingenious Devices (850 AD), described a number of automatic controls. Two-step level controls for fluids, a form of discontinuous variable-structure control, were developed by the Banu Musa brothers. They also described a feedback controller. The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition.
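The closed-loop idea described above can be illustrated with a minimal sketch. The toy "process" below is a room whose temperature leaks toward the outdoors while an on-off controller switches a heater against the measured error; the model, names and numbers are all invented for illustration and are not taken from any particular control system:

# Minimal on-off (bang-bang) closed-loop control sketch.
# The process model and all numbers are illustrative only.

def simulate(setpoint=20.0, hours=8.0, dt=0.1):
    temperature = 15.0                        # measured process variable (deg C)
    t = 0.0
    while t < hours:
        error = setpoint - temperature        # compare measurement to set value
        heater_on = error > 0                 # negative feedback: act against the error
        heating = 5.0 if heater_on else 0.0   # deg C per hour added by the heater
        leakage = 0.3 * (temperature - 10.0)  # loss toward a 10 deg C exterior
        temperature += (heating - leakage) * dt
        t += dt
    return temperature

print(f"temperature after control: {simulate():.1f} deg C")

The controller never models the disturbance explicitly; it simply keeps acting against the measured error, which is the essence of the negative feedback described above.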
It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory. The centrifugal governor was invented by Christiaan Huygens in the seventeenth century and used to adjust the gap between millstones. Industrial Revolution in Western Europe The introduction of prime movers (self-driven machines), which advanced grain mills, furnaces, boilers, and the steam engine, created a new requirement for automatic control systems, including temperature regulators (invented in 1624; see Cornelius Drebbel), pressure regulators (1681), float regulators (1700) and speed control devices. Another control mechanism was used to tent the sails of windmills; it was patented by Edmund Lee in 1745. Also in 1745, Jacques de Vaucanson invented the first automated loom. Around 1800, Joseph Marie Jacquard created a punch-card system to program looms. In 1771 Richard Arkwright invented the first fully automated spinning mill driven by water power, known at the time as the water frame. An automatic flour mill was developed by Oliver Evans in 1785, making it the first completely automated industrial process. A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a model steam crane. The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour mill Boulton & Watt were building. The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations, such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning. Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory. The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory. 20th century Relay logic was introduced with factory electrification, which underwent rapid adoption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth, and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes. The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative-feedback noise cancellation. This and other telephony applications contributed to control theory.
In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War in fire-control systems and aircraft navigation systems. Controllers, which were able to make calculated changes in response to deviations from a set point rather than simple on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification. Factory productivity had been greatly increased by electrification in the 1920s; U.S. manufacturing productivity growth fell from 5.2% per year in 1919–29 to 2.76% per year in 1929–41. Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter. The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency-domain analysis (1940), ship control (1950), and stochastic analysis (1941). Starting in 1958, various systems based on solid-state digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including the early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard Norbit, BBC Sigmatronic, ACEC Logacec, Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, and Procontic systems. In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control. Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell. Significant applications The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor. The logic performed by telephone switching relays was the inspiration for the digital computer. The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross, compared to $1.80 per gross by the manual glassblowers and helpers. Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919.
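The "precise differential" between sections can be made concrete with a small calculation. In rolling, the volume flow of metal is conserved, so thickness times speed stays constant and each thinner section must run proportionally faster; the sketch below uses invented thickness numbers, not any real mill schedule:

# Sectional-drive speed scheduling sketch for a toy rolling line.
# Mass flow (thickness x speed, per unit width) is conserved, so each
# stand must run faster in proportion to how much it thins the strip.
# All thicknesses and speeds are invented for illustration.

thicknesses_mm = [10.0, 7.0, 5.0, 3.5, 2.5]  # entering the line, then after each stand
speed = 1.0                                  # m/s entering the first stand

for i in range(1, len(thicknesses_mm)):
    speed *= thicknesses_mm[i - 1] / thicknesses_mm[i]  # elongation ratio
    print(f"stand {i}: {speed:.2f} m/s")

A sectional drive control system holds each stand at its computed ratio of a master speed reference; any sustained mismatch would tear the strip or pile it up between stands.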
One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928. Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production. Self-acting machine tools that displaced hand dexterity, so they could be operated by boys and unskilled laborers, were developed by James Nasmyth in the 1840s. Machine tools were automated with numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC). Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling, and the manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots, and automatic welders are used in applications like pipelines. Space/computer age With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and back to the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic control, which became widely used in hysteresis control systems such as navigation systems, fire-control systems, and electronics. Through Flügge-Lotz and others, the modern era saw time-domain design for nonlinear systems (1961), navigation (1960), optimal control and estimation theory (1962), nonlinear control theory (1969), digital control and filtering theory (1974), and the personal computer (1983). Advantages, disadvantages, and limitations Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit is that it replaces hard, physical, or monotonous work. Additionally, tasks that take place in hazardous environments or that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic. They can also be maintained with simple quality checks. However, at present, not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself. Moreover, some studies seem to indicate that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are both convoluted and controversial in nature, and could potentially be circumvented.
The main advantages of automation are: increased throughput or productivity; improved quality; increased predictability; improved robustness (consistency) of processes or product; increased consistency of output; reduced direct human labor costs and expenses; reduced cycle time; increased accuracy; relieving humans of monotonously repetitive work; the work required in the development, deployment, maintenance, and operation of automated processes, often structured as jobs; and increased human freedom to do other things. Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, which extends human capabilities in terms of size, strength, speed, endurance, visual range and acuity, hearing frequency and precision, electromagnetic sensing and effecting, and so on, its advantages include relieving humans of dangerous work stresses and occupational injuries (e.g., fewer strained backs from lifting heavy objects) and removing humans from dangerous environments (e.g., fire, space, volcanoes, nuclear facilities, underwater). The main disadvantages of automation are: high initial cost; faster production without human intervention, which can mean faster unchecked production of defects where automated processes are defective; scaled-up capacities, which can mean scaled-up problems when systems fail, releasing dangerous toxins, forces, energies, etc., at scaled-up rates; human adaptiveness that is often poorly understood by automation initiators; the difficulty of anticipating every contingency and developing fully preplanned automated responses for every situation; the discoveries inherent in automating processes, which can require unanticipated iterations to resolve, causing unanticipated costs and delays; and the serious disruption of people anticipating employment income when others deploy automation where no similar income is readily available. Paradox of automation The paradox of automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical. Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation." If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in. A fatal example of this was Air France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for. Limitations Current technology is unable to automate all the desired tasks. Many operations using automation have large amounts of invested capital and produce high volumes of products, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel are needed to ensure that the entire system functions properly and that safety and product quality are maintained. As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function. As more and more processes become automated, there are fewer remaining non-automated processes. This is an example of the exhaustion of opportunities. New technological paradigms may, however, set new limits that surpass the previous limits. Current limitations Many roles for humans in industrial processes presently lie beyond the scope of automation.
Human-level pattern recognition, language comprehension, and language production are well beyond the capabilities of modern mechanical and computer systems (but see Watson computer). Tasks requiring subjective assessment or the synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where the automation of industrial tasks is possible. Therefore, algorithmic management, the digital rationalization of human labor rather than its substitution, has emerged as an alternative technological strategy. Overcoming these obstacles is a theorized path to post-scarcity economics. Societal impact and unemployment Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in the Industrial Revolution, when inventions like the steam engine were making some job categories expendable, workers forcefully resisted these changes. Luddites, for instance, were English textile workers who protested the introduction of weaving machines by destroying them. More recently, some residents of Chandler, Arizona, have slashed tires and thrown rocks at self-driving cars, in protest over the cars' perceived threat to human safety and job prospects. The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength of organized labor in that region or nation. For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net. According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated. Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate." Others, however, argue that highly skilled professional jobs, such as lawyer, doctor, engineer, and journalist, are also at risk of automation. According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%." A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and that 47% of jobs in the US were at risk.
The study, released as a working paper in 2013 and published in 2017, predicted that automation would put low-paid physical occupations most at risk; its predictions were based on surveying a group of colleagues for their opinions. However, according to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. The methodology of the McKinsey study has been heavily criticized for not being transparent and for relying on subjective assessments. The methodology of Frey and Osborne has likewise been criticized as lacking evidence, historical awareness, or credible methodology. Additionally, the Organisation for Economic Co-operation and Development (OECD) found that, across the 21 OECD countries studied, 9% of jobs are automatable. Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases. In the long run and for society as a whole, automation has led to cheaper products, lower average work hours, and new industries forming (i.e., robotics industries, computer industries, design industries). These new industries provide many high-salary, skill-based jobs to the economy. By 2030, between 3 and 14 percent of the global workforce is projected to be forced to switch job categories due to automation eliminating jobs in entire sectors. While the number of jobs lost to automation is often offset by jobs gained from technological advances, the jobs lost are not of the same type as those created, and this leads to increasing unemployment in the lower-middle class. This occurs largely in the US and other developed countries, where technological advances contribute to higher demand for highly skilled labor while demand for middle-wage labor continues to fall. Economists call this trend "income polarization", in which wages for unskilled labor are driven down and wages for skilled labor are driven up, and it is predicted to continue in developed economies. Lights-out manufacturing Lights-out manufacturing is a production system with no human workers, intended to eliminate labor costs. It grew in popularity in the U.S. when General Motors in 1982 implemented "hands-off" manufacturing in order to "replace risk-averse bureaucracy with automation and robots". However, the factory never reached full "lights-out" status. The expansion of lights-out manufacturing requires reliability of equipment, long-term mechanical capability, planned preventive maintenance, and commitment from the staff. Health and environment The costs of automation to the environment differ depending on the technology, product or engine automated. Some automated engines consume more energy resources from the Earth than the engines they replace, and vice versa. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation. The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions.
Removal of these safety features reduces the weight of the vehicle, which, coupled with more precise acceleration and braking as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions. Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits of self-driving cars if they are used more frequently. Automation of homes and home appliances is also thought to impact the environment. A study of the energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g., automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes. A more recent study has indicated that, while monitoring and adjusting consumption levels does decrease unnecessary energy use, the monitoring systems themselves consume energy, which sometimes offsets the savings and results in little to no net ecological benefit. Convertibility and turnaround time Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation. Digital electronics helped too. Former analog-based instrumentation was replaced by digital equivalents, which can be more accurate and flexible and offer greater scope for more sophisticated configuration, parametrization, and operation. This was accompanied by the fieldbus revolution, which provided a networked (i.e., single-cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring. Discrete manufacturing plants adopted these technologies quickly. The more conservative process industries, with their longer plant life cycles, have been slower to adopt them, and analog-based measurement and control still dominate there. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems. Automation tools Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include computer-aided design (CAD software) and computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry.
Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized, hardened computers frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events. Human-machine interfaces (HMI) or computer-human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control processes through HMIs can be called by different names: in industrial process and manufacturing environments, they are called operators or something similar, while in boiler houses and central utility departments they are called stationary engineers. Different types of automation tools exist, including artificial neural networks (ANN), distributed control systems (DCS), human-machine interfaces (HMI), robotic process automation (RPA), supervisory control and data acquisition (SCADA), programmable logic controllers (PLC), instrumentation, motion control, and robotics. Host simulation software (HSS) is a commonly used tool for testing equipment software. HSS is used to test equipment performance against factory automation standards (timeouts, response time, processing time). Cognitive automation Cognitive automation, as a subset of AI, is an emerging genus of automation enabled by cognitive computing. Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data. Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning. According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale." Such tasks include document redaction; data extraction and document synthesis/reporting; contract management; natural language search; customer, employee, and stakeholder onboarding; manual activities and verifications; and follow-up and email communications. Recent and emerging applications CAD AI Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate 3D modeling. AI CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflow. Automated power production Technologies like solar panels, wind turbines, and other renewable energy sources, together with smart grids, micro-grids, and battery storage, can automate power production. Agricultural production Many agricultural operations are automated with machinery and equipment to improve diagnosis, decision-making and/or performance. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety. Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere. The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics. Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking.
With digital automation technologies, it also becomes possible to automate the diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies. Motorized mechanization has generally increased in recent years. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades. Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely concentrated in Northern Europe and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers are likewise scarce. Retail Many supermarkets and even smaller stores are rapidly introducing self-checkout systems, reducing the need to employ checkout workers. In the U.S., the retail industry employed 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation, according to research by Eurasia Group. Online shopping could be considered a form of automated retail, as the payment and checkout are handled through an automated online transaction processing system, with the share of online retail jumping from 5.1% in 2011 to 8.3% in 2016. However, two-thirds of books, music, and films are now purchased online. In addition, automation and online shopping could reduce demand for shopping malls and retail property, which in the United States is currently estimated to account for 31% of all commercial property. Amazon has gained much of the growth in online shopping in recent years, accounting for half of the growth in online retail in 2016. Other forms of automation can also be an integral part of online shopping, for example, the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems. Food and drink The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch-screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees. The University of Texas at Austin has introduced fully automated cafe retail locations. Some cafes and restaurants have utilized mobile and tablet "apps" to make the ordering process more efficient, with customers ordering and paying on their own devices. Some restaurants have automated food delivery to customers' tables using a conveyor belt system. Robots are sometimes employed to replace waiting staff. Construction Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including, but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance. Mining Automated mining involves the removal of human labor from the mining process. The mining industry is currently in transition towards automation.
Currently, mining can still require a large amount of human capital, particularly in the third world, where labor costs are low and there is therefore less incentive to increase efficiency through automation. Video surveillance The Defense Advanced Research Projects Agency (DARPA) sponsored research and development on the visual surveillance and monitoring (VSAM) program between 1997 and 1999 and on airborne video surveillance (AVS) programs from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real time within a busy environment. Existing automated surveillance systems differ in the environment they are primarily designed to observe (indoor, outdoor or airborne), the number of sensors the automated system can handle, and the mobility of those sensors (stationary versus mobile cameras). The purpose of a surveillance system is to record the properties and trajectories of objects in a given area and to generate warnings or notify the designated authorities when particular events occur. Highway systems As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the U.S. Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Congress legislated in ISTEA that: "[T]he Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles." Full automation is commonly defined as requiring no control or very limited control by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems. Waste management Automated waste collection trucks reduce the number of workers needed and ease the labor required to provide the service. Business process Business process automation (BPA) is the technology-enabled automation of complex business processes. It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery or contain costs.
BPA consists of integrating applications, restructuring labor resources and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA that uses AI. BPA can be implemented in a number of business areas including marketing, sales and workflow. Home Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things that were impracticable, overly expensive or simply not possible in past decades. The rising usage of home automation solutions reflects people's increasing dependence on them, and the added comfort these solutions provide is considerable. Laboratory Automation is essential for many scientific and clinical applications, and it has therefore been extensively employed in laboratories. Fully automated laboratories have been in operation since as early as 1980. However, automation has not become widespread in laboratories due to its high cost. This may change with the ability to integrate low-cost devices with standard laboratory equipment. Autosamplers are common devices used in laboratory automation. Logistics automation Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems. Industrial automation Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers. Industrial automation replaces human action and manual command-response activities with mechanized equipment and logical programming commands. One trend is the increased use of machine vision to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots. Industrial automation has become a basic requirement across industries. Industrial Automation and Industry 4.0 The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", better known now as Industry 4.0. Originating in Germany, Industry 4.0 encompasses numerous devices, concepts, and machines, as well as the advancement of the industrial internet of things (IIoT). An Internet of Things has been described as "a seamless integration of diverse physical objects in the Internet through a virtual representation." These revolutionary advances have drawn attention to automation in an entirely new light and shown ways for it to grow, increasing productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and software/hardware to connect devices in ways that (through communication technologies) add enhancements and improve manufacturing processes. Smarter, safer, and more advanced manufacturing is now possible with these new technologies, opening up a manufacturing platform that is more reliable, consistent, and efficient than before. The implementation of systems such as SCADA is an example of software used in industrial automation today.
SCADA is supervisory data collection software, just one of the many kinds used in industrial automation. Industry 4.0 covers many areas of manufacturing and will continue to do so as time goes on. Industrial robotics Industrial robotics is a sub-branch of industrial automation that aids various manufacturing processes, including machining, welding, painting, assembly and material handling, to name a few. Industrial robots use various mechanical, electrical and software systems to achieve high precision, accuracy and speed that far exceed human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods. Servos, digital logic and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997 there were 700,000 industrial robots in use; by 2017 the number had risen to 1.8 million. In recent years, AI has also been used with robotics to create automatic labeling solutions, using robotic arms as automatic label applicators and AI to learn and detect the products to be labelled. Programmable Logic Controllers Industrial automation incorporates programmable logic controllers (PLCs) in the manufacturing process. PLCs use a processing system that allows the control of inputs and outputs to be varied using simple programming. PLCs make use of programmable memory, storing instructions and functions such as logic, sequencing, timing and counting. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, with the input devices being sensors and the output devices being motors, valves, etc. PLCs are similar to computers; however, while computers are optimized for calculations, PLCs are optimized for control tasks and use in industrial environments. They are built so that only basic knowledge of logic-based programming is needed, and they are designed to handle vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility. With the same basic controllers, a PLC can operate a range of different control systems, making it unnecessary to rewire a system in order to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems. PLCs can range from small "building brick" devices with tens of I/O points in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O points, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. It was from the automotive industry in the United States that the PLC was born. Before the PLC, the control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers.
Since these could number in the hundreds or even thousands, the process of updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics. When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes: it would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance can be traded off for reliability. Agent-assisted automation Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. Its key benefit is compliance and error-proofing: agents are sometimes not fully trained, or they forget or ignore key steps in the process, and the use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions. Control Open-loop and closed-loop Discrete control (on/off) One of the simplest types of control is on-off control. An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the on-off common household appliance thermostat.) In sequence control, a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control. PID controller A proportional–integral–derivative controller (PID controller) is a control-loop feedback mechanism (controller) widely used in industrial control systems. In a PID loop, the controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms (sometimes denoted P, I, and D, respectively), which give the controller its name. The theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analog control systems, originally in mechanical controllers, then using discrete electronics, and latterly in industrial process computers. Sequential control and logical sequence or system state control Sequential control may follow either a fixed sequence or a logical one that performs different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler. States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input.
For example, if the operator presses the floor n button, the system will respond differently depending on whether the elevator is stopped or moving, going up or down, or whether the door is open or closed, among other conditions. An early development in sequential control was relay logic, by which electrical relays engage electrical contacts that either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed. The total number of relays and cam timers could number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, in which diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit. In a typical hard-wired motor start-and-stop circuit (called a control circuit), a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact; with manual switches, this can be dangerous for personnel and property. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock-in relay. Interlocks are commonly added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches, and electric eyes are other common elements in control circuits. Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving mechanical arms, opening or closing valves, raising heavy press rolls, or applying pressure to presses. Computer control Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application.
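As a sketch of how one computer can host both kinds of control at once, the fragment below combines seal-in start/stop logic, analogous to the lock-in relay circuit described above, with a PID correction of the kind described earlier. The process model, gains, and all names are invented for illustration and do not represent any real controller:

# One control computer doing sequential control (start/stop seal-in
# logic) and feedback control (a PID loop) in the same scan loop.
# The motor model and tuning values are invented for illustration.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.1)
running = False   # motor state held by the "seal-in" logic
speed = 0.0       # measured motor speed (arbitrary units)

# Button presses over 50 scan cycles: start once, then stop at the end.
buttons = [(True, False)] + [(False, False)] * 48 + [(False, True)]
for start_pressed, stop_pressed in buttons:
    # Sequential part: start sets the state, stop clears it, and the
    # state "seals in" between presses, like the lock-in relay above.
    running = (running or start_pressed) and not stop_pressed
    # Feedback part: the PID drives speed toward the setpoint while running.
    drive = pid.update(setpoint=100.0, measured=speed) if running else 0.0
    speed += (drive - 0.2 * speed) * 0.1     # toy first-order motor model

print(f"running={running}, final speed={speed:.1f}")

In a real installation the sequential part would typically live in PLC ladder logic and the loop control in a dedicated controller or process computer, but, as the passage notes, a single computer can and often does do both.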
Programmable logic controllers (PLCs) are a type of special-purpose microprocessor that replaced many hardware components, such as timers and drum sequencers, used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers in order to implement typical control (such as PID) of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management. Control of an automated teller machine (ATM) is an example of an interactive process in which a computer performs a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code. The earliest feedback control mechanism was the water clock invented by the Greek engineer Ctesibius (285–222 BC).
Technology
Basics_6
null
173362
https://en.wikipedia.org/wiki/Fortnight
Fortnight
A fortnight is a unit of time equal to 14 days (two weeks). The word derives from the Old English term fēowertīene niht, meaning "fourteen nights" (or "fourteen days", since the Anglo-Saxons counted by nights). Astronomy and tides In astronomy, a lunar fortnight is half a lunar synodic month, equivalent to the mean period between a full moon and a new moon (and vice versa). This is equal to about 14.77 days. It gives rise to a lunar fortnightly tidal constituent (see: Long-period tides). Analogs and translations In many languages, there is no single word for a two-week period, and the equivalent terms "two weeks", "14 days", or "15 days" (counting inclusively) have to be used. Celtic languages: in Welsh, the term pythefnos, meaning "15 nights", is used. This is in keeping with the Welsh term for a week, which is wythnos ("eight nights"). In Irish, the term is coicís. Similarly, in Greek, the term δεκαπενθήμερο (dekapenthímero), meaning "15 days", is used. The Hindu calendar uses the Sanskrit word पक्ष (pakṣa), meaning one half of a lunar month, which is between 14 and 15 solar days. In Romance languages there are the terms quincena (or quince días) in Galician and Spanish, quinzena or quinze dies in Catalan, quinze dias or quinzena in Portuguese, quindicina in Italian, quinze jours or quinzaine in French, and chenzină in Romanian, all meaning "a grouping of 15". Semitic languages have a doubling suffix; when added to the end of the word for "week", it changes the meaning to "two weeks". In Hebrew, the single word שבועיים (shvu′ayim) means exactly "two weeks". Likewise in Arabic, adding the common dual suffix to the word for "week", أسبوع, forms أسبوعين (usbu′ayn), meaning "two weeks". Slavic languages: in Czech, the terms čtrnáctidenní and dvoutýdenní have the same meaning as "fortnight"; in Ukrainian, the term два тижні ("two weeks") is used.
Physical sciences
Time
Basics and measurement
173366
https://en.wikipedia.org/wiki/Mechanization
Mechanization
Mechanization (or mechanisation) is the process of changing from working largely or exclusively by hand or with animals to doing that work with machinery. In an early engineering text, a machine is defined as follows: In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an ungeared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines. Extension of mechanization of the production process is termed automation; it is controlled by a closed-loop system in which feedback is provided by sensors. In an automated machine the work of different mechanisms is performed automatically. History Ancient times Water wheels date to the Roman period and were used to grind grain and lift irrigation water. Water-powered bellows were in use on blast furnaces in China in 31 AD. By the 13th century, water wheels powered sawmills and trip hammers, to full cloth and pound flax and later cotton rags into pulp for making paper. Trip hammers are shown crushing ore in De re Metallica (1555). Clocks were some of the most complex early mechanical devices. Clock makers were important developers of machine tools, including gear and screw cutting machines, and were also involved in the mathematical development of gear designs. Clocks were some of the earliest mass-produced items, beginning around 1830. Water-powered bellows for blast furnaces, used in China in ancient times, were in use in Europe by the 15th century. De re Metallica contains drawings related to bellows for blast furnaces, including a fabrication drawing. Improved gear designs decreased wear and increased efficiency. Mathematical gear designs were developed in the mid 17th century. The French mathematician and engineer Desargues designed and constructed the first mill with epicycloidal teeth ca. 1650. In the 18th century involute gears, another mathematically derived design, came into use. Involute gears are better for meshing gears of different sizes than epicycloidal ones. Gear cutting machines came into use in the 18th century. Industrial revolution The Newcomen steam engine was first used, to pump water from a mine, in 1712. John Smeaton introduced metal gears and axles to water wheels in the mid to last half of the 18th century. The Industrial Revolution started mainly with textile machinery, such as the spinning jenny (1764) and water frame (1768). Demand for metal parts used in textile machinery led to the invention of many machine tools from the late 1700s until the mid-1800s. After the early decades of the 19th century, iron increasingly replaced wood in gearing and shafts in textile machinery. In the 1840s self-acting machine tools were developed. Machinery was developed to make nails ca. 1810. The Fourdrinier paper machine for continuous production of paper was patented in 1801, displacing the centuries-old hand method of making individual sheets of paper. One of the first mechanical devices used in agriculture was the seed drill invented by Jethro Tull around 1700. 
The seed drill allowed more uniform spacing of seed and planting depth than hand methods, increasing yields and saving valuable seed. In 1817, the first bicycle was invented and used in Germany. Mechanized agriculture greatly increased in the late eighteenth and early nineteenth centuries with horse-drawn reapers and horse-powered threshing machines. By the late nineteenth century steam power was applied to threshing, and steam tractors appeared. Internal combustion began being used for tractors in the early twentieth century. Threshing and harvesting were originally done with attachments for tractors, but in the 1930s independently powered combine harvesters were in use. In the mid to late 19th century, hydraulic and pneumatic devices were able to power various mechanical actions, such as positioning tools or work pieces. Pile drivers and steam hammers are examples used for heavy work. In food processing, pneumatic or hydraulic devices could start and stop the filling of cans or bottles on a conveyor. Power steering for automobiles uses hydraulic mechanisms, as does practically all earth moving equipment and other construction equipment and many attachments to tractors. Pneumatic (usually compressed air) power is widely used to operate industrial valves. Twentieth century By the early 20th century machines developed the ability to perform more complex operations that had previously been done by skilled craftsmen. An example is the glass bottle making machine developed in 1905. It replaced highly paid glass blowers and child labor helpers and led to the mass production of glass bottles. After 1900 factories were electrified, and electric motors and controls were used to perform more complicated mechanical operations. This resulted in mechanized processes to manufacture almost all goods. Categories In manufacturing, mechanization replaced hand methods of making goods. Prime movers are devices that convert thermal, potential or kinetic energy into mechanical work. Prime movers include internal combustion engines, combustion turbines (jet engines), water wheels and turbines, windmills and wind turbines, and steam engines and turbines. Powered transportation equipment, such as locomotives, automobiles, trucks and airplanes, is a classification of machinery which includes subclasses by engine type, such as internal combustion, combustion turbine and steam. Inside factories, warehouses, lumber yards and other manufacturing and distribution operations, material handling equipment replaced manual carrying or hand trucks and carts. In mining and excavation, power shovels replaced picks and shovels. Rock and ore crushing had been done for centuries by water-powered trip hammers, but trip hammers have been replaced by modern ore crushers and ball mills. Bulk material handling systems and equipment are used for a variety of materials including coal, ores, grains, sand, gravel and wood products. Construction equipment includes cranes, concrete mixers, concrete pumps, cherry pickers and an assortment of power tools. Powered machinery Powered machinery today usually means machinery powered by either an electric motor or an internal combustion engine. Before the first decade of the 20th century, powered usually meant driven by steam engine, water or wind. Many of the early machines and machine tools were hand powered, but most changed over to water or steam power by the early 19th century. Before electrification, mill and factory power was usually transmitted using a line shaft. 
Electrification allowed individual machines to each be powered by a separate motor in what is called unit drive. Unit drive allowed factories to be better arranged and allowed different machines to run at different speeds. Unit drive also allowed much higher speeds, which was especially important for machine tools. A step beyond mechanization is automation. Early production machinery, such as the glass bottle blowing machine (ca. 1890s), required a lot of operator involvement. By the 1920s fully automatic machines, which required much less operator attention, were being used. Military usage The term is also used in the military to refer to the use of tracked armoured vehicles, particularly armoured personnel carriers, to move troops ( mechanized infantry) that would otherwise have marched or ridden trucks into combat. In military terminology, mechanized refers to ground units that can fight from vehicles, while motorized refers to units (motorized infantry) that are transported and go to battle in unarmoured vehicles such as trucks. Thus, a towed artillery unit is considered motorized while a self-propelled one is mechanized. Mechanical vs human labour When we compare the efficiency of a labourer, we see that he has an efficiency of about 1%–5.5% (depending on whether he uses arms, or a combination of arms and legs). Internal combustion engines mostly have an efficiency of about 20%, although large diesel engines, such as those used to power ships, may have efficiencies of nearly 50%. Industrial electric motors have efficiencies up to the low 90% range, before correcting for the conversion efficiency of fuel to electricity of about 35%. When we compare the costs of using an internal combustion engine to a worker to perform work, we notice that an engine can perform more work at a comparative cost. 1 liter of fossil fuel burnt with an IC engine equals about 50 hands of workers operating for 24 hours or 275 arms and legs for 24 hours. In addition, the combined work capability of a human is also much lower than that of a machine. An average human worker can provide work good for around 0,9 hp (2.3 MJ per hour) while a machine (depending on the type and size) can provide for far greater amounts of work. For example, it takes more than one and a half hour of hard labour to deliver only one kWh – which a small engine could deliver in less than one hour while burning less than one litre of petroleum fuel. This implies that a gang of 20 to 40 men will require a financial compensation for their work at least equal to the required expended food calories (which is at least 4 to 20 times higher). In most situations, the worker will also want compensation for the lost time, which is easily 96 times greater per day. Even if we assume the real wage cost for the human labour to be at US $1.00/day, an energy cost is generated of about $4.00/kWh. Despite this being a low wage for hard labour, even in some of the countries with the lowest wages, it represents an energy cost that is significantly more expensive than even exotic power sources such as solar photovoltaic panels (and thus even more expensive when compared to wind energy harvesters or luminescent solar concentrators). Levels of mechanization For simplification, one can study mechanization as a series of steps. Many students refer to this series as indicating basic-to-advanced forms of mechanical society. hand/muscle power hand-tools powered hand-tools, e.g. 
electric-controlled; powered tools, single functioned, fixed cycle; powered tools, multi-functioned, program controlled; powered tools, remote-controlled; powered tools, activated by work-piece (e.g.: coin phone); measurement; selected signaling control, e.g. hydro power control; performance recording; automated machine action altered through measurement; segregation/rejection according to measurement; selection of appropriate action cycle; correcting performance after operation; correcting performance during operation
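As a consistency check of the labour-versus-engine figures quoted earlier in this entry (nothing beyond unit conversion is assumed), 0.9 hp is about 0.67 kW, which corresponds to roughly one and a half hours of such output per kilowatt-hour:

\[
0.9\ \text{hp} \times 0.746\ \tfrac{\text{kW}}{\text{hp}} \approx 0.67\ \text{kW},
\qquad
\frac{1\ \text{kWh}}{0.67\ \text{kW}} \approx 1.5\ \text{h}.
\]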
Technology
Basics_6
null
173416
https://en.wikipedia.org/wiki/Mathematical%20physics
Mathematical physics
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics. Scope There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. Classical mechanics Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles). Partial differential equations Within mathematics proper, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. Quantum theory The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. Relativity and quantum relativistic theories The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important. Statistical mechanics Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics (or its quantum version) and is closely related to the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics. 
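As an illustration of the Lagrangian and Hamiltonian reformulations mentioned under classical mechanics, the standard textbook equations of motion read (added here only as a reminder, not as material from any cited source):

\[
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0,
\qquad
\dot{q}_i = \frac{\partial H}{\partial p_i},\quad \dot{p}_i = -\frac{\partial H}{\partial q_i},
\]

where \(L(q,\dot{q},t)\) is the Lagrangian and \(H(q,p,t)\) the Hamiltonian; Noether's theorem then associates a conserved quantity with every continuous symmetry of \(L\).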
Usage The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being "the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature". Mathematical vs. theoretical physics The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics. On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians. Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation). The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory. Prominent mathematical physicists Before Newton There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance. In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. 
According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence known in Greek as aether for the English pure air—that was the pure substance beyond the sublunary sphere, and thus was celestial entities' pure composition. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion. An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having made use of experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, two central concepts of what today is known as classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object. René Descartes developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance helped bring the demise of Aristotelian physics. Descartes used mathematical reasoning as a model for science, and developed analytic geometry, which in time allowed the plotting of locations in 3D space (Cartesian coordinates) and marking their progressions along the flow of time. Christiaan Huygens, a talented mathematician and physicist and older contemporary of Newton, was the first to successfully idealize a physical problem by a set of mathematical parameters in Horologium Oscillatorum (1673), and the first to fully mathematize a mechanistic explanation of an unobservable physical phenomenon in Traité de la Lumière (1690). He is thus considered a forerunner of theoretical physics and one of the founders of modern mathematical physics. Newtonian physics and post Newtonian The prevailing framework for science in the 16th and early 17th centuries was one borrowed from Ancient Greek mathematics, where geometrical shapes formed the building blocks to describe and think about space, and time was often thought as a separate entity. With the introduction of algebra into geometry, and with it the idea of a coordinate system, time and space could now be thought as axes belonging to the same plane. This essential mathematical framework is at the base of all modern physics and used in all further mathematical frameworks developed in next centuries. By the middle of the 17th century, important concepts such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding extrema and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in mathematics and physics. He was extremely successful in his application of calculus and other methods to the study of motion. 
Newton's theory of motion, culminating in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity. In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813) for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of analytical dynamics called Hamiltonian dynamics was also made by the Irish physicist, astronomer and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms. Into the early 19th century, the following mathematicians in France, Germany and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism. A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Jean-Augustin Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. Mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found consequent of Maxwell's field. 
Later, radiation and then today's known electromagnetic spectrum were found also consequent of this electromagnetic field. The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague mathematician Carl Gustav Jacobi (1804–1851) in particular referring to canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics. Relativistic By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928]. In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared. Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object. 
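The replacement of the Galilean transformation by the Lorentz transformation described above can be written explicitly for a frame moving with speed v along the shared x-axis (standard textbook forms, included here only for illustration):

\[
x' = x - vt,\qquad t' = t \qquad \text{(Galilean)}
\]
\[
x' = \gamma\,(x - vt),\qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right),\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \qquad \text{(Lorentz)}
\]

For speeds small compared with c, \(\gamma \to 1\) and the Lorentz transformation reduces to the Galilean one.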
Cartesian coordinates had arbitrarily used rectilinear axes. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones. Gauss also introduced another key tool of modern physics, the curvature. Gauss's work was limited to two dimensions. Extending it to three or more dimensions introduced a lot of complexity, requiring the (not yet invented) tensors. It was Riemann who extended curved geometry to N dimensions. In 1908, Einstein's former mathematics professor Hermann Minkowski applied the curved geometry construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor. The Newtonian concept of gravity, "two masses attract each other", was replaced by the geometrical argument that mass transforms the curvature of spacetime and that free-falling particles with mass move along a geodesic curve in the spacetime, in the vicinity of either mass or energy. (Riemannian geometry already existed before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) (Under special relativity—a special case of general relativity—even massless energy exerts gravitational effect by its mass equivalence locally "curving" the geometry of the four, unified dimensions of space and time.) Quantum Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta. He introduced the first non-naïve definition of quantization in this paper. The development of early quantum physics was followed by a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space. 
That is called Hilbert space (introduced by mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of generalization of Euclidean space and study of integral equations), and rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, the spectral theory (introduced by David Hilbert who investigated quadratic forms with infinitely many variables. Many years later, it had been revealed that his spectral theory is associated with the spectrum of the hydrogen atom. He was surprised by this application.) in particular. Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron. List of prominent contributors to mathematical physics in the 20th century Prominent contributors to the 20th century's mathematical physics include (ordered by birth date): William Thomson (Lord Kelvin) (1824–1907) Oliver Heaviside (1850–1925) Jules Henri Poincaré (1854–1912) David Hilbert (1862–1943) Arnold Sommerfeld (1868–1951) Constantin Carathéodory (1873–1950) Albert Einstein (1879–1955) Emmy Noether (1882–1935) Max Born (1882–1970) George David Birkhoff (1884–1944) Hermann Weyl (1885–1955) Satyendra Nath Bose (1894–1974) Louis de Broglie (1892–1987) Norbert Wiener (1894–1964) John Lighton Synge (1897–1995) Mário Schenberg (1914–1990) Wolfgang Pauli (1900–1958) Paul Dirac (1902–1984) Eugene Wigner (1902–1995) Andrey Kolmogorov (1903–1987) Lars Onsager (1903–1976) John von Neumann (1903–1957) Sin-Itiro Tomonaga (1906–1979) Hideki Yukawa (1907–1981) Nikolay Nikolayevich Bogolyubov (1909–1992) Subrahmanyan Chandrasekhar (1910–1995) Mark Kac (1914–1984) Julian Schwinger (1918–1994) Richard Phillips Feynman (1918–1988) Irving Ezra Segal (1918–1998) Ryogo Kubo (1920–1995) Arthur Strong Wightman (1922–2013) Chen-Ning Yang (1922–) Rudolf Haag (1922–2016) Freeman John Dyson (1923–2020) Martin Gutzwiller (1925–2014) Abdus Salam (1926–1996) Jürgen Moser (1928–1999) Michael Francis Atiyah (1929–2019) Joel Louis Lebowitz (1930–) Roger Penrose (1931–) Elliott Hershel Lieb (1932–) Yakir Aharonov (1932–) Sheldon Glashow (1932–) Steven Weinberg (1933–2021) Ludvig Dmitrievich Faddeev (1934–2017) David Ruelle (1935–) Yakov Grigorevich Sinai (1935–) Vladimir Igorevich Arnold (1937–2010) Arthur Michael Jaffe (1937–) Roman Wladimir Jackiw (1939–) Leonard Susskind (1940–) Rodney James Baxter (1940–) Michael Victor Berry (1941–) Giovanni Gallavotti (1941–) Stephen William Hawking (1942–2018) Jerrold Eldon Marsden (1942–2010) Michael C. Reed (1942–) John Michael Kosterlitz (1943–) Israel Michael Sigal (1945–) Alexander Markovich Polyakov (1945–) Barry Simon (1946–) Herbert Spohn (1946–) John Lawrence Cardy (1947–) Giorgio Parisi (1948-) Abhay Ashtekar (1949-) Edward Witten (1951–) F. Duncan Haldane (1951–) Ashoke Sen (1956–) Juan Martín Maldacena (1968–)
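The Hilbert-space formulation sketched earlier in this entry is usually summarized by the Schrödinger equation, in which a state vector evolves under a self-adjoint Hamiltonian operator (a standard statement, added here only as illustration):

\[
i\hbar\,\frac{d}{dt}\,\psi(t) = \hat{H}\,\psi(t), \qquad \psi(t) \in \mathcal{H},
\]

with \(\mathcal{H}\) the Hilbert space of states and \(\hat{H}\) self-adjoint, so that the time evolution \(e^{-i\hat{H}t/\hbar}\) is unitary.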
Physical sciences
Physics basics: General
Physics
173462
https://en.wikipedia.org/wiki/Cessna%20172
Cessna 172
The Cessna 172 Skyhawk is an American four-seat, single-engine, high wing, fixed-wing aircraft made by the Cessna Aircraft Company. First flown in 1955, more 172s have been built than any other aircraft. It was developed from the 1948 Cessna 170 but with tricycle landing gear rather than conventional landing gear. The Skyhawk name was originally used for a trim package, but was later applied to all standard-production 172 aircraft, while some upgraded versions were marketed as the Cutlass, Powermatic, and Hawk XP. The aircraft was also produced under license in France by Reims Aviation, which marketed upgraded versions as the Reims Rocket. Measured by its longevity and popularity, the Cessna 172 is the most successful aircraft in history. Cessna delivered the first production model in 1956, and , the company and its partners had built more than 44,000 units. With a break from 1986–96, the aircraft remains in production today. A light general aviation airplane, the Skyhawk's main competitors throughout much of its history were the Beechcraft Musketeer and Grumman American AA-5 series, though neither are currently in production. Other prominent competitors still in production include the Piper PA-28 Cherokee, and, more recently, the Diamond DA40 Star and Cirrus SR20. Design and development The Cessna 172 started as a tricycle landing gear variant of the taildragger Cessna 170, with a basic level of standard equipment. In January 1955, Cessna flew an improved variant of the Cessna 170, a Continental O-300-A-powered Cessna 170C with larger elevators and a more angular tailfin. Although the variant was tested and certified, Cessna decided to modify it with a tricycle landing gear, and the modified Cessna 170C flew again on June 12, 1955. To reduce the time and cost of certification, the type was added to the Cessna 170 type certificate as the Model 172. Later, the 172 was given its own type certificate. The 172 became an overnight sales success, and over 1,400 were built in 1956, its first full year of production. Early 172s were similar in appearance to the 170s, with the same straight aft fuselage and tall landing gear legs, although the 172 had a straight tailfin while the 170 had a rounded fin and rudder. In 1960, the 172A incorporated revised landing gear and the swept-back tailfin, which is still in use today. The final aesthetic development, found in the 1963 172D and all later 172 models, was a lowered rear deck allowing an aft window. Cessna advertised this added rear visibility as "Omni-Vision". Production halted in the mid-1980s, but resumed in 1996 with the 160 hp (120 kW) Cessna 172R Skyhawk. Cessna supplemented this in 1998 with the 180 hp (135 kW) Cessna 172S Skyhawk SP. Modifications The Cessna 172 may be modified via a wide array of supplemental type certificates (STCs), including increased engine power and higher gross weights. Available STC engine modifications increase power from , add constant-speed propellers, or allow the use of automobile gasoline. Other modifications include additional fuel tank capacity in the wing tips, added baggage compartment tanks, added wheel pants to reduce drag, or enhanced landing and takeoff performance and safety with a STOL kit. The 172 has also been equipped with the fuel injected Superior Air Parts Vantage engine. Operational history World records From December 4, 1958, to February 7, 1959, Robert Timm and John Cook set the world record for (refueled) flight endurance in a used Cessna 172, registration number N9172B. 
They took off from McCarran Field (now Harry Reid International Airport) in Las Vegas, Nevada, and landed back at McCarran Field after 64 days, 22 hours, 19 minutes and 5 seconds in a flight covering an estimated , over 6 times further than flying around the world at the equator. The flight was part of a fund-raising effort for the Damon Runyon Cancer Fund. The aircraft is now on display at the airport. Variants Cessna has historically used model years similar to U.S. auto manufacturers, with sales of new models typically starting a few months prior to the actual calendar year. Introduced in November 1955 for the 1956 model year as a development of the Cessna 170B with tricycle landing gear, dubbed "Land-O-Matic" by Cessna. The 172 also featured a redesigned tail similar to the experimental 170C, "Para-Lift" flaps, and a maximum gross weight of while retaining the 170B's Continental O-300-A six-cylinder, air-cooled engine. The 1957 and 1958 model years brought only minor changes, while 1959 introduced a new cowling for improved engine cooling. The prototype 172, c/n 612, was modified from 170 c/n 27053, which previously served as the prototype of the 170B. A total of 3,757 were constructed over the four model years; 1,178 (1956), 1,041 (1957), 750 (1958), 788 (1959). 1960 model year with a swept-back vertical tail and rudder and powered by an O-300-C engine. It was also the first 172 to be certified for floatplane operation. 994 built. 1961 model year with shorter landing gear, engine mounts lengthened by three inches (76 mm), a reshaped cowling, a pointed propeller spinner, and an increased gross weight of . The stepped firewall introduced in the closely related Cessna 175 was adopted in the 172, along with the 175's wider, rearranged instrument panel located further aft in the fuselage. For the first time, the Skyhawk name was applied to an available deluxe option package that included optional wheel fairings, avionics, and a cargo door along with full exterior paint rather than partial paint stripes. The Skyhawk was also powered by an O-300-D in place of the O-300-C of the standard model. 989 built. 1962 model year with fiberglass wingtips, redesigned wheel fairings, a key starter to replace the previous pull-starter, and an optional autopilot. The seats were redesigned to be six-way adjustable, and a child seat was made optional to allow two children to be carried in the baggage area. 810 built. 1963 model year with a cut down rear fuselage with a wraparound Omni-Vision rear window, a one-piece windshield, increased horizontal stabilizer span, and a folding hat shelf in the rear cabin. Gross weight was increased to , where it would stay until the 172P. New rudder and brake pedals were also added. 1,011 were built by Cessna, while a further 18 were produced by Reims Aviation in France as the F172D. 1964 model year with a redesigned instrument panel with center-mounted avionics and circuit breakers replacing the electrical fuses of previous models. 1,209 built, 67 built by Reims as the F172E. 1965 model year with electrically-operated flaps to replace the previous lever-operated system and improved instrument lighting. 1,400 built, plus 94 by Reims as the F172F. The 172F formed the basis for the U.S. Air Force's T-41A Mescalero primary trainer, which was used during the 1960s and early 1970s as initial flight screening aircraft in USAF Undergraduate Pilot Training (UPT). Following their removal from the UPT program, some extant USAF T-41s were assigned to the U.S. 
Air Force Academy for the cadet pilot indoctrination program, while others were distributed to Air Force aero clubs. 1966 model year with a longer, more pointed spinner and sold for US$12,450 in its basic 172 version and US$13,300 in the upgraded Skyhawk version. 1,474 built (including 26 as the T-41A), plus 140 by Reims as the F172G. 1967 model year with a 60A alternator replacing the generator, a rotating beacon replacing the flashing unit, redesigned wheel fairings, and a shorter-stroke nose gear oleo to reduce drag and improve the appearance of the aircraft in flight. A new cowling was used, introducing shock-mounts that transmitted lower noise levels to the cockpit and reduced cowl cracking. The electric stall warning horn was replaced by a pneumatic one. 1,586 built (including 34 as the T-41A), plus 435 by Reims as the F172H for both the 1967 and 1968 model years. The 1968 model year marked the beginning of the Lycoming-powered 172s, with the 172I introduced with a Lycoming O-320-E2D engine of , an increase of over the Continental powerplant. The increased power resulted in an increase in optimal cruise from true airspeed (TAS) to TAS. There was no change in the sea level rate of climb at per minute. Starting with this model, the standard and deluxe Skyhawk models were no longer powered by different engines. The 172I also introduced the first standard "T" instrument arrangement. 649 built. For 1968, Cessna planned to replace the 172 with a newly designed aircraft called the 172J, featuring the same general configuration but with a more sloping windshield, a strutless cantilever wing, a more stylish interior, and various other improvements. A single 172J prototype, registered N3765C (c/n 660), was built. However, the popularity of the previous 172 with Cessna dealers and flight schools prompted the cancellation of the replacement plan, and the 172J was redesignated as the 177 from the second prototype onward and sold alongside the 172. Introduced for the 1969 model year with a redesigned tailfin cap and reshaped rear windows enlarged by . Optional long-range wing fuel tanks were also offered. The 1970 model year featured fiberglass, downward-shaped, conical camber wingtips and optional fully articulated seats. 2,055 built for both model years, plus 50 by Reims as the F172K. Introduced for the 1971 model year with tapered, tubular steel landing gear legs replacing the original flat spring steel legs, increasing landing gear width by . The new landing gear was lighter, but required aerodynamic fairings to maintain the same speed and climb performance as experienced with the flat steel design. 172L also had a nose-mounted landing light, a bonded baggage door, and optional cabin skylights. The 1972 model year introduced a plastic fairing between the dorsal fin and vertical fin to introduce a greater family resemblance to the 182's vertical fin. 1972 also introduced a reduced-diameter propeller, bonded cabin doors, and improved instrument panel controls. 1,535 built for both model years, plus 100 by Reims as the F172L. Introduced for the 1973 model year with a "Camber-Lift" wing with a drooped leading edge for improved low-speed handling, a key-locking baggage door, and new lighting switches. The 1974 model year introduced the Skyhawk II, which was sold alongside the baseline 172M and Skyhawk models with higher standard equipment, including a second nav/comm radio, an ADF and transponder, a larger baggage compartment, and nose-mounted dual landing lights. 
1975 introduced inertia-reel shoulder harnesses and an improved instrument panel and door seals. Beginning in 1976, Cessna stopped marketing the aircraft as the 172 and began exclusively using the "Skyhawk" designation. This model year also saw a redesigned instrument panel to hold more avionics. Among other changes, the fuel and other small gauges were relocated to the left side for improved pilot readability compared with the earlier 172 panel designs. 6,826 built; 4,926 (1973–75) and 1,900 (1976), plus 610 by Reims as the F172M. 1977 model year powered by a Lycoming O-320-H2AD engine designed to run on 100-octane fuel (hence the "Skyhawk/100" name), whereas all previous engines used 80/87 fuel. Other changes included pre-select flap control and optional rudder trim. The 1978 model year brought a 28-volt electrical system to replace the previous 14-volt system as well as optional air conditioning. The 1979 model year increased the flap-extension speed to . 6,425 total built; 1,725 (1977), 1,725 (1978), 1,850 (1979), and 1,125 (1980), plus 525 by Reims as the F172N. There was no "O" model 172, to avoid confusion with the number zero. Introduced for the 1981 model year with a Lycoming O-320-D2J engine replacing the O-320-H2AD of the 172N, which had proven unreliable. Other changes included a decreased maximum flap deflection from 40 degrees to 30 to allow a gross weight increase from to . A wet wing and air conditioning were optional. The 1982 model year moved the landing lights from the nose to the wing to increase bulb life, while 1983 added some minor soundproofing improvements and thicker windows. 1984 introduced a second door latch pin, a thicker windshield and side windows, additional avionics capacity, and low-vacuum warning lights. 2,664 total built; 1,052 (1981), 724 (1982), 319 (1983), 179 (1984), 256 (1985), and 134 (1986), plus 215 by Reims as the F172P. Following the end of 172P production in 1986, Cessna ceased production of the Skyhawk for ten years. Introduced for the 1983 model year, the 172Q was given the name "Cutlass" to create an affiliation with the 172RG Cutlass RG, although it was actually a 172P with a Lycoming O-360-A4N engine of . The aircraft had a gross weight of and an optimal cruise speed of compared to the 172P's cruise speed of on less. It had a useful load that was about more than the Skyhawk P and a rate of climb that was actually per minute lower, due to the higher gross weight. The Cutlass II was offered as a deluxe model of the 172Q, as was the Cutlass II/Nav-Pac with IFR equipment. The 172Q was produced alongside the 172P for the 1983 and 1984 model years before being discontinued. Sources disagree on the exact number of 172Q aircraft built, and the construction numbers listed on the Federal Aviation Administration type certificate overlap with those of the 1983 and 1984 172P. The Skyhawk R was introduced in 1996 and is powered by a derated Lycoming IO-360-L2A producing a maximum of 160 horsepower (120 kW) at just 2,400 rpm. This is the first Cessna 172 to have a factory-fitted fuel-injected engine. The 172R's maximum takeoff weight is . This model year introduced many improvements, including a new interior with soundproofing, an all new multi-level ventilation system, a standard four point intercom, contoured, energy absorbing, 26g front seats with vertical and reclining adjustments and inertia reel harnesses. The Cessna 172S was introduced in 1998 and is powered by a Lycoming IO-360-L2A producing . 
The maximum engine rpm was increased from 2,400 rpm to 2,700 rpm resulting in a increase over the "R" model. As a result, the maximum takeoff weight was increased to . This model is marketed under the name Skyhawk SP, although the Type Certification data sheet specifies it is a 172S. The 172S is built primarily for the private owner-operator and is, in its later years, offered with the Garmin G1000 avionics package and leather seats as standard equipment. , the 172S model was the only Skyhawk model in production. Variants under 175 type certificate As the Cessna 175 Skylark had gained a reputation for poor engine reliability, Cessna attempted to regain sales by rebranding the aircraft as a variant of the 172. Several later 172 variants, generally those with higher-than-standard engine power or gross weight, were built under the 175 type certificate although most did not use the unpopular Continental GO-300-E engine from the 175. The 175 Skylark was rebranded for the 1963 model year as the P172D Powermatic, continuing where the Skylark left off at 175C. It was powered by a Continental GO-300-E with a geared reduction drive powering a constant-speed propeller, increasing cruise speed by over the standard 172D. It differed from the 175C in that it had a cut-down rear fuselage with an "Omni-Vision" rear window and an increased horizontal stabilizer span. A deluxe version was marketed as the Skyhawk Powermatic with a slightly increased top speed. Despite the rebranding, sales did not meet expectations, and the 175 type was discontinued for the civilian market after the 1963 model year. 65 were built, plus 3 by Reims as the FP172D. Although the 175 type was discontinued for the civilian market, Cessna continued to produce the aircraft for the United States Armed Forces as the T-41 Mescalero. Introduced in 1967, the R172E was built in T-41B, T-41C, and T-41D variants for the US Army, USAF Academy, and US Military Aid Program, respectively. As the T-41B, the R172E was powered by a fuel-injected Continental IO-360-D or -DE driving a constant-speed propeller, and featured a 28V electrical system, jettisonable doors, an openable right front window, a 6.00x6 nose wheel tire and military avionics, but no baggage door. The T-41C was similar to the T-41B, but had a 14V electrical system, a fixed-pitch propeller, civilian avionics, and no rear seats. The T-41D featured a 28V electrical system, four seats, corrosion-proofing, reinforced flaps and ailerons, a baggage door, and provisions for wing-mounted pylons. 255 T-41B, 45 T-41C, and 34 T-41D aircraft were built. While Cessna produced the R172E exclusively for military use, Reims built a civilian model as the FR172E Reims Rocket, with 60 built for the 1968 model year. The R172F was similar to the R172E and was built in both T-41C and T-41D variants. 7 (T-41C) and 74 (T-41D) built, plus 85 by Reims as the FR172F Reims Rocket for the 1969 model year. The R172G was similar to the R172E/F, differing in that it was certified to be powered by a Continental IO-360-C, -D, -CB, or -DB engine. 28 (T-41D) built, plus 80 by Reims as the FR172G Reims Rocket for the 1970 model year. The R172H introduced the extended dorsal fillet of the 172L to the T-41D. It was also certified to be powered by a Continental IO-360-C, -D, -H, -CB, -DB, or -HB engine. 163 (T-41D) built, plus 125 by Reims as the FR172H Reims Rocket for the 1971 and 1972 model years. Certified to be powered by a Continental IO-360-H or -HB engine. 
Only one was built by Cessna, while Reims built 240 as the FR172J Reims Rocket for the 1973 through 1976 model years. Following the success of the Reims Rocket in Europe, Cessna decided to once again produce the 175 type for the civilian market as the R172K Hawk XP, beginning with the 1977 model year. It was powered by a derated Continental IO-360-K or -KB engine driving a McCauley constant-speed propeller and featured a new cowling with landing lights and an upgraded interior. The Hawk XP II was also available with full IFR avionics. However, owners claimed that the increased performance of the "XP" did not compensate for its increased purchase price and the higher operating costs associated with the larger engine. The aircraft was well accepted for use on floats, however, as the standard 172 is not a strong floatplane, even with only two people on board, while the XP's extra power improves water takeoff performance dramatically. 1 (1973 prototype), 725 (1977), 205 (1978), 270 (1979), 200 (1980), and 55 (1981) built, plus 85 (30 in 1977, 55 in 1978–81) by Reims as the FR172K Reims Rocket for the 1977 through 1981 model years. Cessna introduced a retractable landing gear version of the 172 in 1980, designating it as the 172RG and marketing it as the Cutlass RG. The Cutlass RG sold for about US$19,000 more than the standard 172 and featured a variable-pitch, constant-speed propeller and a more powerful Lycoming O-360-F1A6 engine of , giving it an optimal cruise speed of 140 knots (260 km/h), compared to for the contemporary 172N or 172P. It also had more fuel capacity than a standard Skyhawk, versus , giving it greater range and endurance. The 172RG first flew on August 24, 1976. It was the lowest-priced four-seat retractable-gear airplane on the U.S. market when it was introduced. Although the general aviation aircraft market was contracting at the time, the RG proved popular as an inexpensive flight-school trainer for complex aircraft and commercial pilot ratings under U.S. pilot certification rules, which required demonstrating proficiency in an aircraft with retractable landing gear. The 172RG uses the same basic landing gear as the heavier R182 Skylane RG, which Cessna touted as a benefit, saying it was a proven design; however, owners have found the landing gear to have higher maintenance requirements than comparable systems from other manufacturers, with several parts prone to rapid wear or cracking. Compared to a standard 172, the 172RG is easier to load with its center of gravity too far aft, which adversely affects the aircraft's longitudinal stability. While numbered and marketed as a 172, the 172RG was certified on the Cessna 175 type certificate. No significant design updates were made to the 172RG during its five-year model run. 1,191 were produced. Although it is slower and has less passenger and cargo capacity than popular competing single-engine retractable-gear aircraft such as the Beechcraft Bonanza, the Cutlass RG is praised by owners for its relatively low operating costs, robust and reliable engine, and docile flying qualities comparable to the standard 172, although it has higher landing gear maintenance and insurance costs than a fixed-gear 172. Special versions J172T Model introduced in July 2014 for 2015 customer deliveries, powered by a Continental CD-155 diesel engine installed by the factory under a supplemental type certificate. Initial retail price in 2014 was $435,000 (~$ in ). The model has a top speed of and burns per hour less fuel than the standard 172. 
As a result, the model has an range, an increase of more than 38% over the standard 172. This model is a development of the proposed and then canceled Skyhawk TD. Cessna has indicated that the JT-A will be made available in 2016. In reviewing this new model Paul Bertorelli of AVweb said: "I'm sure Cessna will find some sales for the Skyhawk JT-A, but at $420,000, it's hard to see how it will ignite much market expansion just because it's a Cessna. It gives away $170,000 to the near-new Redbird Redhawk conversion which is a lot of change to pay merely for the smell of a new airplane. Diesel engines cost more than twice as much to manufacture as gasoline engines do and although their fuel efficiency gains back some of that investment, if the complete aircraft package is too pricey, the debt service will eat up any savings, making a new aircraft not just unattractive, but unaffordable. I haven't run the numbers on the JT-A yet, but I can tell from previous analysis that there are definite limits." The model was certified by both EASA and the FAA in June 2017. It was discontinued in May 2018, due to poor sales as a result of the aircraft's high price, which was twice the price of the same aircraft as a diesel conversion. The aircraft remains available as an STC conversion from Continental Motors, Inc. In July 2010, Cessna announced it was developing an electrically powered 172 as a proof-of-concept in partnership with Bye Energy. In July 2011, Bye Energy, whose name had been changed to Beyond Aviation, announced the prototype had commenced taxi tests on 22 July 2011 and a first flight would follow soon. In 2012, the prototype, using Panacis batteries, engaged in multiple successful test flights. The R&D project was not pursued for production. Canceled model On October 4, 2007, Cessna announced its plan to build a diesel-powered model, to be designated the 172 Skyhawk TD ("Turbo Diesel") starting in mid-2008. The planned engine was to be a Thielert Centurion 2.0, liquid-cooled, two-liter displacement, dual overhead cam, four-cylinder, in-line, turbo-diesel with full authority digital engine control with an output of and burning Jet-A fuel. In July 2013, the 172TD model was canceled due to Thielert's bankruptcy. The aircraft was later refined into the Turbo Skyhawk JT-A, which was certified in June 2014 and discontinued in May 2018. Simulator company Redbird Flight uses the same engine and reconditioned 172 airframes to produce a similar model, the Redbird Redhawk. Premier Aircraft Sales also announced in February 2014 that it would offer refurbished 172 airframes equipped with the Continental/Thielert Centurion 2.0 diesel engine. Military operators A variant of the 172, the T-41 Mescalero was used as a trainer with the United States Air Force and Army. In addition, the United States Border Patrol uses a fleet of 172s for aerial surveillance along the Mexico-US border. From 1972 to 2019 the Irish Air Corps used the Reims version for aerial surveillance and monitoring of cash, prisoner and explosive escorts, in addition to army cooperation and pilot training roles. For T-41 operators, see Cessna T-41 Mescalero. FAPA/DAA Austrian Air Force 1× 172 Bolivian Air Force 3× 172K Chilean Army 18× R172K (retired) Colombian Air Force – To replace Cessna T-41s used for primary training with deliveries from June 2021. 
Ecuadorian Air Force 8× 172F Ecuadorian Army 1× 172G Guatemalan Air Force 6× 172K Honduran Air Force 3 Indonesian Air Force Iraqi Air Force Irish Air Corps 8× FR172H, 1× FR172K Five FR172H remained in service until 2019. Air Reconnaissance Unit 2 Lithuanian Air Force 1 Malagasy Air Force 4× 172M Nicaraguan Air Force 7 Pakistan Air Force 4× 172N Philippine Army -3 Units of 172M in In service (PA-101, PA-103 & PA-911) Philippine Navy - 1×172F - Donated By Olympic Aviation in 2007 as PN 330. 1×172N - Purchased from Welcome Export Inc. in July 2008 as PN 331, 4x172S- acquired from US Foreign Military Sales delivered in February 2022 Royal Saudi Air Force 8× F172G, 4× F172H, 4× F172M Republic of Singapore Air Force 8× 172K, delivered 1969 and retired 1972. Suriname Air Force (One in service for sale) Accidents and incidents On February 13, 1964, Ken Hubbs, second baseman for the Chicago Cubs and winner of the Rookie of the Year Award and the Gold Glove Award, was killed when the Cessna 172 he was flying crashed near Bird Island in Utah Lake. On October 23, 1964, David Box, lead singer for The Crickets on their 1960 release version of "Peggy Sue Got Married" and "Don't Cha Know" and later a solo artist, was killed when the Cessna 172 he was aboard crashed in northwest Harris County, Texas, while en route to a performance. Box was the second lead vocalist for The Crickets to die in a plane crash, following Buddy Holly. On August 31, 1969, American professional boxer Rocky Marciano was killed when the Cessna 172 in which he was a passenger crashed on approach to an airfield outside Newton, Iowa. On September 25, 1978, a Cessna 172, N7711G, and Pacific Southwest Airlines Flight 182, a Boeing 727, collided over San Diego, California. There were 144 fatalities, 2 in the Cessna 172, 135 on the PSA Flight 182 and 7 on the ground. On May 28, 1987, a rented Reims Cessna F172P, registered D-ECJB, was used by German teenage pilot Mathias Rust in an unauthorized flight from Helsinki-Malmi Airport through Soviet airspace to land near the Red Square in Moscow, all without being intercepted by Soviet air defense. On April 9, 1990, Atlantic Southeast Airlines Flight 2254, an Embraer EMB 120 Brasilia, collided head-on with a Civil Air Patrol Cessna 172, N99501, while en route from Gadsden Municipal Airport to Hartsfield–Jackson Atlanta International Airport. The Cessna crashed, killing two occupants, but the Brasilia made a safe emergency landing. On January 5, 2002, high school student Charles J. Bishop stole a Cessna 172, N2371N, and intentionally crashed it into the side of the Bank of America Tower in downtown Tampa, Florida, killing only himself and otherwise causing very little damage. On April 6, 2009, a Cessna 172N, C-GFJH, belonging to Confederation College in Thunder Bay, Ontario, Canada, was stolen by a student who flew it into United States airspace over Lake Superior. The 172 was intercepted and followed by NORAD F-16s, finally landing on Highway 60 in Ellsinore, Missouri, after a seven-hour flight. The student pilot, a Canadian citizen born in Turkey, Adam Dylan Leon, formerly known as Yavuz Berke, suffered from depression and was attempting to commit suicide by being shot down, but was instead arrested shortly after landing. On November 3, 2009, he was sentenced to two years in a US federal prison after pleading guilty to all three charges against him: interstate transportation of a stolen aircraft, importation of a stolen aircraft, and illegal entry into the US. 
College procedures at the time allowed easy access to aircraft and keys were routinely left in them. On November 11, 2021, Glen de Vries, co-founder of Medidata Solutions and Blue Origin space tourist, died in the crash of a 172 near Hampton Township, New Jersey. On March 5, 2024, a 172M of 99 Flying School, 5Y-NNJ, crashed after colliding with Safarilink Aviation Flight 053, a de Havilland Canada Dash 8, near Wilson Airport over Nairobi National Park, killing the instructor and student pilot aboard the 172. The Safarilink flight landed safely with no injuries to the 44 people on board. Specifications (172R)
Technology
Specific aircraft_2
null
173484
https://en.wikipedia.org/wiki/Magellanic%20Clouds
Magellanic Clouds
The Magellanic Clouds (Magellanic system or Nubeculae Magellani) are two irregular dwarf galaxies in the southern celestial hemisphere. Orbiting the Milky Way galaxy, these satellite galaxies are members of the Local Group. Because both show signs of a bar structure, they are often reclassified as Magellanic spiral galaxies. The two galaxies are the following: Large Magellanic Cloud (LMC), about away Small Magellanic Cloud (SMC), about away The Magellanic clouds are visible to the unaided eye from the Southern Hemisphere, but cannot be observed from the most northern latitudes. History An early possible mention of the Large Magellanic Cloud is in petroglyphs and rock drawings found in Chile. They may be the objects mentioned by the polymath Ibn Qutaybah (d. 889 CE), in his book on Al-Anwā̵’ (the stations of the Moon in pre-Islamic Arabian culture): وأسفل من سهيل قدما سهيل . وفى مجرى قدمى سهيل، من خلفهما كواكب زهر كبار، لا ترى بالعراق، يسميها أهل تهامة الأعبار And below Canopus, there are the feet of Canopus, and on their extension, behind them bright big stars, not seen in Iraq, the people of Tihama call them al-a‘bār. Later Al Sufi, a professional astronomer, in 964 CE, in his Book of Fixed Stars, mentioned the same quote, but with a different spelling. Under Argo Navis, he quoted that "unnamed others have claimed that beneath Canopus there are two stars known as the 'feet of Canopus', and beneath those there are bright white stars that are unseen in Iraq nor Najd, and that the inhabitants of Tihama call them al-Baqar [cows], and Ptolemy did not mention any of this so we [Al-Sufi] do not know if this is true or false." Both Ibn Qutaybah and Al-Sufi were probably quoting from the former's contemporary (and compatriot) and famed scientist Abu Hanifa Dinawari's mostly lost work on Anwaa. Abu Hanifa was probably quoting earlier sources, which may be just travelers stories, and hence Al-Sufi's comments about their veracity. In Europe, the Clouds were reported by 16th century Italian authors Peter Martyr d'Anghiera and Andrea Corsali, both based on Portuguese voyages. Subsequently, they were reported by Antonio Pigafetta, who accompanied the expedition of Ferdinand Magellan on its circumnavigation of the world in 1519–1522. However, naming the clouds after Magellan did not become widespread until much later. In Bayer's Uranometria they are designated as nubecula major and nubecula minor. In the 1756 star map of the French astronomer Lacaille, they are designated as le Grand Nuage and le Petit Nuage ("the Large Cloud" and "the Small Cloud"). John Herschel studied the Magellanic Clouds from South Africa, writing an 1847 report detailing 919 objects in the Large Magellanic Cloud and 244 objects in the Small Magellanic Cloud. In 1867 Cleveland Abbe suggested that they were separate satellites of the Milky Way. Distances were first estimated by Ejnar Hertzsprung in 1913 using 1912 measurements of Cepheid variables in the SMC by Henrietta Leavitt. Recalibration of the Cepheid scales allowed Harlow Shapley to refine the measurement, and these were again revised in 1952 following further research. , some astronomers believe the Magellanic Clouds should be renamed, alleging that Magellan was a murderer and neither an astronomer nor the discoverer of the dwarf galaxies. Characteristics The Large Magellanic Cloud and its neighbour and relative, the Small Magellanic Cloud, are conspicuous objects in the southern hemisphere, looking like separated pieces of the Milky Way to the naked eye. 
Though they appear roughly 21° apart in the night sky, the true distance between them is about 75,000 light-years. Until the discovery of the Sagittarius Dwarf Elliptical Galaxy in 1994, they were the closest known galaxies to our own. The LMC lies about 160,000 light years away, while the SMC is around 200,000. The LMC's diameter is about 70% larger than that of the SMC (32,200 ly and 18,900 ly, respectively). For comparison, the Milky Way is about 87,400 ly across. The total mass of these two galaxies is uncertain. Only a fraction of their gas seems to have coalesced into stars, and they probably both have large dark matter halos. One recent estimate of the total mass of the LMC is about 1/10 that of the Milky Way. That would make the LMC a rather large galaxy by the standards of the current observable universe. Because the mass distribution of relatively nearby galaxies is highly skewed, however, the average mass can be a misleading statistic; in terms of rank, the LMC appears to be the fourth most massive of the more than 50 galaxies in the Local Group. Evidence that the SMC has been in orbit about the LMC for a very long time suggests that the Magellanic system is historically not a part of the Milky Way. The Magellanic system seems most similar to the distinct NGC 3109 system, which is on the edge of the Local Group. Astronomers have long assumed that the Magellanic Clouds have orbited the Milky Way at approximately their current distances, but evidence suggests that it is rare for them to come as close to the Milky Way as they are now. Observational and theoretical evidence suggest that the Magellanic Clouds have both been greatly distorted by tidal interaction with the Milky Way as they travel close to it. The LMC maintains a very clear spiral structure in radio-telescope images of neutral hydrogen. Streams of neutral hydrogen connect them to the Milky Way and to each other, and both resemble disrupted barred spiral galaxies. Their gravity has affected the Milky Way as well, distorting the outer parts of the galactic disk. Aside from their different structure and lower mass, they differ from our galaxy in two major ways. They are gas-rich; a higher fraction of their mass is hydrogen and helium compared to the Milky Way. They are also more metal-poor than the Milky Way; the youngest stars in the LMC and SMC have a metallicity of 0.5 and 0.25 times solar, respectively. Both are noted for their nebulae and young stellar populations, but as in our own galaxy their stars range from the very young to the very old, indicating a long star formation history. The Large Magellanic Cloud was the host galaxy to a supernova (SN 1987A), the brightest observed in over four centuries. Measurements with the Hubble Space Telescope, announced in 2006, suggest the Magellanic Clouds may be moving too fast to be long-term companions of the Milky Way. If they are in orbit, that orbit takes at least 4 billion years. They are possibly on a first approach, and we are witnessing the start of a galactic merger that may overlap with the Milky Way's expected merger with the Andromeda Galaxy (and perhaps the Triangulum Galaxy) in the future. In 2019, astronomers discovered the young star cluster Price-Whelan 1 using Gaia data. The star cluster has a low metallicity and belongs to the leading arm of the Magellanic Clouds. The existence of this star cluster suggests that the leading arm of the Magellanic Clouds is 90,000 light-years away from the Milky Way—closer than previously thought. Mini Magellanic Cloud (MMC) Astrophysicists D. S. Mathewson, V. L. Ford and N.
Visvanathan proposed that the SMC may in fact be split in two, with a smaller section of this galaxy behind the main part of the SMC (as seen from Earth's perspective), and separated by about 30,000 light years. They suggest the reason for this is due to a past interaction with the LMC splitting the SMC, and that the two sections are still moving apart. They have dubbed this smaller remnant the Mini Magellanic Cloud. This hypothesis was confirmed in 2023. The part of the SMC which is closer to Earth lies 196,000 light-years (60 kiloparsecs) away, whereas the farther part lies 215,000 light-years (66 kiloparsecs) away.
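As a rough check on the separation quoted earlier, the three-dimensional distance between the two Clouds can be recovered from their angular separation on the sky and their individual distances with the law of cosines. The sketch below is illustrative only: it plugs in the 21° separation and the 160,000 and 200,000 light-year distances stated above, and the result is quite sensitive to the adopted distances.

```python
import math

def separation_ly(d1_ly, d2_ly, angle_deg):
    """3-D separation of two objects at distances d1 and d2 whose
    lines of sight differ by angle_deg, via the law of cosines."""
    a = math.radians(angle_deg)
    return math.sqrt(d1_ly**2 + d2_ly**2 - 2 * d1_ly * d2_ly * math.cos(a))

# Distances and angular separation as quoted in the text above.
print(f"{separation_ly(160_000, 200_000, 21):,.0f} ly")  # roughly 81,000 ly
```

With these round-number inputs the answer comes out near 81,000 light-years, the same order as the roughly 75,000 light-years quoted above; slightly smaller adopted distances reproduce the quoted figure.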
Physical sciences
Notable galaxies
Astronomy
173493
https://en.wikipedia.org/wiki/Small%20Magellanic%20Cloud
Small Magellanic Cloud
The Small Magellanic Cloud (SMC) is a dwarf galaxy near the Milky Way. Classified as a dwarf irregular galaxy, the SMC has a D25 isophotal diameter of about , and contains several hundred million stars. It has a total mass of approximately 7 billion solar masses. At a distance of about 200,000 light-years, the SMC is among the nearest intergalactic neighbors of the Milky Way and is one of the most distant objects visible to the naked eye. The SMC is visible from the entire Southern Hemisphere and can be fully glimpsed low above the southern horizon from latitudes south of about 15° north. The galaxy is located across the constellation of Tucana and part of Hydrus, appearing as a faint hazy patch resembling a detached piece of the Milky Way. The SMC has an average apparent diameter of about 4.2° (8 times the Moon's) and thus covers an area of about 14 square degrees (70 times the Moon's). Since its surface brightness is very low, this deep-sky object is best seen on clear moonless nights and away from city lights. The SMC forms a pair with the Large Magellanic Cloud (LMC), which lies 20° to the east, and, like the LMC, is a member of the Local Group. It is currently a satellite of the Milky Way, but is likely a former satellite of the LMC. Observation history In the southern hemisphere, the Magellanic clouds have long been included in the lore of native inhabitants, including south sea islanders and indigenous Australians. Persian astronomer Al Sufi mentions them in his Book of Fixed Stars, repeating a quote by the polymath Ibn Qutaybah, but had not observed them himself. European sailors may have first noticed the clouds during the Middle Ages when they were used for navigation. Portuguese and Dutch sailors called them the Cape Clouds, a name that was retained for several centuries. During the circumnavigation of the Earth by Ferdinand Magellan in 1519–1522, they were described by Antonio Pigafetta as dim clusters of stars. In Johann Bayer's celestial atlas Uranometria, published in 1603, he named the smaller cloud, Nubecula Minor. In Latin, Nubecula means a little cloud. Between 1834 and 1838, John Frederick William Herschel made observations of the southern skies with his reflector from the Royal Observatory. While observing the Nubecula Minor, he described it as a cloudy mass of light with an oval shape and a bright center. Within the area of this cloud he catalogued a concentration of 37 nebulae and clusters. In 1891, Harvard College Observatory opened an observing station at Arequipa in Peru. Between 1893 and 1906, under the direction of Solon Bailey, the telescope at this site was used to survey photographically both the Large and Small Magellanic Clouds. Henrietta Swan Leavitt, an astronomer at the Harvard College Observatory, used the plates from Arequipa to study the variations in relative luminosity of stars in the SMC. In 1908, the results of her study were published, which showed that a type of variable star called a "cluster variable", later called a Cepheid variable after the prototype star Delta Cephei, showed a definite relationship between the variability period and the star's apparent brightness. Leavitt realized that since all the stars in the SMC are roughly the same distance from Earth, this result implied that there is similar relationship between period and absolute brightness. This important period-luminosity relation allowed the distance to any other Cepheid variable to be estimated in terms of the distance to the SMC. 
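Leavitt's relation remains the textbook route from Cepheid periods to distances: a period-luminosity law gives the star's absolute magnitude, and the distance modulus then gives the distance. The sketch below is only illustrative; the slope and zero point of the assumed V-band period-luminosity law are example values (real calibrations differ by passband and study), and the input period and apparent magnitude are made-up numbers chosen to land near a Magellanic-Cloud-like distance.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag,
                        slope=-2.43, zero_point=-4.05):
    """Distance to a classical Cepheid from its pulsation period and
    mean apparent magnitude.

    Assumes an illustrative V-band period-luminosity (Leavitt) law of
    the form M_V = slope * (log10(P) - 1) + zero_point, then inverts
    the distance modulus m - M = 5 * log10(d / 10 pc).
    """
    abs_mag = slope * (math.log10(period_days) - 1.0) + zero_point
    return 10 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# A hypothetical 10-day Cepheid observed at mean V ~ 14.5:
d = cepheid_distance_pc(10.0, 14.5)
print(f"{d:,.0f} pc")  # ~51,000 pc, a Magellanic-Cloud-like distance
```

The key point, as in Leavitt's original work, is that the relation only yields relative distances until the zero point is calibrated with Cepheids whose distances are known independently.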
She hoped a few Cepheid variables could be found close enough to Earth so that their parallax, and hence distance from Earth, could be measured. This soon happened, allowing Cepheid variables to be used as standard candles, facilitating many astronomical discoveries. Using this period-luminosity relation, in 1913 the distance to the SMC was first estimated by Ejnar Hertzsprung. First he measured thirteen nearby cepheid variables to find the absolute magnitude of a variable with a period of one day. By comparing this to the periodicity of the variables as measured by Leavitt, he was able to estimate a distance of 10,000 parsecs (30,000 light years) between the Sun and the SMC. This later proved to be a gross underestimate of the true distance, but it did demonstrate the potential usefulness of this technique. Announced in 2006, measurements with the Hubble Space Telescope suggest that either the Large and Small Magellanic Clouds may be moving too fast to be orbiting the Milky Way, or that the Milky Way Galaxy is more massive than was thought. Features The SMC contains a central bar structure, and astronomers speculate that it was once a barred spiral galaxy that was disrupted by the Milky Way to become somewhat irregular. There is a bridge of gas connecting the Small Magellanic Cloud with the Large Magellanic Cloud (LMC), which is evidence of tidal interaction between the galaxies. This bridge of gas is a star-forming site. The Magellanic Clouds have a common envelope of neutral hydrogen, indicating they have been gravitationally bound for a long time. In 2017, using the Dark Energy Survey plus MagLiteS data, a stellar over-density associated with the Small Magellanic Cloud was discovered, which is probably the result of interactions between the SMC and LMC. X-ray sources The Small Magellanic Cloud contains a large and active population of X-ray binaries. Recent star formation has led to a large population of massive stars and high-mass X-ray binaries (HMXBs) which are the relics of the short-lived upper end of the initial mass function. The young stellar population and the majority of the known X-ray binaries are concentrated in the SMC's Bar. HMXB pulsars are rotating neutron stars in binary systems with Be-type (spectral type 09-B2, luminosity classes V–III) or supergiant stellar companions. Most HMXBs are of the Be type which account for 70% in the Milky Way and 98% in the SMC. The Be-star equatorial disk provides a reservoir of matter that can be accreted onto the neutron star during periastron passage (most known systems have large orbital eccentricity) or during large-scale disk ejection episodes. This scenario leads to strings of X-ray outbursts with typical X-ray luminosities Lx = 1036–1037 erg/s, spaced at the orbital period, plus infrequent giant outbursts of greater duration and luminosity. Monitoring surveys of the SMC performed with NASA's Rossi X-ray Timing Explorer (RXTE) see X-ray pulsars in outburst at more than 1036 erg/s and have counted 50 by the end of 2008. The ROSAT and ASCA missions detected many faint X-ray point sources, but the typical positional uncertainties frequently made positive identification difficult. Recent studies using XMM-Newton and Chandra have now cataloged several hundred X-ray sources in the direction of the SMC, of which perhaps half are considered likely HMXBs, and the remainder a mix of foreground stars, and background AGN. No X-rays above background were observed from the Magellanic Clouds during the September 20, 1966, Nike-Tomahawk flight. 
Balloon observation from Mildura, Australia, on October 24, 1967, of the SMC set an upper limit of X-ray detection. An X-ray astronomy instrument was carried aboard a Thor missile launched from Johnston Atoll on September 24, 1970, at 12:54 UTC for altitudes above 300 km, to search for the Small Magellanic Cloud. The SMC was detected with an X-ray luminosity of 5 erg/s in the range 1.5–12 keV, and 2.5 erg/s in the range 5–50 keV for an apparently extended source. The fourth Uhuru catalog lists an early X-ray source within the constellation Tucana: 4U 0115-73 (3U 0115-73, 2A 0116-737, SMC X-1). Uhuru observed the SMC on January 1, 12, 13, 16, and 17, 1971, and detected one source located at 01149-7342, which was then designated SMC X-1. Some X-ray counts were also received on January 14, 15, 18, and 19, 1971. The third Ariel 5 catalog (3A) also contains this early X-ray source within Tucana: 3A 0116-736 (2A 0116-737, SMC X-1). The SMC X-1, a HMXRB, is at J2000 right ascension (RA) declination (Dec) . Two additional sources detected and listed in 3A include SMC X-2 at 3A 0042-738 and SMC X-3 at 3A 0049-726. Mini Magellanic Cloud (MMC) It has been proposed by astrophysicists D. S. Mathewson, V. L. Ford and N. Visvanathan that the SMC may in fact be split in two, with a smaller section of this galaxy behind the main part of the SMC (as seen from Earth perspective), and separated by about 30,000 ly. They suggest the reason for this is due to a past interaction with the LMC that split the SMC, and that the two sections are still moving apart. They dubbed this smaller remnant the Mini Magellanic Cloud. In 2023, it was reported that the SMC is indeed two separate structures with distinct stellar and gaseous chemical compositions, separated by around 5 kiloparsecs.
Physical sciences
Notable galaxies
null
173523
https://en.wikipedia.org/wiki/Ultraviolet%20astronomy
Ultraviolet astronomy
Ultraviolet astronomy is the observation of electromagnetic radiation at ultraviolet wavelengths between approximately 10 and 320 nanometres; shorter wavelengths—higher energy photons—are studied by X-ray astronomy and gamma-ray astronomy. Ultraviolet light is not visible to the human eye. Most of the light at these wavelengths is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. Overview Ultraviolet line spectrum measurements (spectroscopy) are used to discern the chemical composition, densities, and temperatures of the interstellar medium, and the temperature and composition of hot young stars. UV observations can also provide essential information about the evolution of galaxies. They can be used to discern the presence of a hot white dwarf or main sequence companion in orbit around a cooler star. The ultraviolet universe looks quite different from the familiar stars and galaxies seen in visible light. Most stars are actually relatively cool objects emitting much of their electromagnetic radiation in the visible or near-infrared part of the spectrum. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution. In the Earth's sky seen in ultraviolet light, most stars would fade in prominence. Some very young massive stars and some very old stars and galaxies, growing hotter and producing higher-energy radiation near their birth or death, would be visible. Clouds of gas and dust would block the vision in many directions along the Milky Way. Space-based solar observatories such as SDO and SOHO use ultraviolet telescopes (called AIA and EIT, respectively) to view activity on the Sun and its corona. Weather satellites such as the GOES-R series also carry telescopes for observing the Sun in ultraviolet. The Hubble Space Telescope and FUSE have been the most recent major space telescopes to view the near and far UV spectrum of the sky, though other UV instruments have flown on smaller observatories such as GALEX, as well as sounding rockets and the Space Shuttle. Pioneers in ultraviolet astronomy include George Robert Carruthers, Robert Wilson, and Charles Stuart Bowyer.
Ultraviolet space telescopes
- Far Ultraviolet Camera/Spectrograph on Apollo 16 (April 1972)
- TD-1A (ESRO; 135-286 nm, 1972–1974)
- Orbiting Astronomical Observatory (#2: 1968–1973; #3: 1972–1981)
- Orion 1 and Orion 2 Space Observatories (#1: 200-380 nm, 1971; #2: 200-300 nm, 1973)
- Astronomical Netherlands Satellite (150-330 nm, 1974–1976)
- International Ultraviolet Explorer (115-320 nm, 1978–1996)
- Astron-1 (150-350 nm, 1983–1989)
- Glazar 1 and 2 on Mir (in Kvant-1, 1987–2001)
- FAUST (140-180 nm, in ATLAS-1 Spacelab aboard STS-45 mission, March 1992)
- EUVE (7-76 nm, 1992–2001)
- FUSE (90.5-119.5 nm, 1999–2007)
- Extreme ultraviolet Imaging Telescope (on SOHO, imaging the Sun at 17.1, 19.5, 28.4, and 30.4 nm)
- Hubble Space Telescope (various 115-800 nm, 1990–1997–) (STIS 115–1030 nm, 1997–) (WFC3 200-1700 nm, 2009–)
- Swift Gamma-Ray Burst Mission (170–650 nm, 2004–)
- Hopkins Ultraviolet Telescope (flew in 1990 and 1995)
- ROSAT XUV (17-210 eV) (30-6 nm, 1990–1999)
- Far Ultraviolet Spectroscopic Explorer (90.5-119.5 nm, 1999–2007)
- Galaxy Evolution Explorer (135–280 nm, 2003–2012)
- Hisaki (130-530 nm, 2013–2023)
- Lunar-based ultraviolet telescope (LUT) (on Chang'e 3 lunar lander, 245-340 nm, 2013–)
- Astrosat (130-530 nm, 2015–)
- Colorado Ultraviolet Transit Experiment (CUTE) (255-330 nm spectrograph, 2021–)
- PROBA-3 (530-588 nm coronagraph, 2024–)
- Public Telescope (PST) (100-180 nm, proposed 2015, EU funded study)
- Viewpoint-1 SpaceFab.US (200-950 nm, launch planned 2022)
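The wavelength bands quoted for these instruments map directly onto photon energies through E = hc/λ (hc is about 1240 eV·nm), which is why the shortest ultraviolet wavelengths border the soft X-ray regime. A minimal conversion sketch, using only the 10–320 nm range stated at the top of this article plus two standard hydrogen lines for reference:

```python
# Photon energy in electronvolts for a wavelength given in nanometres.
# hc ~ 1239.84 eV*nm, so E[eV] = 1239.84 / wavelength[nm].
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

# near-UV edge, hydrogen Lyman-alpha, Lyman limit, extreme-UV edge
for wl in (320.0, 121.6, 91.2, 10.0):
    print(f"{wl:6.1f} nm -> {photon_energy_ev(wl):7.2f} eV")
# 320 nm is ~3.9 eV, while 10 nm is ~124 eV, already soft-X-ray territory.
```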
Physical sciences
High-energy astronomy
Astronomy
173546
https://en.wikipedia.org/wiki/Caraway
Caraway
Caraway, also known as meridian fennel and Persian cumin (Carum carvi), is a biennial plant in the family Apiaceae, native to western Asia, Europe, and North Africa. Etymology The etymology of "caraway" is unclear. Caraway has been called by many names in different regions, with names deriving from the Latin (cumin), the Greek karon (again, cumin), which was adapted into Latin as (now meaning caraway), and the Sanskrit karavi, sometimes translated as "caraway", but other times understood to mean "fennel". English use of the term caraway dates to at least 1440, possibly having Arabic origin. Description The plant is similar in appearance to other members of the carrot family, with finely divided, feathery leaves with thread-like divisions, growing on stems. The main flower stem is tall, with small white or pink flowers in compound umbels composed of 5–16 unequal rays long. Caraway fruits, informally called seeds, are smooth, crescent-shaped, laterally compressed achenes, around long, with five pale ridges and a distinctive pleasant smell when crushed. It flowers in June and July. History Caraway was mentioned by the early Greek botanist Pedanius Dioscorides as a herb and tonic. It was later mentioned in the Roman Apicius as an ingredient in recipes. Caraway was known in the Arab world as karawiya, and cultivated in Morocco. Cultivation The only species that is cultivated is Carum carvi, its fruits being used in many ways in cooking and in the preparation of medicinal products and liqueurs. The plant prefers warm, sunny locations and well-drained soil rich in organic matter. In warmer regions, it is planted in the winter as an annual. In temperate climates, it is planted as a summer annual or biennial. It is widely established as a cultivated plant. The Netherlands, Poland and Germany are the top caraway producers. Finland supplies about 28% (2011) of the world's caraway production from some 1500 farms, the high output occurring possibly from its favorable climate and latitudes, which ensure long summer hours of sunlight. Nutrition Caraway seeds are 10% water, 50% carbohydrates, 20% protein, and 15% fat (table). In a reference amount, caraway seeds are a rich source (20% or more of the Daily Value, DV) of protein, B vitamins (24–33% DV), vitamin C (25% DV), and several dietary minerals, especially iron (125% DV), phosphorus (81% DV), and zinc (58% DV) (table). Phytochemicals When ground, caraway seeds yield up to 7.5% of volatile oil, mostly D-carvone, and 15% fixed oil of which the major fatty acids are oleic, linoleic, petroselinic, and palmitic acids. Phytochemicals identified in caraway seed oil include thymol, o-cymene, γ‑terpinene, trimethylene dichloride, β-pinene, 2-(1-cyclohexenyl), cyclohexanone, β-phellandrene, 3-carene, α-thujene, and linalool. Uses The fruits, usually used whole, have a pungent, anise-like flavor and aroma that comes from essential oils, mostly carvone, limonene, and anethole. Caraway is used as a spice in breads, especially rye bread. A common use of caraway is whole as an addition to rye bread – often called seeded rye or Jewish rye bread (see Borodinsky bread). Caraway seeds are often used in Irish soda bread and other baked goods. Caraway may be used in desserts, liquors, casseroles, and other foods. Its leaves can be added to salads, stews, and soups, and are sometimes consumed as herbs, either raw, dried, or cooked, similar to parsley. The root is consumed as a winter root vegetable in some places, similar to parsnips. 
Caraway fruits are found in diverse European cuisines and dishes, for example sauerkraut and the United Kingdom's caraway seed cake. In Austrian cuisine, it is used to season beef and, in German cuisine, pork. In Hungarian cuisine, it is added to goulash, and in Norwegian cuisine and Swedish cuisine, it is used for making caraway black bread. Caraway oil is used in the production of Kümmel liquor in Germany and Russia, Scandinavian akvavit, and Icelandic brennivín. Caraway can be infused in a variety of cheeses, such as havarti and bondost, to add flavor. In Latvian cuisine, whole caraway seeds are added to the Jāņi sour milk cheese. In Oxford, where the plant appeared to have become naturalised in a meadow, the seeds were formerly offered on a tray by publicans to people who wished to disguise the odour of drink on their breath.
Biology and health sciences
Herbs and spices
Plants
173548
https://en.wikipedia.org/wiki/Fennel
Fennel
Fennel (Foeniculum vulgare) is a flowering plant species in the carrot family. It is a hardy, perennial herb with yellow flowers and feathery leaves. It is indigenous to the shores of the Mediterranean but has become widely naturalized in many parts of the world, especially on dry soils near the sea coast and on riverbanks. It is a highly flavorful herb used in cooking and, along with the similar-tasting anise, is one of the primary ingredients of absinthe. Florence fennel or finocchio (, , ) is a selection with a swollen, bulb-like stem base that is used as a vegetable. Description Foeniculum vulgare is a perennial herb. The stem is hollow, erect, and glaucous green, and it can grow up to tall. The leaves grow up to long; they are finely dissected, with the ultimate segments filiform (threadlike), about wide. Its leaves are similar to those of dill, but thinner. The flowers are produced in terminal compound umbels wide, each umbel section having 20–50 tiny yellow flowers on short pedicels. The fruit is a dry schizocarp from long, half as wide or less, and grooved. Since the seed in the fruit is attached to the pericarp, the whole fruit is often mistakenly called "seed". Chemistry The aromatic character of fennel fruits derives from volatile oils imparting mixed aromas, including trans-anethole and estragole (resembling liquorice), fenchone (mint and camphor), limonene, 1-octen-3-ol (mushroom). Other phytochemicals found in fennel fruits include polyphenols, such as rosmarinic acid and luteolin, among others in minor content. Similar species Some plants in the Apiaceae family are poisonous and often difficult to identify. Dill, coriander, ajwain, and caraway are similar-looking herbs but shorter-growing than fennel, reaching only . Dill has thread-like, feathery leaves and yellow flowers; coriander and caraway have white flowers and finely divided leaves (though not as fine as dill or fennel) and are also shorter-lived (being annual or biennial plants). The superficial similarity in appearance between these seeds may have led to a sharing of names and etymology, as in the case of meridian fennel, a term for caraway. Giant fennel (Ferula communis) is a large, coarse plant with a pungent aroma, which grows wild in the Mediterranean region and is only occasionally grown in gardens elsewhere. Other species of the genus Ferula are also called giant fennel, but they are not culinary herbs. In North America, fennel may be found growing in the same habitat and alongside natives osha (Ligusticum porteri) and Lomatium species, useful medicinal relatives in the parsley family. Most Lomatium species have yellow flowers like fennel, but some are white-flowered and resemble poison hemlock. Lomatium is an important historical food plant of Native Americans known as 'biscuit root'. Most Lomatium spp. have finely divided, hairlike leaves; their roots have a delicate rice-like odor, unlike the musty odor of hemlock. Lomatium species prefer dry, rocky soils devoid of organic material. Etymology Fennel came into Old English from Old French fenoil which in turn came from Latin , a diminutive of , meaning "hay". Cultivation Fennel is widely cultivated, both in its native range and elsewhere, for its edible, strongly flavored leaves and fruits. Its aniseed or liquorice flavor comes from anethole, an aromatic compound also found in anise and star anise, and its taste and aroma are similar to theirs, though usually not as strong. Florence fennel (Foeniculum vulgare Azoricum Group; syn. F. vulgare var. 
azoricum) is a cultivar group with inflated leaf bases which form a bulb-like structure. It is of cultivated origin, and has a mild anise-like flavor but is sweeter and more aromatic. Florence fennel plants are smaller than the wild type. Several cultivars of Florence fennel are also known by several other names, notably the Italian name finocchio. In North American supermarkets, it is often mislabeled as "anise." Foeniculum vulgare 'Purpureum' or 'Nigra', "bronze-leaved" fennel, is widely available as a decorative garden plant. Fennel has become naturalized along roadsides, in pastures, and in other open sites in many regions, including northern Europe, the United States, southern Canada, and much of Asia and Australia. It propagates well by both root crown and seed and is considered an invasive species and a weed in Australia and the United States. It can drastically alter the composition and structure of many plant communities, including grasslands, coastal scrub, riparian, and wetland communities. It appears to do this by outcompeting native species for light, nutrients, and water and perhaps by exuding allelopathic substances that inhibit the growth of other plants. In western North America, fennel can be found from the coastal and inland wildland-urban interface east into hill and mountain areas, excluding desert habitats. On Santa Cruz Island, California for example, fennel has achieved 50 to 90% absolute cover. Production As grouped by the United Nations Food and Agriculture Organization, production data for fennel are combined with similar spices – anise, star anise, and coriander. In 2014, India produced 60% of the world output of fennel, with China and Bulgaria as leading secondary producers. Uses Fennel was prized by the ancient Greeks and Romans, who used it as medicine, food, and insect repellent. Fennel tea was believed to give courage to the warriors before battle. According to Greek mythology, Prometheus used a giant stalk of fennel to carry fire from Mount Olympus to Earth. Emperor Charlemagne required the cultivation of fennel on all imperial farms. Florence fennel is one of the three main herbs used in the preparation of absinthe, an alcoholic mixture which originated as a medicinal elixir in Europe and became, by the late 19th century, a popular alcoholic drink in France and other countries. Fennel fruit is a common and traditional spice in flavored Scandinavian brännvin (a loosely defined group of distilled spirits, which include akvavit). Fennel is also featured in the Chinese Materia Medica for its medicinal functions. A 2016 study found F. vulgare essential oil to have insecticidal properties. Nutrition A raw fennel bulb is 90% water, 1% protein, 7% carbohydrates, and contains negligible fat. Dried fennel seeds are typically used as a spice in minute quantities. A reference amount of of fennel seeds provides of food energy and is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, B vitamins and several dietary minerals, especially calcium, iron, magnesium and manganese, all of which exceed 90% DV. Fennel seeds are 52% carbohydrates (including 40% dietary fiber), 15% fat, 16% protein, and 9% water. Cuisine The bulb, foliage, and fruits of the fennel plant are used in many of the culinary traditions of the world. The small flowers of wild fennel (known as fennel "pollen") are the most potent form of fennel, but also the most expensive. 
Dried fennel fruit is an aromatic, anise-flavored spice, brown or green when fresh, slowly turning a dull grey as the fruit ages. For cooking, green fruits are optimal. The leaves are delicately flavored and similar in shape to dill. The bulb is a crisp vegetable that can be sautéed, stewed, braised, grilled, or eaten raw. Tender young leaves are used for garnishes, as a salad, to add flavor to salads, to flavor sauces to be served with puddings, and in soups and fish sauce. Both the inflated leaf bases and the tender young shoots can be eaten like celery. Fennel fruits are sometimes confused with those of anise, which are similar in taste and appearance, though smaller. Fennel is also a flavoring in some natural toothpastes. The fruits are used in cookery and sweet desserts. Many cultures in India, Afghanistan, Iran, and the Middle East use fennel fruits in cooking. In Iraq, fennel seeds are used as an ingredient in nigella-flavored breads. It is one of the most important spices in Kashmiri cuisine and Gujarati cooking. In Indian cuisine, whole fennel seeds and fennel powder are used as a spice in various sweet and savory dishes. It is an essential ingredient in the Assamese/Bengali/Oriya spice mixture panch phoron and in Chinese five-spice powders. In many parts of India, roasted fennel fruits are consumed as mukhwas, an after-meal digestive and breath freshener (saunf), or candied as comfit. Fennel seeds are also often used as an ingredient in paan, a breath freshener most popularly consumed in India. In China, fennel stem and leaves are often ingredients in the stuffings of jiaozi, baozi, or pies, as well in cold dishes as a green vegetable. Fennel fruits are present in well-known mixed spices such as the five-spice powder or . Fennel leaves are used in some parts of India as leafy green vegetables either by themselves or mixed with other vegetables, cooked to be served and consumed as part of a meal. In Syria and Lebanon, the young leaves are used to make a special kind of egg omelette (along with onions and flour) called . Many egg, fish, and other dishes employ fresh or dried fennel leaves. Florence fennel is a key ingredient in some Italian salads, or it can be braised and served as a warm side dish. It may be blanched or marinated, or cooked in risotto. Fennel fruits are the primary flavor component in Italian sausage. In Spain, the stems of the fennel plant are used in the preparation of pickled eggplants, . A herbal tea or tisane can also be made from fennel. On account of its aromatic properties, fennel fruit forms one of the ingredients of the well-known compound liquorice powder. In the Indian subcontinent, fennel fruits are eaten raw, sometimes with a sweetener. Culture The Greek name for fennel is marathon () or marathos (), and the place of the famous battle of Marathon literally means a plain with fennel. The word is first attested in Mycenaean Linear B form as . In Hesiod's Theogony, Prometheus steals the ember of fire from the gods in a hollow fennel stalk. As Old English , fennel is one of the nine plants invoked in the pagan Anglo-Saxon Nine Herbs Charm, recorded in the 10th century. In the 15th century, Portuguese settlers on Madeira noticed the abundance of wild fennel and used the Portuguese word funcho (fennel) and the suffix to form the name of a new town, Funchal. 
Henry Wadsworth Longfellow's 1842 poem "The Goblet of Life" repeatedly refers to the plant and mentions its purported ability to strengthen eyesight:
Above the lower plants, it towers,
The Fennel with its yellow flowers;
And in an earlier age than ours
Was gifted with the wondrous powers
Lost vision to restore.
Biology and health sciences
Apiales
null
173652
https://en.wikipedia.org/wiki/Hemorrhoid
Hemorrhoid
Hemorrhoids (or haemorrhoids), also known as piles, are vascular structures in the anal canal. In their normal state, they are cushions that help with stool control. They become a disease when swollen or inflamed; the unqualified term hemorrhoid is often used to refer to the disease. The signs and symptoms of hemorrhoids depend on the type present. Internal hemorrhoids often result in painless, bright red rectal bleeding when defecating. External hemorrhoids often result in pain and swelling in the area of the anus. If bleeding occurs, it is usually darker. Symptoms frequently get better after a few days. A skin tag may remain after the healing of an external hemorrhoid. While the exact cause of hemorrhoids remains unknown, a number of factors that increase pressure in the abdomen are believed to be involved. This may include constipation, diarrhea, and sitting on the toilet for long periods. Hemorrhoids are also more common during pregnancy. Diagnosis is made by looking at the area. Many people incorrectly refer to any symptom occurring around the anal area as hemorrhoids, and serious causes of the symptoms should not be ruled out. Colonoscopy or sigmoidoscopy is reasonable to confirm the diagnosis and rule out more serious causes. Often, no specific treatment is needed. Initial measures consist of increasing fiber intake, drinking fluids to maintain hydration, NSAIDs to help with pain, and rest. Medicated creams may be applied to the area, but their effectiveness is poorly supported by evidence. A number of minor procedures may be performed if symptoms are severe or do not improve with conservative management. Hemorrhoidal artery embolization (HAE) is a safe and effective minimally invasive procedure that can be performed and is typically better tolerated than traditional therapies. Surgery is reserved for those who fail to improve following these measures. Approximately 50% to 66% of people have problems with hemorrhoids at some point in their lives. Males and females are both affected with about equal frequency. Hemorrhoids affect people most often between 45 and 65 years of age, and they are more common among the wealthy, although this may reflect differences in healthcare access rather than true prevalence. Outcomes are usually good. The first known mention of the disease is from a 1700 BC Egyptian papyrus. Signs and symptoms In about 40% of people with pathological hemorrhoids, there are no significant symptoms. Internal and external hemorrhoids may present differently; however, many people may have a combination of the two. Bleeding enough to cause anemia is rare, and life-threatening bleeding is even more uncommon. Many people feel embarrassed when facing the problem and often seek medical care only when the case is advanced. External If not thrombosed, external hemorrhoids may cause few problems. However, when thrombosed, hemorrhoids may be very painful. Nevertheless, this pain typically resolves in two to three days. The swelling may, however, take a few weeks to disappear. A skin tag may remain after healing. If hemorrhoids are large and cause issues with hygiene, they may produce irritation of the surrounding skin, and thus itchiness around the anus. Internal Internal hemorrhoids usually present with painless, bright red rectal bleeding during or following a bowel movement. The blood typically covers the stool (a condition known as hematochezia), is on the toilet paper, or drips into the toilet bowl. The stool itself is usually normally coloured. 
Other symptoms may include mucous discharge, a perianal mass if they prolapse through the anus, itchiness, and fecal incontinence. Internal hemorrhoids are usually painful only if they become thrombosed or necrotic. Causes The exact cause of symptomatic hemorrhoids is unknown. A number of factors are believed to play a role, including irregular bowel habits (constipation or diarrhea), lack of exercise, nutritional factors (low-fiber diets), increased intra-abdominal pressure (prolonged straining, ascites, an intra-abdominal mass, or pregnancy), genetics, an absence of valves within the hemorrhoidal veins, and aging. Other factors believed to increase risk include obesity, prolonged sitting, a chronic cough, and pelvic floor dysfunction. Squatting while defecating may also increase the risk of severe hemorrhoids. Evidence for these associations, however, is poor. Being a receptive partner in anal intercourse has been listed as a cause. During pregnancy, pressure from the fetus on the abdomen and hormonal changes cause the hemorrhoidal vessels to enlarge. The birth of the baby also leads to increased intra-abdominal pressures. Pregnant women rarely need surgical treatment, as symptoms usually resolve after delivery. Pathophysiology Hemorrhoid cushions are a part of normal human anatomy and become a pathological disease only when they experience abnormal changes. There are three main cushions present in the normal anal canal. These are located classically at left lateral, right anterior, and right posterior positions. They are composed of neither arteries nor veins, but blood vessels called sinusoids, connective tissue, and smooth muscle. Sinusoids do not have muscle tissue in their walls, as veins do. This set of blood vessels is known as the hemorrhoidal plexus. Hemorrhoid cushions are important for continence. They contribute to 15–20% of anal closure pressure at rest and protect the internal and external anal sphincter muscles during the passage of stool. When a person bears down, the intra-abdominal pressure grows, and hemorrhoid cushions increase in size, helping maintain anal closure. Hemorrhoid symptoms are believed to result when these vascular structures slide downwards or when venous pressure is excessively increased. Increased internal and external anal sphincter pressure may also be involved in hemorrhoid symptoms. Two types of hemorrhoids occur: internals from the superior hemorrhoidal plexus and externals from the inferior hemorrhoidal plexus. The pectinate line divides the two regions. Diagnosis Hemorrhoids are typically diagnosed by physical examination. A visual examination of the anus and surrounding area may diagnose external or prolapsed hemorrhoids. Visual confirmation of internal hemorrhoids, on the other hand, may require anoscopy, insertion of a hollow tube device with a light attached at one end. A digital rectal exam (DRE) can also be performed to detect possible rectal tumors, polyps, an enlarged prostate, or abscesses. This examination may not be possible without appropriate sedation because of pain, although most internal hemorrhoids are not associated with pain. If pain is present, the condition is more likely to be an anal fissure or external hemorrhoid rather than internal hemorrhoid. Internal Internal hemorrhoids originate above the pectinate line. They are covered by columnar epithelium, which lacks pain receptors. 
They were classified in 1985 into four grades based on the degree of prolapse: Grade I: No prolapse, just prominent blood vessels Grade II: Prolapse upon bearing down, but spontaneous reduction Grade III: Prolapse upon bearing down requiring manual reduction Grade IV: Prolapse with inability to be manually reduced. External External hemorrhoids occur below the dentate (or pectinate) line. They are covered proximally by anoderm and distally by skin, both of which are sensitive to pain and temperature. Differential Many anorectal problems, including fissures, fistulae, abscesses, colorectal cancer, rectal varices, and itching have similar symptoms and may be incorrectly referred to as hemorrhoids. Rectal bleeding may also occur owing to colorectal cancer, colitis including inflammatory bowel disease, diverticular disease, and angiodysplasia. If anemia is present, other potential causes should be considered. Other conditions that produce an anal mass include skin tags, anal warts, rectal prolapse, polyps, and enlarged anal papillae. Anorectal varices due to portal hypertension (blood pressure in the portal venous system) may present similar to hemorrhoids but are a different condition. Portal hypertension does not increase the risk of hemorrhoids. Prevention A number of preventative measures are recommended, including avoiding straining while attempting to defecate, avoiding constipation and diarrhea either by eating a high-fiber diet and drinking plenty of fluid or by taking fiber supplements and getting sufficient exercise. Spending less time attempting to defecate, avoiding reading while on the toilet, and losing weight for overweight persons and avoiding heavy lifting are also recommended. Management Conservative Conservative treatment typically consists of foods rich in dietary fiber, intake of oral fluids to maintain hydration, nonsteroidal anti-inflammatory drugs, sitz baths, and rest. Increased fiber intake has been shown to improve outcomes and may be achieved by dietary alterations or the consumption of fiber supplements. Evidence for benefits from sitz baths during any point in treatment, however, is lacking. If they are used, they should be limited to 15 minutes at a time. Decreasing time spent on the toilet and not straining is also recommended. While many topical agents and suppositories are available for the treatment of hemorrhoids, little evidence supports their use. As such, they are not recommended by the American Society of Colon and Rectal Surgeons. Steroid-containing agents should not be used for more than 14 days, as they may cause thinning of the skin. Most agents include a combination of active ingredients. These may include a barrier cream such as petroleum jelly or zinc oxide, an analgesic agent such as lidocaine, and a vasoconstrictor such as epinephrine. Some contain Balsam of Peru to which certain people may be allergic. Flavonoids are of questionable benefit, with potential side effects. Symptoms usually resolve following pregnancy; thus active treatment is often delayed until after delivery. Evidence does not support the use of traditional Chinese herbal treatment. The use of phlebotonics has been investigated in the treatment of low-grade hemorrhoids, although these drugs are not approved for such use in the United States or Germany. The use of phlebotonics for the treatment of chronic venous diseases is restricted in Spain. Procedures A number of office-based procedures may be performed. 
While these are generally safe, rare serious side effects such as perianal sepsis may occur. Rubber band ligation is typically recommended as the first-line treatment in those with grade I to III disease. It is a procedure in which elastic bands are applied onto an internal hemorrhoid at least 1 cm above the pectinate line to cut off its blood supply. Within 5–7 days, the withered hemorrhoid falls off. If the band is placed too close to the pectinate line, intense pain results immediately afterwards. The cure rate has been found to be about 87%, with a complication rate of up to 3%. Sclerotherapy involves the injection of a sclerosing agent, such as phenol, into the hemorrhoid. This causes the vein walls to collapse and the hemorrhoids to shrivel up. The success rate four years after treatment is about 70%. A number of cauterization methods have been shown to be effective for hemorrhoids, but are usually used only when other methods fail. Cauterization can be done using electrocautery, infrared radiation, laser surgery, or cryosurgery. Infrared cauterization may be an option for grade I or II disease. In those with grade III or IV disease, recurrence rates are high. Hemorrhoidal artery embolization (HAE) is an additional minimally invasive procedure performed by an interventional radiologist. HAE involves the blockage of abnormal blood flow to the rectal (hemorrhoidal) arteries using microcoils and/or microparticles to decrease the size of the hemorrhoids and improve hemorrhoid-related symptoms, especially bleeding. HAE is very effective at stopping bleeding-related symptoms, with a success rate of approximately 90%. Surgery A number of surgical techniques may be used if conservative management and simple procedures fail. All surgical treatments are associated with some degree of complications, including bleeding, infection, anal strictures, and urinary retention, due to the close proximity of the rectum to the nerves that supply the bladder. Also, a small risk of fecal incontinence occurs, particularly of liquid, with rates reported between 0% and 28%. Mucosal ectropion is another condition which may occur after hemorrhoidectomy (often together with anal stenosis). This is where the anal mucosa becomes everted from the anus, similar to a very mild form of rectal prolapse. Excisional hemorrhoidectomy is a surgical excision of the hemorrhoid used primarily in severe cases. It is associated with significant postoperative pain and usually requires two to four weeks for recovery. However, the long-term benefit is greater in those with grade III hemorrhoids as compared to rubber band ligation. It is the recommended treatment in those with a thrombosed external hemorrhoid if carried out within 24–72 hours. Evidence to support this is weak, however. Glyceryl trinitrate ointment after the procedure helps both with pain and with healing. Doppler-guided transanal hemorrhoidal dearterialization is a minimally invasive treatment using an ultrasound Doppler to accurately locate the arterial blood inflow. These arteries are then "tied off" and the prolapsed tissue is sutured back to its normal position. It has a slightly higher recurrence rate but fewer complications compared to a hemorrhoidectomy. Stapled hemorrhoidectomy, also known as stapled hemorrhoidopexy, involves the removal of much of the abnormally enlarged hemorrhoidal tissue, followed by a repositioning of the remaining hemorrhoidal tissue back to its normal anatomical position.
It is generally less painful and is associated with faster healing compared to complete removal of hemorrhoids. However, the chance of symptomatic hemorrhoids returning is greater than for conventional hemorrhoidectomy, so it is typically recommended only for grade II or III disease. Epidemiology It is difficult to determine how common hemorrhoids are as many people with the condition do not see a healthcare provider. However, symptomatic hemorrhoids are thought to affect at least 50% of the US population at some time during their lives, and around 5% of the population is affected at any given time. Both sexes experience about the same incidence of the condition, with rates peaking between 45 and 65 years. Some studies have found that they are common in people of higher socioeconomic status, however this may reflect differences in healthcare access rather than true prevalence. Long-term outcomes are generally good, though some people may have recurrent symptomatic episodes. Only a small proportion of persons end up needing surgery. History The first known mention of this disease is from a 1700 BC Egyptian papyrus, which advises: "Thou shouldest give a recipe, an ointment of great protection; acacia leaves, ground, titurated and cooked together. Smear a strip of fine linen there-with and place in the anus, that he recovers immediately." In 460 BC, the Hippocratic corpus discusses a treatment similar to modern rubber band ligation: "And hemorrhoids in like manner you may treat by transfixing them with a needle and tying them with very thick and woolen thread, for application, and do not foment until they drop off, and always leave one behind; and when the patient recovers, let him be put on a course of Hellebore." Hemorrhoids may have been described in the Bible, with earlier English translations using the now-obsolete spelling "emerods". Celsus (25 BC – 14 AD) described ligation and excision procedures and discussed the possible complications. Galen advocated severing the connection of the arteries to veins, claiming it reduced both pain and the spread of gangrene. The Susruta Samhita (4th–5th century BC) is similar to the words of Hippocrates, but emphasizes wound cleanliness. In the 13th century, European surgeons such as Lanfranc of Milan, Guy de Chauliac, Henri de Mondeville, and John of Ardene made great progress and development of the surgical techniques. In medieval times, hemorrhoids were also known as Saint Fiacre's curse after a sixth-century saint who developed them following tilling the soil. The first use of the word "hemorrhoid" in English occurs in 1398, derived from the Old French "emorroides", from Latin hæmorrhoida, in turn from the Greek αἱμορροΐς (haimorrhois), "liable to discharge blood", from αἷμα (haima), "blood" and ῥόος (rhoos), "stream, flow, current", itself from ῥέω (rheo), "to flow, to stream". Notable cases Hall-of-Fame baseball player George Brett was removed from a game in the 1980 World Series due to hemorrhoid pain. After undergoing minor surgery, Brett returned to play in the next game, quipping, "My problems are all behind me". Brett underwent further hemorrhoid surgery the following spring. Conservative political commentator Glenn Beck underwent surgery for hemorrhoids, subsequently describing his unpleasant experience in a widely viewed 2008 YouTube video. Former U.S. President Jimmy Carter had surgery for hemorrhoids in 1984. Cricketers Matthew Hayden and Viv Richards have suffered the condition. 
During World War II, US Army Lieutenant Colonel Harold Cohen was selected by General George S. Patton to organize a raid to rescue Patton's son-in-law from a German prison camp; Cohen was prevented from leading the raid due to hemorrhoids. Patton personally examined Cohen and remarked, "that is some sorry ass".
Biology and health sciences
Human anatomy
Health
173724
https://en.wikipedia.org/wiki/Hawking%20radiation
Hawking radiation
Hawking radiation is black body radiation released outside a black hole's event horizon due to quantum effects according to a model developed by Stephen Hawking in 1974. The radiation was not predicted by previous models which assumed that once electromagnetic radiation is inside the event horizon, it cannot escape. Hawking radiation is predicted to be extremely faint and is many orders of magnitude below the current best telescopes' detecting ability. Hawking radiation would reduce the mass and rotational energy of black holes and consequently cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. For all except the smallest black holes, this happens extremely slowly. The radiation temperature, called Hawking temperature, is inversely proportional to the black hole's mass, so micro black holes are predicted to be larger emitters of radiation than larger black holes and should dissipate faster per their mass. Consequently, if small black holes exist, as permitted by the hypothesis of primordial black holes, they ought to lose mass more rapidly as they shrink, leading to a final cataclysm of high energy radiation alone. Such radiation bursts have not yet been detected. Background Modern black holes were first predicted by Einstein's 1915 theory of general relativity. Evidence of the astrophysical objects termed black holes began to mount half a century later, and these objects are of current interest primarily because of their compact size and immense gravitational attraction. Early research into black holes was done by individuals such as Karl Schwarzschild and John Wheeler, who modeled black holes as having zero entropy. A black hole can form when enough matter or energy is compressed into a volume small enough that the escape velocity is greater than the speed of light. Because nothing can travel that fast, nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is the event horizon: an observer outside it cannot observe, become aware of, or be affected by events within the event horizon. Alternatively, using a set of infalling coordinates in general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travel through space faster than light, space itself can infall at any speed.) Once matter is inside the event horizon, all of the matter inside falls inevitably into a gravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter; a classical black hole is pure empty spacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon. Discovery In 1971 Soviet scientists Yakov Zeldovich and Alexei Starobinsky proposed that rotating black holes ought to create and emit particles, reasoning by analogy with electromagnetic spinning metal spheres. In 1972, Jacob Bekenstein developed a theory and reported that the black holes should have an entropy proportional to their surface area. Initially Stephen Hawking argued against Bekenstein's theory, viewing black holes as a simple object with no entropy. After meeting Zeldovich in Moscow in 1973, Hawking put these two ideas together using his mixture of quantum field theory and general relativity. 
In his 1974 paper Hawking showed that in theory, black holes radiate particles as if it were a blackbody. Particles escaping effectively drain energy from the black hole. Due to Bekenstein's contribution to black hole entropy, it is also known as Bekenstein–Hawking radiation. Hawking radiation derives from vacuum fluctuations. A quantum fluctuation in the electromagnetic field can result in a photon outside of the black hole horizon paired with one on the inside. The horizon allows one to escape in each direction. Emission process Hawking radiation is dependent on the Unruh effect and the equivalence principle applied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation. A Schwarzschild black hole has a metric The black hole is the background spacetime for a quantum field theory. The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position The local metric to lowest order is which is Rindler in terms of . The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration, , diverges as . The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local temperature which is the Unruh effect. The gravitational redshift is given by the square root of the time component of the metric. So for the field theory state to consistently extend, there must be a thermal background everywhere with the local temperature redshift-matched to the near horizon temperature: The inverse temperature redshifted to at infinity is and is the near-horizon position, near , so this is really Thus a field theory defined on a black-hole background is in a thermal state whose temperature at infinity is From the black-hole temperature, it is straightforward to calculate the black-hole entropy . The change in entropy when a quantity of heat is added is The heat energy that enters serves to increase the total mass, so So the entropy of a black hole is proportional to its surface area: where, since the radius of the black hole is twice its mass, we have that the area A is given by Assuming that a small black hole has zero entropy, the integration constant is zero. Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in space time. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface. Black hole evaporation When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation ). Consequently, an evaporating black hole will have a finite lifespan. 
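As a quick numerical illustration of the temperature and entropy results just described, the sketch below evaluates the standard closed-form expressions, the Hawking temperature T = ħc³/(8πGMk_B) and the Bekenstein–Hawking entropy S = k_Bc³A/(4Għ) with A the horizon area, for a one-solar-mass black hole. These formulas are quoted here as the standard results rather than taken from the derivation above, and the numerical values are illustrative only.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # Boltzmann constant, J/K

M_sun = 1.98892e30       # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B), inversely proportional to mass."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bekenstein_hawking_entropy(M):
    """Entropy S = k_B c^3 A / (4 G hbar), with horizon area A = 4 pi (2GM/c^2)^2."""
    r_s = 2 * G * M / c**2
    A = 4 * math.pi * r_s**2
    return k_B * c**3 * A / (4 * G * hbar)

M = M_sun
print(f"T_H  = {hawking_temperature(M):.3e} K")        # ~6e-8 K
print(f"S_BH = {bekenstein_hawking_entropy(M):.3e} J/K")  # ~1e54 J/K, i.e. ~1e77 in units of k_B
```

For a solar-mass black hole the temperature comes out of order 10⁻⁸ K, far below the temperature of the cosmic microwave background, which is one reason the radiation is unobservably faint for astrophysical black holes.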
By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass, and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 10¹² kg would have evaporated completely by the present day. In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass . The time for the event horizon or entropy of a black hole to halve is known as the Page time. The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could survive to the present day only if their initial mass were roughly or larger. Writing in 1976, Page, using the understanding of neutrinos at the time, worked on the erroneous assumption that neutrinos have no mass and that only two neutrino flavors exist; his results for black hole lifetimes therefore do not match modern results, which take into account three flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of . Some pre-1998 calculations, using outdated assumptions about neutrinos, were as follows: If black holes evaporate under Hawking radiation, a solar-mass black hole will evaporate over 10⁶⁴ years, which is vastly longer than the age of the universe. A supermassive black hole with a mass of 10¹¹ (100 billion) will evaporate in around . Some monster black holes in the universe are predicted to continue to grow up to perhaps 10¹⁴ during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 2 × 10¹⁰⁶ years. Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 10⁶⁷ years. The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass . Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived. The Hawking radiation temperature is: The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e. that no other particles are emitted) and under the assumption that the horizon is the radiating surface, is: where is the luminosity, i.e., the radiated power, is the reduced Planck constant, is the speed of light, is the gravitational constant and is the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework of semiclassical gravity. The time that the black hole takes to dissipate is: where and are the mass and (Schwarzschild) volume of the black hole, and are Planck mass and Planck time. A black hole of one solar mass ( = ) takes more than to evaporate, much longer than the current age of the universe at . But for a black hole of , the evaporation time is . This is why some astronomers are searching for signs of exploding primordial black holes. 
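A rough numerical check of the evaporation estimates above, assuming the standard photon-only expressions for a Schwarzschild black hole: luminosity P = ħc⁶/(15360πG²M²) and lifetime t = 5120πG²M³/(ħc⁴). The 10¹² kg case is included only as an order-of-magnitude illustration, since photon-only emission overestimates the lifetime relative to calculations that include additional particle species.

```python
import math

hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.98892e30        # solar mass, kg
YEAR  = 3.156e7           # seconds per year

def hawking_luminosity(M):
    """Photon-only Bekenstein-Hawking luminosity, P = hbar c^6 / (15360 pi G^2 M^2)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def evaporation_time(M):
    """Photon-only evaporation time, t = 5120 pi G^2 M^3 / (hbar c^4).

    Real black holes also radiate neutrinos, gravitons and (when hot enough)
    massive particles, so detailed calculations give somewhat shorter times.
    """
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

for M in (M_sun, 1e12):   # one solar mass; roughly Hawking's primordial-black-hole threshold
    print(f"M = {M:.3e} kg:  P = {hawking_luminosity(M):.2e} W, "
          f"t = {evaporation_time(M) / YEAR:.2e} yr")
# The solar-mass lifetime comes out near 1e67 years, consistent with the modern
# estimate quoted above.
```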
However, since the universe contains the cosmic microwave background radiation, in order for the black hole to dissipate, the black hole must have a temperature greater than that of the present-day blackbody radiation of the universe of 2.7 K. A study suggests that must be less than 0.8% of the mass of the Earth – approximately the mass of the Moon. Black hole evaporation has several significant consequences: Black hole evaporation produces a more consistent view of black hole thermodynamics by showing how black holes interact thermally with the rest of the universe. Unlike most objects, a black hole's temperature increases as it radiates away mass. The rate of temperature increase is exponential, with the most likely endpoint being the dissolution of the black hole in a violent burst of gamma rays. A complete description of this dissolution requires a model of quantum gravity, however, as it occurs when the black hole's mass approaches 1 Planck mass, its radius will also approach two Planck lengths. The simplest models of black hole evaporation lead to the black hole information paradox. The information content of a black hole appears to be lost when it dissipates, as under these models the Hawking radiation is random (it has no relation to the original information). A number of solutions to this problem have been proposed, including suggestions that Hawking radiation is perturbed to contain the missing information, that the Hawking evaporation leaves some form of remnant particle containing the missing information, and that information is allowed to be lost under these conditions. Problems and extensions Trans-Planckian problem The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength. The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates that are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration that constantly Doppler shifts the modes. An outgoing photon of Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed. The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. 
The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing. The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon. There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial. Large extra dimensions The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~), they result in impossible lifetimes below the Planck time (~). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole. In a model with large extra dimensions (10 or 11), the values of Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well. In particular, the lifetime of a micro black hole with a radius below the scale of the extra dimensions is given by equation 9 in Cheung (2002) and equations 25 and 26 in Carr (2005). where is the low-energy scale, which could be as low as a few TeV, and is the number of large extra dimensions. This formula is now consistent with black holes as light as a few TeV, with lifetimes on the order of the "new Planck time" ~. In loop quantum gravity A detailed study of the quantum geometry of a black hole event horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking, unless the value of a free parameter is set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. However, quantum gravitational corrections to the entropy and radiation of black holes have been computed based on the theory. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking radiation spectrum that would be observable were X-rays from Hawking radiation of evaporating primordial black holes to be observed. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking spectrum. 
Experimental observation Astronomical search In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. As of 1 January 2024, none had been detected. Heavy-ion collider physics If speculative large extra dimension theories are correct, then CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation. No such micro black hole has been observed at CERN. Experimental Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy using sonic black holes, in which sound perturbations are analogous to light in a gravitational black hole and the flow of an approximately perfect fluid is analogous to gravity (see Analog models of gravity). Observations of Hawking radiation have been reported in sonic black holes employing Bose–Einstein condensates. In September 2010 an experimental set-up created a laboratory "white hole event horizon" that the experimenters claimed was shown to radiate an optical analog to Hawking radiation. However, the results remain unverified and debated, and their status as a genuine confirmation of Hawking radiation remains in doubt.
Physical sciences
Theory of relativity
Physics
173773
https://en.wikipedia.org/wiki/Cataclysmic%20variable%20star
Cataclysmic variable star
In astronomy, cataclysmic variable stars (CVs) are stars which irregularly increase in brightness by a large factor, then drop back down to a quiescent state. They were initially called novae (), since ones with an outburst brightness visible to the naked eye and an invisible quiescent brightness appeared as new stars in the sky. Cataclysmic variable stars are binary stars that consist of two components; a white dwarf primary, and a mass transferring secondary. The stars are so close to each other that the gravity of the white dwarf distorts the secondary, and the white dwarf accretes matter from the companion. Therefore, the secondary is often referred to as the donor star, and it is usually less massive than the primary. The infalling matter, which is usually rich in hydrogen, forms in most cases an accretion disk around the white dwarf. Strong UV and X-ray emission is often detected from the accretion disc, powered by the loss of gravitational potential energy from the infalling material. The shortest currently observed orbit in a hydrogen-rich system is 51 minutes in ZTF J1813+4251. Material at the inner edge of disc falls onto the surface of the white dwarf primary. A classical nova outburst occurs when the density and temperature at the bottom of the accumulated hydrogen layer rise high enough to ignite runaway hydrogen fusion reactions, which rapidly convert the hydrogen layer to helium. If the accretion process continues long enough to bring the white dwarf close to the Chandrasekhar limit, the increasing interior density may ignite runaway carbon fusion and trigger a Type Ia supernova explosion, which would completely destroy the white dwarf. The accretion disc may be prone to an instability leading to dwarf nova outbursts, when the outer portion of the disc changes from a cool, dull mode to a hotter, brighter mode for a time, before reverting to the cool mode. Dwarf novae can recur on a timescale of days to decades. Classification Cataclysmic variables are subdivided into several smaller groups, often named after a bright prototype star characteristic of the class. In some cases the magnetic field of the white dwarf is strong enough to disrupt the inner accretion disk or even prevent disk formation altogether. Magnetic systems often show strong and variable polarization in their optical light, and are therefore sometimes called polars; these often exhibit small-amplitude brightness fluctuations at what is presumed to be the white dwarf's period of rotation. There are over 1600 known CV systems. The catalog was frozen as of 1 February 2006 though more are discovered each year. Discovery Cataclysmic variables are among the classes of astronomical objects most commonly found by amateurs, since a cataclysmic variable in its outburst phase is bright enough to be detectable with very modest instruments, and the only celestial objects easily confused with them are bright asteroids whose movement from night to night is clear. Verifying that an object is a cataclysmic variable is also fairly straightforward: they are usually quite blue objects, they exhibit rapid and strong variability, and they tend to have peculiar emission lines. They emit in the ultraviolet and X-ray ranges; they are expected also to emit gamma rays, from annihilation of positrons from proton-rich nuclei produced in the fusion explosion, but this has not yet been detected. Around six galactic novae (i.e. 
in our own galaxy) are discovered each year, whilst models based on observations in other galaxies suggest that the rate of occurrence ought to be between 20 and 50; this discrepancy is due partly to obscuration by interstellar dust, and partly to a lack of observers in the southern hemisphere and to the difficulties of observing while the Sun is up or the Moon is full. Superhumps Some cataclysmic variables experience periodic brightenings caused by deformations of the accretion disk when its rotation is in resonance with the orbital period of the binary.
Physical sciences
Stellar astronomy
Astronomy
173844
https://en.wikipedia.org/wiki/Transpose
Transpose
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix by producing another matrix, often denoted by (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation RT. Transpose of a matrix Definition The transpose of a matrix , denoted by , , , , , , or , may be constructed by any one of the following methods: Reflect over its main diagonal (which runs from top-left to bottom-right) to obtain Write the rows of as the columns of Write the columns of as the rows of Formally, the -th row, -th column element of is the -th row, -th column element of : If is an matrix, then is an matrix. In the case of square matrices, may also denote the th power of the matrix . For avoiding a possible confusion, many authors use left upperscripts, that is, they denote the transpose as . An advantage of this notation is that no parentheses are needed when exponents are involved: as , notation is not ambiguous. In this article, this confusion is avoided by never using the symbol as a variable name. Matrix definitions involving transposition A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, is symmetric if A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, is skew-symmetric if A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, is Hermitian if A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, is skew-Hermitian if A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, is orthogonal if A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, is unitary if Examples Properties Let and be matrices and be a scalar. The operation of taking the transpose is an involution (self-inverse). The transpose respects addition. The transpose of a scalar is the same scalar. Together with the preceding property, this implies that the transpose is a linear map from the space of matrices to the space of the matrices. The order of the factors reverses. By induction, this result extends to the general case of multiple matrices, so . The determinant of a square matrix is the same as the determinant of its transpose. The dot product of two column vectors and can be computed as the single entry of the matrix product If has only real entries, then is a positive-semidefinite matrix. The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix.The notation is sometimes used to represent either of these equivalent expressions. If is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose, since they share the same characteristic polynomial. for two column vectors and the standard dot product. Over any field , a square matrix is similar to . This implies that and have the same invariant factors, which implies they share the same minimal polynomial, characteristic polynomial, and eigenvalues, among other properties. 
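The algebraic identities listed above are easy to verify numerically; the sketch below does so with NumPy for small random matrices (the matrices themselves are arbitrary examples, not data from the article), and a proof of the last, similarity-related property follows after the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (A^T)^T = A  (transposition is an involution)
assert np.allclose(A.T.T, A)

# (A + B)^T = A^T + B^T  and  (AB)^T = B^T A^T  (order of factors reverses)
assert np.allclose((A + B).T, A.T + B.T)
assert np.allclose((A @ B).T, B.T @ A.T)

# det(A^T) = det(A); A and A^T share the same eigenvalues
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
assert np.allclose(np.sort_complex(np.linalg.eigvals(A.T)),
                   np.sort_complex(np.linalg.eigvals(A)))

# For real A, A^T A is symmetric and positive semidefinite
gram = A.T @ A
assert np.allclose(gram, gram.T)
assert np.all(np.linalg.eigvalsh(gram) >= -1e-12)

print("all transpose identities verified")
```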
A proof of this property uses the following two observations. Let and be matrices over some base field and let be a field extension of . If and are similar as matrices over , then they are similar over . In particular this applies when is the algebraic closure of . If is a matrix over an algebraically closed field in Jordan normal form with respect to some basis, then is similar to . This further reduces to proving the same fact when is a single Jordan block, which is a straightforward exercise. Products If is an matrix and is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: is and is . Furthermore, these products are symmetric matrices. Indeed, the matrix product has entries that are the inner product of a row of with a column of . But the columns of are the rows of , so the entry corresponds to the inner product of two rows of . If is the entry of the product, it is obtained from rows and in . The entry is also obtained from these rows, thus , and the product matrix () is symmetric. Similarly, the product is a symmetric matrix. A quick proof of the symmetry of results from the fact that it is its own transpose: Implementation of matrix transposition on computers On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement. However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed. Transposes of linear maps and bilinear forms As the main use of matrices is to represent linear maps between finite-dimensional vector spaces, the transpose is an operation on matrices that may be seen as the representation of some operation on linear maps. This leads to a much more general definition of the transpose that works on every linear map, even when linear maps cannot be represented by matrices (such as in the case of infinite dimensional vector spaces). In the finite dimensional case, the matrix representing the transpose of a linear map is the transpose of the matrix representing the linear map, independently of the basis choice. Transpose of a linear map Let denote the algebraic dual space of an -module . Let and be -modules. If is a linear map, then its algebraic adjoint or dual, is the map defined by . The resulting functional is called the pullback of by . 
The following relation characterizes the algebraic adjoint of for all and where is the natural pairing (i.e. defined by ). This definition also applies unchanged to left modules and to vector spaces. The definition of the transpose may be seen to be independent of any bilinear form on the modules, unlike the adjoint (below). The continuous dual space of a topological vector space (TVS) is denoted by . If and are TVSs then a linear map is weakly continuous if and only if , in which case we let denote the restriction of to . The map is called the transpose of . If the matrix describes a linear map with respect to bases of and , then the matrix describes the transpose of that linear map with respect to the dual bases. Transpose of a bilinear form Every linear map to the dual space defines a bilinear form , with the relation . By defining the transpose of this bilinear form as the bilinear form defined by the transpose i.e. , we find that . Here, is the natural homomorphism into the double dual. Adjoint If the vector spaces and have respectively nondegenerate bilinear forms and , a concept known as the adjoint, which is closely related to the transpose, may be defined: If is a linear map between vector spaces and , we define as the adjoint of if satisfies for all and . These bilinear forms define an isomorphism between and , and between and , resulting in an isomorphism between the transpose and adjoint of . The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, many authors however, use the term transpose to refer to the adjoint as defined here. The adjoint allows us to consider whether is equal to . In particular, this allows the orthogonal group over a vector space with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps for which the adjoint equals the inverse. Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The Hermitian adjoint of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal.
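Returning to the implementation discussion above: for a square matrix an in-place transpose is just a sequence of element swaps, while a blocked (tiled) traversal is one common way to improve memory locality for large out-of-place transposes. The sketch below illustrates both; it is a minimal example, not a reference implementation of BLAS or of any published in-place algorithm for the rectangular case.

```python
def transpose_square_inplace(a):
    """Transpose a square matrix stored as a list of lists, using O(1) extra storage.

    Only the square case is this simple; an in-place transpose of an n x m
    matrix with n != m requires following permutation cycles instead.
    """
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j], a[j][i] = a[j][i], a[i][j]
    return a

def transpose_blocked(a, block=64):
    """Out-of-place transpose that visits the matrix in square tiles.

    Processing one tile at a time keeps both the source rows and the
    destination rows in cache, which tends to help for large matrices
    stored in row-major order.
    """
    n, m = len(a), len(a[0])
    out = [[0] * n for _ in range(m)]
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for i in range(i0, min(i0 + block, n)):
                for j in range(j0, min(j0 + block, m)):
                    out[j][i] = a[i][j]
    return out

print(transpose_square_inplace([[1, 2], [3, 4]]))          # [[1, 3], [2, 4]]
print(transpose_blocked([[1, 2, 3], [4, 5, 6]], block=2))  # [[1, 4], [2, 5], [3, 6]]
```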
Mathematics
Linear algebra
null
173857
https://en.wikipedia.org/wiki/Harvard%20Mark%20I
Harvard Mark I
The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator (ASCC), was one of the earliest general-purpose electromechanical computers used in the war effort during the last part of World War II. One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his analytical engine in 1837. According to Edmund Berkeley, the operators of the Mark I often called the machine "Bessy, the Bessel engine", after Bessel functions. The Mark I was disassembled in 1959; part of it was given to IBM, part went to the Smithsonian Institution, and part entered the Harvard Collection of Historical Scientific Instruments. For decades, Harvard's portion was on display in the lobby of the Aiken Computation Lab. About 1997, it was moved to the Harvard Science Center. In 2021, it was moved again, to the lobby of Harvard's new Science and Engineering Complex in Allston, Massachusetts. Origins The original concept was presented to IBM by Howard Aiken in November 1937. After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939. Howard Aiken had started to look for a company to design and build his calculator in early 1937. After two rejections, he was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier. This led him to study Babbage and to add references to the Analytical Engine to his proposal; the resulting machine "brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features." The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the US Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944. Although not the first working computer, the machine was the first to automate the execution of complex calculations, making it a significant step forward for computing. Design and construction The ASCC was built from switches, relays, rotating shafts, and clutches. It used 765,000 electromechanical components and hundreds of miles of wire, comprising a volume of – in length, in height, and deep. It weighed about . The basic calculating units had to be synchronized and powered mechanically, so they were operated by a drive shaft coupled to a electric motor, which served as the main power source and system clock. From the IBM Archives: The Automatic Sequence Controlled Calculator (Harvard Mark I) was the first operating machine that could execute long computations automatically. A project conceived by Harvard University’s Dr. Howard Aiken, the Mark I was built by IBM engineers in Endicott, N.Y. A steel frame 51 feet long and 8 feet high held the calculator, which consisted of an interlocking panel of small gears, counters, switches and control circuits, all only a few inches in depth. The ASCC used of wire with three million connections, 3,500 multipole relays with 35,000 contacts, 2,225 counters, 1,464 tenpole switches and tiers of 72 adding machines, each with 23 significant numbers. It was the industry’s largest electromechanical calculator. 
The enclosure for the Mark I was designed by futuristic American industrial designer Norman Bel Geddes at IBM's expense. Aiken was annoyed that the cost ($50,000 or more according to Grace Hopper) was not used to build additional computer equipment. Operation The Mark I had 60 sets of 24 switches for manual data entry and could store 72 numbers, each 23 decimal digits long. It could do 3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute. The Mark I read its instructions from a 24-channel punched paper tape. It executed the current instruction and then read the next one. A separate tape could contain numbers for input, but the tape formats were not interchangeable. Instructions could not be executed from the storage registers. Because instructions were not stored in working memory, it is widely claimed that the Harvard Mark I was the origin of the Harvard architecture. However, this is disputed in The Myth of the Harvard Architecture published in the IEEE Annals of the History of Computing, which shows the term 'Harvard architecture' did not come into use until the 1970s (in the context of microcontrollers) and was only retrospectively applied to the Harvard machines, and that the term could only be applied to the Mark III and IV, not to the Mark I or II. The main sequence mechanism was unidirectional. This meant that complex programs had to be physically lengthy. A program loop was accomplished by loop unrolling or by joining the end of the paper tape containing the program back to the beginning of the tape (literally creating a loop). At first, conditional branching in Mark I was performed manually. Later modifications in 1946 introduced automatic program branching (by subroutine call). The first programmers of the Mark I were computing pioneers Richard Milton Bloch, Robert Campbell, and Grace Hopper. There was also a small technical team whose assignment was to actually operate the machine; some had been IBM employees before being required to join the Navy to work on the machine. This technical team was not informed of the overall purpose of their work while at Harvard. Instruction format The 24 channels of the input tape were divided into three fields of eight channels each. Each storage location, each set of switches, and the registers associated with the input, output, and arithmetic units were assigned a unique identifying index number. These numbers were represented in binary on the control tape. The first field was the binary index of the result of the operation, the second was the source datum for the operation and the third field was a code for the operation to be performed. Contribution to the Manhattan Project In 1928 L.J. Comrie was the first to turn IBM "punched-card equipment to scientific use: computation of astronomical tables by the method of finite differences, as envisioned by Babbage 100 years earlier for his Difference Engine". Very soon after, IBM started to modify its tabulators to facilitate this kind of computation. One of these tabulators, built in 1931, was The Columbia Difference Tabulator. John von Neumann had a team at Los Alamos that used "modified IBM punched-card machines" to determine the effects of the implosion. In March 1944, he proposed to run certain problems regarding implosion of the Mark I, and in 1944 he arrived with two mathematicians to write a simulation program to study the implosion of the first atomic bomb. 
The Los Alamos group completed its work in a much shorter time than the Cambridge group. However, the punched-card machine operation computed values to six decimal places, whereas the Mark I computed values to eighteen decimal places. Additionally, Mark I integrated the partial differential equation at a much smaller interval size [or smaller mesh] and so...achieved far greater precision. "Von Neumann joined the Manhattan Project in 1943, working on the immense number of calculations needed to build the atomic bomb. He showed that the implosion design, which would later be used in the Trinity and Fat Man bombs, was likely faster and more efficient than the gun design." Aiken and IBM Aiken published a press release announcing the Mark I listing himself as the sole inventor. James W. Bryce was the only IBM person mentioned, even though several IBM engineers including Clair Lake and Frank Hamilton had helped to build various elements. IBM chairman Thomas J. Watson was enraged, and only reluctantly attended the dedication ceremony on August 7, 1944. Aiken, in turn, decided to build further machines without IBM's help, and the ASCC came to be generally known as the "Harvard Mark I". IBM went on to build its Selective Sequence Electronic Calculator (SSEC) to both test new technology and provide more publicity for the company's efforts. Successors The Mark I was followed by the Harvard Mark II (1947 or 1948), Mark III/ADEC (September 1949), and Harvard Mark IV (1952) – all the work of Aiken. The Mark II was an improvement over the Mark I, although it still was based on electromechanical relays. The Mark III used mostly electronic components—vacuum tubes and crystal diodes—but also included mechanical components: rotating magnetic drums for storage, plus relays for transferring data between drums. The Mark IV was all-electronic, replacing the remaining mechanical components with magnetic core memory. The Mark II and Mark III were delivered to the US Navy base at Dahlgren, Virginia. The Mark IV was built for the US Air Force, but it stayed at Harvard. The Mark I was disassembled in 1959, and portions of it went on display in the Science Center, as part of the Harvard Collection of Historical Scientific Instruments. It was relocated to the new Science and Engineering Complex in Allston in July 2021. Other sections of the original machine had much earlier been transferred to IBM and the Smithsonian Institution.
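The three-field instruction layout described in the Instruction format section above can be illustrated with a small decoder. The field order (result index, source index, operation code) follows that description, but the particular bit ordering and the sample word below are assumptions made for the sake of the example, not the machine's documented tape encoding.

```python
def decode_mark_i_instruction(word: int):
    """Split a 24-bit tape word into the three 8-channel fields described above.

    Field meaning follows the text (result index, source index, operation code);
    the high-to-low bit ordering chosen here is an illustrative assumption.
    """
    assert 0 <= word < 2**24
    result_index = (word >> 16) & 0xFF   # first 8-channel field
    source_index = (word >> 8) & 0xFF    # second 8-channel field
    op_code      = word & 0xFF           # third 8-channel field
    return result_index, source_index, op_code

print(decode_mark_i_instruction(0x0A0B0C))   # -> (10, 11, 12)
```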
Technology
Early computers
null
173900
https://en.wikipedia.org/wiki/Epicenter
Epicenter
The epicenter (), epicentre, or epicentrum in seismology is the point on the Earth's surface directly above a hypocenter or focus, the point where an earthquake or an underground explosion originates. Determination The primary purpose of a seismometer is to locate the initiating points of earthquake epicenters. The secondary purpose, of determining the 'size' or magnitude must be calculated after the precise location is known. The earliest seismographs were designed to give a sense of the direction of the first motions from an earthquake. The Chinese frog seismograph would have dropped its ball in the general compass direction of the earthquake, assuming a strong positive pulse. We now know that first motions can be in almost any direction depending on the type of initiating rupture (focal mechanism). The first refinement that allowed a more precise determination of the location was the use of a time scale. Instead of merely noting, or recording, the absolute motions of a pendulum, the displacements were plotted on a moving graph, driven by a clock mechanism. This was the first seismogram, which allowed precise timing of the first ground motion, and an accurate plot of subsequent motions. From the first seismograms, as seen in the figure, it was noticed that the trace was divided into two major portions. The first seismic wave to arrive was the P wave, followed closely by the S wave. Knowing the relative 'velocities of propagation', it was a simple matter to calculate the distance of the earthquake. One seismograph would give the distance, but that could be plotted as a circle, with an infinite number of possibilities. Two seismographs would give two intersecting circles, with two possible locations. Only with a third seismograph would there be a precise location. Modern earthquake location still requires a minimum of three seismometers. Most likely, there are many, forming a seismic array. The emphasis is on precision since much can be learned about the fault mechanics and seismic hazard, if the locations can be determined to be within a kilometer or two, for small earthquakes. For this, computer programs use an iterative process, involving a 'guess and correction' algorithm. As well, a very good model of the local crustal velocity structure is required: seismic velocities vary with the local geology. For P waves, the relation between velocity and bulk density of the medium has been quantified in Gardner's relation. Surface damage Before the instrumental period of earthquake observation, the epicenter was thought to be the location where the greatest damage occurred, but the subsurface fault rupture may be long and spread surface damage across the entire rupture zone. As an example, in the magnitude 7.9 Denali earthquake of 2002 in Alaska, the epicenter was at the western end of the rupture, but the greatest damage was about away at the eastern end. Focal depths of earthquakes occurring in continental crust mostly range from . Continental earthquakes below are rare whereas in subduction zone earthquakes can originate at depths deeper than . Epicentral distance During an earthquake, seismic waves propagates in all directions from the hypocenter. Seismic shadowing occurs on the opposite side of the Earth from the earthquake epicenter because the planet's liquid outer core refracts the longitudinal or compressional (P waves) while it absorbs the transverse or shear waves (S waves). 
Outside the seismic shadow zone, both types of wave can be detected, but because of their different velocities and paths through the Earth, they arrive at different times. By measuring the time difference on any seismograph and the distance on a travel-time graph on which the P wave and S wave have the same separation, geologists can calculate the distance to the quake's epicenter. This distance is called the epicentral distance, commonly measured in ° (degrees) and denoted as Δ (delta) in seismology. The Láska's empirical rule provides an approximation of epicentral distance in the range of 2,000−10,000 km. Once distances from the epicenter have been calculated from at least three seismographic measuring stations, the point can be located, using trilateration. Epicentral distance is also used in calculating seismic magnitudes as developed by Richter and Gutenberg. Fault rupture The point at which fault slipping begins is referred to as the focus of the earthquake. The fault rupture begins at the focus and then expands along the fault surface. The rupture stops where the stresses become insufficient to continue breaking the fault (because the rocks are stronger) or where the rupture enters ductile material. The magnitude of an earthquake is related to the total area of its fault rupture. Most earthquakes are small, with rupture dimensions less than the depth of the focus so the rupture doesn't break the surface, but in high magnitude, destructive earthquakes, surface breaks are common. Fault ruptures in large earthquakes can extend for more than . When a fault ruptures unilaterally (with the epicenter at or near the end of the fault break) the waves are stronger in one direction along the fault. Macroseismic epicenter The macroseismic epicenter is the best estimate of the location of the epicenter derived without instrumental data. This may be estimated using intensity data, information about foreshocks and aftershocks, knowledge of local fault systems or extrapolations from data regarding similar earthquakes. For historical earthquakes that have not been instrumentally recorded, only a macroseismic epicenter can be given. Etymology The word is derived from the Neo-Latin noun epicentrum, the latinisation of the ancient Greek adjective ἐπίκεντρος (), "occupying a cardinal point, situated on a centre", from ἐπί (epi) "on, upon, at" and κέντρον (kentron) "centre". The term was coined by Irish seismologist Robert Mallet. It is also used to mean "center of activity", as in "Travel is restricted in the Chinese province thought to be the epicentre of the SARS outbreak." Garner's Modern American Usage gives several examples of use in which "epicenter" is used to mean "center". Garner also refers to a William Safire article in which Safire quotes a geophysicist as attributing the use of the term to "spurious erudition on the part of writers combined with scientific illiteracy on the part of copy editors". Garner has speculated that these misuses may just be "metaphorical descriptions of focal points of unstable and potentially destructive environments."
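The distance-then-trilateration procedure described in the Determination and Epicentral distance sections can be sketched numerically as below. The wave speeds, station coordinates, and S−P lags are illustrative assumptions; real epicenter location uses a local crustal velocity model and an iterative solver rather than this simplified linearization.

```python
import numpy as np

# Assumed average crustal wave speeds (illustrative values only).
VP, VS = 6.0, 3.5   # km/s

def distance_from_sp_lag(delta_t):
    """Epicentral distance from the S-minus-P arrival-time difference.

    d/VS - d/VP = delta_t  =>  d = delta_t * VP*VS / (VP - VS)
    """
    return delta_t * VP * VS / (VP - VS)

def trilaterate(stations, distances):
    """Least-squares epicenter from three or more stations (x, y in km).

    Subtracting the first circle equation from the others gives a linear system
    in the unknown epicenter coordinates.
    """
    (x0, y0), d0 = stations[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(stations[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol   # (x, y) of the estimated epicenter

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
lags = [8.3, 6.3, 8.9]                              # S-P lags in seconds (made-up data)
dists = [distance_from_sp_lag(t) for t in lags]
print(trilaterate(stations, dists))                 # recovers roughly (60, 35)
```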
Physical sciences
Seismology
Earth science
173961
https://en.wikipedia.org/wiki/Center%20of%20mass
Center of mass
In physics, the center of mass of a distribution of mass in space (sometimes referred to as the barycenter or balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. For a rigid body containing its center of mass, this is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion.l In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system. The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass (see Barycenter (astronomy) for details). The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system. History The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what we now call the center of mass. Archimedes showed that the torque exerted on a lever by weights resting at various points along the lever is the same as what it would be if all of the weights were moved to a single point—their center of mass. In his work On Floating Bodies, Archimedes demonstrated that the orientation of a floating object is the one that makes its center of mass as low as possible. He developed mathematical techniques for finding the centers of mass of objects of uniform density of various well-defined shapes. Other ancient mathematicians who contributed to the theory of the center of mass include Hero of Alexandria and Pappus of Alexandria. In the Renaissance and Early Modern periods, work by Guido Ubaldi, Francesco Maurolico, Federico Commandino, Evangelista Torricelli, Simon Stevin, Luca Valerio, Jean-Charles de la Faille, Paul Guldin, John Wallis, Christiaan Huygens, Louis Carré, Pierre Varignon, and Alexis Clairaut expanded the concept further. Newton's second law is reformulated with respect to the center of mass in Euler's first law. Definition The center of mass is the unique point at the center of a distribution of mass in space that has the property that the weighted position vectors relative to this point sum to zero. In analogy to statistics, the center of mass is the mean location of a distribution of mass in space. 
A system of particles In the case of a system of particles , each with mass that are located in space with coordinates , the coordinates R of the center of mass satisfy Solving this equation for R yields the formula A continuous volume If the mass distribution is continuous with the density ρ(r) within a solid Q, then the integral of the weighted position coordinates of the points in this volume relative to the center of mass R over the volume V is zero, that is Solve this equation for the coordinates R to obtain Where M is the total mass in the volume. If a continuous mass distribution has uniform density, which means that ρ is constant, then the center of mass is the same as the centroid of the volume. Barycentric coordinates The coordinates R of the center of mass of a two-particle system, P1 and P2, with masses m1 and m2 is given by Let the percentage of the total mass divided between these two particles vary from 100% P1 and 0% P2 through 50% P1 and 50% P2 to 0% P1 and 100% P2, then the center of mass R moves along the line from P1 to P2. The percentages of mass at each point can be viewed as projective coordinates of the point R on this line, and are termed barycentric coordinates. Another way of interpreting the process here is the mechanical balancing of moments about an arbitrary point. The numerator gives the total moment that is then balanced by an equivalent total force at the center of mass. This can be generalized to three points and four points to define projective coordinates in the plane, and in space, respectively. Systems with periodic boundary conditions For particles in a system with periodic boundary conditions two particles can be neighbours even though they are on opposite sides of the system. This occurs often in molecular dynamics simulations, for example, in which clusters form at random locations and sometimes neighbouring atoms cross the periodic boundary. When a cluster straddles the periodic boundary, a naive calculation of the center of mass will incorrect. A generalized method for calculating the center of mass for periodic systems is to treat each coordinate, x and y and/or z, as if it were on a circle instead of a line. The calculation takes every particle's x coordinate and maps it to an angle, where xmax is the system size in the x direction and . From this angle, two new points can be generated, which can be weighted by the mass of the particle for the center of mass or given a value of 1 for the geometric center: In the plane, these coordinates lie on a circle of radius 1. From the collection of and values from all the particles, the averages and are calculated. where is the sum of the masses of all of the particles. These values are mapped back into a new angle, , from which the x coordinate of the center of mass can be obtained: The process can be repeated for all dimensions of the system to determine the complete center of mass. The utility of the algorithm is that it allows the mathematics to determine where the "best" center of mass is, instead of guessing or using cluster analysis to "unfold" a cluster straddling the periodic boundaries. If both average values are zero, , then is undefined. This is a correct result, because it only occurs when all particles are exactly evenly spaced. In that condition, their x coordinates are mathematically identical in a periodic system. Center of gravity A body's center of gravity is the point around which the resultant torque due to gravity forces vanishes. 
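A minimal sketch of the circular-mapping procedure just described, for a single periodic coordinate; the particle positions, masses, and box size in the example are arbitrary.

```python
import math

def periodic_com_1d(xs, masses, x_max):
    """Mass-weighted center of mass along one periodic coordinate.

    Each coordinate is mapped onto a circle of radius 1, the mass-weighted
    average point inside that circle is found, and its angle is mapped back
    to a coordinate, so clusters straddling the boundary are handled correctly.
    """
    M = sum(masses)
    xi = sum(m * math.cos(2 * math.pi * x / x_max) for x, m in zip(xs, masses)) / M
    zeta = sum(m * math.sin(2 * math.pi * x / x_max) for x, m in zip(xs, masses)) / M
    if xi == 0 and zeta == 0:
        raise ValueError("particles are exactly evenly spaced; center of mass undefined")
    theta = math.atan2(-zeta, -xi) + math.pi
    return x_max * theta / (2 * math.pi)

# A cluster straddling the boundary of a box of size 10:
print(periodic_com_1d([9.5, 9.8, 0.1, 0.4], [1, 1, 1, 1], 10.0))   # ~9.95, not the naive ~4.95
```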
Where a gravity field can be considered to be uniform, the mass-center and the center-of-gravity will be the same. However, for satellites in orbit around a planet, in the absence of other torques being applied to a satellite, the slight variation (gradient) in gravitational field between closer-to and further-from the planet (stronger and weaker gravity respectively) can lead to a torque that will tend to align the satellite such that its long axis is vertical. In such a case, it is important to make the distinction between the center-of-gravity and the mass-center. Any horizontal offset between the two will result in an applied torque. The mass-center is a fixed property for a given rigid body (e.g. with no slosh or articulation), whereas the center-of-gravity may, in addition, depend upon its orientation in a non-uniform gravitational field. In the latter case, the center-of-gravity will always be located somewhat closer to the main attractive body as compared to the mass-center, and thus will change its position in the body of interest as its orientation is changed. In the study of the dynamics of aircraft, vehicles and vessels, forces and moments need to be resolved relative to the mass center. That is true independent of whether gravity itself is a consideration. Referring to the mass-center as the center-of-gravity is something of a colloquialism, but it is in common usage and when gravity gradient effects are negligible, center-of-gravity and mass-center are the same and are used interchangeably. In physics the benefits of using the center of mass to model a mass distribution can be seen by considering the resultant of the gravity forces on a continuous body. Consider a body Q of volume V with density ρ(r) at each point r in the volume. In a parallel gravity field the force f at each point r is given by, where dm is the mass at the point r, g is the acceleration of gravity, and is a unit vector defining the vertical direction. Choose a reference point R in the volume and compute the resultant force and torque at this point, and If the reference point R is chosen so that it is the center of mass, then which means the resultant torque . Because the resultant torque is zero the body will move as though it is a particle with its mass concentrated at the center of mass. By selecting the center of gravity as the reference point for a rigid body, the gravity forces will not cause the body to rotate, which means the weight of the body can be considered to be concentrated at the center of mass. Linear and angular momentum The linear and angular momentum of a collection of particles can be simplified by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n of masses mi be located at the coordinates ri with velocities vi. Select a reference point R and compute the relative position and velocity vectors, The total linear momentum and angular momentum of the system are and If R is chosen as the center of mass these equations simplify to where m is the total mass of all the particles, p is the linear momentum, and L is the angular momentum. The law of conservation of momentum predicts that for any system not subjected to external forces the momentum of the system will remain constant, which means the center of mass will move with constant velocity. This applies for all systems with classical internal forces, including magnetic fields, electric fields, chemical reactions, and so on. 
More formally, this is true for any internal forces that cancel in accordance with Newton's Third Law. Determination The experimental determination of a body's center of mass makes use of gravity forces on the body and is based on the fact that the center of mass is the same as the center of gravity in the parallel gravity field near the earth's surface. The center of mass of a body with an axis of symmetry and constant density must lie on this axis. Thus, the center of mass of a circular cylinder of constant density has its center of mass on the axis of the cylinder. In the same way, the center of mass of a spherically symmetric body of constant density is at the center of the sphere. In general, for any symmetry of a body, its center of mass will be a fixed point of that symmetry. In two dimensions An experimental method for locating the center of mass is to suspend the object from two locations and to drop plumb lines from the suspension points. The intersection of the two lines is the center of mass. The shape of an object might already be mathematically determined, but it may be too complex to use a known formula. In this case, one can subdivide the complex shape into simpler, more elementary shapes, whose centers of mass are easy to find. If the total mass and center of mass can be determined for each area, then the center of mass of the whole is the weighted average of the centers. This method can even work for objects with holes, which can be accounted for as negative masses. A direct development of the planimeter known as an integraph, or integerometer, can be used to establish the position of the centroid or center of mass of an irregular two-dimensional shape. This method can be applied to a shape with an irregular, smooth or complex boundary where other methods are too difficult. It was regularly used by ship builders to compare with the required displacement and center of buoyancy of a ship, and ensure it would not capsize. In three dimensions An experimental method to locate the three-dimensional coordinates of the center of mass begins by supporting the object at three points and measuring the forces, F1, F2, and F3 that resist the weight of the object, ( is the unit vector in the vertical direction). Let r1, r2, and r3 be the position coordinates of the support points, then the coordinates R of the center of mass satisfy the condition that the resultant torque is zero, or This equation yields the coordinates of the center of mass R* in the horizontal plane as, The center of mass lies on the vertical line L, given by The three-dimensional coordinates of the center of mass are determined by performing this experiment twice with the object positioned so that these forces are measured for two different horizontal planes through the object. The center of mass will be the intersection of the two lines L1 and L2 obtained from the two experiments. Applications Engineering designs Automotive applications Engineers try to design a sports car so that its center of mass is lowered to make the car handle better, which is to say, maintain traction while executing relatively sharp turns. The characteristic low profile of the U.S. military Humvee was designed in part to allow it to tilt farther than taller vehicles without rolling over, by ensuring its low center of mass stays over the space bounded by the four wheels even at angles far from the horizontal. Aeronautics The center of mass is an important point on an aircraft, which significantly affects the stability of the aircraft. 
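The subdivision approach described earlier in this section, including the treatment of holes as negative masses, can be expressed compactly. The following Python sketch is a hypothetical example with made-up dimensions; it finds the centroid of a uniform rectangular plate with a circular hole by combining the centroid of the full rectangle with that of a negative-mass disc:

import math

def combine_centroids(parts):
    """Weighted average of (mass, (x, y)) pairs; holes carry negative mass."""
    total = sum(m for m, _ in parts)
    x = sum(m * c[0] for m, c in parts) / total
    y = sum(m * c[1] for m, c in parts) / total
    return x, y

# Uniform plate, 4 wide by 2 high, with a hole of radius 0.5 centred at (3, 1).
# With uniform density, area can stand in for mass.
plate = (4.0 * 2.0, (2.0, 1.0))               # full rectangle
hole = (-math.pi * 0.5 ** 2, (3.0, 1.0))      # hole treated as negative "mass"

print(combine_centroids([plate, hole]))
# The centroid shifts to the left of the plate centre (2, 1), away from the hole.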
To ensure the aircraft is stable enough to be safe to fly, the center of mass must fall within specified limits. If the center of mass is ahead of the forward limit, the aircraft will be less maneuverable, possibly to the point of being unable to rotate for takeoff or flare for landing. If the center of mass is behind the aft limit, the aircraft will be more maneuverable, but also less stable, and possibly unstable enough so as to be impossible to fly. The moment arm of the elevator will also be reduced, which makes it more difficult to recover from a stalled condition. For helicopters in hover, the center of mass is always directly below the rotorhead. In forward flight, the center of mass will move forward to balance the negative pitch torque produced by applying cyclic control to propel the helicopter forward; consequently a cruising helicopter flies "nose-down" in level flight. Astronomy The center of mass plays an important role in astronomy and astrophysics, where it is commonly referred to as the barycenter. The barycenter is the point between two objects where they balance each other; it is the center of mass where two or more celestial bodies orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting a point that lies away from the center of the primary (larger) body. For example, the Moon does not orbit the exact center of the Earth, but a point on a line between the center of the Earth and the Moon, approximately 1,710 km (1,062 miles) below the surface of the Earth, where their respective masses balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. If the masses are more similar, e.g., Pluto and Charon, the barycenter will fall outside both bodies. Rigging and safety Knowing the location of the center of gravity when rigging is crucial; an incorrect assumption can result in severe injury or death. A center of gravity that is at or above the lift point will most likely result in a tip-over incident. In general, the further the center of gravity is below the pick point, the safer the lift. There are other things to consider, such as shifting loads, strength of the load and mass, distance between pick points, and number of pick points. Specifically, when selecting lift points, it is very important to place the center of gravity at the center and well below the lift points. Body motion Vertically, the center of mass of the adult human body is 10 cm above the trochanter (where the femur joins the hip); horizontally, it is located 1.4 cm forward of the knee and 1.0 cm behind the trochanter. In kinesiology and biomechanics, the center of mass is an important parameter that assists in understanding human locomotion. Typically, a human's center of mass is detected with one of two methods: the reaction board method is a static analysis that involves the person lying down on that instrument and uses its static equilibrium equation to find their center of mass; the segmentation method relies on a mathematical solution based on the physical principle that the summation of the torques of individual body sections, relative to a specified axis, must equal the torque of the whole system that constitutes the body, measured relative to the same axis. Optimization The center-of-gravity method is a method for convex optimization that uses the center of gravity of the feasible region.
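The Earth–Moon figure quoted in the astronomy passage above is easy to reproduce. The following Python sketch uses approximate textbook values for the two masses, the mean Earth–Moon distance, and Earth's mean radius (these inputs are assumptions of the example, and small changes to them shift the result slightly):

# Approximate values (kg, km).
m_earth = 5.972e24
m_moon = 7.342e22
earth_moon_distance = 384_400.0   # mean centre-to-centre distance, km
earth_radius = 6_371.0            # mean Earth radius, km

# Distance of the barycenter from Earth's centre, along the line towards the Moon.
r_barycenter = earth_moon_distance * m_moon / (m_earth + m_moon)

print(f"barycenter at {r_barycenter:.0f} km from Earth's centre")
print(f"about {earth_radius - r_barycenter:.0f} km below Earth's surface")
# Roughly 1,700 km below the surface, consistent with the figure quoted above.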
Physical sciences
Classical mechanics
Physics
174003
https://en.wikipedia.org/wiki/Phar%20Lap
Phar Lap
Phar Lap (4 October 1926 – 5 April 1932) was a New Zealand-born champion Australian Thoroughbred racehorse. Achieving great success during his distinguished career, his initial underdog status gave people hope during the early years of the Great Depression. He won the Melbourne Cup, two Cox Plates, the Australian Derby, and 19 other weight-for-age races. He is universally revered as one of the greatest race horses of all time, not just in Australia but in the history of Thoroughbred horse racing. One of his greatest performances was winning the Agua Caliente Handicap in Mexico in track-record time in his final race. He won in a different country, after a bad start many lengths behind the leaders, with no training before the race, and he split his hoof during the race. After a sudden and mysterious illness, Phar Lap died in 1932 in Menlo Park, California. At the time, he was the third-highest stakes-winner in the world. His mounted hide is displayed at the Melbourne Museum, his skeleton at the Museum of New Zealand, and his heart at the National Museum of Australia. Name The name Phar Lap derives from the common Zhuang and Thai word for lightning: ฟ้าแลบ , literally 'sky flash'. Phar Lap was called "The Wonder Horse," "The Red Terror," and "Big Red" (the latter nickname was also given to two of the greatest United States racehorses, Man o' War and Secretariat). He was affectionately known as "Bobby" to his strapper Tommy Woodcock He was also sometimes referred to as "Australia's Wonder Horse." According to the Museum of Victoria, Aubrey Ping, a medical student at the University of Sydney, suggested "Farlap" as the horse's name. Ping knew the word from his father, a Zhuang-speaking Chinese immigrant. Phar Lap's trainer Harry Telford liked the name, but changed the F to PH to create a seven letter word, which was split in two in keeping with the dominant naming pattern of Melbourne Cup winners. Early life A chestnut gelding, Phar Lap was foaled on 4 October 1926 in Seadown near Timaru in the South Island of New Zealand. He was sired by Night Raid from Entreaty by Winkie. He was by the same sire as the Melbourne Cup winner Nightmarch. Phar Lap was a brother to seven other horses, Fortune's Wheel, Nea Lap (won 5 races), Nightguard, All Clear, Friday Night, Te Uira and Raphis, none of which won a principal (stakes) race. He was a half-brother to another four horses, only two of which were able to win any races at all. Sydney trainer Harry Telford persuaded American businessman David J. Davis to buy the colt at auction, based on his pedigree. Telford's brother Hugh, who lived in New Zealand, was asked to bid up to 190 guineas at the 1928 Trentham Yearling Sales. When the horse was obtained for a mere 160 guineas, he thought it was a great bargain until the colt arrived in Australia. The horse was gangly, his face was covered with warts, and he had an awkward gait. Davis was furious when he saw the colt as well, and refused to pay to train the horse. Telford had not been particularly successful as a trainer, and Davis was one of his few remaining owners. To placate Davis, he agreed to train the horse for nothing, in exchange for a two-thirds share of any winnings. Telford leased the horse for three years and was eventually sold joint ownership by Davis. Although standing a winning racehorse at stud could be quite lucrative, Telford gelded Phar Lap anyway, hoping the colt would concentrate on racing. Racing career Phar Lap finished last in the first race and did not place in his next three races. 
He won his first race on 27 April 1929, the Maiden Juvenile Handicap at Rosehill, ridden by Jack Baker of Armidale, a 17-year-old apprentice. He didn't race for several months but was then entered in a series of races, in which he moved up in class. Phar Lap took second in the Chelmsford Stakes at Randwick on 14 September 1929, and the racing community started treating him with respect. He won the Rosehill Guineas by three lengths on 21 September 1929, ridden by James L. Munro. As his achievements grew, there were some who tried to halt his progress. Criminals tried to shoot Phar Lap on the morning of Saturday 1 November 1930 after he had finished track work. They missed, and later that day he won the Melbourne Stakes, and three days later the Melbourne Cup as odds-on favourite at 8 to 11. In the four years of his racing career, Phar Lap won 37 of 51 races he entered, including the Melbourne Cup, being ridden by Jim Pike, in 1930 with 9 st 12 lb (). In that year and 1931, he won 14 races in a row. From his win as a three-year-old in the VRC St. Leger Stakes until his final race in Mexico, Phar Lap won 32 of 35 races. In the three races that he did not win, he ran second on two occasions, beaten by a short head and a neck, and in the 1931 Melbourne Cup he finished eighth when carrying 10 st 10 lb (). Phar Lap at the time was owned by American businessman David J. Davis and leased to Telford. After their three-year lease agreement ended, Telford had enough money to become joint owner of the horse. Davis then had Phar Lap shipped to North America to race. Telford did not agree with this decision and refused to go, so Davis, who along with his wife traveled to Mexico with him, brought Phar Lap's strapper Tommy Woodcock as his new trainer. Phar Lap was shipped by boat to Agua Caliente Racetrack near Tijuana, Mexico, to compete in the Agua Caliente Handicap, which was offering the largest prize money ever offered in North America racing. Phar Lap won in track-record time while carrying 129 pounds (58.5 kg). The horse was ridden by Australian jockey Billy Elliot for his seventh win from seven rides. From there, the horse was sent to a private ranch near Menlo Park, California, while his owner negotiated with racetrack officials for special race appearances. Death Early on 5 April 1932, the horse's strapper for the North American visit, Tommy Woodcock, found him in severe pain and with a high temperature. Within a few hours, Phar Lap haemorrhaged to death. An autopsy revealed that the horse's stomach and intestines were inflamed, leading many to believe the horse had been deliberately poisoned. There have been alternative theories, including accidental poisoning from lead insecticide and a stomach condition. It was not until the 1980s that the infection could be formally identified. In 2000, equine specialists studying the two necropsies concluded that Phar Lap probably died of duodenitis-proximal jejunitis, an acute bacterial gastroenteritis. In 2006, Australian Synchrotron Research scientists said it was almost certain Phar Lap was poisoned with a large single dose of arsenic in the hours before he died, perhaps supporting the theory that Phar Lap was killed on the orders of US gangsters, who feared the Melbourne Cup-winning champion would inflict big losses on their illegal bookmakers. No real evidence of involvement by a criminal element exists, however. Sydney veterinarian Percy Sykes believes deliberate poisoning did not cause the death. 
He said "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution)", and suggests this was the cause of the high levels. "It was so common that I'd reckon 90 percent of the horses had arsenic in their system." In December 2007, Phar Lap's mane was tested for multiple doses of arsenic which, if found, would point to accidental poisoning. In April 2008, an 82-page handwritten notebook belonging to Telford and containing recipes for tonics given to Phar Lap in the days before swabbing was sold by a Melbourne auction house. It showed that Phar Lap was given tonics designed to boost his performance that included arsenic, strychnine, cocaine and caffeine. The find gave credence to Woodcock's deathbed admission in 1985 that Phar Lap may have been given an overdose of a tonic before the horse died in 1932. The notebook was sold to the Melbourne Museum for $37,000. On 19 June 2008, the Melbourne Museum released the findings of the forensic investigation conducted by Ivan Kempson, University of South Australia, and Dermot Henry, Natural Science Collections at Museum Victoria. Kempson analysed six hairs from Phar Lap's mane at the Advanced Photon Source at Argonne National Laboratory near Chicago. These high resolution X-rays detect arsenic in hair samples, showing the specific difference "between arsenic, which had entered the hair cells via the blood and arsenic which had infused the hair cells by the taxidermy process when he was stuffed and mounted at the museum". Kempson and Henry discovered that in the 30 to 40 hours before Phar Lap's death, the horse ingested a massive dose of arsenic. "We can't speculate where the arsenic came from, but it was easily accessible at the time", Henry said. In October 2011 the Sydney Morning Herald published an article in which a New Zealand physicist and information from Phar Lap's strapper state that the great horse was never given any tonic with arsenic and that he died of an infection. Said Putt, "Unless we are prepared to say that Tommy Woodcock was a downright liar, which even today, decades after the loveable and respected horseman's death, would ostracise us with the Australian racing public, we must accept him on his word. The ineluctable conclusion we are left with, whether we like it or not, is that Phar Lap's impeccable achievements here and overseas were utterly tonic, stimulant, and drug-free." Contradicting this is the tonic book of Harry Telford, Phar Lap's owner and trainer, on display in Museum Victoria, Melbourne. One recipe for a "general tonic" has a main ingredient of arsenic and has written below it: "A great tonic for all horses". Several theories have been proposed for the large amount of arsenic in Phar Lap's body. Legacy Following his death, Phar Lap's heart was donated to the Institute of Anatomy in Canberra and his skeleton to the New Zealand's National Museum in Wellington. After preparations of the hide by New York City taxidermist Louis Paul Jonas, Phar Lap's stuffed body was placed in the Australia Gallery at Melbourne Museum. The hide and the skeleton were put on exhibition together when Museum of New Zealand Te Papa Tongarewa lent the skeleton to the Melbourne Museum in September 2010 as part of celebrations for the 150th running of the 2010 Melbourne Cup. Phar Lap's heart was remarkable for its size, weighing , compared with a normal horse's heart at . Now held at the National Museum of Australia in Canberra, it is the object visitors most often request to see. 
The author and film maker Peter Luck was convinced the heart is a fake. In Luck's 1979 television series This Fabulous Century, the daughter of Walker Neilson, the government veterinarian who performed the first post-mortem on Phar Lap, says her father told her the heart was necessarily cut to pieces during the autopsy, and the heart on display is that of a draughthorse. However the expression "a heart as big as Phar Lap" to describe a very generous or courageous person became a popular idiom. Several books and films have featured Phar Lap, including the 1983 film Phar Lap, and the song "Phar Lap—Farewell To You". Phar Lap was one of five inaugural inductees into both the Australian Racing Hall of Fame and New Zealand Racing Hall of Fame. In the Blood-Horse magazine ranking of the Top 100 U.S. Thoroughbred champions of the 20th century, Phar Lap was ranked No. 22. The horse is considered to be a national icon in both Australia and New Zealand. In 1978 he was honoured on a postage stamp issued by Australia Post and features in the Australian citizenship test. Phar Lap has been honoured with a $500,000 life-sized bronze memorial near his birthplace in Timaru, New Zealand, that was unveiled on 25 November 2009. The statue is located at the entrance to Phar Lap Raceway in Washdyke. There is also a life-sized bronze statue at Flemington Racecourse in Melbourne. Phar Lap has several residential streets named after him in Australia, New Zealand, and the United States. (In many cases, the name is merged into a single word "Pharlap".) In 1931, Gilbert Percy Whitley, an ichthyologist at the Australian Museum, proposed a new genus of seahorse, Farlapiscis, named after Phar Lap. Farlapiscis was subsequently categorized as a junior synonym of the genus Hippocampus. 1930 racebook Race record 1928/1929: Two-year-old season 1929/1930: Three-year-old season 1930/1931: Four-year-old season 1931/1932: Five-year-old season Total: 51 starts – 37 wins, 3 seconds, 2 thirds, 2 fourths, 7 unplaced Pedigree
Biology and health sciences
Individual animals
Animals
174026
https://en.wikipedia.org/wiki/Spherical%20geometry
Spherical geometry
Spherical geometry or spherics () is the geometry of the two-dimensional surface of a sphere or the -dimensional surface of higher dimensional spheres. Long studied for its practical applications to astronomy, navigation, and geodesy, spherical geometry and the metrical tools of spherical trigonometry are in many respects analogous to Euclidean plane geometry and trigonometry, but also have some important differences. The sphere can be studied either extrinsically as a surface embedded in 3-dimensional Euclidean space (part of the study of solid geometry), or intrinsically using methods that only involve the surface itself without reference to any surrounding space. Principles In plane (Euclidean) geometry, the basic concepts are points and (straight) lines. In spherical geometry, the basic concepts are point and great circle. However, two great circles on a plane intersect in two antipodal points, unlike coplanar lines in Elliptic geometry. In the extrinsic 3-dimensional picture, a great circle is the intersection of the sphere with any plane through the center. In the intrinsic approach, a great circle is a geodesic; a shortest path between any two of its points provided they are close enough. Or, in the (also intrinsic) axiomatic approach analogous to Euclid's axioms of plane geometry, "great circle" is simply an undefined term, together with postulates stipulating the basic relationships between great circles and the also-undefined "points". This is the same as Euclid's method of treating point and line as undefined primitive notions and axiomatizing their relationships. Great circles in many ways play the same logical role in spherical geometry as lines in Euclidean geometry, e.g., as the sides of (spherical) triangles. This is more than an analogy; spherical and plane geometry and others can all be unified under the umbrella of geometry built from distance measurement, where "lines" are defined to mean shortest paths (geodesics). Many statements about the geometry of points and such "lines" are equally true in all those geometries provided lines are defined that way, and the theory can be readily extended to higher dimensions. Nevertheless, because its applications and pedagogy are tied to solid geometry, and because the generalization loses some important properties of lines in the plane, spherical geometry ordinarily does not use the term "line" at all to refer to anything on the sphere itself. If developed as a part of solid geometry, use is made of points, straight lines and planes (in the Euclidean sense) in the surrounding space. In spherical geometry, angles are defined between great circles, resulting in a spherical trigonometry that differs from ordinary trigonometry in many respects; for example, the sum of the interior angles of a spherical triangle exceeds 180 degrees. Relation to similar geometries Because a sphere and a plane differ geometrically, (intrinsic) spherical geometry has some features of a non-Euclidean geometry and is sometimes described as being one. However, spherical geometry was not considered a full-fledged non-Euclidean geometry sufficient to resolve the ancient problem of whether the parallel postulate is a logical consequence of the rest of Euclid's axioms of plane geometry, because it requires another axiom to be modified. The resolution was found instead in elliptic geometry, to which spherical geometry is closely related, and hyperbolic geometry; each of these new geometries makes a different change to the parallel postulate. 
The principles of any of these geometries can be extended to any number of dimensions. An important geometry related to that of the sphere is that of the real projective plane; it is obtained by identifying antipodal points (pairs of opposite points) on the sphere. Locally, the projective plane has all the properties of spherical geometry, but it has different global properties. In particular, it is non-orientable, or one-sided, and unlike the sphere it cannot be drawn as a surface in 3-dimensional space without intersecting itself. Concepts of spherical geometry may also be applied to the oblong sphere, though minor modifications must be implemented on certain formulas. History Greek antiquity The earliest mathematical work of antiquity to come down to our time is On the rotating sphere (Περὶ κινουμένης σφαίρας, Peri kinoumenes sphairas) by Autolycus of Pitane, who lived at the end of the fourth century BC. Spherical trigonometry was studied by early Greek mathematicians such as Theodosius of Bithynia, a Greek astronomer and mathematician who wrote Spherics, a book on the geometry of the sphere, and Menelaus of Alexandria, who wrote a book on spherical trigonometry called Sphaerica and developed Menelaus' theorem. Islamic world The Book of Unknown Arcs of a Sphere written by the Islamic mathematician Al-Jayyani is considered to be the first treatise on spherical trigonometry. The book contains formulae for right-handed triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle. The book On Triangles by Regiomontanus, written around 1463, is the first pure trigonometrical work in Europe. However, Gerolamo Cardano noted a century later that much of its material on spherical trigonometry was taken from the twelfth-century work of the Andalusi scholar Jabir ibn Aflah. Euler's work Leonhard Euler published a series of important memoirs on spherical geometry: L. Euler, Principes de la trigonométrie sphérique tirés de la méthode des plus grands et des plus petits, Mémoires de l'Académie des Sciences de Berlin 9 (1753), 1755, p. 233–257; Opera Omnia, Series 1, vol. XXVII, p. 277–308. L. Euler, Eléments de la trigonométrie sphéroïdique tirés de la méthode des plus grands et des plus petits, Mémoires de l'Académie des Sciences de Berlin 9 (1754), 1755, p. 258–293; Opera Omnia, Series 1, vol. XXVII, p. 309–339. L. Euler, De curva rectificabili in superficie sphaerica, Novi Commentarii academiae scientiarum Petropolitanae 15, 1771, pp. 195–216; Opera Omnia, Series 1, Volume 28, pp. 142–160. L. Euler, De mensura angulorum solidorum, Acta academiae scientiarum imperialis Petropolitinae 2, 1781, p. 31–54; Opera Omnia, Series 1, vol. XXVI, p. 204–223. L. Euler, Problematis cuiusdam Pappi Alexandrini constructio, Acta academiae scientiarum imperialis Petropolitinae 4, 1783, p. 91–96; Opera Omnia, Series 1, vol. XXVI, p. 237–242. L. Euler, Geometrica et sphaerica quaedam, Mémoires de l'Académie des Sciences de Saint-Pétersbourg 5, 1815, p. 96–114; Opera Omnia, Series 1, vol. XXVI, p. 344–358. L. Euler, Trigonometria sphaerica universa, ex primis principiis breviter et dilucide derivata, Acta academiae scientiarum imperialis Petropolitinae 3, 1782, p. 72–86; Opera Omnia, Series 1, vol. XXVI, p. 224–236. L. Euler, Variae speculationes super area triangulorum sphaericorum, Nova Acta academiae scientiarum imperialis Petropolitinae 10, 1797, p. 47–62; Opera Omnia, Series 1, vol. XXIX, p. 253–266. 
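Distances in spherical geometry are measured along great circles: the intrinsic distance between two points is the length of the shorter great-circle arc joining them. The following Python sketch is an illustrative implementation of this distance using the haversine form of the spherical distance formula; the example coordinates and the unit radius are arbitrary choices, not values from the text:

import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius):
    """Length of the shorter great-circle arc between two points on a sphere.

    Coordinates are in degrees; the haversine form is used because it is
    numerically stable for small separations.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    central_angle = 2 * math.asin(math.sqrt(a))
    return radius * central_angle

# Quarter of a great circle on a unit sphere: from the north pole to the equator.
print(great_circle_distance(90.0, 0.0, 0.0, 0.0, 1.0))   # pi/2, about 1.5708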
Properties Spherical geometry has the following properties: Any two great circles intersect in two diametrically opposite points, called antipodal points. Any two points that are not antipodal points determine a unique great circle. There is a natural unit of angle measurement (based on a revolution), a natural unit of length (based on the circumference of a great circle) and a natural unit of area (based on the area of the sphere). Each great circle is associated with a pair of antipodal points, called its poles which are the common intersections of the set of great circles perpendicular to it. This shows that a great circle is, with respect to distance measurement on the surface of the sphere, a circle: the locus of points all at a specific distance from a center. Each point is associated with a unique great circle, called the polar circle of the point, which is the great circle on the plane through the centre of the sphere and perpendicular to the diameter of the sphere through the given point. As there are two arcs determined by a pair of points, which are not antipodal, on the great circle they determine, three non-collinear points do not determine a unique triangle. However, if we only consider triangles whose sides are minor arcs of great circles, we have the following properties: The angle sum of a triangle is greater than 180° and less than 540°. The area of a triangle is proportional to the excess of its angle sum over 180°. Two triangles with the same angle sum are equal in area. There is an upper bound for the area of triangles. The composition (product) of two reflections-across-a-great-circle may be considered as a rotation about either of the points of intersection of their axes. Two triangles are congruent if and only if they correspond under a finite product of such reflections. Two triangles with corresponding angles equal are congruent (i.e., all similar triangles are congruent). Relation to Euclid's postulates If "line" is taken to mean great circle, spherical geometry only obeys two of Euclid's five postulates: the second postulate ("to produce [extend] a finite straight line continuously in a straight line") and the fourth postulate ("that all right angles are equal to one another"). However, it violates the other three. Contrary to the first postulate ("that between any two points, there is a unique line segment joining them"), there is not a unique shortest route between any two points (antipodal points such as the north and south poles on a spherical globe are counterexamples); contrary to the third postulate, a sphere does not contain circles of arbitrarily great radius; and contrary to the fifth (parallel) postulate, there is no point through which a line can be drawn that never intersects a given line. A statement that is equivalent to the parallel postulate is that there exists a triangle whose angles add up to 180°. Since spherical geometry violates the parallel postulate, there exists no such triangle on the surface of a sphere. The sum of the angles of a triangle on a sphere is $180° \times (1 + 4f)$, where f is the fraction of the sphere's surface that is enclosed by the triangle. For any positive value of f, this exceeds 180°.
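The last relation can be checked on a concrete case. The triangle bounded by the equator and two meridians 90° apart (an octant of the sphere) has three right angles and encloses one eighth of the surface. The Python sketch below is purely illustrative and assumes a unit sphere; it computes the angle sum from the enclosed fraction and the area from the spherical excess (Girard's theorem):

import math

def angle_sum_degrees(f):
    """Angle sum of a spherical triangle enclosing a fraction f of the sphere."""
    return 180.0 * (1.0 + 4.0 * f)

def area_from_excess(angles_deg, radius):
    """Girard's theorem: area = (angle sum - pi) * R^2, with angles in degrees."""
    excess = math.radians(sum(angles_deg)) - math.pi
    return excess * radius ** 2

# Octant triangle: three right angles, covering 1/8 of the sphere.
print(angle_sum_degrees(1.0 / 8.0))          # 270.0
print(area_from_excess([90, 90, 90], 1.0))   # pi/2
print(4.0 * math.pi / 8.0)                   # 1/8 of the sphere area, the same value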
Mathematics
Non-Euclidean geometry
null
174030
https://en.wikipedia.org/wiki/Western%20blot
Western blot
The western blot (sometimes called the protein immunoblot), or western blotting, is a widely used analytical technique in molecular biology and immunogenetics to detect specific proteins in a sample of tissue homogenate or extract. Besides detecting the proteins, this technique is also utilized to visualize, distinguish, and quantify the different proteins in a complicated protein combination. Western blot technique uses three elements to achieve its task of separating a specific protein from a complex: separation by size, transfer of protein to a solid support, and marking target protein using a primary and secondary antibody to visualize. A synthetic or animal-derived antibody (known as the primary antibody) is created that recognizes and binds to a specific target protein. The electrophoresis membrane is washed in a solution containing the primary antibody, before excess antibody is washed off. A secondary antibody is added which recognizes and binds to the primary antibody. The secondary antibody is visualized through various methods such as staining, immunofluorescence, and radioactivity, allowing indirect detection of the specific target protein. Other related techniques include dot blot analysis, quantitative dot blot, immunohistochemistry and immunocytochemistry, where antibodies are used to detect proteins in tissues and cells by immunostaining, and enzyme-linked immunosorbent assay (ELISA). The name western blot is a play on the Southern blot, a technique for DNA detection named after its inventor, English biologist Edwin Southern. Similarly, detection of RNA is termed as northern blot. The term western blot was given by W. Neal Burnette in 1981, although the method itself was independently invented in 1979 by Jaime Renart, Jakob Reiser, and George Stark at Stanford University, and by Harry Towbin, Theophil Staehelin, and Julian Gordon at the Friedrich Miescher Institute in Basel, Switzerland. The Towbin group also used secondary antibodies for detection, thus resembling the actual method that is almost universally used today. Between 1979 and 2019 "it has been mentioned in the titles, abstracts, and keywords of more than 400,000 PubMed-listed publications" and may still be the most-used protein-analytical technique. Applications The western blot is extensively used in biochemistry for the qualitative detection of single proteins and protein-modifications (such as post-translational modifications). At least 8–9% of all protein-related publications are estimated to apply western blots. It is used as a general method to identify the presence of a specific single protein within a complex mixture of proteins. A semi-quantitative estimation of a protein can be derived from the size and colour intensity of a protein band on the blot membrane. In addition, applying a dilution series of a purified protein of known concentrations can be used to allow a more precise estimate of protein concentration. The western blot is routinely used for verification of protein production after cloning. It is also used in medical diagnostics, e.g., in the HIV test or BSE-Test. The confirmatory HIV test employs a western blot to detect anti-HIV antibody in a human serum sample. Proteins from known HIV-infected cells are separated and blotted on a membrane as above. Then, the serum to be tested is applied in the primary antibody incubation step; free antibody is washed away, and a secondary anti-human antibody linked to an enzyme signal is added. 
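The dilution-series approach to quantification mentioned above amounts to fitting a standard curve of band intensity against known protein amounts and then reading unknown samples off that curve. The Python sketch below is a minimal illustration; the intensity numbers are invented, and a linear response is assumed, which holds only within the detection system's linear range:

# Known amounts of purified protein (ng) and the measured band intensities
# from densitometry (arbitrary units); both series are hypothetical.
standards_ng = [5.0, 10.0, 20.0, 40.0]
standards_intensity = [1100.0, 2050.0, 4100.0, 8300.0]

# Least-squares fit of intensity = slope * amount + intercept.
n = len(standards_ng)
mean_x = sum(standards_ng) / n
mean_y = sum(standards_intensity) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(standards_ng, standards_intensity)) \
        / sum((x - mean_x) ** 2 for x in standards_ng)
intercept = mean_y - slope * mean_x

def estimate_amount(intensity):
    """Invert the standard curve to estimate the protein amount in a sample band."""
    return (intensity - intercept) / slope

print(estimate_amount(3000.0))   # estimated ng of target protein in the sample band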
The stained bands then indicate the proteins to which the patient's serum contains antibody. A western blot is also used as the definitive test for variant Creutzfeldt–Jakob disease, a type of prion disease linked to the consumption of contaminated beef from cattle with bovine spongiform encephalopathy (BSE, commonly referred to as 'mad cow disease'). Another application is in the diagnosis of tularemia. An evaluation of the western blot's ability to detect antibodies against F. tularensis revealed that its sensitivity is almost 100% and the specificity is 99.6%. Some forms of Lyme disease testing employ western blotting. A western blot can also be used as a confirmatory test for Hepatitis B infection and HSV-2 (Herpes Type 2) infection. In veterinary medicine, a western blot is sometimes used to confirm FIV+ status in cats. Further applications of the western blot technique include its use by the World Anti-Doping Agency (WADA). Blood doping is the misuse of certain techniques and/or substances to increase one's red blood cell mass, which allows the body to transport more oxygen to muscles and therefore increase stamina and performance. There are three widely known substances or methods used for blood doping, namely, erythropoietin (EPO), synthetic oxygen carriers and blood transfusions. Each is prohibited under WADA's List of Prohibited Substances and Methods. The western blot technique was used during the 2014 FIFA World Cup in the anti-doping campaign for that event. In total, over 1000 samples were collected and analysed by Reichel, et al. in the WADA accredited Laboratory of Lausanne, Switzerland. Recent research utilizing the western blot technique showed an improved detection of EPO in blood and urine based on novel Velum SAR precast horizontal gels optimized for routine analysis. With the adoption of the horizontal SAR-PAGE in combination with the precast film-supported Velum SAR gels the discriminatory capacity of micro-dose application of rEPO was significantly enhanced. Identification of protein localization across cells For medication development, the identification of therapeutic targets, and biological research, it is essential to comprehend where proteins are located within a cell. The subcellular locations of proteins inside the cell and their functions are closely related. The relationship between protein function and localization suggests that when proteins move, their functions may change or acquire new characteristics. A protein's subcellular placement can be determined using a variety of methods. Numerous efficient and reliable computational tools and strategies have been created and used to identify protein subcellular localization. With the aid of subcellular fractionation methods, WB continues to be an important fundamental method for the investigation and comprehension of protein localization. Epitope mapping Due to their various epitopes, antibodies have gained interest in both basic and clinical research. The foundation of antibody characterization and validation is epitope mapping. The procedure of identifying an antibody's binding sites (epitopes) on the target protein is referred to as "epitope mapping." Finding the binding epitope of an antibody is essential for the discovery and creation of novel vaccines, diagnostics, and therapeutics. As a result, various methods for mapping antibody epitopes have been created. At this point, western blotting's specificity is the main feature that sets it apart from other epitope mapping techniques. 
There are several application of western blot for epitope mapping on human skin samples, hemorrhagic disease virus. Procedure The western blot method is composed of gel electrophoresis to separate native proteins by 3-D structure or denatured proteins by the length of the polypeptide, followed by an electrophoretic transfer onto a membrane (mostly PVDF or nitrocellulose) and an immunostaining procedure to visualize a certain protein on the blot membrane. Sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) is generally used for the denaturing electrophoretic separation of proteins. Sodium dodecyl sulfate (SDS) is generally used as a buffer (as well as in the gel) in order to give all proteins present a uniform negative charge, since proteins can be positively, negatively, or neutrally charged. Prior to electrophoresis, protein samples are often boiled to denature the proteins present. This ensures that proteins are separated based on size and prevents proteases (enzymes that break down proteins) from degrading samples. Following electrophoretic separation, the proteins are transferred to a membrane (typically nitrocellulose or PVDF). The membrane is often then stained with Ponceau S in order to visualize the proteins on the blot and ensure a proper transfer occurred. Next the proteins are blocked with milk (or other blocking agents) to prevent non-specific antibody binding, and then stained with antibodies specific to the target protein. Lastly, the membrane will be stained with a secondary antibody that recognizes the first antibody staining, which can then be used for detection by a variety of methods. The gel electrophoresis step is included in western blot analysis to resolve the issue of the cross-reactivity of antibodies. Sample preparation As a significant step in conducting a western blot, sample preparation has to be done effectively since the interpretation of this assay is influenced by the protein preparation, which is composed of protein extraction and purification processes. To achieve efficient protein extraction, a proper homogenization method needs to be chosen due to the fact that it is responsible for bursting the cell membrane and releasing the intracellular components. Besides that, the ideal lysis buffer is needed to acquire substantial amounts of target protein content because the buffer is leading the process of protein solubilization and preventing protein degradation. After completing the sample preparation, the protein content is ready to be separated by the utilization of gel electrophoresis. Gel electrophoresis The proteins of the sample are separated using gel electrophoresis. Separation of proteins may be by isoelectric point (pI), molecular weight, electric charge, or a combination of these factors. The nature of the separation depends on the treatment of the sample and the nature of the gel. By far the most common type of gel electrophoresis employs polyacrylamide gels and buffers loaded with sodium dodecyl sulfate (SDS). SDS-PAGE (SDS-polyacrylamide gel electrophoresis) maintains polypeptides in a denatured state once they have been treated with strong reducing agents to remove secondary and tertiary structure (e.g. disulfide bonds [S-S] to sulfhydryl groups [SH and SH]) and thus allows separation of proteins by their molecular mass. 
Sampled proteins become covered in the negatively charged SDS, effectively becoming anionic, and migrate towards the positively charged (higher voltage) anode (usually having a red wire) through the acrylamide mesh of the gel. Smaller proteins migrate faster through this mesh, and the proteins are thus separated according to size (usually measured in kilodaltons, kDa). The concentration of acrylamide determines the resolution of the gel – the greater the acrylamide concentration, the better the resolution of lower molecular weight proteins. The lower the acrylamide concentration, the better the resolution of higher molecular weight proteins. Proteins travel only in one dimension along the gel for most blots. Samples are loaded into wells in the gel. One lane is usually reserved for a marker or ladder, which is a commercially available mixture of proteins of known molecular weights, typically stained so as to form visible, coloured bands. When voltage is applied along the gel, proteins migrate through it at different speeds dependent on their size. These different rates of advancement (different electrophoretic mobilities) separate into bands within each lane. Protein bands can then be compared to the ladder bands, allowing estimation of the protein's molecular weight. It is also possible to use a two-dimensional gel which spreads the proteins from a single sample out in two dimensions. Proteins are separated according to isoelectric point (pH at which they have a neutral net charge) in the first dimension, and according to their molecular weight in the second dimension. Transfer To make the proteins accessible to antibody detection, they are moved from within the gel onto a membrane, a solid support, which is an essential part of the process. There are two types of membrane: nitrocellulose (NC) or polyvinylidene difluoride (PVDF). NC membrane has high affinity for protein and its retention abilities. However, NC is brittle, and does not allow the blot to be used for re-probing, whereas PVDF membrane allows the blot to be re-probed. The most commonly used method for transferring the proteins is called electroblotting. Electroblotting uses an electric current to pull the negatively charged proteins from the gel towards the positively charged anode, and into the PVDF or NC membrane. The proteins move from within the gel onto the membrane while maintaining the organization they had within the gel. An older method of transfer involves placing a membrane on top of the gel, and a stack of filter papers on top of that. The entire stack is placed in a buffer solution which moves up the paper by capillary action, bringing the proteins with it. In practice this method is not commonly used due to the lengthy procedure time. As a result of either transfer process, the proteins are exposed on a thin membrane layer for detection. Both varieties of membrane are chosen for their non-specific protein binding properties (i.e. binds all proteins equally well). Protein binding is based upon hydrophobic interactions, as well as charged interactions between the membrane and protein. Nitrocellulose membranes are cheaper than PVDF, but are far more fragile and cannot withstand repeated probings. Total protein staining Total protein staining allows the total protein that has been successfully transferred to the membrane to be visualised, allowing the user to check the uniformity of protein transfer and to perform subsequent normalization of the target protein with the actual protein amount per lane. 
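Comparing a band with the ladder, as described above, can be made quantitative by exploiting the roughly linear relationship between the logarithm of a protein's molecular weight and its migration distance. The following Python sketch uses an invented set of ladder distances and is a simplified illustration rather than a validated analysis routine:

import math

# Hypothetical ladder: molecular weights (kDa) and migration distances (mm).
ladder_kda = [250.0, 150.0, 100.0, 75.0, 50.0, 37.0, 25.0]
ladder_mm = [10.0, 16.0, 23.0, 29.0, 38.0, 46.0, 58.0]

# Fit log10(MW) as a linear function of migration distance.
log_mw = [math.log10(m) for m in ladder_kda]
n = len(ladder_mm)
mean_x = sum(ladder_mm) / n
mean_y = sum(log_mw) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ladder_mm, log_mw)) \
        / sum((x - mean_x) ** 2 for x in ladder_mm)
intercept = mean_y - slope * mean_x

def estimate_kda(distance_mm):
    """Estimate the molecular weight of a band from its migration distance."""
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_kda(33.0), 1))   # a band between the 75 and 50 kDa markers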
Normalization with the so-called "loading control" was based on immunostaining of housekeeping proteins in the classical procedure, but is heading toward total protein staining recently, due to multiple benefits. At least seven different approaches for total protein staining have been described for western blot normalization: Ponceau S, stain-free techniques, Sypro Ruby, Epicocconone, Coomassie R-350, Amido Black, and Cy5. In order to avoid noise of signal, total protein staining should be performed before blocking of the membrane. Nevertheless, post-antibody stainings have been described as well. Blocking Since the membrane has been chosen for its ability to bind protein and as both antibodies and the target are proteins, steps must be taken to prevent the interactions between the membrane and the antibody used for detection of the target protein. Blocking of non-specific binding is achieved by placing the membrane in a dilute solution of protein – typically 3–5% bovine serum albumin (BSA) or non-fat dry milk (both are inexpensive) in tris-buffered saline (TBS) or I-Block, with a minute percentage (0.1%) of detergent such as Tween 20 or Triton X-100. Although non-fat dry milk is preferred due to its availability, an appropriate blocking solution is needed as not all proteins in milk are compatible with all the detection bands. The protein in the dilute solution attaches to the membrane in all places where the target proteins have not attached. Thus, when the antibody is added, it cannot bind to the membrane, and therefore the only available binding site is the specific target protein. This reduces background in the final product of the western blot, leading to clearer results, and eliminates false positives. Incubation During the detection process, the membrane is "probed" for the protein of interest with a modified antibody which is linked to a reporter enzyme; when exposed to an appropriate substrate, this enzyme drives a colorimetric reaction and produces a colour. For a variety of reasons, this traditionally takes place in a two-step process, although there are now one-step detection methods available for certain applications. Primary antibody The primary antibodies are generated when a host species or immune cell culture is exposed to the protein of interest (or a part thereof). Normally, this is part of the immune response, whereas here they are harvested and used as sensitive and specific detection tools that bind the protein directly. After blocking, a solution of primary antibody (generally between 0.5 and 5 micrograms/mL) diluted in either PBS or TBST wash buffer is incubated with the membrane under gentle agitation for typically an hour at room temperature, or overnight at 4°C. It can also be incubated at different temperatures, with lesser temperatures being associated with more binding, both specific (to the target protein, the "signal") and non-specific ("noise"). Following incubation, the membrane is washed several times in wash buffer to remove unbound primary antibody, and thereby minimize background. Typically, the wash buffer solution is composed of buffered saline solution with a small percentage of detergent, and sometimes with powdered milk or BSA. Secondary antibody After rinsing the membrane to remove unbound primary antibody, the membrane is exposed to another antibody known as the secondary antibody. Antibodies come from animal sources (or animal sourced hybridoma cultures). 
The secondary antibody recognises and binds to the species-specific portion of the primary antibody. Therefore, an anti-mouse secondary antibody will bind to almost any mouse-sourced primary antibody, and can be referred to as an 'anti-species' antibody (e.g. anti-mouse, anti-goat etc.). To allow detection of the target protein, the secondary antibody is commonly linked to biotin or a reporter enzyme such as alkaline phosphatase or horseradish peroxidase. This means that several secondary antibodies will bind to one primary antibody and enhance the signal, allowing the detection of proteins of a much lower concentration than would be visible by SDS-PAGE alone. Horseradish peroxidase is commonly linked to secondary antibodies to allow the detection of the target protein by chemiluminescence. The chemiluminescent substrate is cleaved by horseradish peroxidase, resulting in the production of luminescence. Therefore, the production of luminescence is proportional to the amount of horseradish peroxidase-conjugated secondary antibody, and therefore, indirectly measures the presence of the target protein. A sensitive sheet of photographic film is placed against the membrane, and exposure to the light from the reaction creates an image of the antibodies bound to the blot. A cheaper but less sensitive approach utilizes a 4-chloronaphthol stain with 1% hydrogen peroxide; the reaction of peroxide radicals with 4-chloronaphthol produces a dark purple stain that can be photographed without using specialized photographic film. As with the ELISPOT and ELISA procedures, the enzyme can be provided with a substrate molecule that will be converted by the enzyme to a coloured reaction product that will be visible on the membrane (see the figure below with blue bands). Another method of secondary antibody detection utilizes a near-infrared fluorophore-linked antibody. The light produced from the excitation of a fluorescent dye is static, making fluorescent detection a more precise and accurate measure of the difference in the signal produced by labeled antibodies bound to proteins on a western blot. Proteins can be accurately quantified because the signal generated by the different amounts of proteins on the membranes is measured in a static state, as compared to chemiluminescence, in which light is measured in a dynamic state. A third alternative is to use a radioactive label rather than an enzyme coupled to the secondary antibody, such as labeling an antibody-binding protein like Staphylococcus Protein A or Streptavidin with a radioactive isotope of iodine. Since other methods are safer, quicker, and cheaper, this method is now rarely used; however, an advantage of this approach is the sensitivity of auto-radiography-based imaging, which enables highly accurate protein quantification when combined with optical software (e.g. Optiquant). One step Historically, the probing process was performed in two steps because of the relative ease of producing primary and secondary antibodies in separate processes. This gives researchers and corporations huge advantages in terms of flexibility, reduction of cost, and adds an amplification step to the detection process. Given the advent of high-throughput protein analysis and lower limits of detection, however, there has been interest in developing one-step probing systems that would allow the process to occur faster and with fewer consumables. 
This requires a probe antibody which both recognizes the protein of interest and contains a detectable label; such probes are often available for known protein tags. The primary probe is incubated with the membrane in a manner similar to that for the primary antibody in a two-step process, and then is ready for direct detection after a series of wash steps. Detection and visualization After the unbound probes are washed away, the western blot is ready for detection of the probes that are labeled and bound to the protein of interest. In practical terms, not all westerns reveal protein only at one band in a membrane. Size approximations are taken by comparing the stained bands to those of the marker or ladder loaded during electrophoresis. The process is commonly repeated for a structural protein, such as actin or tubulin, that should not change between samples. The amount of target protein is normalized to the structural protein to control between groups. A superior strategy is the normalization to the total protein visualized with trichloroethanol or epicocconone. This practice ensures correction for the amount of total protein on the membrane in case of errors or incomplete transfers. (see western blot normalization) Colorimetric detection The colorimetric detection method depends on incubation of the western blot with a substrate that reacts with the reporter enzyme (such as peroxidase) that is bound to the secondary antibody. This converts the soluble dye into an insoluble form of a different colour that precipitates next to the enzyme and thereby stains the membrane. Development of the blot is then stopped by washing away the soluble dye. Protein levels are evaluated through densitometry (how intense the stain is) or spectrophotometry. Chemiluminescent detection Chemiluminescent detection methods depend on incubation of the western blot with a substrate that will luminesce when exposed to the reporter on the secondary antibody. The light is then detected by CCD cameras which capture a digital image of the western blot or photographic film. The use of film for western blot detection is slowly disappearing because of the non-linearity of the image (inaccurate quantification). The image is analysed by densitometry, which evaluates the relative amount of protein staining and quantifies the results in terms of optical density. Newer software allows further data analysis such as molecular weight analysis if appropriate standards are used. Radioactive detection Radioactive labels do not require enzyme substrates, but rather, allow the placement of medical X-ray film directly against the western blot, which develops as it is exposed to the label and creates dark regions which correspond to the protein bands of interest. The importance of radioactive detection methods is declining because the radiation is hazardous, the method is very expensive, health and safety risks are high, and ECL (enhanced chemiluminescence) provides a useful alternative. Fluorescent detection The fluorescently labeled probe is excited by light and the emission of the excitation is then detected by a photosensor such as a CCD camera equipped with appropriate emission filters which captures a digital image of the western blot and allows further data analysis such as molecular weight analysis and a quantitative western blot analysis. Fluorescence is considered to be one of the best methods for quantification but is less sensitive than chemiluminescence.
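Normalization to a loading control, as described above, is a simple ratio calculation once band intensities have been measured by densitometry. The following Python sketch uses made-up intensity values to show the idea; the same arithmetic applies when normalizing to a total protein stain instead of a housekeeping protein:

# Hypothetical densitometry readings (arbitrary units) for each lane.
target_intensity = {"control": 1200.0, "treated": 3100.0}
loading_control = {"control": 900.0, "treated": 1000.0}   # e.g. actin or tubulin

# Normalize the target signal to the loading control in each lane,
# then express each lane relative to the control lane.
normalized = {lane: target_intensity[lane] / loading_control[lane] for lane in target_intensity}
relative = {lane: normalized[lane] / normalized["control"] for lane in normalized}

for lane in relative:
    print(lane, round(relative[lane], 2))
# The treated lane shows roughly a 2.3-fold increase once loading differences
# are accounted for, rather than the 2.6-fold suggested by the raw intensities.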
Secondary probing One major difference between nitrocellulose and PVDF membranes relates to the ability of each to support "stripping" antibodies off and reusing the membrane for subsequent antibody probes. While there are well-established protocols available for stripping nitrocellulose membranes, the sturdier PVDF allows for easier stripping, and for more reuse before background noise limits experiments. Another difference is that, unlike nitrocellulose, PVDF must be soaked in 95% ethanol, isopropanol or methanol before use. PVDF membranes also tend to be thicker and more resistant to damage during use. Minimum requirement specification for Western Blot In order to ensure that the results of Western blots are reproducible, it is important to report the various parameters mentioned above, including specimen preparation, the concentration of protein used for loading, the percentage of gel and running condition, various transfer methods, attempting to block conditions, the concentration of antibodies, and identification and quantitative determination methods. Many of the articles that have been published don't cover all of these variables. Hence, it is crucial to describe different experimental circumstances or parameters in order to increase the repeatability and precision of WB. To increase WB repeatability, a minimum reporting criteria is thus required. 2-D gel electrophoresis Two-dimensional SDS-PAGE uses the principles and techniques outlined above. 2-D SDS-PAGE, as the name suggests, involves the migration of polypeptides in 2 dimensions. For example, in the first dimension, polypeptides are separated according to isoelectric point, while in the second dimension, polypeptides are separated according to their molecular weight. The isoelectric point of a given protein is determined by the relative number of positively (e.g. lysine, arginine) and negatively (e.g. glutamate, aspartate) charged amino acids, with negatively charged amino acids contributing to a low isoelectric point and positively charged amino acids contributing to a high isoelectric point. Samples could also be separated first under nonreducing conditions using SDS-PAGE, and under reducing conditions in the second dimension, which breaks apart disulfide bonds that hold subunits together. SDS-PAGE might also be coupled with urea-PAGE for a 2-dimensional gel. In principle, this method allows for the separation of all cellular proteins on a single large gel. A major advantage of this method is that it often distinguishes between different isoforms of a particular protein – e.g. a protein that has been phosphorylated (by addition of a negatively charged group). Proteins that have been separated can be cut out of the gel and then analysed by mass spectrometry, which identifies their molecular weight. Problems Detection problems There may be a weak or absent signal in the band for a number of reasons related to the amount of antibody and antigen used. This problem might be resolved by using the ideal antigen and antibody concentrations and dilutions specified in the supplier's data sheet. Increasing the exposition period in the detection system's software can address weak bands caused by lower sample and antibody concentrations. Multiple band problems When the protein is broken down by proteases, several bands other than predicted bands of low molecular weight might appear. The development of numerous bands can be prevented by properly preparing protein samples with enough protease inhibitors. 
Multiple bands might show up in the high-molecular-weight region because some proteins form dimers, trimers, and multimers; this issue can often be solved by heating the sample for longer. Proteins with post-translational modifications (PTMs) or numerous isoforms cause several bands to appear at different molecular weights. PTMs can be removed from a specimen using specific chemicals, which also removes the extra bands. High background Strong antibody concentrations, inadequate blocking, inadequate washing, and excessive exposure time during imaging can result in a high background in the blots. A high background can be avoided by correcting these issues. Irregular and uneven bands A variety of irregular and uneven bands can occur, including black dots, white spots or bands, and curved bands. Black dots are removed from the blots by effective blocking. White patches develop as a result of bubbles trapped between the membrane and the gel. White bands appear in the blots when the primary and secondary antibodies are present in excessively high concentrations. Smiley bands result from the high voltage used during the gel run and the correspondingly rapid protein migration. These irregular bands are eliminated by addressing the underlying problems. Mitigations Several problems can arise at the different steps of the western blotting procedure. They can originate from the protein analysis step, such as the detection of low-abundance or post-translationally modified proteins. They can also stem from the selection of antibodies, since the quality of the antibodies plays a significant role in detecting proteins specifically. Because of such problems, a variety of improvements are being developed in cell lysate preparation and blotting procedures to produce reliable results. Moreover, to achieve more sensitive analysis and overcome the problems associated with western blotting, several different techniques have been developed and utilized, such as far-western blotting, diffusion blotting, single-cell resolution western blotting, and automated microfluidic western blotting. Presentation Researchers use different software to process and align image sections for elegant presentation of western blot results. Popular tools include Sciugo, Microsoft PowerPoint, Adobe Illustrator and GIMP.
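The dependence of the isoelectric point on a protein's charged residues, described in the 2-D gel electrophoresis section above, can be illustrated with a small calculation. The sketch below estimates net charge as a function of pH from textbook side-chain pKa values using the Henderson-Hasselbalch relation; it is a rough illustration only, since it ignores the terminal groups and the environment-dependent pKa shifts that real pI calculators account for, and the example peptide is made up.

import numpy as np

# Approximate side-chain pKa values (textbook figures).
PKA_BASIC = {"K": 10.5, "R": 12.5, "H": 6.0}             # protonated form carries +1
PKA_ACIDIC = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}   # deprotonated form carries -1

def net_charge(sequence, ph):
    charge = 0.0
    for aa in sequence:
        if aa in PKA_BASIC:
            charge += 1.0 / (1.0 + 10 ** (ph - PKA_BASIC[aa]))
        elif aa in PKA_ACIDIC:
            charge -= 1.0 / (1.0 + 10 ** (PKA_ACIDIC[aa] - ph))
    return charge

def estimate_pi(sequence):
    # Scan the pH axis and return the value where the net charge is closest to zero.
    phs = np.linspace(0.0, 14.0, 1401)
    charges = np.array([net_charge(sequence, ph) for ph in phs])
    return phs[np.argmin(np.abs(charges))]

# Hypothetical peptide: the excess of lysine/arginine over aspartate/glutamate
# pushes the estimated isoelectric point well above neutral pH.
print(estimate_pi("MKKDEERRH"))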
Technology
Biotechnology
null
174080
https://en.wikipedia.org/wiki/Diagonal%20matrix
Diagonal matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is , while an example of a 3×3 diagonal matrix is. An identity matrix of any size, or any multiple of it is a diagonal matrix called a scalar matrix, for example, . In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix with columns and rows is diagonal if However, the main diagonal entries are unrestricted. The term diagonal matrix may sometimes refer to a , which is an -by- matrix with all the entries not of the form being zero. For example: More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a . A square diagonal matrix is a symmetric matrix, so this can also be called a . The following matrix is square diagonal matrix: If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". Vector-to-matrix diag operator A diagonal matrix can be constructed from a vector using the operator: This may be written more compactly as . The same operator is also used to represent block diagonal matrices as where each argument is a matrix. The operator may be written as: where represents the Hadamard product and is a constant vector with elements 1. Matrix-to-vector diag operator The inverse matrix-to-vector operator is sometimes denoted by the identically named where the argument is now a matrix and the result is a vector of its diagonal entries. The following property holds: Scalar matrix A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple of the identity matrix . Its effect on a vector is scalar multiplication by . For example, a 3×3 scalar matrix has the form: The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix has then given a matrix with the term of the products are: and and (since one can divide by ), so they do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices. For an abstract vector space (rather than the concrete vector space ), the analog of scalar matrices are scalar transformations. This is true more generally for a module over a ring , with the endomorphism algebra (algebra of linear operators on ) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map (from a scalar to its corresponding scalar transformation, multiplication by ) exhibiting as a -algebra. 
For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group . The former is more generally true free modules for which the endomorphism algebra is isomorphic to a matrix algebra. Vector operations Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix and a vector , the product is: This can be expressed more compactly by using a vector instead of a diagonal matrix, , and taking the Hadamard product of the vectors (entrywise product), denoted : This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly. Matrix operations The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write for a diagonal matrix whose diagonal entries starting in the upper left corner are . Then, for addition, we have and for matrix multiplication, The diagonal matrix is invertible if and only if the entries are all nonzero. In this case, we have In particular, the diagonal matrices form a subring of the ring of all -by- matrices. Multiplying an -by- matrix from the left with amounts to multiplying the -th row of by for all ; multiplying the matrix from the right with amounts to multiplying the -th column of by for all . Operator matrix in eigenbasis As explained in determining coefficients of operator matrix, there is a special basis, , for which the matrix takes the diagonal form. Hence, in the defining equation , all coefficients with are zero, leaving only one term per sum. The surviving diagonal elements, , are known as eigenvalues and designated with in the equation, which reduces to The resulting equation is known as eigenvalue equation and used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors. In other words, the eigenvalues of are with associated eigenvectors of . Properties The determinant of is the product . The adjugate of a diagonal matrix is again diagonal. Where all matrices are square, A matrix is diagonal if and only if it is triangular and normal. A matrix is diagonal if and only if it is both upper- and lower-triangular. A diagonal matrix is symmetric. The identity matrix and zero matrix are diagonal. A 1×1 matrix is always diagonal. The square of a 2×2 matrix with zero trace is always diagonal. Applications Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given -by- matrix is similar to a diagonal matrix (meaning that there is a matrix such that is diagonal) if and only if it has linearly independent eigenvectors. Such matrices are said to be diagonalizable. Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if then there exists a unitary matrix such that is diagonal). Furthermore, the singular value decomposition implies that for any matrix , there exist unitary matrices and such that is diagonal with positive entries. 
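The diag operators, the scaling behaviour, and the entrywise arithmetic described in the preceding paragraphs can be checked numerically. The following NumPy sketch uses arbitrary example values and is purely illustrative; as it happens, numpy.diag serves as both the vector-to-matrix and the matrix-to-vector operator.

import numpy as np

d = np.array([2.0, 3.0, 5.0])
D = np.diag(d)                         # vector -> square diagonal matrix
print(np.array_equal(np.diag(D), d))   # matrix -> vector of diagonal entries: True

M = np.arange(1.0, 10.0).reshape(3, 3)
x = np.array([1.0, -1.0, 2.0])

# Multiplying a vector by D is just the Hadamard (entrywise) product with d,
# so the zero entries of D never need to be stored.
print(np.allclose(D @ x, d * x))             # True

# D on the left scales the rows of M; D on the right scales its columns.
print(np.allclose(D @ M, d[:, None] * M))    # True: row i is multiplied by d[i]
print(np.allclose(M @ D, M * d[None, :]))    # True: column j is multiplied by d[j]

# Diagonal matrices add, multiply and (for nonzero entries) invert entrywise.
E = np.diag([7.0, 11.0, 13.0])
print(np.allclose(D @ E, np.diag(d * np.diag(E))))       # True
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / d)))   # True

# A scalar matrix (a multiple of the identity) commutes with every matrix.
S = 3.0 * np.identity(3)
print(np.allclose(S @ M, M @ S))             # True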
Operator theory In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function–the values of the function at each point correspond to the diagonal entries of a matrix.
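As a concrete instance of the diagonalization idea in the operator-theory paragraph above, the following sketch solves the one-dimensional heat equation with periodic boundary conditions by passing to the Fourier basis, where the Laplacian is diagonal and each mode simply decays by a factor exp(-alpha*k^2*t). The grid size, diffusivity and initial condition are arbitrary choices made for illustration.

import numpy as np

n, L, alpha, t = 256, 2 * np.pi, 1.0, 0.1   # grid points, domain length, diffusivity, time
x = np.linspace(0.0, L, n, endpoint=False)
u0 = np.exp(np.cos(x))                       # arbitrary smooth initial temperature profile

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers of the Fourier eigenbasis

# In the Fourier basis the heat propagator is a diagonal operator:
# each coefficient is multiplied independently by exp(-alpha * k**2 * t).
u_hat = np.fft.fft(u0) * np.exp(-alpha * k**2 * t)
u = np.real(np.fft.ifft(u_hat))

print(np.isclose(u.mean(), u0.mean()))   # True: the k = 0 mode (the mean) is unchanged
print(u.std() < u0.std())                # True: higher modes decay, so the profile flattens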
Mathematics
Linear algebra
null
174108
https://en.wikipedia.org/wiki/Abc%20conjecture
Abc conjecture
The abc conjecture (also known as the Oesterlé–Masser conjecture) is a conjecture in number theory that arose out of a discussion of Joseph Oesterlé and David Masser in 1985. It is stated in terms of three positive integers and (hence the name) that are relatively prime and satisfy . The conjecture essentially states that the product of the distinct prime factors of is usually not much smaller than . A number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. Mathematician Dorian Goldfeld described the abc conjecture as "The most important unsolved problem in Diophantine analysis". The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. The abc conjecture was shown to be equivalent to the modified Szpiro's conjecture. Various attempts to prove the abc conjecture have been made, but none have gained broad acceptance. Shinichi Mochizuki claimed to have a proof in 2012, but the conjecture is still regarded as unproven by the mainstream mathematical community. Formulations Before stating the conjecture, the notion of the radical of an integer must be introduced: for a positive integer , the radical of , denoted , is the product of the distinct prime factors of . For example, If a, b, and c are coprime positive integers such that a + b = c, it turns out that "usually" . The abc conjecture deals with the exceptions. Specifically, it states that: An equivalent formulation is: Equivalently (using the little o notation): A fourth equivalent formulation of the conjecture involves the quality q(a, b, c) of the triple (a, b, c), which is defined as For example: A typical triple (a, b, c) of coprime positive integers with a + b = c will have c < rad(abc), i.e. q(a, b, c) < 1. Triples with q > 1 such as in the second example are rather special, they consist of numbers divisible by high powers of small prime numbers. The fourth formulation is: Whereas it is known that there are infinitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1, the conjecture predicts that only finitely many of those have q > 1.01 or q > 1.001 or even q > 1.0001, etc. In particular, if the conjecture is true, then there must exist a triple (a, b, c) that achieves the maximal possible quality q(a, b, c). Examples of triples with small radical The condition that ε > 0 is necessary as there exist infinitely many triples a, b, c with c > rad(abc). For example, let The integer b is divisible by 9: Using this fact, the following calculation is made: By replacing the exponent 6n with other exponents forcing b to have larger square factors, the ratio between the radical and c can be made arbitrarily small. Specifically, let p > 2 be a prime and consider Now it may be plausibly claimed that b is divisible by p2: The last step uses the fact that p2 divides 2p(p−1) − 1. This follows from Fermat's little theorem, which shows that, for p > 2, 2p−1 = pk + 1 for some integer k. Raising both sides to the power of p then shows that 2p(p−1) = p2(...) + 1. And now with a similar calculation as above, the following results: A list of the highest-quality triples (triples with a particularly small radical relative to c) is given below; the highest quality, 1.6299, was found by Eric Reyssat for Some consequences The abc conjecture has a large number of consequences. 
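Before turning to those consequences, note that the radical and the quality defined above are easy to compute directly. The Python sketch below uses naive trial-division factorization (adequate for small numbers only) and checks the triple 2 + 3^10·109 = 23^5 commonly attributed to Reyssat, whose quality matches the 1.6299 figure quoted above.

from math import gcd, log

def rad(n):
    # Product of the distinct prime factors of n, by trial division.
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def quality(a, b, c):
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

print(quality(1, 8, 9))                 # 1 + 8 = 9, rad(1*8*9) = 6, q ≈ 1.2263
print(quality(2, 3**10 * 109, 23**5))   # Reyssat's triple, q ≈ 1.6299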
These include both known results (some of which have been proven separately only since the conjecture was stated) and conjectures for which it gives a conditional proof. The consequences include: Roth's theorem on Diophantine approximation of algebraic numbers. The Mordell conjecture (already proven in general by Gerd Faltings). As equivalent, Vojta's conjecture in dimension 1. The Erdős–Woods conjecture allowing for a finite number of counterexamples. The existence of infinitely many non-Wieferich primes in every base b > 1. The weak form of Marshall Hall's conjecture on the separation between squares and cubes of integers. Fermat's Last Theorem has a famously difficult proof by Andrew Wiles. However, it follows easily, at least for , from an effective form of a weak version of the abc conjecture. The abc conjecture says the lim sup of the set of all qualities (defined above) is 1, which implies the much weaker assertion that there is a finite upper bound for qualities. The conjecture that 2 is such an upper bound suffices for a very short proof of Fermat's Last Theorem for . A derivation of this implication is sketched below. The Fermat–Catalan conjecture, a generalization of Fermat's Last Theorem concerning powers that are sums of powers. The L-function L(s, χd), formed with the Legendre symbol, has no Siegel zero, given a uniform version of the abc conjecture in number fields, not just the abc conjecture as formulated above for rational integers. A polynomial P(x) has only finitely many perfect powers for all integers x if P has at least three simple zeros. A generalization of Tijdeman's theorem concerning the number of solutions of y^m = x^n + k (Tijdeman's theorem answers the case k = 1), and Pillai's conjecture (1931) concerning the number of solutions of Ay^m = Bx^n + k. As equivalent, the Granville–Langevin conjecture, that if f is a square-free binary form of degree n > 2, then for every real β > 2 there is a constant C(f, β) such that for all coprime integers x, y, the radical of f(x, y) exceeds C · max{|x|, |y|}^(n−β). All the polynomials (x^n-1)/(x-1) have infinitely many square-free values. As equivalent, the modified Szpiro conjecture, which would yield a bound of rad(abc)^(1.2+ε). It has been shown that the abc conjecture implies that the Diophantine equation n! + A = k^2 has only finitely many solutions for any given integer A. There are ~c_f N positive integers n ≤ N for which f(n)/B' is square-free, with c_f > 0 a positive constant defined as: The Beal conjecture, a generalization of Fermat's Last Theorem proposing that if A, B, C, x, y, and z are positive integers with A^x + B^y = C^z and x, y, z > 2, then A, B, and C have a common prime factor. The abc conjecture would imply that there are only finitely many counterexamples. Lang's conjecture, a lower bound for the height of a non-torsion rational point of an elliptic curve. A negative solution to the Erdős–Ulam problem on dense sets of Euclidean points with rational distances. An effective version of Siegel's theorem about integral points on algebraic curves. Theoretical results The abc conjecture implies that c can be bounded above by a near-linear function of the radical of abc. Bounds are known that are exponential. Specifically, the following bounds have been proven: In these bounds, K1 and K3 are constants that do not depend on a, b, or c, and K2 is a constant that depends on ε (in an effectively computable way) but not on a, b, or c. The bounds apply to any triple for which c > 2.
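Returning to the Fermat's Last Theorem item in the list above, the derivation referred to there is short enough to write out. The following sketch, given here in LaTeX notation, assumes only that every abc triple satisfies q(a, b, c) ≤ 2, i.e. c ≤ rad(abc)^2.

Suppose $x^n + y^n = z^n$ with pairwise coprime positive integers $x, y, z$ and $n \ge 6$, and consider the abc triple $(a, b, c) = (x^n, y^n, z^n)$. Because the radical ignores multiplicities,
\[
  \operatorname{rad}(abc) = \operatorname{rad}\bigl((xyz)^n\bigr) = \operatorname{rad}(xyz) \le xyz < z^3 ,
\]
using $x < z$ and $y < z$. If $c \le \operatorname{rad}(abc)^2$ held for every triple, then
\[
  z^n = c \le \operatorname{rad}(abc)^2 < z^6 ,
\]
which forces $n < 6$, a contradiction. Hence a uniform quality bound of 2 rules out all solutions with $n \ge 6$.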
There are also theoretical results that provide a lower bound on the best possible form of the abc conjecture. In particular, showed that there are infinitely many triples (a, b, c) of coprime integers with a + b = c and for all k < 4. The constant k was improved to k = 6.068 by . Computational results In 2006, the Mathematics Department of Leiden University in the Netherlands, together with the Dutch Kennislink science institute, launched the ABC@Home project, a grid computing system, which aims to discover additional triples a, b, c with rad(abc) < c. Although no finite set of examples or counterexamples can resolve the abc conjecture, it is hoped that patterns in the triples discovered by this project will lead to insights about the conjecture and about number theory more generally. As of May 2014, ABC@Home had found 23.8 million triples. Note: the quality q(a, b, c) of the triple (a, b, c) is defined above. Refined forms, generalizations and related statements The abc conjecture is an integer analogue of the Mason–Stothers theorem for polynomials. A strengthening, proposed by , states that in the abc conjecture one can replace rad(abc) by where ω is the total number of distinct primes dividing a, b and c. Andrew Granville noticed that the minimum of the function over occurs when This inspired to propose a sharper form of the abc conjecture, namely: with κ an absolute constant. After some computational experiments he found that a value of was admissible for κ. This version is called the "explicit abc conjecture". also describes related conjectures of Andrew Granville that would give upper bounds on c of the form where Ω(n) is the total number of prime factors of n, and where Θ(n) is the number of integers up to n divisible only by primes dividing n. proposed a more precise inequality based on . Let k = rad(abc). They conjectured there is a constant C1 such that holds whereas there is a constant C2 such that holds infinitely often. formulated the n conjecture—a version of the abc conjecture involving n > 2 integers. Claimed proofs Lucien Szpiro proposed a solution in 2007, but it was found to be incorrect shortly afterwards. Since August 2012, Shinichi Mochizuki has claimed a proof of Szpiro's conjecture and therefore the abc conjecture. He released a series of four preprints developing a new theory he called inter-universal Teichmüller theory (IUTT), which is then applied to prove the abc conjecture. The papers have not been widely accepted by the mathematical community as providing a proof of abc. This is not only because of their length and the difficulty of understanding them, but also because at least one specific point in the argument has been identified as a gap by some other experts. Although a few mathematicians have vouched for the correctness of the proof and have attempted to communicate their understanding via workshops on IUTT, they have failed to convince the number theory community at large. In March 2018, Peter Scholze and Jakob Stix visited Kyoto for discussions with Mochizuki. While they did not resolve the differences, they brought them into clearer focus. Scholze and Stix wrote a report asserting and explaining an error in the logic of the proof and claiming that the resulting gap was "so severe that ... small modifications will not rescue the proof strategy"; Mochizuki claimed that they misunderstood vital aspects of the theory and made invalid simplifications. 
On April 3, 2020, two mathematicians from the Kyoto research institute where Mochizuki works announced that his claimed proof would be published in Publications of the Research Institute for Mathematical Sciences, the institute's journal. Mochizuki is chief editor of the journal but recused himself from the review of the paper. The announcement was received with skepticism by Kiran Kedlaya and Edward Frenkel, as well as being described by Nature as "unlikely to move many researchers over to Mochizuki's camp". In March 2021, Mochizuki's proof was published in RIMS.
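The ABC@Home project described above searched for triples with rad(abc) < c. A completely naive, unoptimized version of such a search (nothing like the project's actual algorithms) can be written in a few lines of Python; the bound of 300 is arbitrary and kept small so the scan finishes quickly.

from math import gcd, log

def rad(n):
    # Product of the distinct prime factors of n, by trial division.
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

hits = []
for c in range(3, 300):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) == 1 and rad(a * b * c) < c:
            q = log(c) / log(rad(a * b * c))
            hits.append((a, b, c, round(q, 4)))

print(len(hits), "triples with rad(abc) < c and c < 300")
print(hits[0])   # the smallest such triple is (1, 8, 9), with quality ≈ 1.2263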
Mathematics
Prime numbers
null
174151
https://en.wikipedia.org/wiki/SATA
SATA
SATA (Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the earlier Parallel ATA (PATA) standard to become the predominant interface for storage devices. Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO) which are then released by the INCITS Technical Committee T13, AT Attachment (INCITS T13). History SATA was announced in 2000 in order to provide several advantages over the earlier PATA interface such as reduced cable size and cost (seven conductors instead of 40 or 80), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Revision 1.0 of the specification was released in January 2003. Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO). The SATA-IO group collaboratively creates, reviews, ratifies, and publishes the interoperability specifications, the test cases and plugfests. As with many other industry compatibility standards, the SATA content ownership is transferred to other industry bodies: primarily INCITS T13 and an INCITS T10 subcommittee (SCSI), a subgroup of T10 responsible for Serial Attached SCSI (SAS). The remainder of this article strives to use the SATA-IO terminology and specifications. Before SATA's introduction in 2000, PATA was simply known as ATA. The "AT Attachment" (ATA) name originated after the 1984 release of the IBM Personal Computer AT, more commonly known as the IBM AT. The IBM AT's controller interface became a de facto industry interface for the inclusion of hard disks. "AT" was IBM's abbreviation for "Advanced Technology"; thus, many companies and organizations indicate SATA is an abbreviation of "Serial Advanced Technology Attachment". However, the ATA specifications simply use the name "AT Attachment", to avoid possible trademark issues with IBM. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) uses a 16-bit wide data bus with many additional support and control signals, all operating at a much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command sets as legacy ATA devices. The world's first SATA hard disk drive is the Seagate Barracuda SATA V, which was released in Jan 2003. SATA has replaced parallel ATA in consumer desktop and laptop computers; SATA's market share in the desktop PC market was 99% in 2008. PATA has mostly been replaced by SATA for any use; with PATA in declining use in industrial and embedded applications that use CompactFlash (CF) storage, which was designed around the legacy PATA standard. A 2008 standard, CFast, to replace CompactFlash is based on SATA. Features Hot plug The Serial ATA spec requires SATA devices be capable of hot plugging; that is, devices that meet the specification are capable of insertion or removal of a device into or from a backplane connector (combined signal and power) that has power on. After insertion, the device initializes and then operates normally. Depending upon the operating system, the host may also initialize, resulting in a hot swap. 
The powered host and device do not need to be in an idle state for safe insertion and removal, although unwritten data may be lost when power is removed. Unlike PATA, both SATA and eSATA support hot plugging by design. However, this feature requires proper support at the host, device (drive), and operating-system levels. In general, SATA devices fulfill the device-side hot-plugging requirements, and most SATA host adapters support this function. For eSATA, hot plugging is supported in AHCI mode only. IDE mode does not support hot plugging. Advanced Host Controller Interface Advanced Host Controller Interface (AHCI) is an open host controller interface published and used by Intel, which has become a de facto standard. It allows the use of advanced features of SATA such as hotplug and native command queuing (NCQ). If AHCI is not enabled by the motherboard and chipset, SATA controllers typically operate in "IDE emulation" mode, which does not allow access to device features not supported by the ATA (also called IDE) standard. Windows device drivers that are labeled as SATA are often running in IDE emulation mode unless they explicitly state that they are AHCI mode, in RAID mode, or a mode provided by a proprietary driver and command set that allowed access to SATA's advanced features before AHCI became popular. Modern versions of Microsoft Windows, Mac OS X, FreeBSD, Linux with version 2.6.19 onward, as well as Solaris and OpenSolaris, include support for AHCI, but earlier operating systems such as Windows XP do not. Even in those instances, a proprietary driver may have been created for a specific chipset, such as Intel's. Revisions SATA revisions are typically designated with a dash followed by Roman numerals, e.g. "SATA-III", to avoid confusion with the speed, which is always displayed in Arabic numerals, e.g. "SATA 6 Gbit/s". The speeds given are the raw interface rate in Gbit/s including line code overhead, and the usable data rate in MB/s without overhead. SATA revision 1.0 (1.5 Gbit/s, 150 MB/s, Serial ATA-150) Revision 1.0a was released on January 7, 2003. First-generation SATA interfaces, now known as SATA 1.5 Gbit/s, communicate at a rate of 1.5 Gbit/s, and do not support Native Command Queuing (NCQ). Taking 8b/10b encoding overhead into account, they have an actual uncoded transfer rate of 1.2 Gbit/s (150 MB/s). The theoretical burst throughput of SATA 1.5 Gbit/s is similar to that of PATA/133, but newer SATA devices offer enhancements such as NCQ, which improve performance in a multitasking environment. During the initial period after SATA 1.5 Gbit/s finalization, adapter and drive manufacturers used a "bridge chip" to convert existing PATA designs for use with the SATA interface. Bridged drives have a SATA connector, may include either or both kinds of power connectors, and, in general, perform identically to their native-SATA equivalents. , the fastest 10,000 rpm SATA hard disk drives could transfer data at maximum (not average) rates of up to 157 MB/s, which is beyond the capabilities of the older PATA/133 specification and also exceeds the capabilities of SATA 1.5 Gbit/s. SATA revision 2.0 (3 Gbit/s, 300 MB/s, Serial ATA-300) SATA revision 2.0 was released in April 2004, introducing Native Command Queuing (NCQ). It is backward compatible with SATA 1.5 Gbit/s. Second-generation SATA interfaces run with a native transfer rate of 3.0 Gbit/s that, when accounted for the 8b/10b encoding scheme, equals to the maximum uncoded transfer rate of 2.4 Gbit/s (300 MB/s). 
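The 150 MB/s and 300 MB/s figures above (and the 600 MB/s figure for the following generation) all follow from the same 8b/10b arithmetic: every data byte is carried as a ten-bit symbol on the wire. A one-line Python helper makes the relationship explicit; it is an illustration of the encoding overhead only.

def sata_usable_mb_per_s(line_rate_gbit_per_s):
    # 8b/10b coding sends 10 line bits per data byte, so the usable byte rate
    # is the line rate divided by 10 (1 MB taken as 10**6 bytes here).
    return line_rate_gbit_per_s * 1e9 / 10 / 1e6

for rate in (1.5, 3.0, 6.0):
    print(f"SATA {rate} Gbit/s line rate -> {sata_usable_mb_per_s(rate):.0f} MB/s usable")
# SATA 1.5 Gbit/s line rate -> 150 MB/s usable
# SATA 3.0 Gbit/s line rate -> 300 MB/s usable
# SATA 6.0 Gbit/s line rate -> 600 MB/s usable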
The theoretical burst throughput of the SATA revision 2.0, which is also known as the SATA 3 Gbit/s, doubles the throughput of SATA revision 1.0. All SATA data cables meeting the SATA spec are rated for 3.0 Gbit/s and handle modern mechanical drives without any loss of sustained and burst data transfer performance. However, high-performance flash-based drives can exceed the SATA 3 Gbit/s transfer rate; this is addressed with the SATA 6 Gbit/s interoperability standard. SATA revision 2.5 Announced in August 2005, SATA revision 2.5 consolidated the specification to a single document. SATA revision 2.6 Announced in February 2007, SATA revision 2.6 introduced the following features: Slimline connector Micro connector (initially for 1.8” HDD) Mini Internal Multilane cable and connector Mini External Multilane cable and connector NCQ Priority NCQ Unload Enhancements to the BIST Activate FIS Enhancements for robust reception of the Signature FIS SATA revision 3.0 (6 Gbit/s, 600 MB/s, Serial ATA-600) Serial ATA International Organization (SATA-IO) presented the draft specification of SATA 6 Gbit/s physical layer in July 2008, and ratified its physical layer specification on August 18, 2008. The full 3.0 standard was released on May 27, 2009. Third-generation SATA interfaces run with a native transfer rate of 6.0 Gbit/s; taking 8b/10b encoding into account, the maximum uncoded transfer rate is 4.8 Gbit/s (600 MB/s). The theoretical burst throughput of SATA 6.0 Gbit/s is double that of SATA revision 2.0. It is backward compatible with earlier SATA implementations. The SATA 3.0 specification contains the following changes: 6 Gbit/s for scalable performance. Continued compatibility with SAS, including SAS 6 Gbit/s, as per "a SAS domain may support attachment to and control of unmodified SATA devices connected directly into the SAS domain using the Serial ATA Tunneled Protocol (STP)" from the SATA Revision 3.0 Gold specification. Isochronous Native Command Queuing (NCQ) streaming command to enable isochronous quality of service data transfers for streaming digital content applications. An NCQ management feature that helps optimize performance by enabling host processing and management of outstanding NCQ commands. Improved power management capabilities. A small low insertion force (LIF) connector for more compact 1.8-inch storage devices. A 7 mm optical disk drive profile for the slimline SATA connector (in addition to the existing 12.7 mm and 9.5 mm profiles). Alignment with the INCITS ATA8-ACS standard. In general, the enhancements are aimed at improving quality of service for video streaming and high-priority interrupts. In addition, the standard continues to support distances up to one meter. The newer speeds may require higher power consumption for supporting chips, though improved process technologies and power management techniques may mitigate this. The later specification can use existing SATA cables and connectors, though it was reported in 2008 that some OEMs were expected to upgrade host connectors for the higher speeds. SATA revision 3.1 Released in July 2011, SATA revision 3.1 introduced or changed the following features: mSATA, for solid-state drives in mobile computing devices, a PCI Express Mini Card-like connector that is electrically SATA. The connector was also used in some desktop computers, such as certain HP business PCs. Zero-power optical disk drive, a SATA optical drive that draws no power when idle. Queued TRIM Command, improves solid-state drive performance. 
Required Link Power Management, reduces overall system power demand of several SATA devices. Hardware Control Features, enable host identification of device capabilities. Universal Storage Module (USM), a new standard for cableless plug-in (slot) powered storage for consumer electronics devices. SATA revision 3.2 Released in August 2013, SATA revision 3.2 introduced the following features: The SATA Express specification defines an interface that combines both SATA and PCI Express buses, making it possible for both types of storage devices to coexist. By employing PCI Express, a much higher theoretical throughput of 1969 MB/s is possible. The SATA M.2 standard is a small form factor implementation of the SATA Express interface, with the addition of an internal USB 3.0 port; see the M.2 (NGFF) section below for a more detailed summary. microSSD introduces a ball grid array electrical interface for miniaturized, embedded SATA storage. USM Slim reduces thickness of Universal Storage Module (USM) from to . DevSleep enables lower power consumption for always-on devices while they are in low-power modes such as InstantGo (which used to be known as Connected Standby). Hybrid Information provides higher performance for solid-state hybrid drives. SATA revision 3.3 Released in February 2016, SATA revision 3.3 introduced the following features: Shingled magnetic recording (SMR) host-control support (device-controlled SMR HDDs are the same as standard CMR HDDs with respect to SATA compatibility). SMR provides a 25 percent or greater increase in hard disk drive capacity by overlapping tracks on the media. Optional Zoned ATA Command Set (ZAC) feature. Power Disable feature (see PWDIS pin) allows for remote power cycling of SATA drives and a Rebuild Assist function that speeds up the rebuild process to help ease maintenance in the data center. Transmitter Emphasis Specification increases interoperability and reliability between host and devices in electrically demanding environments. An activity indicator and staggered spin-up can be controlled by the same pin, adding flexibility and providing users with more choices. The new Power Disable feature (similar to the SAS Power Disable feature) uses Pin 3 of the SATA power connector. Some legacy power supplies that provide 3.3 V power on Pin 3 would force drives with Power Disable feature to get stuck in a hard reset condition preventing them from spinning up. The problem can usually be eliminated by using a simple “Molex to SATA” power adaptor to supply power to these drives. SATA revision 3.4 Released in June 2018, SATA revision 3.4 introduced the following features that enable monitoring of device conditions and execution of housekeeping tasks, both with minimal impact on performance: Durable/Ordered Write Notification: enables writing selected critical cache data to the media, minimizing impact on normal operations. Device Temperature Monitoring: allows for active monitoring of SATA device temperature and other conditions without impacting normal operation by utilizing the SFF-8609 standard for out-of-band (OOB) communications. Device Sleep Signal Timing: provides additional definition to enhance compatibility between manufacturers’ implementations. 
SATA revision 3.5 Released in July 2020, SATA revision 3.5 introduces features that enable increased performance benefits and promote greater integration of SATA devices and products with other industry I/O standards: Device Transmit Emphasis for Gen 3 PHY: aligns SATA with other characteristics of other I/O measurement solutions to help SATA-IO members with testing and integration. Defined Ordered NCQ Commands: allows the host to specify the processing relationships among queued commands and sets the order in which commands are processed in the queue. Command Duration Limit Features: reduces latency by allowing the host to define quality of service categories, giving the host more granularity in controlling command properties. The feature helps align SATA with the "Fast Fail" requirements established by the Open Compute Project (OCP) and specified in the INCITS T13 Technical Committee standard. SATA revision 3.5a was released in March 2021. Cables, connectors, and ports Connectors and cables present the most visible differences between SATA and parallel ATA drives. Unlike PATA, the same connectors are used on 3.5-inch SATA hard disks (for desktop and server computers) and 2.5-inch disks (for portable or small computers). Standard SATA connectors for both data and power have a conductor pitch of . Low insertion force is required to mate a SATA connector. A smaller mini-SATA or mSATA connector is used by smaller devices such as 1.8-inch SATA drives, some DVD and Blu-ray drives, and mini SSDs. A special eSATA connector is specified for external devices, and an optionally implemented provision for clips to hold internal connectors firmly in place. SATA drives may be plugged into SAS controllers and communicate on the same physical cable as native SAS disks, but SATA controllers cannot handle SAS disks. Female SATA ports (on motherboards for example) are for use with SATA data cables that have locks or clips to prevent accidental unplugging. Some SATA cables have right- or left-angled connectors to ease connection to circuit boards. Data connector The SATA standard defines a data cable with seven conductors (three grounds and four active data lines in two pairs) and 8 mm wide wafer connectors on each end. SATA cables can have lengths up to , and connect one motherboard socket to one hard drive. PATA ribbon cables, in comparison, connect one motherboard socket to one or two hard drives, carry either 40 or 80 wires, and are limited to in length by the PATA specification; however, cables up to are readily available. Thus, SATA connectors and cables are easier to fit in closed spaces and reduce obstructions to air cooling. Some cables even include a locking feature, whereby a small (usually metal) spring holds the plug in the socket. SATA connectors may be straight, upward-angled, downward-angled, leftward-angled, or rightward-angled. Angled connectors allow lower-profile connections. Downward-angled connectors lead the cable immediately away from the drive, on the circuit-board side. Upward-angled connectors lead the cable across the drive towards its top. One of the problems associated with the transmission of data at high speed over electrical connections is described as noise, which is due to electrical coupling between data circuits and other circuits. As a result, the data circuits can both affect other circuits and be affected by them. Designers use a number of techniques to reduce the undesirable effects of such unintentional coupling. 
One such technique used in SATA links is differential signaling. This is an enhancement over PATA, which uses single-ended signaling. The use of fully shielded, dual coax conductors, with multiple ground connections, for each differential pair improves isolation between the channels and reduces the chances of lost data in difficult electrical environments. SATA Power connectors Standard power connector (15 pins) SATA specifies a different power connector than the four-pin Molex connector used on Parallel ATA (PATA) devices (and earlier small storage devices, going back to ST-506 hard disk drives and even to floppy disk drives that predated the IBM PC). It is a wafer-type connector, like the SATA data connector, but much wider (fifteen pins versus seven) to avoid confusion between the two. Some early SATA drives included the four-pin Molex power connector together with the new fifteen-pin connector, but most SATA drives now have only the latter. The new SATA power connector contains many more pins for several reasons: 3.3 V is supplied along with the traditional 5 V and 12 V supplies. However, very few drives actually use it. Pin 3 in SATA revision 3.3 has been redefined as PWDIS and is used to enter and exit the POWER DISABLE mode in line with SAS-3. If Pin 3 is driven HIGH (2.1–3.6 V max), power to the drive circuitry is cut. Drives with this feature enabled do not power up in systems designed to SATA revision 3.1 or earlier, because Pin 3 driven HIGH prevents the drive from powering up. Workarounds include using a Molex adapter without 3.3 V or putting insulating tape over the PWDIS pin. To reduce resistance and increase current capability, each voltage is supplied by three pins in parallel, though one pin in each group is intended for precharging (see below). Each pin should be able to carry 1.5 A. Five parallel pins provide a low-resistance ground connection. Two ground pins and one pin for each supplied voltage support hot-plug precharging. Ground pins 4 and 12 in a hot-swap cable are the longest, so they make contact first when the connectors are mated. Drive power connector pins 3, 7, and 13 are longer than the others, so they make contact next. The drive uses them to charge its internal bypass capacitors through current-limiting resistances. Finally, the remaining power pins make contact, bypassing the resistances and providing a low-resistance source of each voltage. This two-step mating process avoids glitches to other loads and possible arcing or erosion of the SATA power-connector contacts. Pin 11 might be used (often by chassis or backplane hardware independent from SATA host controller and its data connection) for staggered spinup, activity indication, emergency head parking, or other vendor defined functions in various combinations. It is an open-collector signal, which may be pulled down by the connector or the drive. Host signaling: If pulled down at the connector (as it is on most cable-style SATA power connectors), the drive spins up as soon as power is applied. If left floating, the drive waits until it is spoken to. This prevents many drives from spinning up simultaneously, which might draw too much power. Drive signaling: The pin is also pulled low by the drive to indicate drive activity. This may be used to give feedback to the user through an LED. Relevant definitions of pin operation have changed multiple times in published revisions of SATA standard, so the observed behavior may be dependent on device version, host version, firmware and software configuration. 
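A back-of-the-envelope calculation based only on the pin counts and the 1.5 A per-pin rating quoted above shows roughly how much current and power each supply rail of the fifteen-pin connector can deliver. Treating all three pins of a voltage group as current-carrying is a simplification, since one pin of each group doubles as the precharge pin.

PINS_PER_RAIL = 3      # three parallel pins per supply voltage, as described above
AMPS_PER_PIN = 1.5     # per-pin current rating quoted above

for volts in (3.3, 5.0, 12.0):
    amps = PINS_PER_RAIL * AMPS_PER_PIN
    print(f"{volts} V rail: up to {amps} A, roughly {volts * amps:.1f} W")
# 3.3 V rail: up to 4.5 A, roughly 14.8 W
# 5.0 V rail: up to 4.5 A, roughly 22.5 W
# 12.0 V rail: up to 4.5 A, roughly 54.0 W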
There is also a specification for transmission of drive temperature and other status values with activity signal pulses routinely used to make the LED blink. Passive adapters are available that convert a four-pin Molex connector to a SATA power connector, providing the 5 V and 12 V lines available on the Molex connector, but not 3.3 V. There are also four-pin Molex-to-SATA power adapters that include electronics to additionally provide the 3.3 V power supply. However, most drives do not require the 3.3 V power line. Just like SATA data connectors, SATA power connectors may be straight, upward-angled, or downward-angled. Slimline power connector (6 pins) The power connector is reduced to six pins so that it supplies only +5 V (red wire), and not +12 V or +3.3 V. Pin 1 of the slimline power connector, denoting device presence, is shorter than the others to allow hot-swapping. Note: The data connector used is the same as the non-slimline version. Low-cost adapters exist to convert from standard SATA to slimline SATA. SATA 2.6 is the first revision that defined the slimline power connector, targeted at smaller form-factor drives such as laptop optical drives. Micro connector The micro SATA connector (sometimes called uSATA or μSATA) originated with SATA 2.6, and is intended for 1.8-inch hard disk drives. There is also a micro data connector, similar in appearance but slightly thinner than the standard data connector. Additional pins Some SATA drives, in particular mechanical ones, come with an extra interface of four or more pins that is not uniformly standardized but nevertheless serves similar purposes defined by each drive manufacturer. Whereas IDE drives used such extra pins to set up master and slave drives, on SATA drives these pins are generally used, by means of a jumper, to select different power modes for use in USB-SATA bridges or to enable additional features such as spread-spectrum clocking, a SATA speed limit, or a factory mode for diagnostics and recovery. eSATA Standardized in 2004, eSATA (e standing for external) provides a variant of SATA meant for external connectivity. It uses a more robust connector, longer shielded cables, and stricter (but backward-compatible) electrical standards. The protocol and logical signaling (link/transport layers and above) are identical to internal SATA. The differences are: Minimum transmit amplitude increased: Range is 500–600 mV instead of 400–600 mV. Minimum receive amplitude decreased: Range is 240–600 mV instead of 325–600 mV. Maximum cable length increased to from . The eSATA cable and connector are similar to the SATA 1.0a cable and connector, with these exceptions: The eSATA connector is mechanically different to prevent unshielded internal cables from being used externally. The eSATA connector discards the L-shaped key and changes the position and size of the guides. The eSATA insertion depth is deeper: 6.6 mm instead of 5 mm. The contact positions are also changed. The eSATA cable has an extra shield to reduce EMI and meet FCC and CE requirements. Internal cables do not need the extra shield to satisfy EMI requirements because they are inside a shielded case. The eSATA connector uses metal springs for shield contact and mechanical retention. The eSATA connector has a design-life of 5,000 matings; the ordinary SATA connector is only specified for 50. Aimed at the consumer market, eSATA enters an external storage market served also by the USB and FireWire interfaces. The SATA interface has certain advantages.
Most external hard-disk-drive cases with FireWire or USB interfaces use either PATA or SATA drives and "bridges" to translate between the drives' interfaces and the enclosures' external ports; this bridging incurs some inefficiency. Some single disks can transfer 157 MB/s during real use, about four times the maximum transfer rate of USB 2.0 or FireWire 400 (IEEE 1394a) and almost twice as fast as the maximum transfer rate of FireWire 800. The S3200 FireWire 1394b specification reaches around 400 MB/s (3.2 Gbit/s), and USB 3.0 has a nominal speed of 5 Gbit/s. Some low-level drive features, such as S.M.A.R.T., may not operate through some USB or FireWire or USB+FireWire bridges; eSATA does not suffer from these issues provided that the controller manufacturer (and its drivers) presents eSATA drives as ATA devices, rather than as SCSI devices, as has been common with Silicon Image, JMicron, and Nvidia nForce drivers for Windows Vista. In those cases SATA drives do not have low-level features accessible. The eSATA version of SATA 6G operates at 6.0 Gbit/s (the term "SATA III" is avoided by the SATA-IO organization to prevent confusion with SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bit/s] or "SATA 300" [MB/s] since the 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [bit/s] or "SATA 150" [MB/s]). Therefore, eSATA connections operate with negligible differences between them. Once an interface can transfer data as fast as a drive can handle them, increasing the interface speed does not improve data transfer. There are some disadvantages, however, to the eSATA interface: Devices built before the eSATA interface became popular lack external SATA connectors. For small form-factor devices (such as external 2.5-inch disks), a PC-hosted USB or FireWire link can usually supply sufficient power to operate the device. However, eSATA connectors cannot supply power, and require a power supply for the external device. The related eSATAp (but mechanically incompatible, sometimes called eSATA/USB) connector adds power to an external SATA connection, so that an additional power supply is not needed. few new computers have dedicated external SATA (eSATA) connectors, with USB3 dominating and USB3 Type C, often with the Thunderbolt alternate mode, starting to replace the earlier USB connectors. Still sometimes present are single ports supporting both USB3 and eSATA. Desktop computers without a built-in eSATA interface can install an eSATA host bus adapter (HBA); if the motherboard supports SATA, an externally available eSATA connector can be added. Notebook computers with the now rare Cardbus or ExpressCard could add an eSATA HBA. With passive adapters, the maximum cable length is reduced to due to the absence of compliant eSATA signal-levels. eSATAp eSATAp stands for powered eSATA. It is also known as Power over eSATA, Power eSATA, eSATA/USB Combo, or eSATA USB Hybrid Port (EUHP). An eSATAp port combines the four pins of the USB 2.0 (or earlier) port, the seven pins of the eSATA port, and optionally two 12 V power pins. Both SATA traffic and device power are integrated in a single cable, as is the case with USB but not eSATA. The 5 V power is provided through two USB pins, while the 12 V power may optionally be provided. Typically desktop, but not notebook, computers provide 12 V power, so can power devices requiring this voltage, typically 3.5-inch disk and CD/DVD drives, in addition to 5 V devices such as 2.5-inch drives. 
Both USB and eSATA devices can be used with an eSATAp port, when plugged in with a USB or eSATA cable, respectively. An eSATA device cannot be powered via an eSATAp cable, but a special cable can make both SATA or eSATA and power connectors available from an eSATAp port. An eSATAp connector can be built into a computer with internal SATA and USB, by fitting a bracket with connections for internal SATA, USB, and power connectors and an externally accessible eSATAp port. Though eSATAp connectors have been built into several devices, manufacturers do not refer to an official standard. Pre-standard implementations Prior to the final eSATA 6 Gbit/s specification many add-on cards and some motherboards advertised eSATA 6 Gbit/s support because they had 6 Gbit/s SATA 3.0 controllers for internal-only solutions. Those implementations are non-standard, and eSATA 6 Gbit/s requirements were ratified in the July 18, 2011 SATA 3.1 specification. Some products might not be fully eSATA 6 Gbit/s compliant. Mini-SATA (mSATA) Mini-SATA (abbreviated as mSATA), which is distinct from the micro connector, was announced by the Serial ATA International Organization on September 21, 2009. Applications include netbooks, laptops and other devices that require a solid-state drive in a small footprint. The physical dimensions of the mSATA connector are identical to those of the PCI Express Mini Card interface, but the interfaces are electrically incompatible; the data signals (TX±/RX± SATA, PETn0 PETp0 PERn0 PERp0 PCI Express) need a connection to the SATA host controller instead of the PCI Express host controller. The M.2 specification has superseded both mSATA and mini-PCIe. SFF-8784 connector Slim 2.5-inch SATA devices, in height, use the twenty-pin SFF-8784 edge connector to save space. By combining the data signals and power lines into a slim connector that effectively enables direct connection to the device's printed circuit board (PCB) without additional space-consuming connectors, SFF-8784 allows further internal layout compaction for portable devices such as ultrabooks. Pins 1 to 10 are on the connector's bottom side, while pins 11 to 20 are on the top side. SATA Express SATA Express, initially standardized in the SATA 3.2 specification, is an interface that supports either SATA or PCI Express storage devices. The host connector is backward compatible with the standard 3.5-inch SATA data connector, allowing up to two legacy SATA devices to connect. At the same time, the host connector provides up to two PCI Express 3.0 lanes as a pure PCI Express connection to the storage device, allowing bandwidths of up to 2 GB/s. Instead of the otherwise usual approach of doubling the native speed of the SATA interface, PCI Express was selected for achieving data transfer speeds greater than 6 Gbit/s. It was concluded that doubling the native SATA speed would take too much time, too many changes would be required to the SATA standard, and would result in a much greater power consumption when compared to the existing PCI Express bus. In addition to supporting legacy Advanced Host Controller Interface (AHCI), SATA Express also makes it possible for NVM Express (NVMe) to be used as the logical device interface for connected PCI Express storage devices. As M.2 form factor, described below, achieved much larger popularity, SATA Express is considered as a failed standard and dedicated ports quickly disappeared from motherboards. 
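The 2 GB/s figure above (and the 1969 MB/s quoted earlier for SATA revision 3.2) follows from two PCIe 3.0 lanes at 8 GT/s with 128b/130b line coding; the same arithmetic gives the bandwidth of the four-lane U.2 link described below. This sketch covers the encoding overhead only, ignores protocol overheads, and rounding conventions vary slightly between sources.

def pcie3_usable_mb_per_s(lanes):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, i.e. 128 payload bits
    # for every 130 bits on the wire (1 MB taken as 10**6 bytes here).
    bits_per_s = lanes * 8e9 * 128 / 130
    return bits_per_s / 8 / 1e6

print(f"PCIe 3.0 x2 (SATA Express): {pcie3_usable_mb_per_s(2):.0f} MB/s")  # ~1969 MB/s
print(f"PCIe 3.0 x4 (U.2):          {pcie3_usable_mb_per_s(4):.0f} MB/s")  # ~3938 MB/s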
M.2 (NGFF) M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for computer expansion cards and associated connectors. It replaces the mSATA standard, which uses the PCI Express Mini Card physical layout. Having a smaller and more flexible physical specification, together with more advanced features, the M.2 is more suitable for solid-state storage applications in general, especially when used in small devices such as ultrabooks or tablets. The M.2 standard is designed as a revision and improvement to the mSATA standard, so that larger printed circuit boards (PCBs) can be manufactured. While mSATA took advantage of the existing PCI Express Mini Card form factor and connector, M.2 has been designed to maximize usage of the card space, while minimizing the footprint. Supported host controller interfaces and internally provided ports are a superset of those defined by the SATA Express interface. Essentially, the M.2 standard is a small form factor implementation of the SATA Express interface, with the addition of an internal USB 3.0 port. U.2 (SFF-8639) U.2 was formerly known as SFF-8639. Like M.2, it carries a PCI Express electrical signal; however, U.2 uses a PCIe 3.0 ×4 link, providing a higher bandwidth of 32 Gbit/s in each direction. In order to provide maximum backward compatibility, the U.2 connector also supports SATA and multi-path SAS. Topology SATA uses a point-to-point architecture. The physical connection between a controller and a storage device is not shared among other controllers and storage devices. SATA defines port multipliers, which allow a single SATA controller port to drive up to fifteen storage devices. The multiplier performs the function of a hub; the controller and each storage device are connected to the hub. This is conceptually similar to SAS expanders. PC systems have SATA controllers built into the motherboard, typically featuring two to eight ports. Additional ports can be installed through add-in SATA host adapters (available for a variety of bus interfaces: USB, PCI, PCIe). Backward and forward compatibility SATA and PATA At the hardware interface level, SATA and PATA (Parallel AT Attachment) devices are completely incompatible: they cannot be interconnected without an adapter. At the application level, SATA devices can be specified to look and act like PATA devices. Many motherboards offer a "Legacy Mode" option, which makes SATA drives appear to the OS like PATA drives on a standard controller. This Legacy Mode eases OS installation by not requiring that a specific driver be loaded during setup, but sacrifices support for some (vendor-specific) features of SATA. Legacy Mode often, if not always, disables some of the board's PATA or SATA ports, since the standard PATA controller interface supports only four drives. (Often, which ports are disabled is configurable.) The common heritage of the ATA command set has enabled the proliferation of low-cost PATA to SATA bridge chips. Bridge chips were widely used on PATA drives (before the completion of native SATA drives) as well as in standalone converters. When attached to a PATA drive, a device-side converter allows the PATA drive to function as a SATA drive. Host-side converters allow a motherboard PATA port to connect to a SATA drive. The market has produced powered enclosures for both PATA and SATA drives that interface to the PC through USB, FireWire or eSATA, with the restrictions noted above.
PCI cards with a SATA connector exist that allow SATA drives to connect to legacy systems without SATA connectors. SATA 1.5 Gbit/s and SATA 3 Gbit/s The designers of the SATA standard aimed, as an overall goal, for backward and forward compatibility with future revisions of the SATA standard. To prevent interoperability problems that could occur when next-generation SATA drives are installed on motherboards with standard legacy SATA 1.5 Gbit/s host controllers, many manufacturers have made it easy to switch those newer drives to the previous standard's mode. Examples of such provisions include: Seagate/Maxtor has added a user-accessible jumper switch, known as the "force 150", to enable the drive to switch between forced 1.5 Gbit/s operation and negotiated 1.5/3 Gbit/s operation. Western Digital uses a jumper setting called OPT1 to force a 1.5 Gbit/s data transfer speed (OPT1 is enabled by placing the jumper on pins 5 and 6). Samsung drives can be forced to 1.5 Gbit/s mode using software that may be downloaded from the manufacturer's website. Configuring some Samsung drives in this manner requires the temporary use of a SATA-2 (SATA 3.0 Gbit/s) controller while programming the drive. The "force 150" switch (or equivalent) is also useful for attaching SATA 3 Gbit/s hard drives to SATA controllers on PCI cards, since many of these controllers (such as the Silicon Image chips) run at 3 Gbit/s, even though the PCI bus cannot reach 1.5 Gbit/s speeds. This can cause data corruption in operating systems that do not specifically test for this condition and limit the disk transfer speed. SATA 3 Gbit/s and SATA 6 Gbit/s SATA 3 Gbit/s and SATA 6 Gbit/s are compatible with each other. Most devices that are only SATA 3 Gbit/s can connect with devices that are SATA 6 Gbit/s, and vice versa, though SATA 3 Gbit/s devices connect with SATA 6 Gbit/s devices only at the slower 3 Gbit/s speed. SATA 1.5 Gbit/s and SATA 6 Gbit/s SATA 1.5 Gbit/s and SATA 6 Gbit/s are compatible with each other. Most devices that are only SATA 1.5 Gbit/s can connect with devices that are SATA 6 Gbit/s, and vice versa, though SATA 1.5 Gbit/s devices only connect with SATA 6 Gbit/s devices at the slower 1.5 Gbit/s speed. Comparison to other interfaces SATA and SCSI Parallel SCSI uses a more complex bus than SATA, usually resulting in higher manufacturing costs. SCSI buses also allow connection of several drives on one shared channel, whereas SATA allows one drive per channel, unless using a port multiplier. Serial Attached SCSI uses the same physical interconnects as SATA, and most SAS HBAs also support 3 and 6 Gbit/s SATA devices (an HBA requires support for Serial ATA Tunneling Protocol). SATA 3 Gbit/s theoretically offers a maximum bandwidth of 300 MB/s per device, which is only slightly lower than the rated speed for SCSI Ultra 320 with a maximum of 320 MB/s total for all devices on a bus. SCSI drives provide greater sustained throughput than multiple SATA drives connected via a simple (i.e., command-based) port multiplier because of disconnect-reconnect and aggregating performance. In general, SATA devices link compatibly to SAS enclosures and adapters, whereas SCSI devices cannot be directly connected to a SATA bus. SCSI, SAS, and fibre-channel (FC) drives are more expensive than SATA, so they are used in servers and disk arrays where the better performance justifies the additional cost. Inexpensive ATA and SATA drives evolved in the home-computer market; hence, there is a view that they are less reliable.
As those two worlds overlapped, the subject of reliability became somewhat controversial. Note that, in general, the failure rate of a disk drive is related to the quality of its heads, platters and supporting manufacturing processes, not to its interface. Use of serial ATA in the business market increased from 22% in 2006 to 28% in 2008. Comparison with other buses SCSI-3 devices with SCA-2 connectors are designed for hot swapping. Many server and RAID systems provide hardware support for transparent hot swapping. The designers of the SCSI standard prior to SCA-2 connectors did not target hot swapping, but in practice, most RAID implementations support hot swapping of hard disks.
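The payload figures used in this comparison follow directly from the raw line rates once line-code overhead is accounted for. The following is a minimal sketch of that arithmetic, assuming 8b/10b encoding for all three SATA generations and 128b/130b encoding at 8 GT/s per lane for the PCIe 3.0 ×4 link carried by U.2; the function name and structure are illustrative only.

```python
# Approximate usable payload bandwidth from a raw line rate and its line code.
# Assumes SATA 1.5/3/6 Gbit/s all use 8b/10b encoding (8 payload bits carried
# in every 10 transmitted bits) and PCIe 3.0 uses 128b/130b at 8 GT/s per lane.

def payload_mb_per_s(line_rate_gbit: float, payload_bits: int, coded_bits: int) -> float:
    """Convert a raw line rate (Gbit/s) into payload megabytes per second."""
    payload_bit_rate = line_rate_gbit * 1e9 * payload_bits / coded_bits
    return payload_bit_rate / 8 / 1e6  # bits -> bytes, then bytes -> MB

for name, rate in (("SATA 1.5 Gbit/s", 1.5), ("SATA 3 Gbit/s", 3.0), ("SATA 6 Gbit/s", 6.0)):
    print(f"{name}: ~{payload_mb_per_s(rate, 8, 10):.0f} MB/s per device")

lanes = 4                                   # U.2 uses a PCIe 3.0 x4 link
per_lane = payload_mb_per_s(8.0, 128, 130)  # one PCIe 3.0 lane at 8 GT/s
print(f"PCIe 3.0 x4 (U.2): ~{lanes * per_lane:.0f} MB/s "
      f"(~{lanes * per_lane * 8 / 1000:.0f} Gbit/s) per direction")
```

Run as-is, this reproduces the commonly quoted 150, 300, and 600 MB/s per-device ceilings for the three SATA generations and the roughly 32 Gbit/s per-direction figure cited for U.2 above.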
Technology
Computer hardware
null
174236
https://en.wikipedia.org/wiki/Diagenesis
Diagenesis
Diagenesis () is the process of physical and chemical changes in sediments, caused initially by water-rock interactions, microbial activity, and compaction after their deposition. Increased pressure and temperature only start to play a role as sediments become buried much deeper in the Earth's crust. In the early stages, the transformation of poorly consolidated sediments into sedimentary rock (lithification) is simply accompanied by a reduction in porosity and water expulsion (clay sediments), while their main mineralogical assemblages remain unaltered. As the rock is carried deeper by further deposition above, its organic content is progressively transformed into kerogens and bitumens. The process of diagenesis excludes surface alteration (weathering) and deep metamorphism. There is no sharp boundary between diagenesis and metamorphism, but the latter occurs at higher temperatures and pressures. Hydrothermal solutions, meteoric groundwater, rock porosity, permeability, dissolution/precipitation reactions, and time are all influential factors. After deposition, sediments are compacted as they are buried beneath successive layers of sediment and cemented by minerals that precipitate from solution. Grains of sediment, rock fragments and fossils can be replaced by other minerals (e.g. calcite, siderite, pyrite or marcasite) during diagenesis. Porosity usually decreases during diagenesis, except in rare cases such as dissolution of minerals and dolomitization. The study of diagenesis in rocks is used to understand the geologic history they have undergone and the nature and type of fluids that have circulated through them. From a commercial standpoint, such studies aid in assessing the likelihood of finding various economically viable mineral and hydrocarbon deposits. The process of diagenesis is also important in the decomposition of bone tissue. Role in anthropology and paleontology The term diagenesis, literally meaning "across generation", is extensively used in geology. However, this term has filtered into the fields of anthropology, archaeology and paleontology to describe the changes and alterations that take place on skeletal (biological) material. Specifically, diagenesis "is the cumulative physical, chemical, and biological environment; these processes will modify an organic object's original chemical and/or structural properties and will govern its ultimate fate, in terms of preservation or destruction". In order to assess the potential impact of diagenesis on archaeological or fossil bones, many factors need to be assessed, beginning with the elemental and mineralogical composition of the bone and enveloping soil, as well as the local burial environment (geology, climatology, groundwater). The composite nature of bone, comprising one-third organic (mainly protein collagen) and two-thirds mineral (calcium phosphate, mostly in the form of hydroxyapatite), renders its diagenesis more complex. Alteration occurs at all scales, from molecular loss and substitution, through crystallite reorganization, porosity, and microstructural changes, and in many cases, to the disintegration of the complete unit. Three general pathways of bone diagenesis have been identified: chemical deterioration of the organic phase, chemical deterioration of the mineral phase, and (micro)biological attack of the composite. These proceed as follows. The dissolution of collagen depends on time, temperature, and environmental pH.
At high temperatures, the rate of collagen loss will be accelerated, and extreme pH can cause collagen swelling and accelerated hydrolysis. Due to the increase in porosity of bones through collagen loss, the bone becomes susceptible to hydrolytic infiltration, where the hydroxyapatite, with its affinity for amino acids, permits charged species of endogenous and exogenous origin to take up residence. The hydrolytic activity plays a key role in the mineral phase transformations that expose the collagen to accelerated chemical and biological degradation. Chemical changes affect crystallinity. Mechanisms of chemical change, such as the uptake of F− or CO32−, may cause recrystallization, in which hydroxyapatite is dissolved and re-precipitated, allowing for the incorporation or substitution of exogenous material. Once an individual has been interred, microbial attack, the most common mechanism of bone deterioration, occurs rapidly. During this phase, most bone collagen is lost and porosity is increased. The dissolution of the mineral phase caused by low pH permits access to the collagen by extracellular microbial enzymes, thus enabling microbial attack. Role in hydrocarbon generation When animal or plant matter is buried during sedimentation, the constituent organic molecules (lipids, proteins, carbohydrates and lignin-humic compounds) break down due to the increase in temperature and pressure. This transformation occurs in the first few hundred meters of burial and results in the creation of two primary products: kerogens and bitumens. It is generally accepted that hydrocarbons are formed by the thermal alteration of these kerogens (the biogenic theory). In this way, given certain conditions (which are largely temperature-dependent), kerogens will break down to form hydrocarbons through a chemical process known as cracking, or catagenesis. A kinetic model based on experimental data can capture most of the essential transformations in diagenesis, while a mathematical model of a compacting porous medium can describe the dissolution-precipitation mechanism. These models have been intensively studied and applied to real geological problems. Diagenesis has been divided, based on hydrocarbon and coal genesis, into eodiagenesis (early), mesodiagenesis (middle) and telodiagenesis (late). During the early, or eodiagenesis, stage, shales lose pore water, little to no hydrocarbon is formed, and coal varies between lignite and sub-bituminous. During mesodiagenesis, dehydration of clay minerals occurs, the main phase of oil generation takes place, and high- to low-volatile bituminous coals are formed. During telodiagenesis, organic matter undergoes cracking and dry gas is produced; semi-anthracite coals develop. Early diagenesis in newly formed aquatic sediments is mediated by microorganisms using different electron acceptors as part of their metabolism. Organic matter is mineralized, liberating gaseous carbon dioxide (CO2) in the porewater, which, depending on the conditions, can diffuse into the water column. The various processes of mineralization in this phase are nitrification and denitrification, manganese oxide reduction, iron hydroxide reduction, sulfate reduction, and fermentation. Role in bone decomposition Diagenesis alters the proportions of organic collagen and inorganic components (hydroxyapatite, calcium, magnesium) of bone exposed to environmental conditions, especially moisture.
This is accomplished by the exchange of natural bone constituents, deposition in voids or defects, adsorption onto the bone surface and leaching from the bone.
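The kinetic treatment of kerogen cracking referred to in the hydrocarbon-generation section above can be illustrated with a deliberately simplified first-order Arrhenius model. The sketch below shows only the general approach and is not a published basin model; the activation energy, frequency factor, heating rate, and time span are placeholder values chosen for demonstration.

```python
import math

# Minimal first-order Arrhenius sketch of kerogen-to-hydrocarbon conversion
# during progressive burial. All numerical values are illustrative placeholders.
E_A = 218e3          # activation energy, J/mol (placeholder)
A = 1e14             # frequency factor, 1/s (placeholder)
R = 8.314            # gas constant, J/(mol*K)
HEATING_RATE = 1.0   # assumed burial heating rate, deg C per Myr
SECONDS_PER_MYR = 3.15576e13

temp_c = 20.0        # temperature at deposition, deg C
remaining = 1.0      # fraction of reactive kerogen not yet converted
dt_myr = 0.1

for _ in range(int(120 / dt_myr)):                     # simulate 120 Myr of burial
    k = A * math.exp(-E_A / (R * (temp_c + 273.15)))   # rate constant, 1/s
    remaining *= math.exp(-k * dt_myr * SECONDS_PER_MYR)
    temp_c += HEATING_RATE * dt_myr

print(f"After 120 Myr at {temp_c:.0f} deg C: "
      f"{1.0 - remaining:.2f} of the kerogen has cracked to hydrocarbons")
```

Real basin models typically use a distribution of activation energies and calibrated frequency factors rather than a single reaction, but the single-reaction sketch captures the strongly temperature-dependent onset of cracking described above.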
Physical sciences
Sedimentology
Earth science
56153
https://en.wikipedia.org/wiki/Ophthalmology
Ophthalmology
Ophthalmology (, ) is a clinical and surgical specialty within medicine that deals with the diagnosis and treatment of eye disorders. A former term is oculism. An ophthalmologist is a physician who undergoes subspecialty training in medical and surgical eye care. Following a medical degree, a doctor specialising in ophthalmology must pursue additional postgraduate residency training specific to that field. This may include a one-year integrated internship that involves more general medical training in other fields such as internal medicine or general surgery. Following residency, additional specialty training (or fellowship) may be sought in a particular aspect of eye pathology. Ophthalmologists prescribe medications to treat ailments, such as eye diseases, implement laser therapy, and perform surgery when needed. Ophthalmologists provide both primary and specialty eye care, medical and surgical. Most ophthalmologists participate in academic research on eye diseases at some point in their training and many include research as part of their career. Ophthalmology has always been at the forefront of medical research, with a long history of advancement and innovation in eye care. Diseases A brief list of some of the most common diseases treated by ophthalmologists: Cataract Excessive tearing (tear duct obstruction) Proptosis (bulged eyes) Thyroid eye disease Eye tumors Ptosis Diabetic retinopathy Dry eye syndrome Glaucoma Macular degeneration Retinal detachment Endophthalmitis Refractive errors Strabismus (misalignment or deviation of eyes) Uveitis Ocular trauma Ruptured globe injury Orbital fracture Among the most highly valued pharmaceutical companies worldwide whose leading products are in ophthalmology are Regeneron (United States), for macular degeneration (AMD) treatment, and Bausch Health (Canada), for front-of-eye products. Diagnosis Eye examination The following are examples of examination methods performed during an eye examination that enable diagnosis: Visual acuity assessment Ocular tonometry to determine intraocular pressure Extraocular motility and ocular alignment assessment Slit lamp examination Dilated fundus examination Gonioscopy Refraction Specialized tests Optical coherence tomography (OCT) is a medical technological platform used to assess ocular structures. The information is then used by physicians to assess staging of pathological processes and confirm clinical diagnoses. Subsequent OCT scans are used to assess the efficacy of managing diabetic retinopathy, age-related macular degeneration, and glaucoma. Optical coherence tomography angiography (OCTA) and fluorescein angiography are used to visualize the vascular networks of the retina and choroid. Electroretinography (ERG) measures the electrical responses of various cell types in the retina, including the photoreceptors (rods and cones), inner retinal cells (bipolar and amacrine cells), and the ganglion cells. Electrooculography (EOG) is a technique for measuring the corneo-retinal standing potential that exists between the front and the back of the human eye. The resulting signal is called the electrooculogram. Primary applications are in ophthalmological diagnosis and in recording eye movements. Visual field testing detects dysfunction in central and peripheral vision, which may be caused by various medical conditions such as glaucoma, stroke, pituitary disease, brain tumours or other neurological deficits. Corneal topography is a non-invasive medical imaging technique for mapping the anterior curvature of the cornea, the outer structure of the eye.
Ultrasonography of the eyes may be performed by an ophthalmologist. Ophthalmic surgery Eye surgery, also known as ocular surgery, is surgery performed on the eye or its adnexa by an ophthalmologist. The eye is a fragile organ, and requires extreme care before, during, and after a surgical procedure. An eye surgeon is responsible for selecting the appropriate surgical procedure for the patient and for taking the necessary safety precautions. Subspecialties Ophthalmology includes subspecialities that deal either with certain diseases or diseases of certain parts of the eye. Some of them are: Anterior segment surgery Cornea, ocular surface, and external disease Glaucoma Neuro-ophthalmology Ocular oncology Oculoplastics and orbit surgery Ophthalmic pathology Paediatric ophthalmology/strabismus (misalignment of the eyes) Refractive surgery Medical retina, deals with treatment of retinal problems through non-surgical means Uveitis Veterinary specialty training programs in veterinary ophthalmology exist in some countries. Vitreo-retinal surgery, deals with surgical management of retinal and posterior segment diseases Medical retina and vitreo-retinal surgery sometimes are combined and together they are called posterior segment subspecialisation Etymology The Greek roots of the word ophthalmology are ὀφθαλμός (, "eye") and -λoγία (-, "study, discourse"), i.e., "the study of eyes". The discipline applies to all animal eyes, whether human or not, since the practice and procedures are quite similar with respect to disease processes, although there are differences in the anatomy or disease prevalence. History Ancient near east and the Greek period In the Ebers Papyrus from ancient Egypt dating to 1550 BC, a section is devoted to eye diseases. Prior to Hippocrates, physicians largely based their anatomical conceptions of the eye on speculation, rather than empiricism. They recognized the sclera and transparent cornea running flushly as the outer coating of the eye, with an inner layer with pupil, and a fluid at the centre. It was believed, by Alcamaeon (fifth century BC) and others, that this fluid was the medium of vision and flowed from the eye to the brain by a tube. Aristotle advanced such ideas with empiricism. He dissected the eyes of animals, and discovering three layers (not two), found that the fluid was of a constant consistency with the lens forming (or congealing) after death, and the surrounding layers were seen to be juxtaposed. He and his contemporaries further put forth the existence of three tubes leading from the eye, not one. One tube from each eye met within the skull. The Greek physician Rufus of Ephesus (first century AD) recognised a more modern concept of the eye, with conjunctiva, extending as a fourth epithelial layer over the eye. Rufus was the first to recognise a two-chambered eye, with one chamber from cornea to lens (filled with water), the other from lens to retina (filled with a substance resembling egg whites). Celsus the Greek philosopher of the second century AD gave a detailed description of cataract surgery by the couching method. The Greek physician Galen (second century AD) remedied some mistaken descriptions, including about the curvature of the cornea and lens, the nature of the optic nerve, and the existence of a posterior chamber. Although this model was a roughly correct modern model of the eye, it contained errors. Still, it was not advanced upon again until after Vesalius. 
A ciliary body was then discovered and the sclera, retina, choroid, and cornea were seen to meet at the same point. The two chambers were seen to hold the same fluid, as well as the lens being attached to the choroid. Galen continued the notion of a central canal, but he dissected the optic nerve and saw that it was solid. He mistakenly counted seven optical muscles, one too many. He also knew of the tear ducts. Ancient India The Indian surgeon Sushruta wrote the Sushruta Samhita in Sanskrit in approximately the sixth century BC, which describes 76 ocular diseases (of these, 51 surgical) as well as several ophthalmological surgical instruments and techniques. His description of cataract surgery was compatible with the method of couching. He has been described as one of the first cataract surgeons. Medieval Islam Medieval Islamic Arabic and Persian scientists (unlike their classical predecessors) considered it normal to combine theory and practice, including the crafting of precise instruments, and therefore, found it natural to combine the study of the eye with the practical application of that knowledge. Hunayn ibn Ishaq, and others beginning with the medieval Arabic period, taught that the crystalline lens is in the exact center of the eye. This idea was propagated until the end of the 1500s. Ibn al-Nafis, an Arabic native of Damascus, wrote a large textbook, The Polished Book on Experimental Ophthalmology, divided into two parts, On the Theory of Ophthalmology and Simple and Compounded Ophthalmic Drugs. Avicenna wrote in his Canon "rescheth", which means "retiformis", and Gerard of Cremona translated this at approximately 1150 into the new term "retina". Modern period In the seventeenth and eighteenth centuries, hand lenses were used by Malpighi, microscopes by Leeuwenhoek, preparations for fixing the eye for study by Ruysch, and later the freezing of the eye by Petit. This allowed for detailed study of the eye and an advanced model. Some mistakes persisted, such as: why the pupil changed size (seen to be vessels of the iris filling with blood), the existence of the posterior chamber, and the nature of the retina. Unaware of their functions, Leeuwenhoek noted the existence of photoreceptors, however, they were not properly described until Gottfried Reinhold Treviranus in 1834. Jacques Daviel performed the first documented planned primary cataract extraction on Sep. 18, 1750 in Cologne. Georg Joseph Beer (1763–1821) was an Austrian ophthalmologist and leader of the First Viennese School of Medicine. He introduced a flap operation for treatment of cataract (Beer's operation), as well as having popularized the instrument used to perform the surgery (Beer's knife). In North America, indigenous healers treated some eye diseases by rubbing or scraping the eyes or eyelids. Ophthalmic surgery in The United Kingdom The first ophthalmic surgeon in the UK was John Freke, appointed to the position by the governors of St. Bartholomew's Hospital in 1727. A major breakthrough came with the appointment of Baron de Wenzel (1724–90), a German who became the oculist to King George III of Great Britain in 1772. His skill at removing cataracts legitimized the field. The first dedicated ophthalmic hospital opened in 1805 in London; it is now called Moorfields Eye Hospital. 
Clinical developments at Moorfields and the founding of the Institute of Ophthalmology (now part of University College London) by Sir Stewart Duke-Elder established the site as the largest eye hospital in the world and a nexus for ophthalmic research. Central Europe In Berlin, ophthalmologist Albrecht von Graefe introduced iridectomy as a treatment for glaucoma and improved cataract surgery; he is also considered the founding father of the German Ophthalmological Society. Numerous ophthalmologists fled Germany after 1933 as the Nazis began to persecute those of Jewish descent. A representative leader was Joseph Igersheimer (1879–1965), best known for his discoveries with arsphenamine for the treatment of syphilis. He fled to Turkey in 1933. As one of eight emigrant directors in the Faculty of Medicine at the University of Istanbul, he built a modern clinic and trained students. In 1939, he went to the United States, becoming a professor at Tufts University. The German ophthalmologist Gerhard Meyer-Schwickerath is widely credited with developing the predecessor of laser coagulation, photocoagulation. In 1946, he conducted the first experiments on light coagulation. In 1949, he performed the first successful treatment of a retinal detachment with a light beam (light coagulation), using a self-constructed device on the roof of the ophthalmic clinic at the University of Hamburg-Eppendorf. Polish ophthalmology dates to the thirteenth century. The Polish Ophthalmological Society was founded in 1911. A representative leader was Adam Zamenhof (1888–1940), who introduced certain diagnostic, surgical, and nonsurgical eye-care procedures. He was executed by the German Nazis in 1940. Zofia Falkowska (1915–93), head of the Faculty and Clinic of Ophthalmology in Warsaw from 1963 to 1976, was the first to use lasers in her practice. Contributions by physicists The prominent physicists of the late nineteenth and early twentieth centuries included Ernst Abbe (1840–1905), a co-owner of the Zeiss Jena factories in Germany, where he developed numerous optical instruments. Hermann von Helmholtz (1821–1894) was a polymath who made contributions to many fields of science and invented the ophthalmoscope in 1851. They both made theoretical calculations on image formation in optical systems and also studied the optics of the eye. Bibliography Christopher Leffler (ed.), Biographies of Ophthalmologists from Around the World: Ancient, Medieval, and Early Modern, Wroclaw 2024, 384 pp., ISBN 9798342679220 Professional requirements Ophthalmologists are physicians (MD/DO in the U.S., MBBS in the UK and elsewhere, or DO/DOMS/DNB) who typically complete an undergraduate degree and general medical school, followed by a residency in ophthalmology. Ophthalmologists typically perform optical, medical and surgical eye care. Australia and New Zealand In Australia and New Zealand, the FRACO or FRANZCO is the equivalent postgraduate specialist qualification. The structured training system takes place over five years of postgraduate training. Overseas-trained ophthalmologists are assessed using the pathway published on the RANZCO website. Those who have completed their formal training in the UK and hold the CCST or CCT are usually deemed to be comparable. Bangladesh In Bangladesh, the basic degree for an ophthalmologist is an MBBS, after which a postgraduate degree or diploma in an ophthalmology specialty must be obtained.
In Bangladesh, these are diploma in ophthalmology, diploma in community ophthalmology, fellow or member of the College of Physicians and Surgeons in ophthalmology, and Master of Science in ophthalmology. Canada In Canada, an ophthalmology residency is undertaken after medical school. The residency typically lasts five years and culminates in fellowship of the Royal College of Surgeons of Canada (FRCSC). Subspecialty training is undertaken by approximately 30% of fellows (FRCSC) in a variety of fields, including anterior segment, cornea, glaucoma, vision rehabilitation, uveitis, oculoplastics, medical and surgical retina, ocular oncology, ocular pathology, and neuro-ophthalmology. Approximately 35 vacancies open per year for ophthalmology residency training in all of Canada; this number fluctuates from year to year, ranging from 30 to 37 spots. Of these, up to ten spots are at French-speaking universities in Quebec. At the end of the five years, the graduating ophthalmologist must pass the oral and written portions of the Royal College exam in either English or French. India In India, after completing the MBBS degree, postgraduate study in ophthalmology is required. The degrees are doctor of medicine, master of surgery, diploma in ophthalmic medicine and surgery, and diplomate of national board. The concurrent training and work experience are in the form of a junior residency at a medical college, eye hospital, or institution under the supervision of experienced faculty. Further work experience in the form of fellowship, registrar, or senior resident posts refines the skills of these eye surgeons. The All India Ophthalmological Society and various state-level ophthalmological societies hold regular conferences and actively promote continuing medical education. Nepal In Nepal, to become an ophthalmologist, three years of postgraduate study is required after completing an MBBS degree. The postgraduate degree in ophthalmology is called medical doctor in ophthalmology. Currently, this degree is provided by Tilganga Institute of Ophthalmology, Tilganga, Kathmandu, BPKLCO, Institute of Medicine, TU, Kathmandu, BP Koirala Institute of Health Sciences, Dharan, Kathmandu University, Dhulikhel, and National Academy of Medical Science, Kathmandu. A few Nepalese citizens also study this subject in Bangladesh, China, India, Pakistan, and other countries. All graduates have to pass the Nepal Medical Council Licensing Exam to become registered ophthalmologists in Nepal. The concurrent residency training is in the form of a PG student (resident) at a medical college, eye hospital, or institution according to the degree-providing university's rules and regulations. The Nepal Ophthalmic Society holds regular conferences and actively promotes continuing medical education. Ireland In Ireland, the Royal College of Surgeons of Ireland grants membership (MRCSI (Ophth)) and fellowship (FRCSI (Ophth)) qualifications in conjunction with the Irish College of Ophthalmologists. Total postgraduate training involves an intern year, a minimum of three years of basic surgical training, and a further 4.5 years of higher surgical training. Clinical training takes place within public, Health Service Executive-funded hospitals in Dublin, Sligo, Limerick, Galway, Waterford, and Cork. A minimum of 8.5 years of training is required before eligibility to work in consultant posts. Some trainees take extra time to obtain MSc, MD or PhD degrees and to undertake clinical fellowships in the UK, Australia, and the United States.
Pakistan In Pakistan, after MBBS, a four-year full-time residency program leads to an exit-level FCPS examination in ophthalmology, held under the auspices of the College of Physicians and Surgeons, Pakistan. The demanding examination is assessed by both highly qualified Pakistani and eminent international ophthalmic consultants. As a prerequisite to the final examinations, an intermediate module, an optics and refraction module, and a dissertation written on a research project carried out under supervision are also assessed. Moreover, a two-and-a-half-year residency program leads to an MCPS, while a two-year DOMS training program is also offered. For candidates in the military, a stringent two-year graded course, with quarterly assessments, is held under the Armed Forces Post Graduate Medical Institute in Rawalpindi. The M.S. in ophthalmology is also one of the specialty programs. In addition to programs for physicians, various diplomas and degrees for allied eyecare personnel are also offered to produce competent optometrists, orthoptists, ophthalmic nurses, ophthalmic technologists, and ophthalmic technicians in this field. These programs are offered notably by the College of Ophthalmology and Allied Vision Sciences in Lahore and the Pakistan Institute of Community Ophthalmology in Peshawar. Subspecialty fellowships are also offered in the fields of pediatric ophthalmology and vitreoretinal ophthalmology. King Edward Medical University, Al Shifa Trust Eye Hospital Rawalpindi, and Al-Ibrahim Eye Hospital Karachi have also started degree programs in this field. Philippines In the Philippines, ophthalmology is considered a medical specialty that uses medicine and surgery to treat diseases of the eye. There is only one professional organization in the country that is duly recognized by the PMA and the PCS: the Philippine Academy of Ophthalmology (PAO). The PAO and the state-standard Philippine Board of Ophthalmology (PBO) regulate ophthalmology residency programs and board certification. To become a general ophthalmologist in the Philippines, a candidate must have completed a doctor of medicine degree (MD) or its equivalent (e.g. MBBS), have completed an internship in medicine, have passed the physician licensure exam, and have completed residency training at a hospital accredited by the Philippine Board of Ophthalmology (the accrediting arm of the PAO). Attainment of board certification in ophthalmology from the PBO is essential in acquiring privileges in most major health institutions. Graduates of residency programs can receive further training in ophthalmology subspecialties, such as neuro-ophthalmology or retina, by completing a fellowship program that varies in length depending on each program's requirements. United Kingdom In the United Kingdom, three colleges grant postgraduate degrees in ophthalmology. The Royal College of Ophthalmologists (RCOphth) and the Royal College of Surgeons of Edinburgh grant the MRCOphth/FRCOphth and MRCSEd/FRCSEd (although membership is no longer a prerequisite for fellowship), while the Royal College of Physicians and Surgeons of Glasgow grants the FRCS. Postgraduate work as a specialist registrar and one of these degrees is required for specialization in eye diseases. Such clinical work is within the NHS, with supplementary private work for some consultants. Only 2.3 ophthalmologists exist per 100,000 population in the UK, fewer pro rata than in any other nation in the European Union.
United States Ophthalmologists typically complete four years of undergraduate studies, four years of medical school and four years of eye-specific training (residency). Some pursue additional training, known as a fellowship - typically one to two years. Ophthalmologists are physicians who specialize in the eye and related structures. They perform medical and surgical eye care and may also write prescriptions for corrective lenses. They often manage late stage eye disease, which typically involves surgery. Ophthalmologists must complete the requirements of continuing medical education to maintain licensure and for recertification. Notable ophthalmologists The following is a list of physicians who have significantly contributed to the field of ophthalmology: 18th–19th centuries Theodor Leber (1840–1917) discovered Leber's congenital amaurosis, Leber's hereditary optic neuropathy, Leber's miliary aneurysm, and Leber's stellate neuroretinitis Carl Ferdinand von Arlt (1812–1887), the elder (Austrian), proved that myopia is largely due to an excessive axial length, published influential textbooks on eye disease, and ran annual eye clinics in needy areas long before the concept of volunteer eye camps became popular; his name is still attached to some disease signs, e.g., von Arlt's line in trachoma and his son, Ferdinand Ritter von Arlt, the younger, was also an ophthalmologist Jacques Daviel (1696–1762) (France) performed the first documented planned primary cataract extraction on Sep. 18, 1750 in Cologne. Franciscus Donders (1818–1889) (Dutch) published pioneering analyses of ocular biomechanics, intraocular pressure, glaucoma, and physiological optics and he made possible the prescribing of combinations of spherical and cylindrical lenses to treat astigmatism Joseph Forlenze (1757–1833) (Italy), specialist in cataract surgery, became popular during the First French Empire, healing, among many, personalities such as the minister Jean-Étienne-Marie Portalis and the poet Ponce Denis Lebrun; he was nominated by Napoleon "chirurgien oculiste of the lycees, the civil hospices and all the charitable institutions of the departments of the Empire", and he also was known for his free interventions, mainly in favour of poor people Albrecht von Graefe (1828–1870) (Germany) probably the most important ophthalmologist of the nineteenth century, along with Helmholtz and Donders, one of the 'founding fathers' of ophthalmology as a specialty, he was a brilliant clinician and charismatic teacher who had an international influence on the development of ophthalmology, and was a pioneer in mapping visual field defects and diagnosis and treatment of glaucoma, and he introduced a cataract extraction technique that remained the standard for more than 100 years, and many other important surgical techniques such as iridectomy. He rationalised the use of many ophthalmically important drugs, including mydriatics and miotics; he also was the founder of one of the earliest ophthalmic societies (German Ophthalmological Society, 1857) and one of the earliest ophthalmic journals (Graefe's Archives of Ophthalmology) L. L. Zamenhof (b.1859) (Poland) was a Polish ophthalmologist who created the constructed international auxiliary language known as Esperanto. 
Allvar Gullstrand (1862–1930) (Sweden) was a Nobel Prize-winner in 1911 for his research on the eye as a light-refracting apparatus, he described the 'schematic eye', a mathematical model of the human eye based on his measurements known as the 'optical constants' of the eye; his measurements are still used today Hermann von Helmholtz (1821–1894), a great German polymath, invented the ophthalmoscope (1851) and published important work on physiological optics, including colour vision. Julius Hirschberg (1843–1925) (Germany) in 1879 became the first to use an electromagnet to remove metallic foreign bodies from the eye and in 1886 developed the Hirschberg test for measuring strabismus Peter Adolph Gad (1846–1907), Danish-Brazilian ophthalmologist who founded the first eye infirmary in São Paulo, Brazil Rosa Kerschbaumer-Putjata (1851–1923), Russian-Austrian ophthalmologist who was the first female doctor in Austria, headed "mobile ophthalmological troops" in Russia and reduced the above-average number of blind people in Salzburg where she ran a private eye clinic. Socrate Polara (1800–1860, Italy) founded the first dedicated ophthalmology clinic in Sicily in 1829, entirely as a philanthropic endeavor; later he was appointed as the first director of the ophthalmology department at the Grand Hospital of Palermo, Sicily, in 1831 after the Sicilian government became convinced of the importance of state support for the specialization Herman Snellen (1834–1908) (Netherlands) introduced the Snellen chart to study visual acuity 20th–21st centuries Vladimir Petrovich Filatov (1875–1956) (Russia) contributed the tube flap grafting method, corneal transplantation, and preservation of grafts from cadaver eyes and tissue therapy; he founded the Filatov Institute of Eye Diseases and Tissue Therapy, Odessa, one of the leading eye-care institutes in the world. Shinobu Ishihara (1879–1963) (Japan), in 1918, invented the Ishihara Color Vision Test, a common method for determining Color blindness; he also made major contributions to the study of Trachoma and Myopia. Ignacio Barraquer (1884–1965) (Spain), in 1917, invented the first motorized vacuum instrument (erisophake) for intracapsular cataract extraction; he founded the Barraquer Clinic in 1941 and the Barraquer Institute in 1947 in Barcelona, Spain. Ernst Fuchs (1851–1930) was an Austrian ophthalmologist known for his discovery and description of numerous ocular diseases and abnormalities including Fuchs' dystrophy and Fuchs heterochromic iridocyclitis. Tsutomu Sato (1902–1960) (Japan) pioneer in incisional refractive surgery, including techniques for astigmatism and the invention of radial keratotomy for myopia. Jules Gonin (1870–1935) (Switzerland) was the "father of retinal detachment surgery". Sir Harold Ridley (1906–2001) (United Kingdom), in 1949, may have been the first to successfully implant an artificial intraocular lens after observing that plastic fragments in the eyes of wartime pilots were well tolerated; he fought for decades against strong reactionary opinions to have the concept accepted as feasible and useful. Wajid Ali Khan Burki (1900-1989) (Pakistan), was the "father of medical services" in Pakistan and distinguished ophthalmologist widely recognized as an expert in the field of eye care. 
Charles Schepens (1912–2006) (Belgium) was the "father of modern retinal surgery" and developed the Schepens indirect binocular ophthalmoscope while at Moorfields Eye Hospital; he was the founder of the Schepens Eye Research Institute, associated with Harvard Medical School and the Massachusetts Eye and Ear Infirmary, in Boston, Massachusetts. Tom Pashby (1915–2005) (Canada) worked with the Canadian Standards Association as a sport safety advocate to prevent eye and spinal cord injuries; he developed safer sports equipment, was named to the Order of Canada, and was inducted into Canada's Sports Hall of Fame. Marshall M. Parks (1918–2005) (United States) was the "father of pediatric ophthalmology". José Ignacio Barraquer (1916–1998) (Spain) was the "father of modern refractive surgery"; in the 1960s, he developed lamellar techniques, including keratomileusis and keratophakia, as well as the first microkeratome and corneal microlathe. Tadeusz Krwawicz (1910–1988) (Poland), in 1961, developed the first cryoprobe for intracapsular cataract extraction. Svyatoslav Fyodorov (1927–2000) (Russia) was the "father of ophthalmic microsurgery"; he improved and popularized radial keratotomy, invented a surgical cure for cataract, and developed scleroplasty. Charles Kelman (1930–2004) (United States) developed the ultrasound and mechanized irrigation and aspiration system for phacoemulsification, first allowing cataract extraction through a small incision. Helena Ndume (b. 1960) (Namibia) is a renowned ophthalmologist notable for her charitable work among people with eye-related illnesses. Rand Paul (b. 1963) (United States) worked as an ophthalmologist before becoming a US senator. Andromahi Rapanou (b. 1984) (Greece) is a specialist in retinal pathology.
Biology and health sciences
Fields of medicine
null
56212
https://en.wikipedia.org/wiki/Linen
Linen
Linen () is a textile made from the fibers of the flax plant. Linen is very strong and absorbent and dries faster than cotton. Because of these properties, linen is comfortable to wear in hot weather and is valued for use in garments. Linen textiles can be made from flax fiber or yarn and can be either woven or knitted. Linen also has other distinctive characteristics, such as its tendency to wrinkle. It takes significantly longer to harvest than a material like cotton, although both are natural fibers. It is also more difficult to weave than cotton. Linen textiles appear to be some of the oldest in the world; their history goes back many thousands of years. Dyed flax fibers found in a cave in Southeastern Europe (present-day Georgia) suggest the use of woven linen fabrics from wild flax may date back over 30,000 years. Linen was used in ancient civilizations including Mesopotamia and ancient Egypt, and linen is mentioned in the Bible. In the 18th century and beyond, the linen industry was important in the economies of several countries in Europe as well as the American colonies. Textiles in a linen weave texture, even when made of cotton, hemp, or other non-flax fibers, are also loosely referred to as "linen". Etymology The word linen is of West Germanic origin and cognate to the Latin name for the flax plant, linum, and the earlier Greek linon (λίνον). This word history has given rise to a number of other terms in English, most notably line, from the use of a linen (flax) thread to determine a straight line. It is also etymologically related to a number of other terms, including lining, because linen was often used to create an inner layer for clothing, and lingerie, from French, which originally denoted underwear made of linen. History People in various parts of the world began weaving linen at least several thousand years ago. Linen was also recovered from Qumran Cave 1 near the Dead Sea. Early history The discovery of dyed flax fibers in a cave in the Southern Caucasus, West Asia (in the modern-day country of Georgia), dated to 36,000 years ago, suggests that ancient people used wild flax fibers to create linen-like fabrics from an early date. Fragments of straw, seeds, fibers, yarns, and various types of fabrics, including linen samples, dating to about 8,000 BC have been found in Swiss lake dwellings. Woven flax textile fragments have been "found between infant and child" in a burial at Çatalhöyük, a large settlement dating to around 7,000 BC. To the southeast, in ancient Mesopotamia, flax was domesticated and linen was produced. It was used mainly by the wealthier classes of society, including priests. The Sumerian poem of the courtship of Inanna mentions flax and linen. In ancient Egypt, linen was used for mummification and for burial shrouds. It was also worn as clothing on a daily basis; white linen was worn because of the extreme heat. For example, the Tarkhan dress, considered to be among the oldest woven garments in the world and dated to between 3482 and 3102 BC, is made of linen. Plutarch wrote that the priests of Isis also wore linen because of its purity. Linen was sometimes used as a form of currency in ancient Egypt. Egyptian mummies were wrapped in linen as a symbol of light and purity, and as a display of wealth. Some of these fabrics, woven from hand-spun yarns, were very fine for their day, but are coarse compared with modern linen.
When the tomb of the Pharaoh Ramses II, who died in 1213 BC, was discovered in 1881, the linen wrappings were in a state of perfect preservation after more than 3000 years. In the Ulster Museum, Belfast there is the mummy of 'Takabuti' the daughter of a priest of Amun, who died 2,500 years ago. The linen on this mummy is also in a perfect state of preservation. The earliest written documentation of a linen industry comes from the Linear B tablets of Pylos, Greece, where linen is depicted as an ideogram and also written as "li-no" (Greek: λίνον, linon), and the female linen workers are cataloged as "li-ne-ya" (λίνεια, lineia). Middle Ages By the Middle Ages, there was a thriving trade in German flax and linen. The trade spread throughout Germany by the 9th century and spread to Flanders and Brabant by the 11th century. The Lower Rhine was a center of linen making in the Middle Ages. Flax was cultivated and linen used for clothing in Ireland by the 11th century. Evidence suggests that flax may have been grown and sold in Southern England in the 12th and 13th centuries. Textiles, primarily linen and wool, were produced in decentralized home weaving mills. Modern history Linen continued to be valued for garments in the 16th century and beyond. Specimens of linen garments worn by historical figures have survived. For example, a linen cap worn by Emperor Charles V was carefully preserved after his death in 1558. There is a long history of the production of linen in Ireland. When the Edict of Nantes was revoked in 1685, many of the Huguenots who fled France settled in the British Isles and elsewhere. They brought improved methods for linen production with them, contributing to the growth of the linen industry in Ireland in particular. Among them was Louis Crommelin, a leader who was appointed overseer of the royal linen manufacture of Ireland. He settled in the town of Lisburn near Belfast, which is itself perhaps the most famous linen producing center throughout history; during the Victorian era the majority of the world's linen was produced in the city, which gained it the name Linenopolis. Although the linen industry was already established in Ulster, Louis Crommelin found scope for improvement in weaving, and his efforts were so successful that he was appointed by the Government to develop the industry over a much wider range than the small confines of Lisburn and its surroundings. The direct result of his good work was the establishment, under statute, of the Board of Trustees of the Linen Manufacturers of Ireland in the year 1711. Several grades were produced including coarse lockram. The Living Linen Project was set up in 1995 as an oral archive of the knowledge of the Irish linen industry, which was at that time still available within a nucleus of people who formerly worked in the industry in Ulster. The linen industry was increasingly critical in the economies of Europe in the 18th and 19th centuries. In England and then in Germany, industrialization and machine production replaced manual work and production moved from the home to new factories. Linen was also an important product in the American colonies, where it was brought over with the first settlers and became the most commonly used fabric and a valuable asset for colonial households. The homespun movement encouraged the use of flax to make home spun textiles. Through the 1830s, most farmers in the northern United States continued to grow flax for linen to be used for the family's clothing. 
In the late 19th and early 20th centuries, linen was very significant to Russia and its economy. At one time it was the country's greatest export item, and Russia produced about 80% of the world's fiber flax crop. In December 2006, the General Assembly of the United Nations proclaimed 2009 to be the International Year of Natural Fibres in order to raise people's awareness of linen and other natural fibers. One study suggests that the functional properties of linen fabric can be improved by incorporating chitosan-citric acid and phytic acid thiourea. The effects of this process include improved levels of antibacterial activity, increased wrinkle resistance, flame retardancy, UV protection, and antioxidant properties. Additionally, the linen fabric was able to retain durability for about 20 washes. Religion There are many references to linen throughout the Bible, reflecting the textile's entrenched presence in human cultures. In Judaism, the only law concerning which fabrics may be interwoven together in clothing concerns the mixture of linen and wool, called shaatnez; it is restricted in the verses "Thou shalt not wear a mingled stuff, wool and linen together" and "...neither shall there come upon thee a garment of two kinds of stuff mingled together." There is no explanation for this in the Torah itself, and it is categorized as a type of law known as chukim, a statute beyond man's ability to comprehend. The first-century Romano-Jewish historian Josephus suggested that the reason for the prohibition was to keep the laity from wearing the official garb of the priests, while the medieval Sephardic Jewish philosopher Maimonides thought that the reason was that heathen priests wore such mixed garments. Others explain that it is because God often forbids mixtures of disparate kinds, not designed by God to be compatible in a certain way, with mixing animal and vegetable fibers being similar to having two different types of plowing animals yoked together; also, such commands serve both a practical as well as allegorical purpose, perhaps here preventing a priestly garment that would cause discomfort (or excessive sweat) in a hot climate. Linen is also mentioned in the Bible in Proverbs 31, a passage describing a noble wife, which says, "She makes coverings for her bed; she is clothed in fine linen and purple." Fine white linen is also worn by angels in the New Testament. In the Book of Joshua, Rahab, a prostitute in Jericho, hides two Israelite spies under bundles of flax. Uses Many products can be made with linen, such as clothing, bed sheets, aprons, bags, towels (swimming, bath, beach, body and wash towels), napkins, runners, and upholstery. It is used especially in sailcloth and tent cloth, sewing threads, handkerchiefs, tablecloths, sheets, collars, cuffs, etc. Today, linen is usually an expensive textile produced in relatively small quantities. It has a long staple (individual fiber length) relative to cotton and other natural fibers. Linen fabric has been used for table coverings, bed coverings and clothing for centuries. The significant cost of linen derives not only from the difficulty of working with the thread but also from the fact that the flax plant itself requires a great deal of attention. In addition, flax thread is not elastic, and therefore it is difficult to weave without breaking threads. Thus linen is considerably more expensive to manufacture than cotton.
The collective term "linens" is still often used generically to describe a class of woven or knitted bed, bath, table and kitchen textiles traditionally made of flax-based linen but today made from a variety of fibers. The term "linens" refers to lightweight undergarments such as shirts, chemises, waist-shirts, lingerie (a cognate with linen), and detachable shirt collars and cuffs, all of which were historically made almost exclusively out of linen. The inner layer of fine composite cloth garments (as for example dress jackets) was traditionally made of linen, hence the word lining. Over the past 30 years the end use for linen has changed dramatically. Approximately 70% of linen production in the 1990s was for apparel textiles, whereas in the 1970s only about 5% was used for fashion fabrics. Linen uses range across bed and bath fabrics (tablecloths, bath towels, dish towels, bed sheets); home and commercial furnishing items (wallpaper/wall coverings, upholstery, window treatments); apparel items (suits, dresses, skirts, shirts); and industrial products (luggage, canvases, sewing thread). It was once the preferred yarn for hand-sewing the uppers of moccasin-style shoes (loafers), but has been replaced by synthetics. A linen handkerchief, pressed and folded to display the corners, was a standard decoration of a well-dressed man's suit during most of the first part of the 20th century. Nowadays, linen is one of the most preferred materials for bed sheets due to its durability and hypoallergenic properties. Linen can be up to three times stronger than cotton. This is because the cellulose fibers in linen yarn are slightly longer and wrapped tighter than those found in cotton yarn. This gives it great durability and allows linen products to be long-lasting. Currently researchers are working on a cotton/flax blend to create new yarns which will improve the feel of denim during hot and humid weather. Conversely, some brands such as 100% Capri specially treat the linen to look like denim. Linen fabric is one of the preferred traditional supports for oil painting. In the United States cotton is popularly used instead, as linen is many times more expensive there, restricting its use to professional painters. In Europe, however, linen is usually the only fabric support available in art shops; in the UK both are freely available with cotton being cheaper. Linen is preferred to cotton for its strength, durability and archival integrity. Linen is also used extensively by artisan bakers. Known as a couche, the flax cloth is used to hold the dough into shape while in the final rise, just before baking. The couche is heavily dusted with flour which is rubbed into the pores of the fabric. Then the shaped dough is placed on the couche. The floured couche makes a "non stick" surface to hold the dough. Then ridges are formed in the couche to keep the dough from spreading. In the past, linen was also used for books (the only surviving example of which is the Liber Linteus). Due to its strength, in the Middle Ages linen was used for shields, gambesons, and bowstrings; in classical antiquity it was used to make a type of body armour, referred to as a linothorax. Additionally, linen was commonly used to make riggings, sail-cloths, nets, ropes, and canvases because the tensility of the cloth would increase by 20% when wet. Because of its strength when wet, Irish linen is a very popular wrap of pool/billiard cues, due to its absorption of sweat from hands. 
In 1923, the German city Bielefeld issued banknotes printed on linen. United States currency paper is made from 25% linen and 75% cotton. Flax fiber Description Linen is a bast fiber. Flax fibers vary in length from about 25 to 150 mm (1 to 6 in) and average 12–16 micrometers in diameter. There are two varieties: shorter tow fibers used for coarser fabrics and longer line fibers used for finer fabrics. Flax fibers can usually be identified by their “nodes” which add to the flexibility and texture of the fabric. The cross-section of the linen fiber is made up of irregular polygonal shapes which contribute to the coarse texture of the fabric. Properties Linen fabric feels cool to touch, a phenomenon which indicates its higher conductivity (the same principle that makes metals feel "cold"). It is smooth, making the finished fabric lint-free, and gets softer the more it is washed. However, constant creasing in the same place in sharp folds will tend to break the linen threads. This wear can show up in collars, hems, and any area that is iron creased during laundering. Linen's poor elasticity means that it easily wrinkles. Mildew, perspiration, and bleach can damage the fabric, but because it is not made from animal fibers (keratin) it is impervious to clothes moths and carpet beetles. Linen is relatively easy to take care of, since it resists dirt and stains, has no lint or pilling tendency, and can be dry-cleaned, machine-washed, or steamed. It can withstand high temperatures, and has only moderate initial shrinkage. Linen should not be dried too much by tumble drying, and it is much easier to iron when damp. Linen wrinkles very easily, and thus some more formal garments require ironing often, in order to maintain perfect smoothness. Nevertheless, the tendency to wrinkle is often considered part of linen's particular "charm", and many modern linen garments are designed to be air-dried on a good clothes hanger and worn without the necessity of ironing. A characteristic often associated with linen yarn is the presence of slubs, or small, soft, irregular lumps, which occur randomly along its length. In the past, slubs were traditionally considered to be defects, and were associated with low-quality linen. However, in the case of many present-day linen fabrics, particularly in the decorative furnishing industry, slubs are considered as part of the aesthetic appeal of an expensive natural product. In addition, slubs do not compromise the integrity of the fabric, and therefore they are not viewed as a defect. However, the very finest linen has very consistent diameter threads, with no slubs at all. Linen can degrade in a few weeks when buried in soil. Linen is more biodegradable than cotton, making it an eco friendly fiber. Measure The standard measure of bulk linen yarn is the "lea", which is the number of yards in a pound of linen divided by 300. For example, a yarn having a size of 1 lea will give 300 yards per pound. The fine yarns used in handkerchiefs, etc. might be 40 lea, and give 40x300 = 12,000 yards per pound. This is a specific length therefore an indirect measurement of the fineness of the linen, i.e., the number of length units per unit mass. The symbol is NeL. The metric unit, Nm, is more commonly used in continental Europe. This is the number of 1,000 m lengths per kilogram. In China, the English Cotton system unit, NeC, is common. This is the number of 840 yard lengths in a pound. Production method Linen is laborious to manufacture. 
The quality of the finished linen product is often dependent upon growing conditions and harvesting techniques. To generate the longest possible fibers, flax is either hand-harvested by pulling up the entire plant or stalks are cut very close to the root. After harvesting, the plants are dried, and then the seeds are removed through a mechanized process called “rippling” (threshing) and winnowing. The fibers must then be loosened from the stalk. This is achieved through retting, a process which uses bacteria to decompose the pectin that binds the fibers together. Natural retting methods take place in tanks and pools, or directly in the fields. There are also chemical retting methods; these are faster, but are typically more harmful to the environment and to the fibers themselves. After retting, the stalks are ready for scutching, which takes place between August and December. Scutching removes the woody portion of the stalks by crushing them between two metal rollers, so that the parts of the stalk can be separated. The fibers are removed and the other parts such as linseed, shives, and tow are set aside for other uses. Next the fibers are heckled: the short fibers are separated with heckling combs by 'combing' them away, to leave behind only the long, soft flax fibers. After the fibers have been separated and processed, they are typically spun into yarns and woven or knit into linen textiles. These textiles can then be bleached, dyed, printed on, or finished with a number of treatments or coatings. An alternate production method is known as “cottonizing” which is quicker and requires less equipment. The flax stalks are processed using traditional cotton machinery; however, the finished fibers often lose the characteristic linen look. Producers In 2018, according to the United Nations' repository of official international trade statistics, China was the top exporter of woven linen fabrics by trade value, with a reported $732.3 million in exports; Italy ($173.0 million), Belgium ($68.9 million) and the United Kingdom ($51.7 million) were also major exporters.
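Because the lea (NeL), metric (Nm), and English cotton (NeC) counts described in the Measure subsection above are all indirect, length-per-mass systems, converting between them is straightforward unit arithmetic. The sketch below assumes only the definitions given in the text (NeL as 300-yard hanks per pound, Nm as 1,000 m lengths per kilogram, NeC as 840-yard hanks per pound); the function names are illustrative.

```python
# Convert between indirect yarn-count systems using the definitions in the text:
#   NeL (linen lea): number of 300-yard hanks per pound
#   Nm  (metric):    number of 1,000 m lengths per kilogram
#   NeC (cotton):    number of 840-yard hanks per pound

YARD_M = 0.9144          # metres per yard
POUND_KG = 0.45359237    # kilograms per pound

def lea_to_yards_per_pound(lea: float) -> float:
    return lea * 300

def lea_to_nm(lea: float) -> float:
    metres_per_kg = lea_to_yards_per_pound(lea) * YARD_M / POUND_KG
    return metres_per_kg / 1000

def lea_to_nec(lea: float) -> float:
    return lea_to_yards_per_pound(lea) / 840

# The 40-lea handkerchief yarn mentioned in the text: 40 x 300 = 12,000 yd/lb
lea = 40
print(f"{lea} lea = {lea_to_yards_per_pound(lea):,.0f} yd/lb "
      f"= Nm {lea_to_nm(lea):.1f} = NeC {lea_to_nec(lea):.1f}")
```

For the 40-lea example from the text this prints approximately Nm 24 and NeC 14, illustrating that a higher count in any of these systems always means a finer (lighter) yarn.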
Technology
Fabrics and fibers
null
56216
https://en.wikipedia.org/wiki/Technical%20%28vehicle%29
Technical (vehicle)
A technical, known as a non-standard tactical vehicle (NSTV) in United States military parlance, is a light improvised fighting vehicle, typically an open-backed civilian pickup truck or four-wheel drive vehicle modified to mount SALWs and heavy weaponry, such as a machine gun, automatic grenade launcher, anti-aircraft autocannon, rotary cannon, anti-tank weapon, anti-tank gun, ATGM, mortar, multiple rocket launcher, recoilless rifle, or other support weapon (somewhat like a light military gun truck or potentially even a self-propelled gun), etc. Etymology The neologism technical describing such a vehicle is believed to have originated in Somalia during the Somali Civil War in the early 1990s. Barred from bringing in private security, non-governmental organizations hired local gunmen to protect their personnel, using money defined as "technical assistance grants". Eventually the term broadened to include any vehicle carrying armed men. However, an alternative account is given by Michael Maren, who says the term was first used in Somalia in the 1980s, after engineers from Soviet arms manufacturer Tekniko mounted weapons on vehicles for the Somali National Movement during the Somaliland War of Independence. Technicals have also been referred to as battlewagons and gunwagons. In Russia and Ukraine, technicals are often referred to as tachanka, a reference to horse-drawn machine gun platforms from the First World War and Russian Civil War. Features Among irregular militaries, often centered on the perceived strength and charisma of male warlords, the prestige of technicals is strong. According to one article, "The Technical is the most significant symbol of power in southern Somalia. It is a small truck with large tripod machine guns mounted on the back. A warlord's power is measured by how many of these vehicles he has." Technicals are not commonly used by well-funded militaries that are able to procure purpose-built combat vehicles, because the soft-skinned civilian vehicles that technicals are based on do not offer much armor protection to crew and passengers. Technicals fill the niche of traditional light cavalry. Generally costing much less than purpose-built combat vehicles, the major asset of technicals is speed and mobility, as well as their ability to strike from unexpected directions with automatic fire and light troop deployment. Further, the reliability of vehicles such as the Toyota Hilux is useful for forces that lack the repair-related infrastructure of a conventional military on land. However, in direct engagements they are no match for heavier vehicles, such as tanks or other armored fighting vehicles, and they are mostly helpless against any air support from a proper military. History Prototypes and early usage Light improvised fighting vehicles date back to the first use of automobiles, and to the horse-drawn tachankas mounting machine guns in eastern Europe and Russia. At the Bombardment of Papeete during World War I, the French armed several Ford trucks with 37 mm guns to bolster their defense of the city. During the Spanish Civil War, field guns were fixed to trucks to act as improvised self-propelled guns, while improvised armored cars were constructed by attaching steel plates to trucks. During World War II, various British and Commonwealth units, including the Long Range Desert Group (LRDG), the No. 
1 Demolition Squadron or 'PPA' (Popski's Private Army), and the Special Air Service (SAS) were noted for their exploits in the deserts of Egypt, Libya and Chad using unarmored motor vehicles, often fitted with machine guns. Examples of LRDG vehicles include the Chevrolet WB 30 cwt Patrol Truck and the Willys MB Jeep. The SAS's use of heavily armed Land Rovers continued post-war, first with Series I Land Rovers and later with 1968 Series IIA Land Rovers in the Dhofar Rebellion. The SAS painted their Land Rovers pink, as this was found to provide excellent camouflage in the desert, and the vehicles were nicknamed 'Pink Panthers' or 'Pinkies'. The SAS also used a more modern Land Rover Desert Patrol Vehicle (DPV) during the Gulf War. Western Sahara Tactics for employing technicals were pioneered by the Sahrawi People's Liberation Army, the armed wing of the Polisario Front, fighting for independence against Mauritania (1975–79) and Morocco (1975–present) from headquarters in Tindouf, Algeria. Algeria provided arms and Land Rovers to Sahrawi guerrillas, who successfully used them in long-range desert raids against the less agile conventional armies of their opponents, recalling the Sahrawi tribal raids (ghazis) of the pre-colonial period. Polisario later gained access to heavier equipment, but four-wheel drive vehicles remain a staple of their arsenal. The Moroccan army quickly changed its strategy, creating its own technical-mounted units to counter the Polisario's speed and hit-and-run tactics in the open desert, where these units proved effective. Chadian–Libyan conflict In 1987, Chadian troops equipped with technicals drove the heavily mechanized Libyan army from the Aozou Strip. The vehicles were instrumental in the victory at the Battle of Fada, and were driven over into Libya to raid military bases. It was discovered that these light vehicles could ride through anti-tank minefields without detonating the mines when driven at speeds over 100 km/h. The vehicles had become so famous that, in 1984, Time dubbed the early stages of the conflict the "Great Toyota War". The Toyota War was unusual in that the force equipped with improvised vehicles prevailed over the force equipped with purpose-built fighting vehicles. MILAN anti-tank guided missiles provided by France were key to the Chadian success, while the Libyan forces were poorly deployed and organized. The Troubles in Northern Ireland Throughout the conflict in Northern Ireland (1960s–1998), the Provisional IRA fitted vehicles, especially vans and trucks, with automatic weapons, heavy machine guns, and also improvised mortars. Sometimes the vehicles were armored with welded plates and sandbags. The IRA employed tractors and trailers to transport and fire improvised mortars, and heavy equipment to tear down fences and barbed wire and break into fortified security bases. Improvised flamethrowers were usually modified manure spreaders pulled to their targets by tractor. Somali Civil War Technicals played an important role in the 1990s Somali Civil War and the War in Somalia (2006–2009). Even prior to the collapse of the Somali Democratic Republic, camouflaged Toyota pickup trucks with mounted M2 Browning machine guns appeared in Somali military parades during the 1980s. After the fall of the Siad Barre regime and the collapse of the Somali National Army (SNA), it was rare for any Somali force to field armored fighting vehicles. However, technicals were very common.
Somali faction leader Mohamed Farrah Aidid used 30 technicals along with a force of 600 militia to capture Baidoa in September 1995. It was reported that after his death in 1996, his body was carried to his funeral on a Toyota pickup. Demonstrating both the technicals' susceptibility to heavy weapons and their value as a military prize, the Islamic Courts Union (ICU) captured 30 "battlewagons" during the defeat of warlord Abdi Qeybdiid's militia at the Second Battle of Mogadishu in 2006. That September, an array of 130 technicals was used to take Kismayo from the forces of the Juba Valley Alliance. On November 13, 2006, the then President of Puntland, General Adde Musa, personally led fifty technicals to Galkacyo to confront the Islamists. They were used a month later against the army of the Islamic Courts Union at the Battle of Bandiradley, alongside Abdi Qeybdiid's reconstituted militia. However, forced into conventional battles in the War in Somalia (2006–2009), the unarmored technicals of the ICU proved no match for the T-55 tanks, Mil Mi-24 helicopter gunships and fighter-bombers employed by Ethiopia. War in Afghanistan In the War in Afghanistan, U.S. special operations forces units such as the Green Berets were known to use technicals for patrols, both because of the rugged terrain and because of the clandestine nature of their operations. Technicals also made up the bulk of the Taliban's mobile fighting force. Iraq War Technicals were used by Iraqi military forces in the 2003 invasion of Iraq. The Iraqi Republican Guard and Fedayeen emulated the tactics of the Somali National Alliance with limited success, but were outmatched by Coalition armor and aviation. In the aftermath of the invasion, technicals saw use by Iraqi insurgents for transporting personnel and for quick raids against the Iraqi police forces. Insurgent use of technicals increased after the Iraq Spring Fighting of 2004. Many military utility vehicles have been modified to serve as gun trucks to protect Coalition convoys. The Humvee allows for weapon mounts by design, so it is not considered a technical. The Coalition also supplied technicals to the Iraqi police. Private military contractors also use technicals, and the United States military has used modified Toyota Hiluxes, Land Cruisers, and other trucks as well. Darfur conflict Janjaweed militias use technicals on their raids against civilian villages in Darfur, Sudan, as do the Sudan Liberation Army (SLA) and Justice and Equality Movement (JEM) rebel troops in defense of their areas of operations. Light vehicles such as technicals are often thought to be more mobile than armored vehicles, but on one occasion an African peacekeeper driving a Grizzly AVGP whose guns had jammed succeeded in catching up with, ramming, and rolling over a fleeing Sudanese technical. Lebanon Introduced by the Palestine Liberation Organization (PLO) guerrilla groups, technicals were extensively employed by all factions involved in the Lebanese Civil War between 1975 and 1990, including the Christian Lebanese Front and the Lebanese National Movement (LNM) irregular militias, the Lebanese Army and the Internal Security Forces (ISF). Opposition forces have reportedly used technicals in the fighting for the Chouf District during the May 2008 clashes in Lebanon. Libyan Civil War During the First Libyan Civil War, both regime loyalists and the anti-Gaddafi forces used technicals extensively.
The type of warfare carried out in the conflict—highly mobile groups of soldiers and rebels moving to and fro across the desert terrain, retreating at one moment and then suddenly attacking to regain control of small towns and villages in the rebel-held east of Libya—made the technical a vehicle of choice for both sides. Technicals were also widely used by the rebels when setting up checkpoints, and they formed a large share of the rebel inventory, which was otherwise limited to light weapons, light body armor and very few tanks. Some medium flatbed trucks carried Soviet-made ZPU and ZU-23-2 towed anti-aircraft guns with twin or quadruple barrels, as well as recoilless rifles and S-5 helicopter rocket pods. Some rebels improvised with captured heavy weaponry, such as BMP-1 turrets and helicopter rocket pods, and with lower-tech methods such as using doorbells to ignite rocket ammunition. Rebel technicals also frequently employed BM-21 Grad rockets: rocket tubes were salvaged from damaged regime Ural-375D trucks and mounted on the backs of pickups, with the technicals able to fire anywhere from one to six rockets. Syrian Civil War In the Syrian Civil War, technicals are extensively used as improvised fighting vehicles, especially by opposition forces such as Jaysh al-Thuwar, which largely lack conventional fighting vehicles. Syrian government forces also use technicals, but on a smaller scale. The kind of weapons mounted on technicals varies widely, including machine guns, recoilless rifles, anti-aircraft autocannons (commonly the ZPU and ZU-23-2) and even BMP-1 turrets. The Military of ISIL extensively used technicals in Iraq and Syria. Peshmerga forces have used technicals to surround and attack ISIS targets. Russo-Ukrainian War War in Donbas During the 2014 war in Donbas, both sides used home-made fighting vehicles. OSCE monitors recorded 15 Russian armored utility vehicles (UAZ-23632-148 Esaul) in a training area near non-government-controlled Oleksandrivska in April 2021. Russian invasion of Ukraine Technicals were seen being used by Spetsnaz in Gomel, Belarus, on February 24, 2022. Ukrainian forces reportedly used rocket launchers recovered from downed helicopters, mounted on technicals. Yemeni Civil War In the Yemeni Civil War, Houthis and Hadi/Alimi-aligned militias use technicals. Composition Technicals consist of weapons mounted on a civilian vehicle, such as a four-wheel drive pickup truck. Many pickups have been used as technicals, including the Ford Ranger and Mitsubishi Triton, but the most favoured are the Toyota Hilux and Toyota Land Cruiser. They are typically fitted with heavy machine guns (especially the DShK and M2 Browning), anti-aircraft artillery (usually the ZPU or ZU-23-2), recoilless rifles (usually the SPG-9 or M40 recoilless rifle), anti-tank missile launchers, multiple rocket launchers such as the Type 63 or the M-63 Plamen, and on rare occasions rocket pods, such as S-5 pods, salvaged from downed attack helicopters. Because technicals are soft-skinned vehicles, optional add-on hardware includes ballistic glass, turret gun shields and improvised vehicle armor, such as welded steel plates, as a defense against small arms fire to increase the crew's chances of survival. A number of technicals have had their original tires changed to off-road tires, run-flat tires or specialized tires with a central tire inflation system.
These modified tires improve a technical's performance on different terrains, while run-flat tires or tires equipped with a central inflation system allow it to get out of dangerous situations quickly even when the tires are damaged.
Technology
Maneuver
null
56217
https://en.wikipedia.org/wiki/Poaceae
Poaceae
Poaceae ( ), also called Gramineae ( ), is a large and nearly ubiquitous family of monocotyledonous flowering plants commonly known as grasses. It includes the cereal grasses, bamboos, the grasses of natural grassland and species cultivated in lawns and pasture. The latter are commonly referred to collectively as grass. With around 780 genera and some 12,000 species, the Poaceae is the fifth-largest plant family, following the Asteraceae, Orchidaceae, Fabaceae and Rubiaceae. The Poaceae are the most economically important plant family, providing staple foods from domesticated cereal crops such as maize, wheat, rice, oats, barley, and millet for people and as feed for meat-producing animals. They provide, through direct human consumption, just over one-half (51%) of all dietary energy; rice provides 20%, wheat supplies 20%, maize (corn) 5.5%, and other grains 6%. Some members of the Poaceae are used as building materials (bamboo, thatch, and straw); others can provide a source of biofuel, primarily via the conversion of maize to ethanol. Grasses have stems that are hollow except at the nodes and narrow alternate leaves borne in two ranks. The lower part of each leaf encloses the stem, forming a leaf-sheath. The leaf grows from the base of the blade, an adaptation allowing it to cope with frequent grazing. Grasslands such as savannah and prairie, where grasses are dominant, are estimated to constitute 40.5% of the land area of the Earth, excluding Greenland and Antarctica. Grasses are also an important part of the vegetation in many other habitats, including wetlands, forests and tundra. Though they are commonly called "grasses", groups such as the seagrasses, rushes and sedges fall outside this family. The rushes and sedges are related to the Poaceae, being members of the order Poales, but the seagrasses are members of the order Alismatales. However, all of them belong to the monocot group of plants. Description Grasses may be annual or perennial herbs, generally with the following characteristics: The stems of grasses, called culms, are usually cylindrical (more rarely flattened, but not 3-angled) and are hollow, plugged at the nodes, where the leaves are attached. Grass leaves are nearly always alternate and distichous (in one plane), and have parallel veins. Each leaf is differentiated into a lower sheath hugging the stem and a blade with entire (i.e., smooth) margins. The leaf blades of many grasses are hardened with silica phytoliths, which discourage grazing animals; some, such as sword grass, are sharp enough to cut human skin. A membranous appendage or fringe of hairs called the ligule lies at the junction between sheath and blade, preventing water or insects from penetrating into the sheath. Flowers of Poaceae are characteristically arranged in spikelets, each having one or more florets. The spikelets are further grouped into panicles or spikes. The part of the spikelet that bears the florets is called the rachilla. A spikelet consists of two (or sometimes fewer) bracts at the base, called glumes, followed by one or more florets. A floret consists of the flower surrounded by two bracts, one external—the lemma—and one internal—the palea. The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role.
The perianth is reduced to two scales, called lodicules, that expand and contract to spread the lemma and palea; these are generally interpreted to be modified sepals. The fruit of grasses is a caryopsis, in which the seed coat is fused to the fruit wall. A tiller is a leafy shoot other than the first shoot produced from the seed. Growth and development Grass blades grow at the base of the blade and not from elongated stem tips. This low growth point evolved in response to grazing animals and allows grasses to be grazed or mown regularly without severe damage to the plant. Three general classifications of growth habit are present in grasses: bunch-type (also called caespitose), stoloniferous, and rhizomatous. The success of the grasses lies in part in their morphology and growth processes and in part in their physiological diversity. There are both C3 and C4 grasses, referring to the photosynthetic pathway for carbon fixation. The C4 grasses have a photosynthetic pathway, linked to specialized Kranz leaf anatomy, which allows for increased water use efficiency, rendering them better adapted to hot, arid environments. The C3 grasses are referred to as "cool-season" grasses, while the C4 plants are considered "warm-season" grasses. Annual cool-season – wheat, rye, annual bluegrass (annual meadowgrass, Poa annua), and oat Perennial cool-season – orchardgrass (cocksfoot, Dactylis glomerata), fescue (Festuca spp.), Kentucky bluegrass and perennial ryegrass (Lolium perenne) Annual warm-season – maize, sudangrass, and pearl millet Perennial warm-season – big bluestem, Indiangrass, Bermudagrass and switchgrass. Although the C4 species are all in the PACMAD clade, it seems that various forms of C4 have arisen some twenty or more times, in various subfamilies or genera. In the genus Aristida, for example, one species (A. longifolia) is C3, but the approximately 300 other species are C4. As another example, the whole tribe of Andropogoneae, which includes maize, sorghum, sugar cane, "Job's tears", and bluestem grasses, is C4. Around 46 percent of grass species are C4 plants. Taxonomy The name Poaceae was given by John Hendley Barnhart in 1895, based on the tribe Poeae described in 1814 by Robert Brown, and the type genus Poa described in 1753 by Carl Linnaeus. The term is derived from the Ancient Greek πόα (póa, "fodder"). Evolutionary history Grasses include some of the most versatile plant life-forms. They became widespread toward the end of the Cretaceous period, and fossilized dinosaur dung (coprolites) has been found containing phytoliths of a variety of grasses, including some related to modern rice and bamboo. Grasses have adapted to conditions in lush rain forests, dry deserts, cold mountains and even intertidal habitats, and are currently the most widespread plant type; grass is a valuable source of food and energy for all sorts of wildlife. A cladogram (not reproduced here) shows the subfamilies with approximate species numbers in brackets. Before 2005, fossil findings indicated that grasses evolved around 55 million years ago. Finds of grass-like phytoliths in dinosaur coprolites from the latest Cretaceous (Maastrichtian)-aged Lameta Formation of India have pushed this date back to 66 million years ago. In 2011, fossils from the same deposit were found to belong to the modern rice tribe Oryzeae, suggesting substantial diversification of major lineages by this time.
In 2018, a study described grass microfossils extracted from the teeth of the hadrosauroid dinosaur Equijubus normani from northern China, dating to the Albian stage of the Early Cretaceous approximately 113–100 million years ago, which were found to belong to primitive lineages within Poaceae, similar in position to the Anomochlooideae. These are currently the oldest known grass fossils. Fossils of Phragmites have been found in the Late Cretaceous of North America, particularly in the Maastrichtian-aged Laramie Formation. However, slightly older fossils of Phragmites, dating to the Campanian, have been found on the eastern coast of the US (for example, in the Black Creek Formation). The relationships among the three subfamilies Bambusoideae, Oryzoideae and Pooideae in the BOP clade have been resolved: Bambusoideae and Pooideae are more closely related to each other than to Oryzoideae. This separation occurred within the relatively short time span of about 4 million years. According to Lester Charles King, the spread of grasses in the Late Cenozoic would have changed patterns of hillslope evolution, favouring slopes that are convex upslope, concave downslope, and lacking a free face. King argued that this was the result of more slowly acting surface wash caused by carpets of grass, which in turn would have resulted in relatively more soil creep. Subdivisions There are about 12,000 grass species in about 771 genera that are classified into 12 subfamilies. See the full list of Poaceae genera. Anomochlooideae Pilg. ex Potztal, a small lineage of broad-leaved grasses that includes two genera (Anomochloa, Streptochaeta) Pharoideae L.G.Clark & Judz., a small lineage of grasses of three genera, including Pharus and Leptaspis Puelioideae L.G.Clark, M.Kobay., S.Mathews, Spangler & E.A.Kellogg, a small lineage of the African genus Puelia Pooideae, including wheat, barley, oats, brome-grass (Bromus), reed-grasses (Calamagrostis) and many lawn and pasture grasses such as bluegrass (Poa) Bambusoideae, including bamboo Ehrhartoideae, including rice and wild rice Aristidoideae, including Aristida Arundinoideae, including giant reed and common reed Chloridoideae, including the lovegrasses (Eragrostis, about 350 species, including teff), dropseeds (Sporobolus, some 160 species), finger millet (Eleusine coracana (L.) Gaertn.), and the muhly grasses (Muhlenbergia, about 175 species) Panicoideae, including panic grass, maize, sorghum, sugarcane, most millets, fonio, "Job's tears", and bluestem grasses Micrairoideae Danthonioideae, including pampas grass Distribution The grass family is one of the most widely distributed and abundant groups of plants on Earth. Grasses are found on every continent, including Antarctica. The Antarctic hair grass, Deschampsia antarctica, is one of only two flowering plant species native to the western Antarctic Peninsula. Ecology Grasses are the dominant vegetation in many habitats, including grassland, salt-marsh, reedswamp and steppes. They also occur as a smaller part of the vegetation in almost every other terrestrial habitat. Grass-dominated biomes are called grasslands. If only large, contiguous areas of grasslands are counted, these biomes cover 31% of the planet's land. Grasslands include pampas, steppes, and prairies. Grasses provide food to many grazing mammals, as well as to many species of butterflies and moths.
Many types of animals eat grass as their main source of food, and are called graminivores – these include cattle, sheep, horses, rabbits and many invertebrates, such as grasshoppers and the caterpillars of many brown butterflies. Grasses are also eaten by omnivores and, occasionally, by primarily carnivorous animals. Grasses dominate certain biomes, especially temperate grasslands, because many species are adapted to grazing and fire. Grasses are unusual in that the meristem is near the bottom of the plant; hence, grasses can quickly recover from cropping at the top. The evolution of large grazing animals in the Cenozoic contributed to the spread of grasses. Without large grazers, fire-cleared areas are quickly colonized by grasses, and with enough rain, tree seedlings. Trees eventually outcompete most grasses. Trampling grazers kill seedling trees but not grasses. Sexual reproduction and meiosis Sexual reproduction and meiosis have been studied in rice, maize, wheat and barley. Meiosis research in these crop species is linked to crop improvement, since meiotic recombination is an important component of plant breeding. Unlike in animals, the specification of both male and female plant germlines occurs late in development, during flowering. The transition from the sporophyte phase to the gametophyte phase is initiated by meiotic entry. Uses Grasses are, in human terms, perhaps the most economically important plant family. Their economic importance stems from several areas, including food production, industry, and lawns. They have been grown as food for domesticated animals for up to 6,000 years, and the grains of grasses such as wheat, rice, maize (corn) and barley have been the most important human food crops. Grasses are also used in the manufacture of thatch, paper, fuel, clothing, insulation, timber for fencing, furniture, scaffolding and construction materials, floor matting, sports turf and baskets. Food production Of all crops grown, 70% are grasses. Agricultural grasses grown for their edible seeds are called cereals or grains (although the latter term, when used agriculturally, refers to both cereals and similar seeds of other plant species, such as buckwheat and legumes). Three cereals—rice, wheat, and maize (corn)—provide more than half of all calories consumed by humans. Cereals constitute the major source of carbohydrates for humans and perhaps the major source of protein; these include rice (in southern and eastern Asia), maize (in Central and South America), and wheat and barley (in Europe, northern Asia and the Americas). Sugarcane is the major source of sugar production. Additional food uses of grasses include sprouted grain, shoots, and rhizomes; in drink they include sugarcane juice and plant milk, as well as rum, beer, whisky, and vodka. Bamboo shoots are used in numerous Asian dishes and broths, and are available in supermarkets in various sliced forms, in fresh, fermented and canned versions. Lemongrass is a grass used as a culinary herb for its citrus-like flavor and scent. Many species of grass are grown as pasture for forage or as fodder for prescribed livestock feeds, particularly in the case of cattle, horses, and sheep. Such grasses may be cut and stored for later feeding, especially for the winter, in the form of bales of hay or straw, or in silos as silage. Straw (and sometimes hay) may also be used as bedding for animals. An example of a sod-forming perennial grass used in agriculture is Thinopyrum intermedium.
Industry Grasses are used as raw material for a multitude of purposes, including construction and in the composition of building materials such as cob, for insulation, in the manufacture of paper and board such as oriented structural straw board. Grass fiber can be used for making paper, for biofuel production, for nonwoven fabrics, and as a replacement for glass fibers in reinforced plastics. Bamboo scaffolding is able to withstand typhoon-force winds that would break steel scaffolding. Larger bamboos and Arundo donax have stout culms that can be used in a manner similar to timber; Arundo is used to make reeds for woodwind instruments, and bamboo is used for innumerable implements. Phragmites australis (common reed) is important for thatching and wall construction of homes in Africa. Grasses are used in water treatment systems, in wetland conservation and land reclamation, and to lessen the erosional impact of urban storm water runoff. Palaeoecological reconstructions Pollen morphology, particularly in the Poaceae, is key to understanding evolutionary relationships within the family and how environments have changed over time. Grass pollen grains, however, often look alike, which limits their usefulness for detailed climate or environmental reconstructions. Grass pollen has a single pore and varies considerably in size, from about 20 to over 100 micrometers; this size difference has been examined for clues about past habitats, for distinguishing domesticated grasses from wild ones, and as an indicator of various biological features, such as photosynthetic pathway, breeding system, and genetic complexity. There is, however, ongoing debate about how effective pollen size is for reconstructing historical landscapes and climates, since other factors, such as the amount of genetic material, might also affect pollen size. Despite these challenges, new techniques in Fourier-transform infrared spectroscopy (FT-IR) and improved statistical methods are now helping to better identify these similar-looking pollen types. Lawn and ornamental use Grasses are the primary plants used in lawns, which themselves derive from grazed grasslands in Europe. They also provide an important means of erosion control (e.g., along roadsides), especially on sloping land. Grass lawns are an important covering of playing surfaces in many sports, including football (soccer), American football, tennis, golf, cricket, softball and baseball. Ornamental grasses, such as perennial bunch grasses, are used in many styles of garden design for their foliage, inflorescences and seed heads. They are often used in natural landscaping, xeriscaping and slope and beach stabilization in contemporary landscaping, wildlife gardening, and native plant gardening. They are used as screens and hedges. Sports turf Grass playing fields, courses and pitches are the traditional playing surfaces for many sports, including American football, association football, baseball, cricket, golf, and rugby. Grass surfaces are also sometimes used for horse racing and tennis. The type of maintenance and the species of grass used may be important factors for some sports and less critical for others. In some sports facilities, including indoor domes and other places where maintenance of a grass field would be difficult, grass may be replaced with artificial turf, a synthetic grass-like substitute. Cricket In cricket, the pitch is the strip of carefully mowed and rolled grass where the bowler bowls.
In the days leading up to the match, it is repeatedly mowed and rolled to produce a very hard, flat surface for the ball to bounce off. Golf Grass on golf courses is kept in three distinct conditions: that of the rough, the fairway, and the putting green. Grass on the fairway is mown short and even, allowing the player to strike the ball cleanly. Playing from the rough is a disadvantage because the long grass may affect the flight of the ball. Grass on the putting green is the shortest and most even, ideally allowing the ball to roll smoothly over the surface. An entire industry revolves around the development and marketing of turf grass varieties. Tennis In tennis, grass is grown on very hard-packed soil, and the bounce of a tennis ball may vary depending on the grass's health, how recently it has been mowed, and the wear and tear of recent play. The surface is softer than hard courts and clay (other tennis surfaces), so the ball bounces lower, and players must reach the ball faster, resulting in a different style of play which may suit some players more than others. Among the world's most prestigious grass courts is Centre Court at Wimbledon, London, which hosts the final of the annual Wimbledon Championships in England, one of the four Grand Slam tournaments. Economically important grasses A number of grasses are invasive species that damage natural ecosystems, including forms of Phragmites australis, which are native to Eurasia but have spread around the world. Role in society Grasses have long had significance in human society. They have been cultivated as feed for people and domesticated animals for thousands of years. The primary ingredient of beer is usually barley or wheat, both of which have been used for this purpose for over 4,000 years. In some places, particularly in suburban areas, the maintenance of a grass lawn is a sign of a homeowner's responsibility to the overall appearance of their neighborhood. In communities with drought problems, watering of lawns may be restricted to certain times of day or days of the week. Many US municipalities and homeowners' associations have rules which require lawns to be maintained to certain specifications, sanctioning those who allow the grass to grow too long. The smell of freshly cut grass is produced mainly by cis-3-hexenal. Some common aphorisms involve grass. For example: "The grass is always greener on the other side" suggests an alternate state of affairs will always seem preferable to one's own. "Don't let the grass grow under your feet" tells someone to get moving. "A snake in the grass" means dangers that are hidden. "When elephants fight, it is the grass which suffers" tells of bystanders caught in the crossfire. A folk myth about grass is that it refuses to grow where any violent death has occurred.
Biology and health sciences
Poales
null
56226
https://en.wikipedia.org/wiki/Electrum
Electrum
Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold". Electrum was used as early as the third millennium BC in the Old Kingdom of Egypt, sometimes as an exterior coating to the pyramidions atop ancient Egyptian pyramids and obelisks. It was also used in the making of ancient drinking vessels. The first known metal coins were made of electrum, dating back to the end of the 7th century or the beginning of the 6th century BC. Etymology The name electrum is the Latinized form of the Greek word ἤλεκτρον (ḗlektron), mentioned in the Odyssey, referring to a metallic substance consisting of gold alloyed with silver. The same word was also used for the substance amber, likely because of the pale yellow color of certain varieties. (It is from amber’s electrostatic properties that the modern English words electron and electricity are derived.) Electrum was often referred to as "white gold" in ancient times but could be more accurately described as pale gold because it is usually pale yellow or yellowish-white in color. The modern use of the term white gold usually refers to gold alloyed with any one or a combination of nickel, silver, platinum and palladium to produce a silver-colored gold. Composition Electrum consists primarily of gold and silver but is sometimes found with traces of platinum, copper and other metals. The name is mostly applied informally to compositions between 20–80% gold and 80–20% silver, but these are strictly called gold or silver depending on the dominant element. Analysis of the composition of electrum in ancient Greek coinage dating from about 600 BC shows that the gold content was about 55.5% in the coinage issued by Phocaea. In the early classical period, the gold content of electrum ranged from 46% in Phocaea to 43% in Mytilene. In later coinage from these areas, dating to 326 BC, the gold content averaged 40% to 41%. In the Hellenistic period, electrum coins with a regularly decreasing proportion of gold were issued by the Carthaginians. In the later Eastern Roman Empire controlled from Constantinople, the purity of the gold coinage was reduced.
In Lydia, electrum was minted into coins weighing about 4.7 grams, each valued at one-third of a stater (meaning "standard"). Three of these coins—with a weight of about 14.1 grams—totaled one stater, about one month's pay for a soldier. To complement the stater, fractions were made: the trite (third), the hekte (sixth), and so forth, including 1/24 of a stater, and even down to 1/48 and 1/96 of a stater (the 1/96 stater weighed only about 0.15 grams). Larger denominations, such as a one-stater coin, were minted as well. Because of variation in the composition of electrum, it was difficult to determine the exact worth of each coin. Widespread trading was hampered by this problem, as the intrinsic value of each electrum coin could not be easily determined. This suggests that one reason for the invention of coinage in that area was to increase the profits from seigniorage by issuing currency with a lower gold content than the commonly circulating metal. These difficulties were eliminated circa 570 BC when the Croeseids, coins of pure gold and silver, were introduced. However, electrum currency remained common until approximately 350 BC. The simplest reason for this was that, because of the gold content, one 14.1 gram stater was worth as much as ten 14.1 gram silver pieces.
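The closing comparison can be made concrete with a little arithmetic. The sketch below is illustrative rather than historical: it assumes a gold-to-silver value ratio of roughly 13:1 by weight (a commonly cited figure for the ancient Near East, but an assumption here, since the article does not state one) and computes the silver-equivalent value of a 14.1 gram electrum stater from its gold fraction.

```python
# Illustrative sketch: intrinsic value of an electrum coin, expressed in
# silver-equivalent grams. The 13:1 gold-to-silver ratio is an assumption,
# not a figure from the article; the coin mass and gold fractions are.

GOLD_TO_SILVER = 13.0  # assumed relative value of gold vs. silver, by weight

def silver_equivalent_grams(mass_g: float, gold_fraction: float) -> float:
    """Value of an electrum coin as grams of pure silver."""
    gold_g = mass_g * gold_fraction
    silver_g = mass_g * (1.0 - gold_fraction)
    return gold_g * GOLD_TO_SILVER + silver_g

STATER_MASS = 14.1  # grams, as given above

# Natural western Anatolian electrum (~70-90% gold) vs. Lydian coin electrum (~45-55%)
for gold_fraction in (0.80, 0.50):
    value = silver_equivalent_grams(STATER_MASS, gold_fraction)
    print(f"{gold_fraction:.0%} gold: worth about {value / STATER_MASS:.1f} "
          f"equal-weight silver coins")
```

Under these assumptions, a high-gold natural-electrum stater is worth about ten equal-weight silver pieces, matching the figure above, while a debased 50% coin is worth about seven; the gap between the two illustrates why the uncertain composition of electrum hampered trade.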
Physical sciences
Specific alloys
Chemistry
56228
https://en.wikipedia.org/wiki/Dairy
Dairy
A dairy is a place where milk is stored and where butter, cheese and other dairy products are made, or a place where those products are sold. It may be a room, a building or a larger establishment. In the United States, the word may also describe a dairy farm or the part of a mixed farm dedicated to milk for human consumption, whether from cows, buffaloes, goats, yaks, sheep, horses or camels. The attributive dairy describes milk-based products, derivatives and processes, and the animals and workers involved in their production, for example dairyman, dairymaid, dairy cattle or dairy goat. A dairy farm produces milk and a dairy factory processes it into a variety of dairy products. These establishments constitute the global dairy industry, part of the food industry. The word dairy comes from an Old English word for female servant as historically milking was done by dairymaids. Terminology Terminology differs between countries. In the United States, for example, an entire dairy farm is commonly called a "dairy". The building or farm area where milk is harvested from the cow is often called a "milking parlor" or "parlor", except in the case of smaller dairies, where cows are often put on pasture, and usually milked in "stanchion barns". The farm area where milk is stored in bulk tanks is known as the farm's "milk house". Milk is then hauled (usually by truck) to a "dairy plant", also referred to as a "dairy", where raw milk is further processed and prepared for commercial sale of dairy products. In New Zealand, farm areas for milk harvesting are also called "milking parlours", and are historically known as "milking sheds". As in the United States, sometimes milking sheds are referred to by their type, such as "herring bone shed" or "pit parlour". Parlour design has evolved from simple barns or sheds to large rotary structures in which the workflow (throughput of cows) is very efficiently handled. In some countries, especially those with small numbers of animals being milked, the farm may perform the functions of a dairy plant, processing their own milk into saleable dairy products, such as butter, cheese, or yogurt. This on-site processing is a traditional method of producing specialist milk products, common in Europe. In the United States a dairy can also be a place that processes, distributes and sells dairy products, or a room, building or establishment where milk is stored and processed into milk products, such as butter or cheese. In New Zealand English the singular use of the word dairy almost exclusively refers to a corner shop, or superette. This usage is historical as such shops were a common place for the public to buy milk products. History Milk producing animals have been domesticated for thousands of years. Initially, they were part of the subsistence farming that nomads engaged in. As the community moved about the country, their animals accompanied them. Protecting and feeding the animals were a major part of the symbiotic relationship between the animals and the herders. In the more recent past, people in agricultural societies owned dairy animals that they milked for domestic and local (village) consumption, a typical example of a cottage industry. The animals might serve multiple purposes (for example, as a draught animal for pulling a plow as a youngster, and at the end of its useful life as meat). In this case, the animals were normally milked by hand and the herd size was quite small, so that all of the animals could be milked in less than an hour—about 10 per milker. 
These tasks were performed by a dairymaid (dairywoman) or dairyman. The word dairy harkens back to Middle English dayerie, deyerie, from deye (female servant or dairymaid) and further back to Old English dæge (kneader of bread). With industrialisation and urbanisation, the supply of milk became a commercial industry, with specialised breeds of cattle being developed for dairy, as distinct from beef or draught animals. Initially, more people were employed as milkers, but the industry soon turned to mechanisation, with machines designed to do the milking. Historically, the milking and the processing took place close together in space and time: on a dairy farm. People milked the animals by hand; on farms where only small numbers are kept, hand-milking may still be practised. Hand-milking is accomplished by grasping the teats (often pronounced tit or tits) in the hand and expressing milk either by squeezing the fingers progressively, from the udder end to the tip, or by squeezing the teat between thumb and index finger, then moving the hand downward from udder towards the end of the teat. The action of the hand or fingers is designed to close off the milk duct at the udder (upper) end and, by the movement of the fingers, close the duct progressively to the tip to express the trapped milk. Each half or quarter of the udder is emptied one milk-duct capacity at a time. The stripping action is repeated, using both hands for speed. Both methods result in the milk that was trapped in the milk duct being squirted out the end into a bucket that is supported between the knees (or rests on the ground) of the milker, who usually sits on a low stool. Traditionally the cow, or cows, would stand in the field or paddock while being milked. Young stock, heifers, would have to be trained to remain still to be milked. In many countries, the cows were tethered to a post and milked. Structure of the industry While most countries produce their own milk products, the structure of the dairy industry varies in different parts of the world. In major milk-producing countries most milk is distributed through wholesale markets. In Ireland and Australia, for example, farmers' co-operatives own many of the large-scale processors, while in the United States many farmers and processors do business through individual contracts. In the United States, the country's 196 farmers' cooperatives sold 86% of milk in the U.S. in 2002, with five cooperatives accounting for half that. This was down from 2,300 cooperatives in the 1940s. In developing countries, the past practice of farmers marketing milk in their own neighbourhoods is changing rapidly. Notable developments include considerable foreign investment in the dairy industry and a growing role for dairy cooperatives. Output of milk is growing rapidly in such countries and presents a major source of income growth for many farmers. As in many other branches of the food industry, dairy processing in the major dairy producing countries has become increasingly concentrated, with fewer but larger and more efficient plants operated by fewer workers. This is notably the case in the United States, Europe, Australia and New Zealand. In 2009, charges of antitrust violations were made against major dairy industry players in the United States, which critics call "Big Milk". Another round of price-fixing charges was settled in 2016. Government intervention in milk markets was common in the 20th century. A limited antitrust exemption was created for U.S.
dairy cooperatives by the Capper–Volstead Act of 1922. In the 1930s, some U.S. states adopted price controls, and Federal Milk Marketing Orders started under the Agricultural Marketing Agreement Act of 1937 and continue in the 2000s. The Federal Milk Price Support Program began in 1949. The Northeast Dairy Compact regulated wholesale milk prices in New England from 1997 to 2001. Plants producing liquid milk and products with short shelf life, such as yogurts, creams and soft cheeses, tend to be located on the outskirts of urban centres close to consumer markets. Plants manufacturing items with longer shelf life, such as butter, milk powders, cheese and whey powders, tend to be situated in rural areas closer to the milk supply. Most large processing plants tend to specialise in a limited range of products. Exceptionally, however, large plants producing a wide range of products are still common in Eastern Europe, a holdover from the former centralised, supply-driven concept of the market under Communist governments. As processing plants grow fewer and larger, they tend to acquire bigger, more automated and more efficient equipment. While this technological tendency keeps manufacturing costs lower, the need for long-distance transportation often increases the environmental impact. Milk production is irregular, depending on cow biology. Producers must adjust the mix of milk which is sold in liquid form vs. processed foods (such as butter and cheese) depending on changing supply and demand. Milk supply contracts In the European Union, milk supply contracts are regulated by Article 148 of Regulation 1308/2013 – Establishing a common organisation of the markets in agricultural products and repealing Council Regulations (EEC) No 922/72, (EEC) No 234/79, (EC) No 1037/2001 and (EC) No 1234/2007 – which permits member states to require that the supply of milk from a farmer to a raw milk processor be backed by a written contract, or to ensure that the first purchaser of milk makes a written offer to the farmer, although in the latter case the farmer may not be required to enter into a contract. Thirteen EU member states, including France and Spain, have introduced laws on compulsory or mandatory written milk contracts (MWCs) between farmers and processors. The Scottish Government published an analysis of the dairy supply chain and the application of mandatory written contracts across the European Union in 2019, to evaluate the impact of the contracts where they have been adopted. In the UK, a voluntary code of best practice on contractual relationships in the dairy sector was agreed by industry during 2012: this set out minimum standards of good practice for contracts between producers and purchasers. During 2020, the UK government undertook a consultation exercise to determine which contractual measures, if any, would improve the resilience of the dairy industry for the future. The Australian government has also introduced a mandatory dairy code of conduct. Farming When it became necessary to milk larger numbers of cows, the cows would be brought to a shed or barn that was set up with stalls (milking stalls) where the cows could be confined while they were milked. One person could milk more cows this way, as many as 20 for a skilled worker. But having cows standing about in the yard and shed waiting to be milked is not good for the cow, as she needs as much time grazing in the paddock as possible.
It is usual to restrict the twice-daily milking to a maximum of an hour and a half each time. Whether one milks 10 or 1,000 cows, the total milking time should not exceed about three hours each day for any cow, as cows should be in stalls, lying down, as long as possible to increase comfort, which in turn aids milk production. A cow is physically milked for only about 10 minutes a day, depending on her milk letdown time and the number of milkings per day. As herd sizes increased, there was more need for efficient milking machines, sheds, milk-storage facilities (vats), bulk-milk transport and shed-cleaning capabilities, and for means of getting cows from paddock to shed and back. As herd numbers increased, so did the problems of animal health. In New Zealand two approaches to this problem have been used. The first was improved veterinary medicines (and the government regulation of the medicines) that the farmer could use. The other was the creation of veterinary clubs where groups of farmers would employ a veterinarian (vet) full-time and share those services throughout the year. It was in the vet's interest to keep the animals healthy and reduce the number of calls from farmers, rather than to ensure that the farmer needed to call for service and pay regularly. This daily milking routine goes on for the roughly 300 to 320 days per year that the cow stays in milk. Some small herds are milked once a day for about the last 20 days of the production cycle, but this is not usual for large herds. If a cow is left unmilked just once, she is likely to reduce milk production almost immediately, and the rest of the season may see her dried off (giving no milk) while still consuming feed. However, once-a-day milking is now being practised more widely in New Zealand for profit and lifestyle reasons. This is effective because the fall in milk yield is at least partially offset by labour and cost savings from milking once per day. This compares to some intensive farm systems in the United States that milk three or more times per day due to higher milk yields per cow and lower marginal labour costs. Farmers who are contracted to supply liquid milk for human consumption (as opposed to milk for processing into butter, cheese, and so on—see milk) often have to manage their herd so that the contracted number of cows are in milk the year round, or the required minimum milk output is maintained. This is done by mating cows outside their natural mating time so that the period when each cow in the herd is giving maximum production is in rotation throughout the year. Northern hemisphere farmers who keep cows in barns almost all the year usually manage their herds to give continuous production of milk so that they get paid all year round. In the southern hemisphere the cooperative dairying systems allow for two months of no productivity because their systems are designed to take advantage of maximum grass and milk production in the spring and because the milk processing plants pay bonuses in the dry (winter) season to carry the farmers through the mid-winter break from milking. It also means that cows have a rest from milk production when they are most heavily pregnant. Some year-round milk farms are penalised financially for overproduction at any time in the year by being unable to sell their overproduction at current prices. Artificial insemination (AI) is common in all high-production herds in order to improve the genetics of the female offspring which will be raised as replacements.
AI also reduces the need for keeping potentially dangerous bulls on the farm. Male calves are sold to be raised for beef or veal, or slaughtered due to lack of profitability. A cow will calve or freshen about once a year, until she is culled because of declining production, infertility or other health problems. Then the cow will be sold, most often going to slaughter. Industrial processing Dairy plants process the raw milk they receive from farmers so as to extend its marketable life. Two main types of processes are employed: heat treatment to ensure the safety of milk for human consumption and to lengthen its shelf-life, and dehydrating dairy products such as butter, hard cheese and milk powders so that they can be stored. Cream and butter Today, milk is separated in bulk by large machines into cream and skim milk. The cream is processed to produce various consumer products, depending on its thickness, its suitability for culinary uses and consumer demand, which differs from place to place and country to country. Some milk is dried and powdered; some is condensed (by evaporation), mixed with varying amounts of sugar, and canned. Most cream from New Zealand and Australian factories is made into butter. This is done by churning the cream until the fat globules coagulate and form a monolithic mass. This butter mass is washed and, sometimes, salted to improve keeping qualities. The residual buttermilk goes on to further processing. The butter is packaged (25 to 50 kg boxes) and chilled for storage and sale. At a later stage these packages are broken down into home-consumption-sized packs. Skimmed milk The product left after the cream is removed is called skim, or skimmed, milk. To make a consumable liquid, a portion of cream is returned to the skim milk to make low-fat milk (semi-skimmed) for human consumption. By varying the amount of cream returned, producers can make a variety of low-fat milks to suit their local market. Whole milk is also made by adding cream back to the skim to form a standardised product. Other products, such as calcium, vitamin D, and flavouring, are also added to appeal to consumers. Casein Casein is the predominant phosphoprotein found in fresh milk. It has a very wide range of uses, from being a filler for human foods, such as in ice cream, to the manufacture of products such as fabric, adhesives, and plastics. Cheese Cheese is another product made from milk. Whole milk is reacted to form curds that can be compressed, processed and stored to form cheese. In countries where milk is legally allowed to be processed without pasteurisation, a wide range of cheeses can be made using the bacteria found naturally in the milk. In most other countries, the range of cheeses is smaller and the use of artificial cheese curing is greater. Whey is also a byproduct of this process. Some people with lactose intolerance are able to eat certain types of cheese. This is because some traditionally made hard cheeses and soft ripened cheeses may create less reaction than the equivalent amount of milk, because of the processes involved. Fermentation and higher fat content contribute to lesser amounts of lactose. Traditionally made Emmental or Cheddar might contain 10% of the lactose found in whole milk. In addition, the ageing methods of traditional cheeses (sometimes over two years) reduce their lactose content to practically nothing. Commercial cheeses, however, are often manufactured by processes that do not have the same lactose-reducing properties.
Ageing of some cheeses is governed by regulations; in other cases there is no quantitative indication of the degree of ageing and concomitant lactose reduction, and lactose content is not usually indicated on labels. Whey In earlier times, whey or milk serum was considered to be a waste product and it was, mostly, fed to pigs as a convenient means of disposal. Beginning about 1950, and mostly since about 1980, lactose and many other products, mainly food additives, have been made from both casein and cheese whey. Yogurt Yogurt (or yoghurt) making is a process similar to cheese making, except that the process is arrested before the curd becomes very hard. Milk powders Milk is also processed by various drying processes into powders. Whole milk, skim milk, buttermilk, and whey products are dried into a powder form and used for human and animal consumption. The main difference between production of powders for human or for animal consumption is in the protection of the process and the product from contamination. Some people drink milk reconstituted from powdered milk, because milk is about 88% water and it is much cheaper to transport the dried product. Other milk products Kumis is produced commercially in Central Asia. Although traditionally made from mare's milk, modern industrial variants may use cow's milk. In India, which produces 22% of global milk production (as of 2018), a range of traditional milk-based products are produced commercially. Milking Originally, milking and processing took place on the dairy farm itself. Later, cream was separated from the milk by machine on the farm, and transported to a factory to be made into butter. The skim milk was fed to pigs. This suited the high cost of transport (only the smallest-volume, highest-value product was moved), the primitive trucks and the poor quality of roads. Only farms close to factories could afford to deliver whole milk, which was essential for cheesemaking in industrial quantities. Originally milk was distributed in 'pails', lidded buckets with handles. These proved impractical for transport by road or rail, and so the milk churn was introduced, based on the tall conical shape of the butter churn. Later, large railway containers such as the British Railway Milk Tank Wagon were introduced, enabling the transport of larger quantities of milk, and over longer distances. The development of refrigeration and better road transport, in the late 1950s, meant that most farmers milk their cows and only temporarily store the milk in large refrigerated bulk tanks, from where it is later transported by truck to central processing facilities. In many European countries, particularly the United Kingdom, milk is then delivered direct to customers' homes by a milk float. In the United States, a dairy cow produced about of milk per year in 1950, while the average Holstein cow in 2019 produces more than of milk per year. Milking machines Milking machines are used to harvest milk from cows when manual milking becomes inefficient or labour-intensive. One early model was patented in 1907. The milking unit is the portion of a milking machine that removes milk from the udder. It is made up of a claw, four teatcups (shells and rubber liners), a long milk tube, a long pulsation tube, and a pulsator. The claw is an assembly that connects the short pulse tubes and short milk tubes from the teatcups to the long pulse tube and long milk tube; together these form the cluster assembly. Claws are commonly made of stainless steel, plastic, or both.
Teatcups are composed of a rigid outer shell (stainless steel or plastic) that holds a soft inner liner or inflation. Transparent sections in the shell may allow viewing of liner collapse and milk flow. The annular space between the shell and liner is called the pulse chamber. Milking machines work in a way that is different from hand milking or calf suckling. Continuous vacuum is applied inside the soft liner to massage milk from the teat by creating a pressure difference across the teat canal (the opening at the end of the teat). Vacuum also helps keep the machine attached to the cow. The vacuum applied to the teat causes congestion of teat tissues (accumulation of blood and other fluids). Atmospheric air is admitted into the pulsation chamber about once per second (the pulsation rate) to allow the liner to collapse around the end of the teat and relieve congestion in the teat tissue. The ratio of the time that the liner is open (milking phase) to the time it is closed (rest phase) is called the pulsation ratio. The four streams of milk from the teatcups are usually combined in the claw and transported to the milkline, or to a collection bucket (usually sized to the output of one cow), in a single milk hose. Milk is then transported (manually in buckets) or with a combination of airflow and mechanical pump to a central storage vat or bulk tank. Milk is refrigerated on the farm in most countries, either by passing through a heat-exchanger or in the bulk tank, or both. In a typical bucket milking system, the stainless steel bucket stands on the far side of the cow. Two rigid stainless steel teatcup shells are applied to the front two quarters of the udder, with the top of the flexible liner visible at the top of the shells, and the short milk tubes and short pulsation tubes extending from the bottom of the shells to the claw. The bottom of the claw is transparent to allow observation of milk flow. When milking is completed, the vacuum to the milking unit is shut off and the teatcups are removed. Milking machines keep the milk enclosed and safe from external contamination. The interior 'milk contact' surfaces of the machine are kept clean by a manual or automated washing procedure implemented after milking is completed. Milk contact surfaces must comply with regulations requiring food-grade materials (typically stainless steel and special plastics and rubber compounds) and must be easily cleaned. Most milking machines are powered by electricity but, in case of electrical failure, there can be an alternative means of motive power, often an internal combustion engine, for the vacuum and milk pumps. Milking shed layouts Bail-style sheds This type of milking facility was the first development, after open-paddock milking, for many farmers. The building was a long, narrow, lean-to shed that was open along one long side. The cows were held in a yard at the open side and, when they were about to be milked, they were positioned in one of the bails (stalls). Usually, the cows were restrained in the bail with a breech chain and a rope to restrain the outer back leg. The cow could not move about excessively, and the milker could expect not to be kicked or trampled while sitting on a (three-legged) stool and milking into a bucket. When each cow was finished, she backed out into the yard again. The UK bail, initially developed by Wiltshire dairy farmer Arthur Hosier, was a mobile shed with six standings and steps that the cow mounted, so the herdsman did not have to bend so low.
The milking equipment was much as it is today: a vacuum from a pump, pulsators, and a claw-piece with pipes leading to the four shells and liners that stimulate and suck the milk from the teat. The milk went into churns, via a cooler. As herd sizes increased, a door was set into the front of each bail so that, when the milking was done for any cow, the milker could, after undoing the leg-rope and with a remote link, open the door and allow her to exit to the pasture. The door was closed, the next cow walked into the bail, and was secured. When milking machines were introduced, bails were set in pairs so that a cow was being milked in one paired bail while the other could be prepared for milking. When one was finished, the machine's cups were swapped to the other cow. This is the same as for swingover milking parlours, described below, except that the cups are loaded onto the udder from the side. As herd numbers increased, it was easier to double up the cup-sets and milk both cows simultaneously than to increase the number of bails. About 50 cows an hour can be milked in a shed with 8 bails by one person. Using the same teatcups for successive cows carries the danger of transmitting infections, such as mastitis, from one cow to another. Some farmers have devised their own ways to disinfect the clusters between cows. Herringbone milking parlours In herringbone milking sheds, or parlours, cows enter in single file and line up almost perpendicular to the central aisle of the milking parlour, on both sides of a central pit in which the milker works (the layout resembles a fishbone, with the ribs representing the cows and the spine the milker's working area; the cows face outward). After the udder and teats are washed, the cups of the milking machine are applied to the cows, from the rear of their hind legs, on both sides of the working area. Large herringbone sheds can milk up to 600 cows efficiently with two people. Swingover milking parlours Swingover parlours are the same as herringbone parlours except that they have only one set of milking cups, shared between the two rows of cows: as one side is being milked, the cows on the other side are moved out and replaced with unmilked ones. The advantage of this system is that it is less costly to equip; however, it operates at slightly better than half speed, and one would not normally try to milk more than about 100 cows with one person. Rotary milking sheds Rotary milking sheds (also known as rotary milking parlours) consist of a turntable with about 12 to 100 individual stalls for cows around the outer edge. A 'good' rotary will be operated by one milker with 24–32 stalls, or by two milkers with about 48–50 or more stalls. The turntable is turned by an electric-motor drive at such a rate that one full turn takes the time needed for a cow to be milked completely. As an empty stall passes the entrance, a cow steps on, facing the center, and rotates with the turntable. The next cow moves into the next vacant stall, and so on. The operator, or milker, cleans the teats, attaches the cups, and carries out any other feeding or husbandry operations that are necessary. Cows are milked as the platform rotates. The milker, or an automatic device, removes the milking machine cups, and the cow backs out and leaves at an exit just before the entrance. The rotary system is capable of milking very large herds of over a thousand cows. Automatic milking sheds Automatic milking or 'robotic milking' sheds can be seen in Australia, New Zealand, the U.S., Canada, and many European countries.
Current automatic milking sheds use the voluntary milking (VM) method. These allow the cows to present themselves voluntarily for milking at any time of the day or night, although repeat visits may be limited by the farmer through computer software. A robot arm is used to clean teats and apply milking equipment, while automated gates direct cow traffic, eliminating the need for the farmer to be present during the process. The entire process is computer controlled. Supplementary accessories in sheds Farmers soon realised that a milking shed was a good place to feed cows supplementary foods that overcame local dietary deficiencies or added to the cows' wellbeing and production. Each bail might have a box into which such feed is delivered as the cow arrives, so that she is eating while being milked. A computer can read the eartag of each animal to ration the correct individual supplement. A close alternative is to use 'out-of-parlour feeders': stalls that respond to a transponder around the cow's neck, programmed to provide each cow with a supplementary feed, the quantity dependent on her production, stage in lactation, and the benefits of the main ration. The holding yard at the entrance of the shed is important as a means of keeping cows moving into the shed. Most yards have a powered gate that ensures that the cows are kept close to the shed. Water is a vital commodity on a dairy farm: cows drink about 20 gallons (80 litres) a day, and sheds need water for cooling and cleaning. Pumps and reservoirs are common at milking facilities. Water can be warmed by heat transfer with milk. Temporary milk storage Milk from a cow is transported to a nearby storage vessel by the airflow leaking around the cups on the cow or by a special 'air inlet' (5–10 L/min free air) in the claw. From there it is pumped by a mechanical pump and cooled by a heat exchanger. The milk is then stored in a large vat, or bulk tank, which is usually refrigerated until collection for processing. Waste disposal and wastewater management In countries where cows are grazed outside year-round, waste disposal issues still need to be dealt with. The most concentrated waste is at the milking shed, where the animal waste may be liquefied (during the water-washing process) or left in a more solid form, in either case to be returned to farm ground as organic fertiliser. In the associated milk processing factories, most of the waste is washing water that is treated, usually by composting, and spread on farm fields in either liquid or solid form. This is very different from half a century ago, when the main products were butter, cheese, and casein, and the rest of the milk had to be disposed of as waste (sometimes as animal feed). In the dairy industry, two main types of wastewater are produced: dairy wastewater and cheese whey. Dairy wastewater consists of material losses from the dairy products, effluents from the washing of tanks and equipment, and sanitary wastewater from toilets and sinks. The typical concentrations of biochemical oxygen demand (BOD) and total Kjeldahl nitrogen for dairy wastewater range from 1200 to 5000 mg/L and from 30 to 200 mg/L, respectively. Cheese whey is the liquid remaining after the formation of curds. It contains significant amounts of carbohydrates, proteins, lactic acid, fats, and salts, and its BOD value can exceed 40,000 mg/L.
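To give a rough sense of what these concentrations mean in practice, the sketch below converts a BOD concentration and a daily flow into an organic load in kilograms per day. The concentration figures are the ones quoted above; the daily flow volumes are invented for the example and are not from this article:

```python
# Rough organic-load calculation for dairy effluents.
# BOD concentrations are from the text; the daily flows are assumed
# example values only.

def bod_load_kg_per_day(flow_m3_per_day: float, bod_mg_per_l: float) -> float:
    """BOD load in kg/day: 1 mg/L equals 1 g/m3, i.e. 0.001 kg/m3."""
    return flow_m3_per_day * bod_mg_per_l / 1000.0

# Hypothetical plant: 100 m3/day of dairy wastewater at the top of the
# quoted range (5000 mg/L), plus 10 m3/day of cheese whey at 40,000 mg/L.
print(bod_load_kg_per_day(100, 5000))   # 500.0 kg BOD/day
print(bod_load_kg_per_day(10, 40000))   # 400.0 kg BOD/day
# A tenth of the volume of whey carries nearly the same organic load,
# which is one reason whey is usually handled separately.
```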
Dairy wastewater management usually includes equalisation, neutralisation, and physical separation followed by biological treatment, while cheese whey is treated in anaerobic digesters or passed through membranes for protein recovery. In dairy-intensive areas, various methods have been proposed for disposing of large quantities of milk. Applying milk to land at large rates, or disposing of it in a pit, is problematic, as the residue from the decomposing milk blocks the soil pores and thereby reduces the water infiltration rate through the soil profile. As recovery from this effect can take time, any land-based application needs to be well managed and considered. Other waste milk disposal methods commonly employed include solidification and disposal at a solid waste landfill, disposal at a wastewater treatment plant, or discharge into a sanitary sewer. Associated diseases Dairy products manufactured under unsanitary or unsuitable conditions have an increased chance of containing bacteria. Proper sanitation practices help to reduce the rate of bacterial contamination, and pasteurisation greatly decreases the amount of contaminated milk that reaches the consumer. Many countries require government oversight and regulation of dairy production, including requirements for pasteurisation. Leptospirosis is an infection that can be transmitted to people who work in dairy production through exposure to urine or to contaminated water or soil. Cowpox is a virus that today is rarely found in either cows or humans. It is a historically important disease, as it led to the first vaccination against the now eradicated smallpox. Tuberculosis can be transmitted from cattle, mainly via milk products that are unpasteurised. The disease has been eradicated from many countries by testing for the disease and culling suspected animals. Brucellosis is a bacterial disease transmitted to humans by dairy products and direct animal contact. Brucellosis has been eradicated from certain countries by testing for the disease and culling suspected animals. Listeria is a bacterial disease associated with unpasteurised milk, and can affect some cheeses made in traditional ways. Careful observance of the traditional cheesemaking methods achieves reasonable protection for the consumer. Crohn's disease has been linked to infection with the bacterium M. paratuberculosis, which has been found in pasteurised retail milk in the UK and the USA. M. paratuberculosis causes a similar disorder, Johne's disease, in livestock. Animal rights A portion of the population, including vegans and many Jains, objects to dairy production as unethical, cruel to animals, and environmentally deleterious, and does not consume dairy products given these ethical concerns. These critics state that cattle suffer under the conditions employed by the dairy industry and that they are eventually killed for meat once their milk production declines. Animal rights scholars consider dairy as part of the animal–industrial complex. According to Kathleen Stachowski, the animal–industrial complex "naturalizes the human as a consumer of other animals," and its enormity includes "its long reach into our lives, and how well it has done its job normalizing brutality toward the animals whose very existence is forgotten". She states that the corporate dairy industry, the government, and schools form the animal–industrial complex troika of immense influence, which hides from the public's view the animal rights violations and cruelties happening within the dairy industry.
Stachowski also states that the troika "hijacks" schoolchildren by promoting milk in the K-12 nutrition education curriculum and making them "eat the products of industrial animal production". Bovine growth hormone In 1937, it was found that bovine somatotropin (BST, or bovine growth hormone) would increase the yield of milk. Several pharmaceutical companies developed commercial rBST products, and they have been approved for use in the U.S., Mexico, Brazil, India, Russia, and at least ten other countries. The World Health Organization and others have stated that dairy products and meat from BST-treated cows are safe for human consumption. However, based on negative animal welfare effects, rBST has not been allowed in Canada, Australia, New Zealand, Japan, Israel, or the European Union since 2000. In the U.S. it has lost popularity owing to consumer demand for milk from rBST-free cows; only about 17% of all cows in America now receive rBST. Climate change and dairy production
Technology
Forms
null
56230
https://en.wikipedia.org/wiki/Cichlid
Cichlid
Cichlids are fish from the family Cichlidae in the order Cichliformes. Traditionally, cichlids were classed in a suborder, the Labroidei, along with the wrasses (Labridae), in the order Perciformes, but molecular studies have contradicted this grouping. On the basis of fossil evidence, the family first appeared in Argentina during the Early Eocene epoch, about 48.6 million years ago; however, molecular clock estimates have placed the family's origin as far back as 67 million years ago, during the late Cretaceous period. The closest living relative of cichlids is probably the convict blenny, and both families are classified in the 5th edition of Fishes of the World as the two families in the Cichliformes, part of the subseries Ovalentaria. This family is large, diverse, and widely dispersed. At least 1,650 species have been scientifically described, making it one of the largest vertebrate families. New species are discovered annually, and many species remain undescribed. The actual number of species is therefore unknown, with estimates varying between 2,000 and 3,000. Many cichlids, particularly tilapia, are important food fishes, while others, such as the Cichla species, are valued game fish. The family also includes many popular freshwater aquarium fish kept by hobbyists, including the angelfish, oscars, and discus. Cichlids have the largest number of endangered species among vertebrate families, most in the haplochromine group. Cichlids are particularly well known for having evolved rapidly into many closely related but morphologically diverse species within large lakes, particularly Lakes Tanganyika, Victoria, Malawi, and Edward. Their diversity in the African Great Lakes is important for the study of speciation in evolution. Many cichlids introduced into waters outside of their natural range have become nuisances. All cichlids practice some form of parental care for their eggs and fry, usually in the form of guarding the eggs and fry or mouthbrooding. Anatomy and appearance Cichlids span a wide range of body sizes, from species as small as in length (e.g., female Neolamprologus multifasciatus) to much larger species approaching in length (Boulengerochromis and Cichla). As a group, cichlids exhibit a similar diversity of body shapes, ranging from strongly laterally compressed species (such as Altolamprologus, Pterophyllum, and Symphysodon) to species that are cylindrical and highly elongated (such as Julidochromis, Teleogramma, Teleocichla, Crenicichla, and Gobiocichla). Generally, however, cichlids tend to be of medium size, ovate in shape, slightly laterally compressed, and generally similar to the North American sunfishes in morphology, behavior, and ecology. Cichlids share a single key trait: the fusion of the lower pharyngeal bones into a single tooth-bearing structure. A complex set of muscles allows the upper and lower pharyngeal bones to be used as a second set of jaws for processing food, allowing a division of labor between the "true jaws" (mandibles) and the "pharyngeal jaws". Cichlids are efficient and often highly specialized feeders that capture and process a very wide variety of food items. This is assumed to be one reason why they are so diverse.
The features that distinguish them from the other families in the Labroidei include: a single nostril on each side of the forehead, instead of two; no bony shelf below the orbit of the eye; division of the lateral line organ into two sections, one on the upper half of the flank and a second along the midline of the flank from about halfway along the body to the base of the tail (except for the genera Teleogramma and Gobiocichla); a distinctively shaped otolith; and the small intestine's exit from the left side of the stomach, instead of the right side as in other Labroidei. Taxonomy Kullander (1998) recognizes eight subfamilies of cichlids: the Astronotinae, Cichlasomatinae, Cichlinae, Etroplinae, Geophaginae, Heterochromidinae, Pseudocrenilabrinae, and Retroculinae. A ninth subfamily, the Ptychochrominae, was later recognized by Sparks and Smith. Cichlid taxonomy is still debated, and classification of genera cannot yet be definitively given. A comprehensive system of assigning species to monophyletic genera is still lacking, and there is not complete agreement on what genera should be recognized in this family. As an example of the classification problems, Kullander placed the African genus Heterochromis phylogenetically within Neotropical cichlids, although later papers concluded otherwise. Other problems center upon the identity of the putative common ancestor for the Lake Victoria superflock (many closely related species sharing a single habitat), and the ancestral lineages of Lake Tanganyikan cichlids. Phylogeny derived from morphological characters differs at the genus level from phylogeny based on genetic loci. A consensus remains that the Cichlidae as a family are monophyletic. In cichlid taxonomy, dentition was formerly used as a classifying characteristic, but this was complicated because in many cichlids tooth shapes change with age, due to wear, and cannot be relied upon. Genome sequencing and other technologies have transformed cichlid taxonomy. Alternatively, all cichlid species native to the New World can be classified in the subfamily Cichlinae, while all those native to the Old World can be placed in the Etroplinae. Distribution and habitat Cichlids are one of the largest vertebrate families in the world. They are most diverse in Africa and South America. Africa alone is host to at least an estimated 1,600 species. Central America and Mexico have about 120 species, found as far north as the Rio Grande in southern Texas. Madagascar has its own distinctive species (Katria, Oxylapia, Paratilapia, Paretroplus, Ptychochromis, and Ptychochromoides), only distantly related to those on the African mainland. Native cichlids are largely absent in Asia, except for nine species in Israel, Lebanon, and Syria (Astatotilapia flaviijosephi, Oreochromis aureus, O. niloticus, Sarotherodon galilaeus, Coptodon zillii, and Tristramella spp.), two in Iran (Iranocichla), and three in India and Sri Lanka (Etroplus and Pseudetroplus). Disregarding Trinidad and Tobago (where the few native cichlids are members of genera that are widespread on the South American mainland), the three species of the genus Nandopsis are the only cichlids of the Antilles in the Caribbean, specifically Cuba and Hispaniola. Europe, Australia, Antarctica, and North America north of the Rio Grande drainage have no native cichlids, although in Florida, Hawaii, Japan, northern Australia, and elsewhere, feral populations of cichlids have become established as exotics.
Although most cichlids are found at relatively shallow depths, several exceptions exist. The deepest known occurrences are Trematocara at more than below the surface in Lake Tanganyika. Others found in relatively deep waters include species such as Alticorpus macrocleithrum and Pallidochromis tokolosh down to below the surface in Lake Malawi, and the whitish (nonpigmented) and blind Lamprologus lethops, which is believed to live as deep as below the surface in the Congo River. Cichlids are less commonly found in brackish and saltwater habitats, though many species tolerate brackish water for extended periods; Mayaheros urophthalmus, for example, is equally at home in freshwater marshes and mangrove swamps, and lives and breeds in saltwater environments such as the mangrove belts around barrier islands. Several species of Tilapia, Sarotherodon, and Oreochromis are euryhaline and can disperse along brackish coastlines between rivers. Only a few cichlids, however, inhabit primarily brackish or salt water, most notably Etroplus maculatus, Etroplus suratensis, and Sarotherodon melanotheron. Perhaps the most extreme habitats for cichlids are the warm hypersaline lakes where the members of the genera Alcolapia and Danakilia are found. Lake Abaeded in Eritrea encompasses the entire distribution of D. dinicolai, and its temperature ranges from . With the exception of the species from Cuba, Hispaniola, and Madagascar, cichlids have not reached any oceanic island and have a predominantly Gondwanan distribution, showing the precise sister relationships predicted by vicariance: Africa-South America and India-Madagascar. The dispersal hypothesis, in contrast, requires cichlids to have negotiated thousands of kilometers of open ocean between India and Madagascar without colonizing any other island or, for that matter, crossing the Mozambique Channel to Africa. Although the vast majority of Malagasy cichlids are entirely restricted to fresh water, Ptychochromis grandidieri and Paretroplus polyactis are commonly found in coastal brackish water and are apparently salt tolerant, as is also the case for Etroplus maculatus and E. suratensis from India and Sri Lanka. Ecology Feeding Within the cichlid family, carnivores, herbivores, omnivores, planktivores, and detritivores are known, meaning the Cichlidae encompass essentially the full range of food consumption possible in the animal kingdom. Various species have morphological adaptations for specific food sources, but most cichlids consume a wider variety of foods based on availability. Carnivorous cichlids can be further divided into piscivorous and molluscivorous forms, since the morphology and hunting behavior differ greatly between the two categories. Piscivorous cichlids eat other fish, fry, larvae, and eggs. Some species eat the offspring of mouthbrooders by head-ramming, wherein the hunter shoves its head into the mouth of a female to expel her young and eat them. Molluscivorous cichlids have several hunting strategies amongst the varieties within the group. Lake Malawi cichlids consume substrate and filter it through their gill rakers to eat the mollusks that were in it. Gill rakers are finger-like structures that line the gills of some fish to catch any food that might escape through the gills. Many cichlids are primarily herbivores, feeding on algae (e.g. Petrochromis) and plants (e.g. Etroplus suratensis). Small animals, particularly invertebrates, are only a minor part of their diets.
Other cichlids are detritivores and eat organic material known as Aufwuchs; among these species are the tilapiines of the genera Oreochromis, Sarotherodon, and Tilapia. Other cichlids are predatory and eat little or no plant matter. These include generalists that catch a variety of small animals, including other fishes and insect larvae (e.g. Pterophyllum), as well as a variety of specialists. Trematocranus is a specialized snail-eater, while Pungu maclareni feeds on sponges. A number of cichlids feed on other fish, either entirely or in part. Crenicichla species are stealth predators that lunge from concealment at passing small fish, while Rhamphochromis species are open-water pursuit predators that chase down their prey. Paedophagous cichlids such as the Caprichromis species eat other species' eggs or young, in some cases ramming the heads of mouthbrooding species to force them to disgorge their young. Among the more unusual feeding strategies are those of Corematodus, Docimodus evelynae, Plecodus, Perissodus, and Genyochromis spp., which feed on the scales and fins of other fishes, a behavior known as lepidophagy, along with the death-mimicking behaviour of Nimbochromis and Parachromis species, which lie motionless, luring small fish to their side prior to ambush. This variety of feeding styles has helped cichlids to inhabit similarly varied habitats. Their pharyngeal teeth (in the throat) afford cichlids so many "niche" feeding strategies because the jaws pick and hold food while the pharyngeal teeth crush the prey. Behavior Aggression Aggressive behavior in cichlids is ritualized and consists of multiple displays used to seek confrontation and to evaluate competitors, coinciding with temporal proximity to mating. Displays of ritualized aggression in cichlids include a remarkably rapid change in coloration, during which a dominant territorial male assumes a more vivid and brighter coloration, while a subordinate or "nonterritorial" male assumes a dull, pale coloration. In addition to color displays, cichlids employ their lateral lines to sense movements of water around their opponents and so evaluate the competing male's physical traits and fitness. Male cichlids are very territorial due to the pressure of reproduction, and establish their territory and social status by physically driving out challenging males (novel intruders) through lateral displays (parallel orientation, uncovering gills), biting, or mouth fights (head-on collisions of open mouths, measuring jaw sizes, and biting each other's jaws). The cichlid social dichotomy is composed of a single dominant male with multiple subordinates, where the physical aggression of males becomes a contest for resources (mates, territory, food). Female cichlids prefer to mate with a successful alpha male with vivid coloration, whose territory has food readily available. Mating Cichlids mate either monogamously or polygamously. The mating system of a given cichlid species is not consistently associated with its brooding system. For example, although most monogamous cichlids are not mouthbrooders, Chromidotilapia, Gymnogeophagus, Spathodus, and Tanganicodus all include, or consist entirely of, monogamous mouthbrooders. In contrast, numerous open- or cave-spawning cichlids are polygamous; examples include many Apistogramma, Lamprologus, Nannacara, and Pelvicachromis species.
Most adult male cichlids, specifically those in the cichlid tribe Haplochromini, exhibit a unique pattern of oval-shaped color dots on their anal fins. These markings, known as egg spots, aid in the mouthbrooding mechanisms of cichlids. The egg spots consist of carotenoid-based pigment cells, which are costly for the organism to produce, given that fish are not able to synthesize their own carotenoids. The mimicry of egg spots is used by males for the fertilization process. Mouthbrooding females lay eggs and immediately snatch them up with their mouths. Over millions of years, male cichlids have evolved egg spots to initiate the fertilization process more efficiently. When the female is snatching up the eggs into her mouth, the male gyrates his anal fin, which displays the egg spots. Afterwards, the female, mistaking these spots for her own eggs, places her mouth against the anal fin (specifically the genital papilla) of the male, at which point he discharges sperm into her mouth and fertilizes the eggs. True egg spots are a yellow, red, or orange inner circle with a colorless ring surrounding the shape. Through phylogenetic analysis, using the mitochondrial ND2 gene, true egg spots are thought to have evolved in the common ancestor of the Astatoreochromis lineage and the modern Haplochromini species. This ancestor was most likely riverine in origin, based on the most parsimonious representation of habitat type in the cichlid family. The presence of egg spots in a turbid riverine environment would seem particularly beneficial and necessary for intraspecies communication. Two pigmentation genes are found to be associated with egg-spot patterning and color arrangement. These are fhl2-a and fhl2-b, which are paralogs. These genes aid in pattern formation and cell-fate determination in early embryonic development. The highest expression of these genes was temporally correlated with egg-spot formation. A short, interspersed, repetitive element was also seen to be associated with egg spots. Specifically, it was evident upstream of the transcriptional start site of fhl2 only in Haplochromini species with egg spots. Self-fertilization The cichlid Benitochromis nigrodorsalis from Western Africa ordinarily undergoes biparental reproduction, but is also able to undergo facultative (optional) selfing (self-fertilization). Facultative selfing may be an adaptive option when a mating partner is unavailable. Brood care Pit spawning in cichlids Pit spawning, also referred to as substrate breeding, is a behavior in cichlid fish in which a fish builds a pit in the sand or ground, where a pair courts and subsequently spawns. Many different factors go into this behavior of pit spawning, including female choice of the male and pit size, as well as the male defense of the pits once they are dug in the sand. Cichlids are often divided into two main groups: mouthbrooders and substrate brooders. Different parental investment levels and behaviors are associated with each type of reproduction. As pit spawning is a reproductive behavior, many different physiological changes that affect social interaction occur in the cichlid while this process takes place. Many different species pit spawn, and many different morphological changes occur because of this behavioral experience. Pit spawning is an evolved behavior across the cichlid group. Phylogenetic evidence from cichlids in Lake Tanganyika could be helpful in uncovering the evolution of these reproductive behaviors.
Several important behaviors are associated with pit spawning, including parental care, food provisioning, and brood guarding. Mouth brooding vs. pit spawning One of the differences studied in African cichlids is reproductive behavior: some species pit spawn and some are known as mouth brooders. Mouthbrooding is a reproductive technique in which the fish scoops up its eggs and fry with its mouth for protection. While this behavior differs from species to species in the details, the general basis of the behavior is the same. Mouthbrooding also affects how the fish choose their mates and breeding grounds. In a 1995 study, Nelson found that pit-spawning females choose males for mating based on the size of the pit that they dig, as well as on some of the males' physical characteristics. Pit spawning also differs from mouth brooding in egg size and postnatal care. Eggs that have been hatched from pit-spawning cichlids are usually smaller than those of mouthbrooders: pit-spawners' eggs are usually around 2 mm, while mouthbrooders' are typically around 7 mm. While different behaviors take place postnatally in mouthbrooders and pit spawners, some similarities exist. Females of both mouthbrooding and pit-spawning cichlids take care of their young after they are hatched. In some cases, both parents exhibit care, but the female always cares for the eggs and newly hatched fry. Pit spawning process Many species of cichlids use pit spawning, but one of the less commonly studied species that exhibits this behavior is the Neotropical Cichlasoma dimerus. This fish is a substrate breeder that displays biparental care after the fry have hatched from their eggs. One study examined the reproductive and social behaviors of this species to see how it accomplishes pit spawning, including physiological factors such as hormone levels, color changes, and plasma cortisol levels. The entire spawning process could take about 90 minutes, and 400–800 eggs could be laid. The female deposits about 10 eggs at a time, attaching them to the spawning surface, which may be a pit constructed on the substrate or another surface. The number of eggs laid correlated with the space available on the substrate. Once the eggs were attached, the male swam over them and fertilized them. The parents would then dig pits in the sand, 10–20 cm wide and 5–10 cm deep, into which the larvae were transferred after hatching. Larvae began swimming eight days after fertilization, at which point parenting behaviors and some of the measured physiological factors changed. Color changes In the same study, color changes were present before and after the pit spawning occurred. For example, after the larvae were transferred and the pits began to be guarded, the parents' fins turned a dark grey color. In another study, of the rainbow cichlid, Herotilapia multispinosa, color changes occurred throughout the spawning process. Before spawning, the rainbow cichlid was an olive color with grey bands. Once spawning behaviors started, the body and fins of the fish became a more golden color. When the egg laying was finished, the area from the pelvic fin back to the caudal fin darkened and blackened in both the males and the females. Pit sizes Females prefer a bigger pit size when choosing where to lay their eggs. Differences are seen in the sizes of the pits created, as well as in their morphology. Evolutionary differences between species of fish may cause them to create either pits or castles when spawning.
The differences lay in the way each species fed, their macrohabitats, and the abilities of their sensory systems. Evolution Cichlids are renowned for their recent, rapid evolutionary radiation, both across the entire clade and within different communities across separate habitats. Within their phylogeny, many parallel instances are seen of lineages evolving the same trait, and multiple cases of reversion to an ancestral trait. The family Cichlidae arose between 80 and 100 million years ago within the order Perciformes (perch-like fishes). Cichlidae can be split into a few groups based on geographic location: Madagascar, Indian, African, and Neotropical (or South American). The most famous and diverse group, the African cichlids, can be further split either into Eastern and Western varieties, or into groups depending on which lake the species is from: Lake Malawi, Lake Victoria, or Lake Tanganyika. Of these subgroups, the Madagascar and Indian cichlids are the most basal and least diverse. Of the African cichlids, the West African or Lake Tanganyika cichlids are the most basal. Cichlids' common ancestor is believed to have been a pit-spawning species. Both Madagascar and Indian cichlids retain this feature. However, of the African cichlids, all extant substrate-brooding species originate solely from Lake Tanganyika. The ancestors of the Lake Malawi and Lake Victoria cichlids were mouthbrooders. Similarly, only around 30% of South American cichlids are thought to retain the ancestral substrate-brooding trait. Mouthbrooding is thought to have evolved individually up to 14 times, and a return to substrate brooding as many as three separate times across both African and Neotropical species. Associated behaviors Cichlids have a great variety of behaviors associated with substrate brooding, including courtship and parental care alongside the brooding and nest-building behaviors needed for pit spawning. Cichlids' behavior typically revolves around establishing and defending territories when not courting, brooding, or raising young. Encounters between males and males or between females and females are agonistic, while an encounter between a male and a female leads to courtship. Courtship in male cichlids follows the establishment of some form of territory, sometimes coupled with building a bower to attract mates. After this, males may attempt to attract female cichlids to their territories by a variety of lekking display strategies or otherwise seek out females of their species. However, at the time of spawning, cichlids undergo a behavioral change such that they become less receptive to outside interactions. This is often coupled with some physiological change in appearance. Brood care Cichlids can have maternal, paternal, or biparental care. Maternal care is most common among mouthbrooders, but cichlids' common ancestor is thought to have exhibited paternal-only care. Other individuals outside of the parents may also play a role in raising young; in the biparental daffodil cichlid (Neolamprologus pulcher), closely related satellite males (males that surround other males' territories and attempt to mate with female cichlids in the area) help rear the primary males' offspring as well as their own. A common form of brood care involves food provisioning. For example, females of the lyretail cichlid (Neolamprologus modabu) dig at the sandy substrate to push nutritional detritus and zooplankton into the surrounding water. Adults of N.
modabu perform this strategy to collect food for themselves, but dig more when offspring are present, likely to feed their fry. This substrate-disruption strategy is rather common and can also be seen in convict cichlids (Cichlasoma nigrofasciatum). Other cichlids grow a nutrient-rich skin mucus that they feed to their young, while still others chew and distribute caught food to their offspring. These strategies, however, are less common in pit-spawning cichlids. Cichlids have highly organized breeding activities. All species show some form of parental care for both eggs and larvae, often nurturing free-swimming young until they are weeks or months old. Communal parental care, in which multiple monogamous pairs care for a mixed school of young, has also been observed in multiple cichlid species, including Amphilophus citrinellus, Etroplus suratensis, and Tilapia rendalli. Comparably, the fry of Neolamprologus brichardi, a species that commonly lives in large groups, are protected not only by the adults, but also by older juveniles from previous spawns. Several cichlids, including discus (Symphysodon spp.), some Amphilophus species, Etroplus, and Uaru species, feed their young with a skin secretion from mucous glands. The species Neolamprologus pulcher uses a cooperative breeding system, in which one breeding pair has many helpers that are subordinate to the dominant breeders. Parental care falls into one of four categories: substrate or open brooding, secretive cave brooding (also known as guarding speleophils), and at least two types of mouthbrooding: ovophile mouthbrooding and larvophile mouthbrooding. Open brooding Open- or substrate-brooding cichlids lay their eggs in the open, on rocks, leaves, or logs. Examples of open-brooding cichlids include Pterophyllum and Symphysodon species and Anomalochromis thomasi. Male and female parents usually engage in differing brooding roles. Most commonly, the male patrols the pair's territory and repels intruders, while the female fans water over the eggs, removing the infertile ones, and leads the fry while foraging. Both sexes are able to perform the full range of parenting behaviours. Cave brooding Secretive cave-spawning cichlids lay their eggs in caves, crevices, holes, or discarded mollusc shells, frequently attaching the eggs to the roof of the chamber. Examples include Pelvicachromis spp., Archocentrus spp., and Apistogramma spp. Free-swimming fry and parents communicate in captivity and in the wild. Frequently, this communication is based on body movements, such as shaking and pelvic fin flicking. In addition, open- and cave-brooding parents assist in finding food resources for their fry. Multiple Neotropical cichlid species perform leaf-turning and fin-digging behaviors. Ovophile mouthbrooding Ovophile mouthbrooders incubate their eggs in their mouths as soon as they are laid, and frequently mouthbrood free-swimming fry for several weeks. Examples include many endemics of the East African Rift lakes (Lake Malawi, Lake Tanganyika, and Lake Victoria), e.g. Maylandia, Pseudotropheus, Tropheus, and Astatotilapia burtoni, along with some South American cichlids such as Geophagus steindachneri. Larvophile mouthbrooding Larvophile mouthbrooders lay eggs in the open or in a cave and take the hatched larvae into the mouth. Examples include some variants of Geophagus altifrons, and some Aequidens, Gymnogeophagus, and Satanoperca, as well as Oreochromis mossambicus and Oreochromis niloticus. Mouthbrooders, whether of eggs or larvae, are predominantly female.
Exceptions that also involve the males include eretmodine cichlids (the genera Spathodus, Eretmodus, and Tanganicodus), some Sarotherodon species (such as Sarotherodon melanotheron), Chromidotilapia guentheri, and some Aequidens species. This method appears to have evolved independently in several groups of African cichlids. Speciation Cichlids provide scientists with a unique perspective on speciation, having become extremely diverse in the recent geological past, those of Lake Victoria actually within the last 10,000 to 15,000 years, a small fraction of the millions of years taken for Galápagos finch speciation in Darwin's textbook case. Some of the contributing factors to their diversification are believed to be the various forms of prey processing displayed by cichlid pharyngeal jaw apparatuses. These different jaw apparatuses allow for a broad range of feeding strategies, including algae scraping, snail crushing, planktivory, piscivory, and insectivory. Some cichlids can also show phenotypic plasticity in their pharyngeal jaws, which can also help lead to speciation. In response to different diets or food scarcity, members of the same species can display different jaw morphologies that are better suited to different feeding strategies. As members of a species begin to concentrate around different food sources and continue their lifecycle, they most likely spawn with like individuals. This can reinforce the jaw morphology and, given enough time, create new species. Such a process can happen through allopatric speciation, whereby species diverge according to different selection pressures in different geographical areas, or through sympatric speciation, by which new species evolve from a common ancestor while remaining in the same area. In Lake Apoyo in Nicaragua, Amphilophus zaliosus and its sister species Amphilophus citrinellus display many of the criteria needed for sympatric speciation. In the African rift lake system, cichlid species in numerous distinct lakes evolved from a shared hybrid swarm. Population status In 2010, the International Union for Conservation of Nature classified 184 species as vulnerable, 52 as endangered, and 106 as critically endangered. At present, the IUCN lists only Yssichromis sp. nov. argens as extinct in the wild, and six species as entirely extinct, but many more possibly belong in these categories (for example, Haplochromis aelocephalus, H. apogonoides, H. dentex, H. dichrourus, and numerous other members of the genus Haplochromis have not been seen since the 1980s, but are maintained as critically endangered on the small chance that tiny, but currently unknown, populations survive). Lake Victoria Because of the introduced Nile perch (Lates niloticus), Nile tilapia (Oreochromis niloticus), and water hyacinth, deforestation that led to water siltation, and overfishing, many Lake Victoria cichlid species have become extinct or have been drastically reduced. By around 1980, lake fisheries yielded only 1% cichlids, a drastic decline from 80% in earlier years. By far the largest Lake Victoria group is the haplochromine cichlids, with more than 500 species, but at least 200 of these (about 40%) have become extinct, and many others are seriously threatened. Initially it was feared that the percentage of extinct species was even higher, but some species have been rediscovered after the Nile perch started to decline in the 1990s.
Some species have survived in nearby small satellite lakes, or in refugia among rocks or papyrus sedges (protecting them from the Nile perch), or have adapted to the human-induced changes in the lake itself. These species were often specialists, and not all were affected to the same extent. For example, the piscivorous haplochromines were particularly hard hit, with a high number of extinctions, while the zooplanktivorous haplochromines reached densities in 2001 that were similar to those before the drastic decline, although consisting of fewer species and with some changes in their ecology. Food and game fish Although cichlids are mostly small- to medium-sized, many are notable as food and game fishes. Because they have few thick rib bones and tasty flesh, artisanal fishing for cichlids is not uncommon in Central America and South America, as well as in areas surrounding the African rift lakes. Tilapia The most important food cichlids, however, are the tilapiines of North Africa. Fast growing, tolerant of stocking density, and adaptable, tilapiine species have been introduced and farmed extensively in many parts of Asia and are increasingly common aquaculture targets elsewhere. Farmed tilapia production is about annually, with an estimated value of US$1.8 billion, about equal to that of salmon and trout. Unlike those carnivorous fish, tilapia can feed on algae or any plant-based food. This reduces the cost of tilapia farming, reduces fishing pressure on prey species, avoids concentrating toxins that accumulate at higher levels of the food chain, and makes tilapia the preferred "aquatic chickens" of the trade. Game fish Many large cichlids are popular game fish. The peacock bass (Cichla species) of South America is one of the most popular sport fishes and has been introduced into many waters around the world. In Florida, this fish generates millions of hours of fishing and more than US$8 million a year in sportfishing revenue. Other cichlids preferred by anglers include the oscar, Mayan cichlid (Cichlasoma urophthalmus), and jaguar cichlid (Parachromis managuensis). Aquarium fish Since 1945, cichlids have become increasingly popular as aquarium fish. The most common species in hobbyist aquaria is Pterophyllum scalare from the Amazon River basin in tropical South America, known in the trade as the "angelfish". Other popular or readily available species include the oscar (Astronotus ocellatus), convict cichlid (Archocentrus nigrofasciatus), and discus fish (Symphysodon). Hybrids and selective breeding Some cichlids readily hybridize with related species, both in the wild and under artificial conditions. Other groups of fishes, such as European cyprinids, also hybridize. Unusually, cichlid hybrids have been put to extensive commercial use, in particular for aquaculture and aquaria. The hybrid red strain of tilapia, for example, is often preferred in aquaculture for its rapid growth. Tilapia hybridization can produce all-male populations to control stock density or prevent reproduction in ponds. Aquarium hybrids The most common aquarium hybrid is perhaps the blood parrot cichlid, which is a cross of several species, especially species in the genus Amphilophus (there are many hypotheses, but the most likely cross is Amphilophus labiatus × Vieja synspillus). With a triangular mouth, an abnormal spine, and an occasionally missing caudal fin (a form known as the "love heart" parrot cichlid), the fish is controversial among aquarists. Some have called blood parrot cichlids "the Frankenstein monster of the fish world".
Another notable hybrid, the flowerhorn cichlid, was very popular in some parts of Asia from 2001 until late 2003 and is believed to bring good luck to its owner. The popularity of the flowerhorn cichlid declined in 2004. Owners released many specimens into the rivers and canals of Malaysia and Singapore, where they threaten endemic communities. Numerous cichlid species have been selectively bred to develop ornamental aquarium strains. The most intensive programs have involved angelfish and discus, and many mutations that affect both coloration and fins are known. Other cichlids have been bred for albino, leucistic, and xanthistic pigment mutations, including oscars, convict cichlids, and Pelvicachromis pulcher. Both dominant and recessive pigment mutations have been observed. In convict cichlids, for example, a leucistic coloration is recessively inherited, while in Oreochromis niloticus niloticus, red coloration is caused by a dominant inherited mutation. This selective breeding may have unintended consequences. For example, hybrid strains of Mikrogeophagus ramirezi have health and fertility problems. Similarly, intentional inbreeding can cause physical abnormalities, such as the notched phenotype in angelfish. Genera The genus list is as per FishBase. Studies are continuing, however, on the members of this family, particularly the haplochromine cichlids of the African rift lakes.
Biology and health sciences
Acanthomorpha
null
56265
https://en.wikipedia.org/wiki/Thymus
Thymus
The thymus (plural: thymuses or thymi) is a specialized primary lymphoid organ of the immune system. Within the thymus, thymus cell lymphocytes or T cells mature. T cells are critical to the adaptive immune system, whereby the body adapts to specific foreign invaders. The thymus is located in the upper front part of the chest, in the anterior superior mediastinum, behind the sternum and in front of the heart. It is made up of two lobes, each consisting of a central medulla and an outer cortex, surrounded by a capsule. The thymus is made up of immature T cells called thymocytes, as well as lining cells called epithelial cells, which help the thymocytes develop. T cells that successfully develop react appropriately with MHC immune receptors of the body (called positive selection) and not against proteins of the body (called negative selection). The thymus is largest and most active during the neonatal and pre-adolescent periods. By the early teens, the thymus begins to decrease in size and activity, and the tissue of the thymus is gradually replaced by fatty tissue. Nevertheless, some T cell development continues throughout adult life. Abnormalities of the thymus can result in a decreased number of T cells and autoimmune diseases such as autoimmune polyendocrine syndrome type 1 and myasthenia gravis. These are often associated with cancer of the tissue of the thymus, called thymoma, or of tissues arising from immature lymphocytes such as T cells, called lymphoma. Removal of the thymus is called thymectomy. Although the thymus has been identified as a part of the body since the time of the Ancient Greeks, it is only since the 1960s that the function of the thymus in the immune system has become clearer. Structure The thymus is an organ that sits behind the sternum in the upper front part of the chest, stretching upwards towards the neck. In children, the thymus is pinkish-gray, soft, and lobulated on its surfaces. At birth, it is about 4–6 cm long, 2.5–5 cm wide, and about 1 cm thick. It increases in size until puberty, when it may have a size of about 40–50 g, following which it decreases in size in a process known as involution. The thymus is located in the anterior mediastinum. It is made up of two lobes that meet in the upper midline and stretch from below the thyroid in the neck to as low as the cartilage of the fourth rib. The lobes are covered by a capsule. The thymus lies behind the sternum, rests on the pericardium, and is separated from the aortic arch and great vessels by a layer of fascia. The left brachiocephalic vein may even be embedded within the thymus. In the neck, it lies on the front and sides of the trachea, behind the sternohyoid and sternothyroid muscles. Microanatomy The thymus consists of two lobes, merged in the middle, surrounded by a capsule that extends with blood vessels into the interior. The lobes consist of an outer cortex rich in cells and an inner, less dense medulla. The lobes are divided into smaller lobules 0.5–2 mm in diameter, between which radiating insertions from the capsule extend along septa. The cortex is mainly made up of thymocytes and epithelial cells. The thymocytes, immature T cells, are supported by a network of finely branched epithelial reticular cells, which is continuous with a similar network in the medulla. This network forms an adventitia to the blood vessels, which enter the cortex via septa near the junction with the medulla.
Other cells are also present in the thymus, including macrophages, dendritic cells, and a small number of B cells, neutrophils, and eosinophils. In the medulla, the network of epithelial cells is coarser than in the cortex, and the lymphoid cells are relatively fewer in number. Concentric, nest-like bodies called Hassall's corpuscles (also called thymic corpuscles) are formed by aggregations of the medullary epithelial cells. These are concentric, layered whorls of epithelial cells that increase in number throughout life. They are the remains of the epithelial tubes, which grow out from the third pharyngeal pouches of the embryo to form the thymus. Blood and nerve supply The arteries supplying the thymus are branches of the internal thoracic and inferior thyroid arteries, with branches from the superior thyroid artery sometimes seen. The branches reach the thymus and travel with the septa of the capsule into the area between the cortex and medulla, where they enter the thymus itself, or alternatively enter the capsule directly. The veins of the thymus, the thymic veins, end in the left brachiocephalic vein, the internal thoracic vein, and the inferior thyroid veins. Sometimes the veins end directly in the superior vena cava. Lymphatic vessels travel only away from the thymus, accompanying the arteries and veins. These drain into the brachiocephalic, tracheobronchial, and parasternal lymph nodes. The nerves supplying the thymus arise from the vagus nerve and the cervical sympathetic chain. Branches from the phrenic nerves reach the capsule of the thymus but do not enter the thymus itself. Variation The two lobes differ slightly in size, with the left lobe usually higher than the right. Thymic tissue may be found scattered on or around the gland, and occasionally within the thyroid. The thymus in children stretches variably upwards, at times to as high as the thyroid gland. Development The thymocytes and the epithelium of the thymus have different developmental origins. The epithelium of the thymus develops first, appearing as two outgrowths, one on either side, of the third pharyngeal pouch. It sometimes also involves the fourth pharyngeal pouch. These extend outward and backward into the surrounding mesoderm and neural crest-derived mesenchyme in front of the ventral aorta. Here the thymocytes and epithelium meet and join with connective tissue. The pharyngeal opening of each diverticulum is soon obliterated, but the neck of the flask persists for some time as a cellular cord. By further proliferation of the cells lining the flask, buds of cells are formed, which become surrounded and isolated by the invading mesoderm. The epithelium forms fine lobules and develops into a sponge-like structure. During this stage, hematopoietic bone-marrow precursors migrate into the thymus. Normal development is dependent on the interaction between the epithelium and the hematopoietic thymocytes. Iodine is also necessary for thymus development and activity. Involution The thymus continues to grow after birth, reaching its relative maximum size by puberty. It is most active in fetal and neonatal life. It increases to a mass of 20 to 50 grams by puberty. It then begins to decrease in size and activity in a process called thymic involution. After the first year of life, the number of T cells produced begins to fall. Fat and connective tissue fill a part of the thymic volume. During involution, the thymus decreases in size and activity.
Fat cells are present at birth, but increase in size and number markedly after puberty, invading the gland from the walls between the lobules first, then into the cortex and medulla. This process continues into old age, when the thymus may be difficult to detect, whether with a microscope or with the naked eye, although it typically weighs 5–15 grams. Additionally, there is an increasing body of evidence showing that age-related thymic involution is found in most, if not all, vertebrate species with a thymus, suggesting that it is an evolutionarily conserved process. The atrophy is due to the increased circulating level of sex hormones, and chemical or physical castration of an adult results in the thymus increasing in size and activity. Severe illness or human immunodeficiency virus infection may also result in involution. Function T cell maturation The thymus facilitates the maturation of T cells, an important part of the immune system providing cell-mediated immunity. T cells begin as hematopoietic precursors from the bone marrow, and migrate to the thymus, where they are referred to as thymocytes. In the thymus they undergo a process of maturation, which involves ensuring the cells react against antigens ("positive selection"), but that they do not react against antigens found on body tissue ("negative selection"). Once mature, T cells emigrate from the thymus to provide vital functions in the immune system. Each T cell has a distinct T cell receptor, suited to a specific substance, called an antigen. Most T cell receptors bind to the major histocompatibility complex (MHC) on cells of the body. The MHC presents an antigen to the T cell receptor, which becomes active if this matches the specific T cell receptor. In order to be properly functional, a mature T cell needs to be able to bind to the MHC molecule ("positive selection") and must not react against antigens that are actually from the tissues of the body ("negative selection"). Positive selection occurs in the cortex and negative selection occurs in the medulla of the thymus. After this process, T cells that have survived leave the thymus, regulated by sphingosine-1-phosphate. Further maturation occurs in the peripheral circulation. Some of this is because of hormones and cytokines secreted by cells within the thymus, including thymulin, thymopoietin, and thymosins. Positive selection T cells have distinct T cell receptors. These distinct receptors are formed by the process of V(D)J recombination, a gene rearrangement stimulated by the RAG1 and RAG2 genes. This process is error-prone, and some thymocytes fail to make functional T cell receptors, whereas other thymocytes make T cell receptors that are autoreactive. If a functional T cell receptor is formed, the thymocyte begins to express simultaneously the cell surface proteins CD4 and CD8. The survival and nature of the T cell then depends on its interaction with the surrounding thymic epithelial cells. Here, the T cell receptor interacts with the MHC molecules on the surface of the epithelial cells. A T cell with a receptor that does not react, or reacts weakly, will die by apoptosis. A T cell that does react will survive and proliferate. A mature T cell expresses only CD4 or CD8, but not both. This depends on the strength of binding between the TCR and MHC class I or class II: a T cell receptor that binds mostly to MHC class I tends to produce a mature "cytotoxic" CD8-positive T cell, while a T cell receptor that binds mostly to MHC class II tends to produce a CD4-positive T cell.
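The selection logic just described amounts to a pair of threshold rules on receptor binding strength, together with a lineage choice based on which MHC class the receptor prefers. The toy sketch below expresses that logic in code; the affinity scale and cut-offs are invented for illustration, and real thymic selection integrates many more signals than a single scalar:

```python
# Toy model of thymocyte selection as threshold rules on binding strength.
# Affinity values and cut-offs are invented for illustration only.

def select_thymocyte(mhc1_affinity: float, mhc2_affinity: float,
                     self_antigen_affinity: float) -> str:
    best_mhc = max(mhc1_affinity, mhc2_affinity)
    if best_mhc < 0.2:                 # no/weak MHC binding: death by neglect
        return "apoptosis (failed positive selection)"
    if self_antigen_affinity > 0.8:    # strong self-reactivity is deleted
        return "apoptosis (negative selection)"
    # Lineage follows the stronger MHC class interaction.
    if mhc1_affinity > mhc2_affinity:
        return "mature CD8-positive (cytotoxic) T cell"
    return "mature CD4-positive T cell"

print(select_thymocyte(0.1, 0.1, 0.0))  # dies by neglect
print(select_thymocyte(0.6, 0.3, 0.1))  # CD8 lineage
print(select_thymocyte(0.3, 0.7, 0.9))  # deleted as self-reactive
```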
Negative selection T cells that attack the body's own proteins are eliminated in the thymus, a process called "negative selection". Epithelial cells in the medulla and dendritic cells in the thymus express major proteins from elsewhere in the body. The gene that stimulates this is AIRE. Thymocytes that react strongly to self antigens do not survive, and die by apoptosis. Some CD4 positive T cells exposed to self antigens persist as T regulatory cells. Clinical significance Immunodeficiency As the thymus is where T cells develop, congenital problems with the development of the thymus can lead to immunodeficiency, whether because of a problem with the development of the thymus gland, or a problem specific to thymocyte development. Immunodeficiency can be profound. Loss of the thymus at an early age through genetic mutation (as in DiGeorge syndrome, CHARGE syndrome, or a very rare "nude" thymus causing absence of hair and the thymus) results in severe immunodeficiency and subsequent high susceptibility to infection by viruses, protozoa, and fungi. Nude mice, which have the very rare "nude" deficiency as a result of a FOXN1 mutation, are a strain of research mice used as a model of T cell deficiency. The most common congenital cause of thymus-related immune deficiency results from a deletion on chromosome 22, called DiGeorge syndrome. This results in a failure of development of the third and fourth pharyngeal pouches, and thus a failure of development of the thymus, along with other variable associated problems, such as congenital heart disease, abnormalities of the mouth (such as cleft palate and cleft lip), failure of development of the parathyroid glands, and the presence of a fistula between the trachea and the oesophagus. Very low numbers of circulating T cells are seen. The condition is diagnosed by fluorescent in situ hybridization and treated with thymus transplantation. Severe combined immunodeficiency (SCID) is a group of rare congenital genetic diseases that can result in combined T, B, and NK cell deficiencies. These syndromes are caused by mutations that affect the maturation of the hematopoietic progenitor cells, which are the precursors of both B and T cells. A number of genetic defects can cause SCID, including loss of function of the IL-2 receptor gene, and mutations resulting in deficiency of the enzyme adenosine deaminase. Autoimmune disease Autoimmune polyendocrine syndrome Autoimmune polyendocrine syndrome type 1 is a rare genetic autoimmune syndrome that results from a genetic defect of the thymus tissues. Specifically, the disease results from defects in the autoimmune regulator (AIRE) gene, which stimulates expression of self antigens in the epithelial cells within the medulla of the thymus. Because of the defective gene in this condition, self antigens are not expressed, resulting in T cells that are not conditioned to tolerate tissues of the body and may treat them as foreign, stimulating an immune response and resulting in autoimmunity. People with APECED develop an autoimmune disease that affects multiple endocrine tissues; common manifestations include hypothyroidism of the thyroid gland, Addison's disease of the adrenal glands, and candida infection of body surfaces, including the inner lining of the mouth and the nails, due to dysfunction of TH17 cells. Symptoms often begin in childhood. Many other autoimmune diseases may also occur. Treatment is directed at the affected organs.
Thymoma-associated multiorgan autoimmunity Thymoma-associated multiorgan autoimmunity can occur in people with thymoma. In this condition, the T cells developed in the thymus are directed against the tissues of the body. This is because the malignant thymus is incapable of appropriately educating developing thymocytes to eliminate self-reactive T cells. The condition is virtually indistinguishable from graft versus host disease. Myasthenia gravis Myasthenia gravis is an autoimmune disease most often due to antibodies that block acetylcholine receptors, involved in signalling between nerves and muscles. It is often associated with thymic hyperplasia or thymoma, with antibodies produced probably because of T cells that develop abnormally. Myasthenia gravis most often develops in young and middle adulthood, causing easy fatiguing of the muscles. Investigations include demonstrating antibodies (such as those against acetylcholine receptors or muscle-specific kinase), and CT scanning to detect a thymoma. Removal of the thymus, called thymectomy, may be considered as a treatment, particularly if a thymoma is found. Other treatments include increasing the duration of acetylcholine action at nerve synapses by decreasing the rate of its breakdown. This is done by acetylcholinesterase inhibitors such as pyridostigmine. Cancer Thymomas Tumours originating from the thymic epithelial cells are called thymomas. They most often occur in adults older than 40. Tumours are generally detected when they cause symptoms, such as a neck mass or effects on nearby structures such as the superior vena cava; through screening in patients with myasthenia gravis, which has a strong association with thymomas and hyperplasia; or as an incidental finding on imaging such as chest X-rays. Hyperplasia and tumours originating from the thymus are associated with other autoimmune diseases – such as hypogammaglobulinemia, Graves disease, pure red cell aplasia, pernicious anaemia and dermatomyositis, likely because of defects in negative selection in proliferating T cells. Thymomas can be benign; benign but invasive, expanding beyond the capsule of the thymus ("invasive thymoma"); or malignant (a carcinoma). This classification is based on the appearance of the cells. A WHO classification also exists but is not used as part of standard clinical practice. Benign tumours confined to the thymus are most common, followed by locally invasive tumours, and then by carcinomas. There is variation in reporting, with some sources reporting malignant tumours as more common. Invasive tumours, although not technically malignant, can still spread to other areas of the body. Even though thymomas arise from epithelial cells, they can also contain thymocytes. Treatment of thymomas often requires surgery to remove the entire thymus. This may also result in temporary remission of any associated autoimmune conditions. Lymphomas Tumours originating from T cells of the thymus form a subset of acute lymphoblastic leukaemia (ALL). These are similar in symptoms, investigation approach and management to other forms of ALL. Symptoms that develop, like other forms of ALL, relate to deficiency of platelets, resulting in bruising or bleeding; immunosuppression resulting in infections; or infiltration by cells into parts of the body, resulting in an enlarged liver, spleen, lymph nodes or other sites.
A blood test might reveal a large number of white blood cells or lymphoblasts, and deficiencies in other cell lines – such as low platelets or anaemia. Immunophenotyping will reveal cells that express CD3, a protein found on T cells, and helps to further distinguish the maturity of the T cells. Genetic analysis including karyotyping may reveal specific abnormalities that may influence prognosis or treatment, such as the Philadelphia translocation. Management can include multiple courses of chemotherapy, stem cell transplant, and management of associated problems, such as treatment of infections with antibiotics, and blood transfusions. Very high white cell counts may also require cytoreduction with apheresis. Tumours originating from the small population of B cells present in the thymus lead to primary mediastinal large B-cell lymphomas. These are a rare subtype of non-Hodgkin lymphoma, although by the activity of their genes, and occasionally their microscopic appearance, they unusually also have characteristics of Hodgkin lymphomas. They occur most commonly in young and middle-aged adults, and are more common in females. Most often, when symptoms occur, it is because of compression of structures near the thymus, such as the superior vena cava or the upper respiratory tract; when lymph nodes are affected it is often in the mediastinum and neck groups. Such tumours are often detected with a biopsy that is subject to immunohistochemistry. This will show the presence of clusters of differentiation, cell surface proteins – namely CD30, with CD19, CD20 and CD22, and with the absence of CD15. Other markers may also be used to confirm the diagnosis. Treatment usually includes the typical regimens of CHOP or EPOCH, or other regimens generally including cyclophosphamide, an anthracycline, prednisone, and other chemotherapeutics; and potentially also a stem cell transplant. Thymic cysts The thymus may contain cysts, usually less than 4 cm in diameter. Thymic cysts are usually detected incidentally and do not generally cause symptoms. Thymic cysts can occur along the neck or in the chest (mediastinum). Cysts usually just contain fluid and are lined by either many layers of flat cells or column-shaped cells. Despite this, the presence of a cyst can cause problems similar to those of thymomas, by compressing nearby structures, and some may contain internal walls and be difficult to distinguish from tumours. When cysts are found, investigation may include a workup for tumours, which may include CT or MRI scan of the area the cyst is suspected to be in. Surgical removal Thymectomy is the surgical removal of the thymus. The usual reason for removal is to gain access to the heart for surgery to correct congenital heart defects in the neonatal period. Other indications for thymectomy include the removal of thymomas and the treatment of myasthenia gravis. In neonates the relative size of the thymus obstructs surgical access to the heart and its surrounding vessels. Removal of the thymus in infancy results in an often fatal immunodeficiency, because functional T cells have not developed. In older children and adults, who have a functioning lymphatic system with mature T cells also situated in other lymphoid organs, the effect is reduced, but includes failure to mount immune responses against new antigens, an increase in cancers, and an increase in all-cause mortality. Society and culture When used as food for humans, the thymus of animals is known as one of the kinds of sweetbread.
History The thymus was known to the ancient Greeks, and its name comes from the Greek word θυμός (thumos), meaning "anger", or in Ancient Greek, "heart, soul, desire, life", possibly because of its location in the chest, near where emotions are subjectively felt; or else the name comes from the herb thyme (also in Greek θύμος or θυμάρι), which became the name for a "warty excrescence", possibly due to its resemblance to a bunch of thyme. Galen was the first to note that the size of the organ changed over the duration of a person's life. In the 19th century, a condition was identified as status thymicolymphaticus, defined by an increase in lymphoid tissue and an enlarged thymus. It was thought to be a cause of sudden infant death syndrome but is now an obsolete term. The importance of the thymus in the immune system was discovered in 1961 by Jacques Miller, by surgically removing the thymus from one-day-old mice, and observing the subsequent deficiency in a lymphocyte population, subsequently named T cells after the organ of their origin. Until the discovery of its immunological role, the thymus had been dismissed as an "evolutionary accident", without functional importance. The role the thymus played in ensuring mature T cells tolerated the tissues of the body was uncovered in 1962, with the finding that T cells of a transplanted thymus in mice demonstrated tolerance towards tissues of the donor mouse. B cells and T cells were identified as different types of lymphocytes in 1968, and the fact that T cells required maturation in the thymus was understood. The subtypes of T cells (CD8 and CD4) were identified by 1975. The way that these subclasses of T cells matured – positive selection of cells that functionally bound to MHC receptors – was known by the 1990s. The important role of the AIRE gene, and the role of negative selection in preventing autoreactive T cells from maturing, was understood by 1994. Recently, advances in immunology have allowed the function of the thymus in T-cell maturation to be more fully understood. Other animals The thymus is present in all jawed vertebrates, where it undergoes the same shrinkage with age and serves the same immunological function as in humans. In 2011, a discrete thymus-like lympho-epithelial structure, termed the thymoid, was discovered in the gills of larval lampreys. Hagfish possess a protothymus associated with the pharyngeal velar muscles, which is responsible for a variety of immune responses. The thymus is also present in most other vertebrates, with structure and function similar to those of the human thymus. A second thymus in the neck has sometimes been reported to occur in the mouse. As in humans, the guinea pig's thymus naturally atrophies as the animal reaches adulthood, but the athymic hairless guinea pig (which arose from a spontaneous laboratory mutation) possesses no thymic tissue whatsoever, and the organ cavity is replaced with cystic spaces.
Biology and health sciences
Immune system
Biology
56276
https://en.wikipedia.org/wiki/Family%20%28biology%29
Family (biology)
Family is one of the eight major hierarchical taxonomic ranks in Linnaean taxonomy. It is classified between order and genus. A family may be divided into subfamilies, which are intermediate ranks between the ranks of family and genus. The official family names are Latin in origin; however, popular names are often used: for example, walnut trees and hickory trees belong to the family Juglandaceae, but that family is commonly referred to as the "walnut family". The delineation of what constitutes a family—or whether a described family should be acknowledged—is established and decided upon by active taxonomists. There are no strict regulations for outlining or acknowledging a family, yet in the realm of plants, these classifications often rely on both the vegetative and reproductive characteristics of plant species. Taxonomists frequently hold varying perspectives on these descriptions, leading to a lack of widespread consensus within the scientific community for extended periods. The continual publication of new data and diverse opinions plays a crucial role in facilitating adjustments and ultimately reaching a consensus over time. Nomenclature The naming of families is codified by various international bodies using the following suffixes: In fungal, algal, and botanical nomenclature, the family names of plants, fungi, and algae end with the suffix "-aceae", except for a small number of historic but widely used names including Compositae and Gramineae. In zoological nomenclature, the family names of animals end with the suffix "-idae". Name changes at the family level are regulated by the codes of nomenclature. For botanical families, some traditional names like Palmae (Arecaceae), Cruciferae (Brassicaceae), and Leguminosae (Fabaceae) are conserved alongside their standardized -aceae forms due to their historical significance and widespread use in the literature. Family names are typically formed from the stem of a type genus within the family. In zoology, when a valid family name is based on a genus that is later found to be a junior synonym, the family name may be maintained for stability if it was established before 1960. In botany, some family names that were found to be junior synonyms have been conserved due to their widespread use in the scientific literature. The family-group in zoological nomenclature includes several ranks: superfamily (-oidea), family (-idae), subfamily (-inae), and tribe (-ini). Under the principle of coordination, a name established at any of these ranks can be moved to another rank while retaining its original authorship and date, requiring only a change in suffix to reflect its new rank. New family descriptions are relatively rare in taxonomy, occurring in fewer than one in a hundred taxonomic publications. Such descriptions typically result from either the discovery of organisms with unique combinations of characters that do not fit existing families, or from phylogenetic analyses that reveal the need for reclassification. History The taxonomic term was first used by the French botanist Pierre Magnol in his 1689 work, where he called the seventy-six groups of plants he recognised in his tables "families". The concept of rank at that time was not yet settled, and in the preface to that work Magnol spoke of uniting his families into larger groups, which is far from how the term is used today. In his work Philosophia Botanica published in 1751, Carl Linnaeus employed the term familia to categorize significant plant groups such as trees, herbs, ferns, palms, and so on.
Notably, he restricted the use of this term solely to the book's morphological section, where he delved into discussions regarding the vegetative and generative aspects of plants. Subsequently, in French botanical publications, from Michel Adanson's 1763 work until the end of the 19th century, the word was used as a French equivalent of the corresponding Latin term. The family concept in botany was further developed by the French botanists Antoine Laurent de Jussieu and Michel Adanson. Jussieu's 1789 Genera Plantarum divided plants into 100 'natural orders,' many of which correspond to modern plant families. However, the term 'family' did not become standardized in botanical usage until after the mid-nineteenth century. In zoology, the family as a rank intermediate between order and genus was introduced by Pierre André Latreille in his 1796 work. He used families (some of which were not named) in some but not all of his orders of "insects" (which then included all arthropods). The standardization of zoological family names began in the early nineteenth century. A significant development came in 1813 when William Kirby introduced the -idae suffix for animal family names, derived from the Greek 'eidos' meaning 'resemblance' or 'like'. The adoption of this naming convention helped establish families as an important taxonomic rank. By the mid-1800s, many of Linnaeus's broad genera were being elevated to family status to accommodate the rapidly growing number of newly discovered species. In nineteenth-century works such as those of Augustin Pyramus de Candolle and of George Bentham and Joseph Dalton Hooker, this word was used for what is now given the rank of family. Uses Families serve as valuable units for evolutionary, paleontological, and genetic studies due to their relatively greater stability compared to lower taxonomic levels like genera and species. Families play a significant practical role in biological education and research. They provide an efficient framework for teaching taxonomy, as they group organisms with general similarities while remaining specific enough to be useful for identification purposes. For example, in botany, learning the characteristics of major plant families helps students identify related species across different geographic regions, since families often have worldwide distribution patterns. In many groups of organisms, families serve as the primary level for taxonomic identification keys, making them particularly valuable for field guides and systematic work as they often represent readily recognizable groups of related organisms with shared characteristics. In ecological and biodiversity research, families frequently serve as the foundational level for identification in survey work and environmental studies. This is particularly useful because families often share life history traits or occupy similar ecological niches. Some families show strong correlations between their taxonomic grouping and ecological functions, though this relationship varies among different groups of organisms. The stability of family names has practical importance for applied biological work, though this stability faces ongoing challenges from new scientific findings. Modern molecular studies and phylogenetic analyses continue to refine the understanding of family relationships, sometimes leading to reclassification.
The impact of these changes varies among different groups of organisms – while some families remain well-defined and easily recognizable, others require revision as new evidence emerges about evolutionary relationships. This balance between maintaining nomenclatural stability and incorporating new scientific discoveries remains an active area of taxonomic practice.
Biology and health sciences
Taxonomic rank
Biology
56277
https://en.wikipedia.org/wiki/Rhubarb
Rhubarb
Rhubarb is the fleshy, edible stalks (petioles) of species and hybrids (culinary rhubarb) of Rheum in the family Polygonaceae, which are cooked and used for food. The plant is a herbaceous perennial that grows from short, thick rhizomes. Historically, different plants have been called "rhubarb" in English. The large, triangular leaves contain high levels of oxalic acid and anthrone glycosides, making them inedible. The small flowers are grouped in large compound leafy greenish-white to rose-red inflorescences. The precise origin of culinary rhubarb is unknown. The species Rheum rhabarbarum (syn. R. undulatum) and R. rhaponticum were grown in Europe before the 18th century and used for medicinal purposes. By the early 18th century, these two species and a possible hybrid of unknown origin, R. × hybridum, were grown as vegetable crops in England and Scandinavia. They readily hybridize, and culinary rhubarb was developed by selecting open-pollinated seed, so its precise origin is almost impossible to determine. In appearance, samples of culinary rhubarb vary on a continuum between R. rhaponticum and R. rhabarbarum. However, modern rhubarb cultivars are tetraploids with 2n = 44, in contrast to 2n = 22 for the wild species. Rhubarb is a vegetable and is often put to the same culinary uses as fruits. The leaf stalks can be used raw while they have a crisp texture, but are most commonly cooked with sugar and used in pies, crumbles, and other desserts. They have a strong, tart taste. Many cultivars have been developed for human consumption, most of which are recognised as Rheum × hybridum by the Royal Horticultural Society. Etymology The word rhubarb is likely to have derived in the 14th century from Old French, which in turn came from Latin and Greek terms meaning 'foreign rhubarb'. The Greek physician Dioscorides and, later, Galen used related Greek and Latin names for the plant; these in turn derive from a Persian name for species of Rheum. The specific epithet rhaponticum, applying to one of the presumed parents of the cultivated plant, means 'rha from the region of the Black Sea' or from the river Volga, Rha being the river's ancient name. Cultivation Rhubarb is grown widely, and with greenhouse production it is available throughout much of the year. It needs rainfall and an annual cold period of up to 7–9 weeks at 3 °C (37 °F), known as 'cold units', to grow well. The plant develops a substantial underground storage organ (the rhubarb crown), and this can be used for early production by transferring field-grown crowns to warm conditions. Rhubarb grown in hothouses (heated greenhouses) is called "hothouse rhubarb", and is typically made available at consumer markets in early spring, before outdoor cultivated rhubarb is available. Hothouse rhubarb is usually brighter red, tenderer and sweeter-tasting than outdoor rhubarb. After forcing for commercial production, the crowns are usually discarded. In temperate climates, rhubarb is one of the first food plants harvested, usually in mid- to late spring (April or May in the Northern Hemisphere, October or November in the Southern Hemisphere), and the season for field-grown plants lasts until the end of summer. In the United Kingdom, the first rhubarb of the year is harvested by candlelight in forcing sheds where all other light is excluded, a practice that produces a sweeter, more tender stalk. These sheds are dotted around the "Rhubarb Triangle" in Yorkshire between Wakefield, Leeds, and Morley.
In the United States, rhubarb is primarily produced in the states of Oregon, Washington, and Wisconsin, with approximately 1,200 acres in production, of which 175 are covered in hothouses. In the northwestern US states of Oregon and Washington, there are typically two harvests, from late April to May and from late June into July; half of all US commercial production is in Pierce County, Washington. Rhubarb is ready to consume as soon as harvested, and freshly cut stalks are firm and glossy. Rhubarb damaged by severe cold should not be eaten, as it may be high in oxalic acid, which migrates from the leaves and can cause illness. The colour of rhubarb stalks can vary from the commonly associated crimson red, through speckled light pink, to simply light green. Rhubarb stalks are poetically described as "crimson stalks". The colour results from the presence of anthocyanins, and varies according to both rhubarb variety and production technique. The colour is not related to its suitability for cooking. Historical cultivation The Chinese call rhubarb "the great yellow", and have used rhubarb root for medicinal purposes. It appears in The Divine Farmer's Herb-Root Classic, which is thought to have been compiled about 1,800 years ago. Though Dioscorides' description indicates that a medicinal root brought to Greece from beyond the Bosphorus may have been rhubarb, commerce in the plant did not become securely established until Islamic times. During Islamic times, it was imported along the Silk Road, reaching Europe in the 14th century through the ports of Aleppo and Smyrna, where it became known as "Turkish rhubarb". Later, it began to arrive via new maritime routes and overland through Russia. The "Russian rhubarb" was the most valued, probably because of the rhubarb-specific quality control system maintained by the Russian Empire. The 2020 edition of Pharmacopoeia of the People's Republic of China lists the following species as medicinally acceptable: Rheum officinale, Rheum palmatum, and Rheum tanguticum. Grieve describes "Turkish rhubarb" as a mixture of R. palmatum and R. rhaponticum. The cost of transportation across Asia made rhubarb expensive in medieval Europe. It was several times the price of other valuable herbs and spices such as cinnamon, opium, and saffron. The merchant explorer Marco Polo therefore searched for the place where the plant was grown and harvested, discovering that it was cultivated in the mountains of Tangut province. The value of rhubarb can be seen in Ruy Gonzáles de Clavijo's report of his embassy in 1403–1405 to Timur in Samarkand: "The best of all merchandise coming to Samarkand was from China: especially silks, satins, musk, rubies, diamonds, pearls, and rhubarb...." The high price, as well as the increasing demand from apothecaries, stimulated efforts to cultivate the different species of rhubarb on European soil. R. rhaponticum × R. officinale came to be grown in England to produce the roots. R. alpinus was also allowed to grow wild. The local availability of the plants grown for medicinal purposes, together with the increasing abundance and decreasing price of sugar in the 18th century, galvanised its culinary adoption. Grieve claims a date of 1820 in England. Rhubarb was harvested in Scotland from at least 1786, having been introduced to the Botanical Garden in Edinburgh by the traveller Bruce of Kinnaird in 1774. He brought the seeds from Abyssinia and they produced 3,000 plants.
Though it is often asserted that rhubarb first came to the United States in the 1820s, John Bartram was growing medicinal and culinary rhubarbs in Philadelphia from the 1730s, planting seeds sent to him by Peter Collinson. From the first, the familiar garden rhubarb was not the only Rheum in American gardens: Thomas Jefferson planted R. undulatum at Monticello in 1809 and 1811, observing that it was "Esculent rhubarb, the leaves excellent as Spinach." Cultivars The advocate of organic gardening Lawrence D. Hills listed his favourite rhubarb varieties for flavour as 'Hawke's Champagne', 'Victoria', 'Timperley Early', and 'Early Albert', also recommending 'Gaskin's Perpetual' for having the lowest level of oxalic acid, allowing it to be harvested over a much longer period of the growing season without developing excessive sourness. The Royal Horticultural Society has the UK's national collection of rhubarb that comprises 103 varieties. In 2021–2022 this was moved from southern England to the more northern garden RHS Bridgewater where winter cold and rainfall are better suited for rhubarb. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit: 'Grandad's Favourite' 'Reed's Early Superb' 'Stein's Champagne' 'Timperley Early' Uses Rhubarb is grown primarily for its fleshy leafstalks, technically known as petioles. The use of rhubarb stalks as food is a relatively recent innovation. This usage was first recorded in 18th- to 19th-century England after affordable sugar became more widely available. Commonly, it is stewed with sugar or used in pies and desserts, but it can also be put into savoury dishes or pickled. Rhubarb can be dehydrated and infused with fruit juice. In the United States, it is usually infused with strawberry juice to mimic the popular strawberry rhubarb pie. Food The species Rheum ribes has been eaten in the Islamic world since the 10th century. In Northern Europe and North America, the stalks are commonly cut into pieces and stewed with added sugar until soft. The resulting compote, sometimes thickened with corn starch, can then be used in pies, tarts and crumbles. Alternatively, greater quantities of sugar can be added with pectin to make jams. A paired spice used is ginger, although cinnamon and nutmeg are also common additions. In the United Kingdom, as well as being used in the typical pies, tarts and crumbles, rhubarb compote is also combined with whipped cream or custard to make rhubarb fool. In the United States, the common usage of rhubarb in pies has led to it being nicknamed "pie plant", by which it is referred to in 19th-century cookbooks. Rhubarb in the US is also often paired with strawberries to make strawberry-rhubarb pie, though some rhubarb purists jokingly consider this "a rather unhappy marriage". Rhubarb can also be used to make alcoholic drinks, such as fruit wines or Finnish rhubarb sima (mead). It is also used to make Kompot. Nutrition Raw rhubarb is 94% water, 5% carbohydrates, 1% protein, and contains negligible fat (table). In a reference amount, raw rhubarb supplies of food energy, and is a rich source of vitamin K (28% of the Daily Value, DV), a moderate source of vitamin C (10% DV), and contains no other micronutrients in significant amounts (table). Traditional Chinese medicine In traditional Chinese medicine, rhubarb roots of several species were used as a laxative for several millennia, although there is no clinical evidence to indicate such use is effective. 
Phytochemistry and potential toxicity The roots and stems contain anthraquinones, such as emodin and rhein. Emodin "represents a genotoxic risk for humans" while rhein is "a compound devoid of genotoxic capabilities." The anthraquinones have been separated from powdered rhubarb root for purposes in traditional medicine, although long-term consumption of anthraquinones has been associated with acute kidney failure. The rhizomes contain stilbenoid compounds (including rhaponticin), and the flavanol glucosides (+)-catechin-5-O-glucoside and (−)-catechin-7-O-glucoside. Oxalic acid Rhubarb leaves contain poisonous substances, including oxalic acid, a nephrotoxin. Long-term consumption of oxalic acid leads to kidney stone formation in humans. Humans have been poisoned after ingesting the leaves, a particular problem during World War I when the leaves were mistakenly recommended as a food source in Britain. The toxic rhubarb leaves have been used in flavouring extracts, after the oxalic acid is removed by treatment with precipitated chalk (i.e., calcium carbonate). The median lethal dose (LD50) for pure oxalic acid in rats is about 375 mg/kg body weight, or about 25 grams for a 65 kg human. Other sources give a much higher oral LDLo (lowest published lethal dose) of 600 mg/kg. While the oxalic acid content of rhubarb leaves can vary, a typical value is about 0.5%, meaning a 65 kg adult would need to eat 4 to 8 kg (9 to 18 lbs) of leaves to obtain a lethal dose, depending on which lethal dose is assumed (a worked version of this calculation is given at the end of this article). Cooking the leaves with baking soda can make them more poisonous by producing soluble oxalates. The leaves are believed to also contain an additional, unidentified toxin, which might be an anthraquinone glycoside (also known as senna glycosides). In the petioles (leaf stalks), the proportion of oxalic acid is about 10% of the total 2–2.5% acidity, which derives mainly from malic acid. Serious cases of rhubarb poisoning are not well documented. Both fatal and non-fatal cases of rhubarb poisoning may be caused not by oxalates, but rather by toxic anthraquinone glycosides. Pests Rhubarb is a host to the rhubarb curculio, Lixus concavus, which is a weevil. Damage is mainly visible on leaves and stalks, with gummosis and oval or circular feeding and egg-laying sites. Hungry wildlife may dig up and eat rhubarb roots in the spring, as stored starches are turned to sugars for new foliage growth.
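As a rough, illustrative check of the lethal-dose estimate in the oxalic acid section above (assuming, as the figures quoted there do, that the rat LD50 of 375 mg/kg and the LDLo of 600 mg/kg scale directly to a 65 kg adult, and that leaves are about 0.5% oxalic acid by mass), the arithmetic can be written out in LaTeX as:

\begin{align*}
\text{dose (LD}_{50}\text{ basis)} &= 375~\mathrm{mg/kg} \times 65~\mathrm{kg} \approx 24~\mathrm{g} \\
\text{dose (LDLo basis)} &= 600~\mathrm{mg/kg} \times 65~\mathrm{kg} = 39~\mathrm{g} \\
\text{leaf mass required} &= \frac{\text{dose}}{0.005} \approx 4.9~\mathrm{kg}~\text{to}~7.8~\mathrm{kg}
\end{align*}

which is the origin of the roughly 4 to 8 kg range quoted above; the true figure would vary with the actual oxalic acid content of the leaves.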
Biology and health sciences
Plants with edible fruit-like structures
Plants
56309
https://en.wikipedia.org/wiki/Land%20use
Land use
Land use is an umbrella term to describe what happens on a parcel of land. It concerns the benefits derived from using the land, and also the land management actions that humans carry out there. The following categories are used for land use: forest land, cropland (agricultural land), grassland, wetlands, settlements and other lands. The way humans use land, and how land use is changing, has many impacts on the environment. Effects of land use choices and changes by humans include for example urban sprawl, soil erosion, soil degradation, land degradation and desertification. Land use and land management practices have a major impact on natural resources including water, soil, nutrients, plants and animals. Land use change is "the change from one land-use category to another". Land-use change, together with use of fossil fuels, are the major anthropogenic sources of carbon dioxide, a dominant greenhouse gas. Human activity is the most significant cause of land cover change, and humans are also directly impacted by the environmental consequences of these changes. For example, deforestation (the systematic and permanent conversion of previously forested land for other uses) has historically been a primary facilitator of land use and land cover change. The study of land change relies on the synthesis of a wide range of data and a diverse range of data collection methods. These include land cover monitoring and assessments, modeling risk and vulnerability, and land change modeling. Definition and categories The IPCC defines the term land use as the "total of arrangements, activities and inputs applied to a parcel of land". The same report groups land use into the following categories: forest land, cropland (agricultural land), grassland, wetlands, settlements and other lands. Another definition is that of the United Nations' Food and Agriculture Organization: "Land use concerns the products and/or benefits obtained from use of the land as well as the land management actions (activities) carried out by humans to produce those products and benefits." As of the early 1990s, about 13% of the Earth was considered arable land, with 26% in pasture, 32% forests and woodland, and 1.5% urban areas. As of 2015, the total arable land is 10.7% of the land surface, with 1.3% being permanent cropland. For example, the US Department of Agriculture has identified six major types of land use in the United States. Acreage statistics for each type of land use in the contiguous 48 states in 2017 were as follows: Special use areas in the table above include national parks (29 M acres) and state parks (15 M), wildlife areas (64.4 M), highways (21 M), railroads (3M), military bases (25 M), airports (3M) and a few others. Miscellaneous includes cemeteries, golf courses, marshes, deserts, and other areas of "low economic value". The total land area of the United States is 9.1 M km2 but the total used here refers only to the contiguous 48 states, without Alaska etc. Land use change Land use change is "the change from one land-use category to another". Land-use change, together with use of fossil fuels, are the major anthropogenic sources of carbon dioxide, a dominant greenhouse gas. Human activity is the most significant cause of land cover change, and humans are also directly impacted by the environmental consequences of these changes. Collective land use and land cover changes have fundamentally altered the functioning of key Earth systems. 
For instance, human changes to land use and land cover have a profound impact on climate at a local and regional level, which in turn contributes to climate change. Land use by humans has a long history, first emerging more than 10,000 years ago. Human changes to land surfaces have been documented for centuries as having significant impacts on both earth systems and human well-being. Deforestation is an example of large-scale land use change. The deforestation of temperate regions since 1750 has had a major effect on land cover. The reshaping of landscapes to serve human needs, such as deforestation for farmland, can have long-term effects on earth systems and exacerbate the causes of climate change. Although the burning of fossil fuels is the primary driver of present-day climate change, prior to the Industrial Revolution, deforestation and irrigation were the largest sources of human-driven greenhouse gas emissions. Even today, 35% of anthropogenic carbon dioxide contributions can be attributed to land use or land cover changes. Currently, almost 50% of Earth's non-ice land surface has been transformed by human activities, with approximately 40% of that land used for agriculture, which has surpassed natural systems as the principal source of nitrogen emissions. Increasing land conversion by humans in the future is not inevitable: in a discussion of response options for climate change mitigation and adaptation, an IPCC special report stated that "a number of response options such as increased food productivity, dietary choices and food losses, and waste reduction, can reduce demand for land conversion, thereby potentially freeing land and creating opportunities for enhanced implementation of other response options". Analytical methods Land change science relies heavily on the synthesis of a wide range of data and a diverse range of data collection methods, some of which are detailed below. Land cover monitoring and assessments A primary function of land change science is to document and model long-term patterns of landscape change, which may result from both human activity and natural processes. In the course of monitoring and assessing land cover and land use changes, scientists look at several factors, including where land-cover and land-use are changing, the extent and timescale of changes, and how changes vary through time. To this end, scientists use a variety of tools, including satellite imagery and other sources of remotely sensed data (e.g., aircraft imagery), field observations, historical accounts, and reconstruction modeling. These tools, particularly satellite imagery, allow land change scientists to accurately monitor land-change rates and create a consistent, long-term record to quantify change variability over time. Through observing patterns in land cover changes, scientists can determine the consequences of these changes, predict the impact of future changes, and use this information to inform strategic land management. Modeling risk and vulnerability Modeling risk and vulnerability is also one of land change science's practical applications. Accurate predictions of how human activity will influence land cover change over time, as well as the impact that such changes have on the sustainability of ecological and human systems, can inform the creation of policy designed to address these changes. Studying risk and vulnerability entails the development of quantitative, qualitative, and geospatial models, methods, and support tools.
The purpose of these tools is to communicate the vulnerability of both human communities and natural ecosystems to hazard events or long-term land change. Modeling risk and vulnerability requires analyses of community sensitivity to hazards, an understanding of geographic distributions of people and infrastructure, and accurate calculation of the probability of specific disturbances occurring. Land change modeling A key method for studying risk and vulnerability is land change modeling (LCM), which can be used to simulate changes in land use and land cover. LCMs can be used to predict how land use and land cover may change under alternate circumstances, which is useful for risk assessment, in that it allows for the prediction of potential impacts and can be used to inform policy decisions, albeit with some uncertainty (a minimal illustrative sketch of such a model is given at the end of this article). Examples of land use change Deforestation Deforestation is the systematic and permanent conversion of previously forested land for other uses. It has historically been a primary facilitator of land use and land cover change. Forests are a vital part of the global ecosystem and are essential to carbon capture, ecological processes, and biodiversity. However, since the invention of agriculture, global forest cover has diminished by 35%. There is rarely one direct or underlying cause for deforestation. Rather, deforestation is the result of intertwining systemic forces working simultaneously or sequentially to change land cover. Deforestation occurs for many interconnected reasons. For instance, mass deforestation is often viewed as the product of industrial agriculture, yet a considerable portion of old-growth forest deforestation is the result of small-scale migrant farming. As forest cover is removed, forest resources become exhausted and increasing populations lead to scarcity, which prompts people to move again to previously undisturbed forest, restarting the process of deforestation. There are several reasons behind this continued migration: poverty-driven lack of available farmland and high costs may lead to an increase in farming intensity on existing farmland. This leads to the overexploitation of farmland, and ultimately results in desertification, another land cover change, which renders soil unusable and unprofitable, requiring farmers to seek out untouched and unpopulated old-growth forests. In addition to rural migration and subsistence farming, economic development can also play a substantial role in deforestation. For example, road and railway expansions designed to increase quality of life have resulted in significant deforestation in the Amazon and Central America. Moreover, the underlying drivers of economic development are often linked to global economic engagement, ranging from increased exports to foreign debt. Urbanization Broadly, urbanization is the increasing number of people who live in urban areas. Urbanization refers to both urban population growth and the physical growth of urban areas. According to the United Nations, the global urban population has increased rapidly since 1950, from 751 million to 4.2 billion in 2018, and current trends predict this number will continue to grow. Accompanying this population shift are significant changes in economic flow, culture and lifestyle, and spatial population distribution. Although urbanized areas cover just 3% of the Earth's surface, they nevertheless have a significant impact on land use and land cover change. Urbanization is important to land use and land cover change for a variety of reasons.
In particular, urbanization affects land change elsewhere through the shifting of urban-rural linkages, or the ecological footprint of the transfer of goods and services between urban and rural areas. Increases in urbanization lead to increases in consumption, which puts increased pressure on surrounding rural lands. The outward spread of urban areas can also take over adjacent land formerly used for crop cultivation. Urbanization additionally affects land cover through the urban heat island effect. Heat islands occur when, due to high concentrations of structures, such as buildings and roads, that absorb and re-emit solar radiation, and low concentrations of vegetative cover, urban areas experience higher temperatures than surrounding areas. The high temperatures associated with heat islands can compromise human health, particularly in low-income areas. Decline of the Aral Sea The rapid decline of the Aral Sea is an example of how local-scale land use and land change can have compounded impacts on regional climate systems, particularly when human activities heavily disrupt natural climatic cycles, and of how land change science can be used to map and study such changes. In 1960, the Aral Sea, located in Central Asia, was the world's fourth largest lake. However, a water diversion project, undertaken by the Soviet Union to irrigate arid plains in what is now Kazakhstan, Uzbekistan, and Turkmenistan, resulted in the Aral Sea losing 85% of its surface area and 90% of its volume. The loss of the Aral Sea has had a significant effect on human-environment interactions in the region, including the decimation of the sea's fishing industry and the salinization of agricultural lands by the wind-spread of dried sea salt beds. Additionally, scientists have been able to use technology such as NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) to track changes to the Aral Sea and its surrounding climate over time. This use of modeling and satellite imagery to track human-caused land cover change is characteristic of the scope of land change science. Regulation Commonly, political jurisdictions will undertake land-use planning and regulate the use of land in an attempt to avoid land-use conflicts. Land use plans are implemented through land division and use ordinances and regulations, such as zoning regulations. The urban growth boundary is one form of land-use regulation. For example, Portland, Oregon is required to have an urban growth boundary that contains a reserve of vacant land. Additionally, Oregon restricts the development of farmland. The regulations are controversial, but an economic analysis concluded that farmland appreciated similarly to other land. United States In colonial America, few regulations were originally put into place regarding the usage of land. As society shifted from rural to urban, public land regulation became important, especially to city governments trying to control industry, commerce, and housing within their boundaries. The first zoning ordinance was passed in New York City in 1916, and, by the 1930s, most states had adopted zoning laws. In the 1970s, concerns about the environment and historic preservation led to further regulation. Today, federal, state, and local governments regulate growth and development through statutory law. The majority of controls on land, however, stem from the actions of private developers and individuals.
Judicial decisions and enforcement of private land-use arrangements can reinforce public regulation, and achieve forms and levels of control that regulatory zoning cannot. There is growing concern that land use regulation is a direct cause of housing segregation in the United States today. Two major federal laws passed in the 1960s limit the use of land significantly. These are the National Historic Preservation Act of 1966 (today embodied in 16 U.S.C. 461 et seq.) and the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.).
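As a minimal, purely illustrative sketch of the land change modeling (LCM) approach described in the "Land change modeling" section above, the snippet below projects a hypothetical land-cover composition forward using a simple Markov-style transition matrix; the class names, transition probabilities, and time horizon are invented for illustration and are not taken from any real study or dataset.

# Minimal, illustrative land change model (LCM) sketch: a Markov-style
# transition matrix applied to a hypothetical land-cover composition.
# All numbers below are invented for illustration, not empirical estimates.

# Fraction of the study area in each class at the start.
classes = ["forest", "cropland", "urban"]
state = [0.50, 0.40, 0.10]

# transitions[i][j]: probability that land in class i becomes class j
# during one time step (each row sums to 1).
transitions = [
    [0.96, 0.03, 0.01],  # forest -> forest / cropland / urban
    [0.01, 0.96, 0.03],  # cropland -> ...
    [0.00, 0.00, 1.00],  # urban land is assumed not to revert
]

def step(state, transitions):
    """Advance the land-cover composition by one time step."""
    n = len(state)
    return [sum(state[i] * transitions[i][j] for i in range(n)) for j in range(n)]

# Project the composition over a hypothetical 30-step horizon.
for t in range(30):
    state = step(state, transitions)

for name, share in zip(classes, state):
    print(f"{name}: {share:.1%}")

Running it prints the projected share of each land-cover class after the chosen horizon; in real applications the transition probabilities would be estimated from observed land-cover maps (for example, satellite-derived classifications) rather than assumed, and spatially explicit models would operate on gridded maps rather than aggregate shares.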
Physical sciences
Earth science basics: General
Earth science
56315
https://en.wikipedia.org/wiki/Mango
Mango
A mango is an edible stone fruit produced by the tropical tree Mangifera indica. It originated from the region between northwestern Myanmar, Bangladesh, and northeastern India. M. indica has been cultivated in South and Southeast Asia since ancient times resulting in two types of modern mango cultivars: the "Indian type" and the "Southeast Asian type". Other species in the genus Mangifera also produce edible fruits that are also called "mangoes", the majority of which are found in the Malesian ecoregion. Worldwide, there are several hundred cultivars of mango. Depending on the cultivar, mango fruit varies in size, shape, sweetness, skin color, and flesh color, which may be pale yellow, gold, green, or orange. Mango is the national fruit of India, Pakistan and the Philippines, while the mango tree is the national tree of Bangladesh. Etymology The English word mango (plural mangoes or mangos) originated in the 16th century from the Portuguese word , from the Malay , and ultimately from the Tamil (, 'mango tree') + (, 'unripe fruit/vegetable') or the Malayalam (, 'mango tree') + (, 'unripe fruit'). The scientific name, Mangifera indica, refers to a plant bearing mangoes in India. Description Mango trees grow to tall, with a crown radius of . The trees are long-lived, as some specimens still fruit after 300 years. In deep soil, the taproot descends to a depth of , with profuse, wide-spreading feeder roots and anchor roots penetrating deeply into the soil. The leaves are evergreen, alternate, simple, long, and broad; when the leaves are young they are orange-pink, rapidly changing to a dark, glossy red, then dark green as they mature. The flowers are produced in terminal panicles long; each flower is small and white with five petals long, with a mild, sweet fragrance. Over 500 varieties of mangoes are known, many of which ripen in summer, while some give a double crop. The fruit takes four to five months from flowering to ripening. The ripe fruit varies according to cultivar in size, shape, color, sweetness, and eating quality. Depending on the cultivar, fruits are variously yellow, orange, red, or green. The fruit has a single flat, oblong pit that can be fibrous or hairy on the surface and does not separate easily from the pulp. The fruits may be somewhat round, oval, or kidney-shaped, ranging from in length and from to in weight per individual fruit. The skin is leather-like, waxy, smooth, and fragrant, with colors ranging from green to yellow, yellow-orange, yellow-red, or blushed with various shades of red, purple, pink, or yellow when fully ripe. Ripe intact mangoes give off a distinctive resinous, sweet smell. Inside the pit thick is a thin lining covering a single seed, long. Mangoes have recalcitrant seeds which do not survive freezing and drying. Mango trees grow readily from seeds, with germination success highest when seeds are obtained from mature fruits. Taxonomy Mangoes originated from the region between northwestern Myanmar, Bangladesh, and northeastern India. The mango is considered an evolutionary anachronism, whereby seed dispersal was once accomplished by a now-extinct evolutionary forager, such as a megafauna mammal. From their center of origin, mangoes diverged into two genetically distinct populations: the subtropical Indian group and the tropical Southeast Asian group. The Indian group is characterized by having monoembryonic fruits, while polyembryonic fruits characterize the Southeast Asian group. 
It was previously believed that mangoes originated from a single domestication event in South Asia before being spread to Southeast Asia, but a 2019 study found no evidence of a center of diversity in India. Instead, it identified a higher unique genetic diversity in Southeast Asian cultivars than in Indian cultivars, indicating that mangoes may have originally been domesticated first in Southeast Asia before being introduced to South Asia. However, the authors also cautioned that the diversity in Southeast Asian mangoes might be the result of other reasons (like interspecific hybridization with other Mangifera species native to the Malesian ecoregion). Nevertheless, the existence of two distinct genetic populations also identified by the study indicates that the domestication of the mango is more complex than previously assumed and would at least indicate multiple domestication events in Southeast Asia and South Asia. Cultivars There are hundreds of named mango cultivars. In mango orchards, several cultivars are often grown to improve pollination. Many desired cultivars are monoembryonic and must be propagated by grafting, or they do not breed true. A common monoembryonic cultivar is 'Alphonso', an important export product, considered "the king of mangoes". Cultivars that excel in one climate may fail elsewhere. For example, Indian cultivars such as 'Julie,' a prolific cultivar in Jamaica, require annual fungicide treatments to escape the lethal fungal disease anthracnose in Florida. Asian mangoes are resistant to anthracnose. The current world market is dominated by the cultivar 'Tommy Atkins', a seedling of 'Haden' that first fruited in 1940 in southern Florida and was initially rejected commercially by Florida researchers. Growers and importers worldwide have embraced the cultivar for its excellent productivity and disease resistance, shelf life, transportability, size, and appealing color. Although the Tommy Atkins cultivar is commercially successful, other cultivars may be preferred by consumers for eating pleasure, such as Alphonso. Generally, ripe mangoes have an orange-yellow or reddish peel and are juicy for eating, while exported fruit are often picked while underripe with green peels. Although producing ethylene while ripening, unripened exported mangoes do not have the same juiciness or flavor as fresh fruit. Distribution and habitat From tropical Asia, mangoes were introduced to East Africa by Arab and Persian traders in the ninth to tenth centuries. The 14th-century Moroccan traveler Ibn Battuta reported it at Mogadishu. It was spread further into other areas around the world during the Colonial Era. The Portuguese Empire spread the mango from their colony in Goa to East and West Africa. From West Africa, they introduced it to Brazil from the 16th to the 17th centuries. From Brazil, it spread northwards to the Caribbean and eastern Mexico by the mid to late 18th century. The Spanish Empire also introduced mangoes directly from the Philippines to western Mexico via the Manila galleons from at least the 16th century. Mangoes were only introduced to Florida by 1833. Cultivation The mango is now cultivated in most frost-free tropical and warmer subtropical climates. It is cultivated extensively in South Asia, Southeast Asia, East and West Africa, the tropical and subtropical Americas, and the Caribbean. 
Mangoes are also grown in Andalusia, Spain (mainly in Málaga province), as its coastal subtropical climate is one of the few places in mainland Europe that permits the growth of tropical plants and fruit trees. The Canary Islands are another notable Spanish producer of the fruit. Other minor cultivators include North America (in South Florida and the California Coachella Valley), Hawai'i, and Australia. Many commercial cultivars are grafted onto the cold-hardy rootstock of the Gomera-1 mango cultivar, originally from Cuba. Its root system is well adapted to a coastal Mediterranean climate. Many of the 1,000+ mango cultivars are easily cultivated using grafted saplings, ranging from the "turpentine mango" (named for its strong taste of turpentine) to the Bullock's Heart. Dwarf or semidwarf varieties serve as ornamental plants and can be grown in containers. A wide variety of diseases can afflict mangoes. A breakthrough in mango cultivation was the use of potassium nitrate and ethrel to induce flowering in mangoes. The discovery was made by Filipino horticulturist Ramon Barba in 1974 and was developed from the unique traditional method of inducing mango flowering using smoke in the Philippines. It allowed mango plantations to induce regular flowering and fruiting year-round. Previously, mangoes were seasonal because they only flowered every 16 to 18 months. The method is now used in most mango-producing countries. Production In 2022, world production of mangoes (report includes mangosteens and guavas) was 59 million tonnes, led by India with 44% of the total (table). Uses Culinary Mangoes are generally sweet, although the taste and texture of the flesh vary across cultivars; some, such as Alphonso, have a soft, pulpy, juicy texture similar to an overripe plum, while others, such as Tommy Atkins, are firmer with a fibrous texture. The skin of unripe, pickled, or cooked mango can be eaten, but it has the potential to cause contact dermatitis of the lips, gingiva, or tongue in susceptible people. Mangoes are used in many cuisines. Sour, unripe mangoes are used in chutneys (i.e., mango chutney), pickles, daals and other side dishes in Indian cuisine. A summer drink called aam panna is made with mangoes. Mango pulp made into jelly or cooked with red gram dhal and green chilies may be served with cooked rice. Mango lassi is consumed throughout South Asia, prepared by mixing ripe mangoes or mango pulp with buttermilk and sugar. Ripe mangoes are also used to make curries. Aamras is a thick juice made of mangoes with sugar or milk and is consumed with chapatis or pooris. The pulp from ripe mangoes is also used to make jam called mangada. Andhra aavakaaya is a pickle made from raw, unripe, pulpy, and sour mango mixed with chili powder, fenugreek seeds, mustard powder, salt, and groundnut oil. Mango is also used to make dahl and chunda (a sweet and spicy, grated mango delicacy). In Indonesian cuisine, unripe mango is processed into asinan, rujak and sambal pencit/mangga muda, or eaten with edible salt. Mangoes are used to make murabba (fruit preserves), muramba (a sweet, grated mango delicacy), amchur (dried and powdered unripe mango), and pickles, including a spicy mustard-oil pickle and alcohol. Ripe mangoes are cut into thin layers, desiccated, folded and then cut. The fruit is also added to cereal products such as muesli and oat granola. 
Mango is used to make juices, smoothies, ice cream, fruit bars, raspados, aguas frescas, pies, and sweet chili sauce, or mixed with chamoy, a sweet and spicy chili paste. In Central America, mango is either eaten green, mixed with salt, vinegar, black pepper, and hot sauce, or ripe in various forms. Pieces of mango can be mashed and used as a topping on ice cream or blended with milk and ice as milkshakes. Sweet glutinous rice is flavored with coconut, then served with sliced mango as mango sticky rice. In other parts of Southeast Asia, mangoes are pickled with fish sauce and rice vinegar. Green mangoes can be used in mango salad with fish sauce and dried shrimp. Mango with condensed milk may be used as a topping for shaved ice. Raw green mangoes can be sliced and eaten like a salad. In most parts of Southeast Asia, they are commonly eaten with fish sauce, vinegar, soy sauce, or with a dash of salt (plain or spicy)a combination usually known as "mango salad" in English. In the Philippines, green mangoes are also commonly eaten with bagoong (salty fish or shrimp paste), salt, soy sauce, vinegar or chilis. Mango float and mango cake, which use slices of ripe mangoes, are eaten in the Philippines. Dried strips of sweet, ripe mango (sometimes combined with seedless tamarind to form mangorind) are also consumed. Mangoes may be used to make juices, mango nectar, and as a flavoring and major ingredient in mango ice cream and sorbetes. Phytochemistry Numerous phytochemicals are present in mango peel and pulp, such as the triterpene lupeol. Mango peel pigments under study include carotenoids, such as the provitamin A compound, beta-carotene, lutein and alpha-carotene, and polyphenols, such as quercetin, kaempferol, gallic acid, caffeic acid, catechins and tannins. Mango contains a unique xanthonoid called mangiferin. Phytochemical and nutrient content appears to vary across mango cultivars. Up to 25 different carotenoids have been isolated from mango pulp, the densest of which was beta-carotene, which accounts for the yellow-orange pigmentation of most mango cultivars. Mango leaves also have significant polyphenol content, including xanthonoids, mangiferin and gallic acid. Flavor The flavor of mango fruits is conferred by several volatile organic chemicals mainly belonging to terpene, furanone, lactone, and ester classes. Different varieties or cultivars of mangoes can have flavors made up of different volatile chemicals or the same volatile chemicals in different quantities. In general, New World mango cultivars are characterized by the dominance of δ-3-carene, a monoterpene flavorant; whereas, high concentration of other monoterpenes such as (Z)-ocimene and myrcene, as well as the presence of lactones and furanones, is the unique feature of Old World cultivars. In India, 'Alphonso' is one of the most popular cultivars. In 'Alphonso' mango, the lactones and furanones are synthesized during ripening, whereas terpenes and the other flavorants are present in both the developing (immature) and ripening fruits. Ethylene, a ripening-related hormone well known to be involved in ripening of mango fruits, causes changes in the flavor composition of mango fruits upon exogenous application, as well. In contrast to the huge amount of information available on the chemical composition of mango flavor, the biosynthesis of these chemicals has not been studied in depth; only a handful of genes encoding the enzymes of flavor biosynthetic pathways have been characterized to date. 
Toxicity Contact with oils in mango leaves, stems, sap, and skin can cause dermatitis and anaphylaxis in susceptible individuals. Those with a history of contact dermatitis induced by urushiol (an allergen found in poison ivy, poison oak, or poison sumac) may be most at risk for mango contact dermatitis. Other mango compounds potentially responsible for dermatitis or allergic reactions include mangiferin. Cross-reactions may occur between mango allergens and urushiol. Sensitized individuals may not be able to eat peeled mangoes or drink mango juice safely. When mango trees are flowering in spring, local people with allergies may experience breathing difficulty, itching of the eyes, or facial swelling, even before flower pollen becomes airborne. In this case, the irritant is likely to be the vaporized essential oil from flowers. During the primary ripening season of mangoes, contact with mango plant parts – primarily sap, leaves, and fruit skin – is the most common cause of plant dermatitis in Hawaii. Nutrition A raw mango is 84% water, 15% carbohydrates, 1% protein, and has negligible fat (table). The energy value per 100 g (3.5 oz) serving of raw mango is 250 kJ (60 calories). Fresh mango contains only vitamin C and folate in significant amounts, providing 44% and 11% of the Daily Value, respectively (table). Culture The mango is the national fruit of India. It is also the national tree of Bangladesh. In India, the harvest and sale of mangoes takes place during March–May, and is covered annually by news agencies. The mango has a traditional context in the culture of South Asia. In his edicts, the Mauryan emperor Ashoka references the planting of fruit- and shade-bearing trees along imperial roads: "On the roads banyan-trees were caused to be planted by me, (in order that) they might afford shade to cattle and men, (and) mango-groves were caused to be planted." In medieval India, the Indo-Persian poet Amir Khusrau termed the mango "Naghza Tarin Mewa Hindustan" – "the fairest fruit of Hindustan". Mangoes were enjoyed at the court of the Delhi Sultan Alauddin Khalji. The Mughal Empire was especially fond of the fruit: Babur praises the mango in his Babarnameh. At the same time, Sher Shah Suri inaugurated the creation of the Chaunsa variety after his victory over the Mughal emperor Humayun. Mughal patronage of horticulture led to the grafting of thousands of mango varieties, including the famous Totapuri, which was the first variety to be exported to Iran and Central Asia. Akbar (1556–1605) is said to have planted a mango orchard of 100,000 trees near Darbhanga, Bihar, while Jahangir and Shah Jahan ordered the planting of mango orchards in Lahore and Delhi and the creation of mango-based desserts. The Jain goddess Ambika is traditionally represented as sitting under a mango tree. Mango blossoms are also used in the worship of the goddess Saraswati. Mango leaves decorate archways and doors in Indian houses during weddings and celebrations such as Ganesh Chaturthi. Mango motifs and paisleys are widely used in different Indian embroidery styles, and are found in Kashmiri shawls and Kanchipuram silk sarees. In Tamil Nadu, the mango is referred to as one of the three royal fruits, along with banana and jackfruit, prized for their sweetness and flavor. This triad of fruits is referred to as ma-pala-vazhai. The classical Sanskrit poet Kalidasa sang the praises of mangoes. Mangoes were the subject of the mango cult in China during the Cultural Revolution as symbols of Chairman Mao Zedong's love for the people.
Biology and health sciences
Sapindales
null
56333
https://en.wikipedia.org/wiki/Root
Root
In vascular plants, the roots are the organs of a plant that are modified to provide anchorage for the plant and take in water and nutrients into the plant body, which allows plants to grow taller and faster. They are most often below the surface of the soil, but roots can also be aerial or aerating, that is, growing up above the ground or especially above water. Function The major functions of roots are absorption of water, plant nutrition and anchoring of the plant body to the ground. Anatomy Root morphology is divided into four zones: the root cap, the apical meristem, the elongation zone, and the hair. The root cap of new roots helps the root penetrate the soil. These root caps are sloughed off as the root goes deeper creating a slimy surface that provides lubrication. The apical meristem behind the root cap produces new root cells that elongate. Then, root hairs form that absorb water and mineral nutrients from the soil. The first root in seed producing plants is the radicle, which expands from the plant embryo after seed germination. When dissected, the arrangement of the cells in a root is root hair, epidermis, epiblem, cortex, endodermis, pericycle and, lastly, the vascular tissue in the centre of a root to transport the water absorbed by the root to other places of the plant. Perhaps the most striking characteristic of roots that distinguishes them from other plant organs such as stem-branches and leaves is that roots have an endogenous origin, i.e., they originate and develop from an inner layer of the mother axis, such as pericycle. In contrast, stem-branches and leaves are exogenous, i.e., they start to develop from the cortex, an outer layer. In response to the concentration of nutrients, roots also synthesise cytokinin, which acts as a signal as to how fast the shoots can grow. Roots often function in storage of food and nutrients. The roots of most vascular plant species enter into symbiosis with certain fungi to form mycorrhizae, and a large range of other organisms including bacteria also closely associate with roots. Root system architecture (RSA) Definition In its simplest form, the term root system architecture (RSA) refers to the spatial configuration of a plant's root system. This system can be extremely complex and is dependent upon multiple factors such as the species of the plant itself, the composition of the soil and the availability of nutrients. Root architecture plays the important role of providing a secure supply of nutrients and water as well as anchorage and support. The configuration of root systems serves to structurally support the plant, compete with other plants and for uptake of nutrients from the soil. Roots grow to specific conditions, which, if changed, can impede a plant's growth. For example, a root system that has developed in dry soil may not be as efficient in flooded soil, yet plants are able to adapt to other changes in the environment, such as seasonal changes. Terms and components The main terms used to classify the architecture of a root system are: All components of the root architecture are regulated through a complex interaction between genetic responses and responses due to environmental stimuli. These developmental stimuli are categorised as intrinsic, the genetic and nutritional influences, or extrinsic, the environmental influences and are interpreted by signal transduction pathways. 
Extrinsic factors affecting root architecture include gravity, light exposure, water and oxygen, as well as the availability or lack of nitrogen, phosphorus, sulphur, aluminium and sodium chloride. The main hormones (intrinsic stimuli) and respective pathways responsible for root architecture development include: Growth Early root growth is one of the functions of the apical meristem located near the tip of the root. The meristem cells more or less continuously divide, producing more meristem, root cap cells (these are sacrificed to protect the meristem), and undifferentiated root cells. The latter become the primary tissues of the root, first undergoing elongation, a process that pushes the root tip forward in the growing medium. Gradually these cells differentiate and mature into specialized cells of the root tissues. Growth from apical meristems is known as primary growth, which encompasses all elongation. Secondary growth encompasses all growth in diameter, a major component of woody plant tissues and many nonwoody plants. For example, storage roots of sweet potato have secondary growth but are not woody. Secondary growth occurs at the lateral meristems, namely the vascular cambium and cork cambium. The former forms secondary xylem and secondary phloem, while the latter forms the periderm. In plants with secondary growth, the vascular cambium, originating between the xylem and the phloem, forms a cylinder of tissue along the stem and root. The vascular cambium forms new cells on both the inside and outside of the cambium cylinder, with those on the inside forming secondary xylem cells, and those on the outside forming secondary phloem cells. As secondary xylem accumulates, the "girth" (lateral dimensions) of the stem and root increases. As a result, tissues beyond the secondary phloem including the epidermis and cortex, in many cases tend to be pushed outward and are eventually "sloughed off" (shed). At this point, the cork cambium begins to form the periderm, consisting of protective cork cells. The walls of cork cells contains suberin thickenings, which is an extra cellular complex biopolymer. The suberin thickenings functions by providing a physical barrier, protection against pathogens and by preventing water loss from the surrounding tissues. In addition, it also aids the process of wound healing in plants. It is also postulated that suberin could be a component of the apoplastic barrier (present at the outer cell layers of roots) which prevents toxic compounds from entering the root and reduces radial oxygen loss (ROL) from the aerenchyma during waterlogging. In roots, the cork cambium originates in the pericycle, a component of the vascular cylinder. The vascular cambium produces new layers of secondary xylem annually. The xylem vessels are dead at maturity (in some) but are responsible for most water transport through the vascular tissue in stems and roots. Tree roots usually grow to three times the diameter of the branch spread, only half of which lie underneath the trunk and canopy. The roots from one side of a tree usually supply nutrients to the foliage on the same side. Some families however, such as Sapindaceae (the maple family), show no correlation between root location and where the root supplies nutrients on the plant. Regulation There is a correlation of roots using the process of plant perception to sense their physical environment to grow, including the sensing of light, and physical barriers. 
Plants also sense gravity and respond through auxin pathways, resulting in gravitropism. Over time, roots can crack foundations, snap water lines, and lift sidewalks. Research has shown that roots have ability to recognize 'self' and 'non-self' roots in same soil environment. The correct environment of air, mineral nutrients and water directs plant roots to grow in any direction to meet the plant's needs. Roots will shy or shrink away from dry or other poor soil conditions. Gravitropism directs roots to grow downward at germination, the growth mechanism of plants that also causes the shoot to grow upward. Different types of roots such as primary, seminal, lateral and crown are maintained at different gravitropic setpoint angles i.e. the direction in which they grow. Recent research show that root angle in cereal crops such as barley and wheat is regulated by a novel gene called Enhanced Gravitropism 1 (EGT1). Research indicates that plant roots growing in search of productive nutrition can sense and avoid soil compaction through diffusion of the gas ethylene. Shade avoidance response In order to avoid shade, plants utilize a shade avoidance response. When a plant is under dense vegetation, the presence of other vegetation nearby will cause the plant to avoid lateral growth and experience an increase in upward shoot, as well as downward root growth. In order to escape shade, plants adjust their root architecture, most notably by decreasing the length and amount of lateral roots emerging from the primary root. Experimentation of mutant variants of Arabidopsis thaliana found that plants sense the Red to Far Red light ratio that enters the plant through photoreceptors known as phytochromes. Nearby plant leaves will absorb red light and reflect far-red light, which will cause the ratio red to far red light to lower. The phytochrome PhyA that senses this Red to Far Red light ratio is localized in both the root system as well as the shoot system of plants, but through knockout mutant experimentation, it was found that root localized PhyA does not sense the light ratio, whether directly or axially, that leads to changes in the lateral root architecture. Research instead found that shoot localized PhyA is the phytochrome responsible for causing these architectural changes of the lateral root. Research has also found that phytochrome completes these architectural changes through the manipulation of auxin distribution in the root of the plant. When a low enough Red to Far Red ratio is sensed by PhyA, the phyA in the shoot will be mostly in its active form. In this form, PhyA stabilize the transcription factor HY5 causing it to no longer be degraded as it is when phyA is in its inactive form. This stabilized transcription factor is then able to be transported to the roots of the plant through the phloem, where it proceeds to induce its own transcription as a way to amplify its signal. In the roots of the plant HY5 functions to inhibit an auxin response factor known as ARF19, a response factor responsible for the translation of PIN3 and LAX3, two well known auxin transporting proteins. Thus, through manipulation of ARF19, the level and activity of auxin transporters PIN3 and LAX3 is inhibited. Once inhibited, auxin levels will be low in areas where lateral root emergence normally occurs, resulting in a failure for the plant to have the emergence of the lateral root primordium through the root pericycle. 
With this complex manipulation of Auxin transport in the roots, lateral root emergence will be inhibited in the roots and the root will instead elongate downwards, promoting vertical plant growth in an attempt to avoid shade. Research of Arabidopsis has led to the discovery of how this auxin mediated root response works. In an attempt to discover the role that phytochrome plays in lateral root development, Salisbury et al. (2007) worked with Arabidopsis thaliana grown on agar plates. Salisbury et al. used wild type plants along with varying protein knockout and gene knockout Arabidopsis mutants to observe the results these mutations had on the root architecture, protein presence, and gene expression. To do this, Salisbury et al. used GFP fluorescence along with other forms of both macro and microscopic imagery to observe any changes various mutations caused. From these research, Salisbury et al. were able to theorize that shoot located phytochromes alter auxin levels in roots, controlling lateral root development and overall root architecture. In the experiments of van Gelderen et al. (2018), they wanted to see if and how it is that the shoot of A. thaliana alters and affects root development and root architecture. To do this, they took Arabidopsis plants, grew them in agar gel, and exposed the roots and shoots to separate sources of light. From here, they altered the different wavelengths of light the shoot and root of the plants were receiving and recorded the lateral root density, amount of lateral roots, and the general architecture of the lateral roots. To identify the function of specific photoreceptors, proteins, genes, and hormones, they utilized various Arabidopsis knockout mutants and observed the resulting changes in lateral roots architecture. Through their observations and various experiments, van Gelderen et al. were able to develop a mechanism for how root detection of Red to Far-red light ratios alter lateral root development. Types A true root system consists of a primary root and secondary roots (or lateral roots). the diffuse root system: the primary root is not dominant; the whole root system is fibrous and branches in all directions. Most common in monocots. The main function of the fibrous root is to anchor the plant. Specialized The roots, or parts of roots, of many plant species have become specialized to serve adaptive purposes besides the two primary functions, described in the introduction. Adventitious roots arise out-of-sequence from the more usual root formation of branches of a primary root, and instead originate from the stem, branches, leaves, or old woody roots. They commonly occur in monocots and pteridophytes, but also in many dicots, such as clover (Trifolium), ivy (Hedera), strawberry (Fragaria) and willow (Salix). Most aerial roots and stilt roots are adventitious. In some conifers adventitious roots can form the largest part of the root system. Adventitious root formation is enhanced in many plant species during (partial) submergence, to increase gas exchange and storage of gases like oxygen. Distinct types of adventitious roots can be classified and are dependent on morphology, growth dynamics and function. Aerating roots (or knee root or knee or pneumatophores): roots rising above the ground, especially above water such as in some mangrove genera (Avicennia, Sonneratia). In some plants like Avicennia the erect roots have a large number of breathing pores for exchange of gases. 
Aerial roots: roots entirely above the ground, such as in ivy (Hedera) or in epiphytic orchids. Many aerial roots are used to receive water and nutrient intake directly from the air – from fogs, dew or humidity in the air. Some rely on leaf systems to gather rain or humidity and even store it in scales or pockets. Other aerial roots, such as mangrove aerial roots, are used for aeration and not for water absorption. Other aerial roots are used mainly for structure, functioning as prop roots, as in maize or anchor roots or as the trunk in strangler fig. In some Epiphytes – plants living above the surface on other plants, aerial roots serve for reaching to water sources or reaching the surface, and then functioning as regular surface roots. Canopy roots/arboreal roots: roots that form when tree branches support mats of epiphytes and detritus, which hold water and nutrients in the canopy. They grow out into these mats, likely to utilize the available nutrients and moisture. Coarse roots: roots that have undergone secondary thickening and have a woody structure. These roots have some ability to absorb water and nutrients, but their main function is transport and to provide a structure to connect the smaller diameter, fine roots to the rest of the plant. Contractile roots: roots that pull bulbs or corms of monocots, such as hyacinth and lily, and some taproots, such as dandelion, deeper in the soil through expanding radially and contracting longitudinally. They have a wrinkled surface. Coralloid roots: similar to root nodules, these provide nitrogen to the plant. They are often larger than nodules, branched, and located at or near the soil surface, and harbor nitrogen-fixing cyanobacteria. They are only found in cycads. Dimorphic root systems: roots with two distinctive forms for two separate functions Fine roots: typically primary roots <2 mm diameter that have the function of water and nutrient uptake. They are often heavily branched and support mycorrhizas. These roots may be short lived, but are replaced by the plant in an ongoing process of root 'turnover'. Haustorial roots: roots of parasitic plants that can absorb water and nutrients from another plant, such as in mistletoe (Viscum album) and dodder. Propagative roots: roots that form adventitious buds that develop into aboveground shoots, termed suckers, which form new plants, as in common milkweed (Asclepias syriaca), Canada thistle (Cirsium arvense), and many others. Photosynthetic roots: roots that are green and photosynthesize, providing sugar to the plant. They are similar to phylloclades. Several orchids have these, such as Dendrophylax and Taeniophyllum. Proteoid roots or cluster roots: dense clusters of rootlets of limited growth that develop under low phosphate or low iron conditions in Proteaceae and some plants from the following families Betulaceae, Casuarinaceae, Elaeagnaceae, Moraceae, Fabaceae and Myricaceae. Root nodules: roots that harbor nitrogen-fixing soil bacteria. These are often very short and rounded. Root nodules are found in virtually all legumes. Stilt roots: adventitious support roots, common among mangroves. They grow down from lateral branches, branching in the soil. Storage roots: roots modified for storage of food or water, such as carrots and beets. They include some taproots and tuberous roots. Structural roots: large roots that have undergone considerable secondary thickening and provide mechanical support to woody plants and trees. 
Surface roots: roots that proliferate close below the soil surface, exploiting water and easily available nutrients. Where conditions are close to optimum in the surface layers of soil, the growth of surface roots is encouraged and they commonly become the dominant roots. Tuberous roots: fleshy and enlarged lateral roots for food or water storage, e.g. sweet potato. A type of storage root distinct from taproot. Depths The distribution of vascular plant roots within soil depends on plant form, the spatial and temporal availability of water and nutrients, and the physical properties of the soil. The deepest roots are generally found in deserts and temperate coniferous forests; the shallowest in tundra, boreal forest and temperate grasslands. The deepest observed living root, at least below the ground surface, was observed during the excavation of an open-pit mine in Arizona, US. Some roots can grow as deep as the tree is high. The majority of roots on most plants are however found relatively close to the surface where nutrient availability and aeration are more favourable for growth. Rooting depth may be physically restricted by rock or compacted soil close below the surface, or by anaerobic soil conditions. Records Evolutionary history The fossil record of roots—or rather, infilled voids where roots rotted after death—spans back to the late Silurian, about 430 million years ago. Their identification is difficult, because casts and molds of roots are so similar in appearance to animal burrows. They can be discriminated using a range of features. The evolutionary development of roots likely happened from the modification of shallow rhizomes (modified horizontal stems) which anchored primitive vascular plants combined with the development of filamentous outgrowths (called rhizoids) which anchored the plants and conducted water to the plant from the soil. Environmental interactions Light has been shown to have some impact on roots, but it's not been studied as much as the effect of light on other plant systems. Early research in the 1930s found that light decreased the effectiveness of Indole-3-acetic acid on adventitious root initiation. Studies of the pea in the 1950s shows that lateral root formation was inhibited by light, and in the early 1960s researchers found that light could induce positive gravitropic responses in some situations. The effects of light on root elongation has been studied for monocotyledonous and dicotyledonous plants, with the majority of studies finding that light inhibited root elongation, whether pulsed or continuous. Studies of Arabidopsis in the 1990s showed negative phototropism and inhibition of the elongation of root hairs in light sensed by phyB. Certain plants, namely Fabaceae, form root nodules in order to associate and form a symbiotic relationship with nitrogen-fixing bacteria called rhizobia. Owing to the high energy required to fix nitrogen from the atmosphere, the bacteria take carbon compounds from the plant to fuel the process. In return, the plant takes nitrogen compounds produced from ammonia by the bacteria. Soil temperature is a factor that effects root initiation and length. Root length is usually impacted more dramatically by temperature than overall mass, where cooler temperatures tend to cause more lateral growth because downward extension is limited by cooler temperatures at subsoil levels. Needs vary by plant species, but in temperate regions cool temperatures may limit root systems. 
Cool temperature species like oats, rapeseed, rye, wheat fare better in lower temperatures than summer annuals like maize and cotton. Researchers have found that plants like cotton develop wider and shorter taproots in cooler temperatures. The first root originating from the seed usually has a wider diameter than root branches, so smaller root diameters are expected if temperatures increase root initiation. Root diameter also decreases when the root elongates. Plant interactions Plants can interact with one another in their environment through their root systems. Studies have demonstrated that plant-plant interaction occurs among root systems via the soil as a medium. Researchers have tested whether plants growing in ambient conditions would change their behavior if a nearby plant was exposed to drought conditions. Since nearby plants showed no changes in stomatal aperture researchers believe the drought signal spread through the roots and soil, not through the air as a volatile chemical signal. Soil interactions Soil microbiota can suppress both disease and beneficial root symbionts (mycorrhizal fungi are easier to establish in sterile soil). Inoculation with soil bacteria can increase internode extension, yield and quicken flowering. The migration of bacteria along the root varies with natural soil conditions. For example, research has found that the root systems of wheat seeds inoculated with Azotobacter showed higher populations in soils favorable to Azotobacter growth. Some studies have been unsuccessful in increasing the levels of certain microbes (such as P. fluorescens) in natural soil without prior sterilization. Grass root systems are beneficial at reducing soil erosion by holding the soil together. Perennial grasses that grow wild in rangelands contribute organic matter to the soil when their old roots decay after attacks by beneficial fungi, protozoa, bacteria, insects and worms release nutrients. Scientists have observed significant diversity of the microbial cover of roots at around 10 percent of three week old root segments covered. On younger roots there was even low coverage, but even on 3-month-old roots the coverage was only around 37%. Before the 1970s, scientists believed that the majority of the root surface was covered by microorganisms. Nutrient absorption Researchers studying maize seedlings found that calcium absorption was greatest in the apical root segment, and potassium at the base of the root. Along other root segments absorption was similar. Absorbed potassium is transported to the root tip, and to a lesser extent other parts of the root, then also to the shoot and grain. Calcium transport from the apical segment is slower, mostly transported upward and accumulated in stem and shoot. Researchers found that partial deficiencies of K or P did not change the fatty acid composition of phosphatidyl choline in Brassica napus L. plants. Calcium deficiency did, on the other hand, lead to a marked decline of polyunsaturated compounds that would be expected to have negative impacts for integrity of the plant membrane, that could effect some properties like its permeability, and is needed for the ion uptake activity of the root membranes. Economic importance The term root crops refers to any edible underground plant structure, but many root crops are actually stems, such as potato tubers. Edible roots include cassava, sweet potato, beet, carrot, rutabaga, turnip, parsnip, radish, yam and horseradish. 
Spices obtained from roots include sassafras, angelica, sarsaparilla and licorice. Sugar beet is an important source of sugar. Yam roots are a source of estrogen compounds used in birth control pills. The fish poison and insecticide rotenone is obtained from roots of Lonchocarpus spp. Important medicines from roots are ginseng, aconite, ipecac, gentian and reserpine. Several legumes that have nitrogen-fixing root nodules are used as green manure crops, which provide nitrogen fertilizer for other crops when plowed under. Specialized bald cypress roots, termed knees, are sold as souvenirs, lamp bases and carved into folk art. Native Americans used the flexible roots of white spruce for basketry. Tree roots can heave and destroy concrete sidewalks and crush or clog buried pipes. The aerial roots of strangler fig have damaged ancient Mayan temples in Central America and the temple of Angkor Wat in Cambodia. Trees stabilize soil on a slope prone to landslides. The root hairs work as an anchor on the soil. Vegetative propagation of plants via cuttings depends on adventitious root formation. Hundreds of millions of plants are propagated via cuttings annually including chrysanthemum, poinsettia, carnation, ornamental shrubs and many houseplants. Roots can also protect the environment by holding the soil to reduce soil erosion. This is especially important in areas such as sand dunes.
Biology and health sciences
Plant: General
null
56360
https://en.wikipedia.org/wiki/Sorbitol
Sorbitol
Sorbitol (), less commonly known as glucitol (), is a sugar alcohol with a sweet taste which the human body metabolizes slowly. It can be obtained by reduction of glucose, which changes the converted aldehyde group (−CHO) to a primary alcohol group (−CH2OH). Most sorbitol is made from potato starch, but it is also found in nature, for example in apples, pears, peaches, and prunes. It is converted to fructose by sorbitol-6-phosphate 2-dehydrogenase. Sorbitol is an isomer of mannitol, another sugar alcohol; the two differ only in the orientation of the hydroxyl group on carbon2. While similar, the two sugar alcohols have very different sources in nature, melting points, and uses. As an over-the-counter drug, sorbitol is used as a laxative to treat constipation. Synthesis Sorbitol may be synthesised via a glucose reduction reaction in which the converted aldehyde group is converted into a hydroxyl group. The reaction requires NADH and is catalyzed by aldose reductase. Glucose reduction is the first step of the polyol pathway of glucose metabolism, and is implicated in multiple diabetic complications. C6H12O6 + NADH + H+ -> C6H14O6 + NAD+The mechanism involves a tyrosine residue in the active site of aldehyde reductase. The hydrogen atom on NADH is transferred to the electrophilic aldehyde carbon atom; electrons on the aldehyde carbon-oxygen double bond are transferred to the oxygen that abstracts the proton on tyrosine side chain to form the hydroxyl group. The role of aldehyde reductase tyrosine phenol group is to serve as a general acid to provide proton to the reduced aldehyde oxygen on glucose. Glucose reduction is not the major glucose metabolism pathway in a normal human body, where the glucose level is in the normal range. However, in diabetic patients whose blood glucose level is high, up to 1/3 of their glucose could go through the glucose reduction pathway. This will consume NADH and eventually leads to cell damage. Uses Sweetener Sorbitol is a sugar substitute, and when used in food it has the INS number and E number 420. Sorbitol is about 60% as sweet as sucrose (table sugar). Sorbitol is referred to as a nutritive sweetener because it provides some dietary energy. It is partly absorbed from the small intestine and metabolized in the body, and partly fermented in the large intestine. The fermentation produces short-chain fatty acids, acetic acid, propionic acid, and butyric acid, which are mostly absorbed and provide energy, but also carbon dioxide, methane, and hydrogen which do not provide energy. Even though the heat of combustion of sorbitol is higher than that of glucose (having two extra hydrogen atoms), the net energy contribution is between 2.5 and 3.4 kilocalories per gram, versus the approximately 4 kilocalories (17 kilojoules) for carbohydrates. It is often used in diet foods (including diet drinks and ice cream), mints, cough syrups, and sugar-free chewing gum. Most bacteria cannot use sorbitol for energy, but it can be slowly fermented in the mouth by Streptococcus mutans, a bacterium that causes tooth decay. In contrast, many other sugar alcohols such as isomalt and xylitol are considered non-acidogenic. It also occurs naturally in many stone fruits and berries from trees of the genus Sorbus. Medical applications Laxative As is the case with other sugar alcohols, foods containing sorbitol can cause gastrointestinal distress. Sorbitol can be used as a laxative when taken orally or as an enema. 
Sorbitol works as a laxative by drawing water into the large intestine, stimulating bowel movements. Sorbitol has been determined safe for use by the elderly, although it is not recommended without the advice of a physician. Sorbitol is commonly used orally as a one-time dose of 70% solution. It may also be used as a one-time rectal enema. Other medical applications Sorbitol is used in bacterial culture media to distinguish the pathogenic Escherichia coli O157:H7 from most other strains of E. coli, because O157:H7 is usually unable to ferment sorbitol, unlike 93% of known E. coli strains. A treatment for hyperkalaemia (elevated blood potassium) uses sorbitol and the ion-exchange resin sodium polystyrene sulfonate (tradename Kayexalate). The resin exchanges sodium ions for potassium ions in the bowel, while sorbitol acts as a laxative that helps expel the potassium-laden resin. In 2010, the U.S. FDA issued a warning of increased risk for gastrointestinal necrosis with this combination. Sorbitol is also used in the manufacture of softgel capsules to store single doses of liquid medicines. Health care, food, and cosmetic uses Sorbitol is often used in modern cosmetics as a humectant and thickener. It is also used in mouthwash and toothpaste. Some transparent gels can be made only with sorbitol, because of its high refractive index. Sorbitol is used as a cryoprotectant additive (mixed with sucrose and sodium polyphosphates) in the manufacture of surimi, a processed fish paste. It is also used as a humectant in some cigarettes. Beyond its use as a sugar substitute in reduced-sugar foods, sorbitol is also used as a humectant in cookies and low-moisture foods like peanut butter and fruit preserves. In baking, it is also valuable because it acts as a plasticizer, and slows down the staling process. Miscellaneous uses A mixture of sorbitol and potassium nitrate has found some success as an amateur solid rocket fuel. It has similar performance to sucrose-based rocket candy, but is easier to cast, less hygroscopic and does not caramelize. Sorbitol is identified as a potential key chemical intermediate for production of fuels from biomass resources. Carbohydrate fractions in biomass such as cellulose undergo sequential hydrolysis and hydrogenation in the presence of metal catalysts to produce sorbitol. Complete reduction of sorbitol opens the way to alkanes, such as hexane, which can be used as a biofuel. Hydrogen required for this reaction can be produced by aqueous phase catalytic reforming of sorbitol, giving the overall conversion 19 C6H14O6 → 13 C6H14 + 36 CO2 + 42 H2O. This reaction is exothermic, and approximately 1.5 moles of sorbitol generate 1 mole of hexane (a short stoichiometric check of this equation is sketched below). When hydrogen is co-fed, no carbon dioxide is produced. Sorbitol-based polyols are used in the production of polyurethane foam for the construction industry. It is also added after electroporation of yeasts in transformation protocols, allowing the cells to recover by raising the osmolarity of the medium. Medical importance Aldose reductase is the first enzyme in the sorbitol-aldose reductase pathway responsible for the reduction of glucose to sorbitol, as well as the reduction of galactose to galactitol. Too much sorbitol trapped in retinal cells, the cells of the lens, and the Schwann cells that myelinate peripheral nerves, is a frequent result of long-term hyperglycemia that accompanies poorly controlled diabetes. This can damage these cells, leading to retinopathy, cataracts and peripheral neuropathy, respectively.
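The sorbitol-to-hexane stoichiometry quoted under Miscellaneous uses can be checked directly by counting atoms. The following is a minimal sketch, not part of the original text; the element counts are hard-coded from the molecular formulas C6H14O6, C6H14, CO2, and H2O.

```python
# Verify atom balance for 19 C6H14O6 -> 13 C6H14 + 36 CO2 + 42 H2O
# and the roughly 1.5 : 1 sorbitol-to-hexane molar ratio mentioned above.
reactants = {"C": 19 * 6, "H": 19 * 14, "O": 19 * 6}
products = {
    "C": 13 * 6 + 36 * 1,   # carbon in hexane + CO2
    "H": 13 * 14 + 42 * 2,  # hydrogen in hexane + H2O
    "O": 36 * 2 + 42 * 1,   # oxygen in CO2 + H2O
}
assert reactants == products   # the equation is balanced
print(19 / 13)                 # ~1.46 mol sorbitol consumed per mol hexane produced
```

The 19:13 molar ratio is what the text rounds to "approximately 1.5 moles of sorbitol per mole of hexane".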
Sorbitol is fermented in the colon and produces short-chain fatty acids, which are beneficial to overall colon health. Potential adverse effects Sorbitol may cause allergic reactions in some people. Common side effects from use as a laxative are stomach cramps, vomiting, diarrhea or rectal bleeding. Compendial status Food Chemicals Codex European Pharmacopoeia 6.1 British Pharmacopoeia 2009 Japanese Pharmacopoeia 17
Physical sciences
Sugar alcohols
Chemistry
56369
https://en.wikipedia.org/wiki/Bell%27s%20theorem
Bell's theorem
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories, given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are supposed properties of quantum particles that are not included in quantum theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local." The first such result was introduced by Bell in 1964, building upon the Einstein–Podolsky–Rosen paradox, which had called attention to the phenomenon of quantum entanglement. Bell deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. Such a constraint would later be named a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Multiple variations on Bell's theorem were put forward in the following years, using different assumptions and obtaining different Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities, which is to say that the results of these experiments are incompatible with local hidden-variable theories. The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, different interpretations of quantum mechanics disagree about what exactly it implies. Theorem There are many variations on the basic idea, some employing stronger mathematical assumptions than others. Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example. Hypothetical characters Alice and Bob stand in widely separated locations. Their colleague Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements by $A_0$ and $A_1$. Both $A_0$ and $A_1$ are binary measurements: the result of $A_0$ is either $+1$ or $-1$, and likewise for $A_1$.
When Bob receives his particle, he chooses one of two measurements, $B_0$ and $B_1$, which are also both binary. Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measure $A_0$ and obtains the result $+1$, then the particle she received carried a value of $+1$ for a property $a_0$. Consider the combination $a_0 b_0 + a_0 b_1 + a_1 b_0 - a_1 b_1 = (a_0 + a_1) b_0 + (a_0 - a_1) b_1$. Because both $a_0$ and $a_1$ take the values $\pm 1$, then either $a_0 = a_1$ or $a_0 = -a_1$. In the former case, the quantity $(a_0 - a_1) b_1$ must equal 0, while in the latter case, $(a_0 + a_1) b_0 = 0$. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal $\pm 2$. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the absolute value of the average of the combination $a_0 b_0 + a_0 b_1 + a_1 b_0 - a_1 b_1$ across all the trials will be less than or equal to 2. No single trial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. Using angle brackets to denote averages, $|\langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle| \le 2$. This is a Bell inequality, specifically, the CHSH inequality. Its derivation here depends upon two assumptions: first, that the underlying physical properties $a_0$, $a_1$, $b_0$, and $b_1$ exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality). Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state $|\psi\rangle = (|0\rangle \otimes |1\rangle - |1\rangle \otimes |0\rangle)/\sqrt{2}$, where $|0\rangle$ and $|1\rangle$ are the eigenstates of one of the Pauli matrices, $\sigma_z$. Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are also defined in terms of the Pauli matrices. Alice measures either of the two observables $\sigma_z$ and $\sigma_x$: $A_0 = \sigma_z$ and $A_1 = \sigma_x$; and Bob measures either of the two observables $B_0 = -(\sigma_z + \sigma_x)/\sqrt{2}$ and $B_1 = (\sigma_x - \sigma_z)/\sqrt{2}$. Victor can calculate the quantum expectation values for pairs of these observables using the Born rule: $\langle A_0 \otimes B_0 \rangle = \langle A_0 \otimes B_1 \rangle = \langle A_1 \otimes B_0 \rangle = 1/\sqrt{2}$ and $\langle A_1 \otimes B_1 \rangle = -1/\sqrt{2}$. While only one of these four measurements can be made in a single trial of the experiment, the sum $\langle A_0 \otimes B_0 \rangle + \langle A_0 \otimes B_1 \rangle + \langle A_1 \otimes B_0 \rangle - \langle A_1 \otimes B_1 \rangle = 2\sqrt{2}$ gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables. The value $2\sqrt{2}$ is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound. The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions. Victor prepares two bits, $x$ and $y$, independently and at random. He sends bit $x$ to Alice and bit $y$ to Bob. Alice and Bob win if they return answer bits $a$ and $b$ to Victor, satisfying $a \oplus b = x \wedge y$. Or, equivalently, Alice and Bob win if the logical AND of $x$ and $y$ is the logical XOR of $a$ and $b$. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than $3/4$, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as $\cos^2(\pi/8) \approx 0.85$.
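The quantum value of the CHSH combination can be reproduced numerically. The following is a minimal sketch, assuming NumPy is available; the state and observables are those reconstructed above, and the helper name `expval` is introduced here purely for illustration.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet Bell state (|01> - |10>)/sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Alice's and Bob's observables, as defined in the text above
A0, A1 = sz, sx
B0 = -(sz + sx) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def expval(A, B):
    """Born-rule expectation value <psi| A (x) B |psi>."""
    return float(np.real(psi.conj() @ np.kron(A, B) @ psi))

S = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(S, 2 * np.sqrt(2))       # both ~2.828, above the classical bound of 2
print(np.cos(np.pi / 8) ** 2)  # ~0.854, the quantum winning probability in the CHSH game
```

Running the sketch gives S ≈ 2.828, the Tsirelson bound, and the CHSH-game winning probability of about 0.85 quoted above.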
Variations and related results Bell (1964) Bell's 1964 paper points out that under restricted conditions, local hidden-variable models can reproduce the predictions of quantum mechanics. He then demonstrates that this cannot hold true in general. Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by $+1$ and $-1$. The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings $\vec{a}$ and $\vec{b}$ is $P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}$. In particular, if the orientation of the two detectors is the same ($\vec{a} = \vec{b}$), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving $P(\vec{a}, \vec{a}) = -1$. And if the orientations of the two detectors are orthogonal ($\vec{a} \cdot \vec{b} = 0$), then the outcomes are uncorrelated, and $P(\vec{a}, \vec{b}) = 0$. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot. Bell posited that a local hidden-variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter $\lambda$: $P(\vec{a}, \vec{b}) = \int d\lambda\, \rho(\lambda)\, A(\vec{a}, \lambda)\, B(\vec{b}, \lambda)$, where $\rho(\lambda)$ is a probability density function. The two functions $A(\vec{a}, \lambda)$ and $B(\vec{b}, \lambda)$ provide the responses of the two detectors given the orientation vectors and the hidden variable: $A(\vec{a}, \lambda) = \pm 1$ and $B(\vec{b}, \lambda) = \pm 1$. Crucially, the outcome of detector $A$ does not depend upon $\vec{b}$, and likewise the outcome of $B$ does not depend upon $\vec{a}$, because the two detectors are physically separated. Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to $\vec{b}$ or to $\vec{c}$. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality $|P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c})| \le 1 + P(\vec{b}, \vec{c})$. However, it is easy to find situations where quantum mechanics violates the Bell inequality. For example, let the vectors $\vec{a}$ and $\vec{b}$ be orthogonal, and let $\vec{c}$ lie in their plane at a 45° angle from both of them. Then $P(\vec{a}, \vec{c}) = P(\vec{b}, \vec{c}) = -1/\sqrt{2}$, while $P(\vec{a}, \vec{b}) = 0$, but $1/\sqrt{2} \nleq 1 - 1/\sqrt{2}$. Therefore, there is no local hidden-variable model that can reproduce the predictions of quantum mechanics for all choices of $\vec{a}$, $\vec{b}$, and $\vec{c}$. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for. Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a probability-1 prediction about the result from the second detector, knowing the result from the first. This is related to the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen. This paper posits: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."
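The violation in the 45° example can be checked with a few lines of arithmetic. The following is a minimal sketch using the singlet correlation $P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}$ defined above; the particular unit vectors are simply one concrete realization of the geometry described in the text.

```python
import numpy as np

def P(a, b):
    """Singlet-state correlation predicted by quantum mechanics."""
    return -float(np.dot(a, b))

# a and b orthogonal; c in their plane at 45 degrees to both
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
c = np.array([1.0, 1.0]) / np.sqrt(2)

lhs = abs(P(a, b) - P(a, c))   # ~0.707
rhs = 1 + P(b, c)              # ~0.293
print(lhs, rhs, lhs <= rhs)    # prints False: Bell's 1964 inequality is violated
```

The left-hand side (about 0.707) exceeds the right-hand side (about 0.293), which is exactly the violation stated in the prose.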
GHZ–Mermin (1990) Daniel Greenberger, Michael A. Horne, and Anton Zeilinger presented a four-particle thought experiment in 1990, which David Mermin then simplified to use only three particles. In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state $|\psi\rangle = (|000\rangle - |111\rangle)/\sqrt{2}$, where as above, $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either $\sigma_x$ or $\sigma_y$ on her particle, and so do Bob and Charlie. The result of each measurement is either $+1$ or $-1$. Applying the Born rule to the three-qubit state $|\psi\rangle$, Victor predicts that whenever the three measurements include one $\sigma_x$ and two $\sigma_y$'s, the product of the outcomes will always be $+1$. This follows because $|\psi\rangle$ is an eigenvector of $\sigma_x \otimes \sigma_y \otimes \sigma_y$ with eigenvalue $+1$, and likewise for $\sigma_y \otimes \sigma_x \otimes \sigma_y$ and $\sigma_y \otimes \sigma_y \otimes \sigma_x$. Therefore, knowing Alice's result for a $\sigma_y$ measurement and Bob's result for a $\sigma_y$ measurement, Victor can predict with probability 1 what result Charlie will return for a $\sigma_x$ measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a $\sigma_x$ measurement upon Charlie's qubit. Indeed, this same logic applies to both measurements and all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a $\sigma_x$ or $\sigma_y$ measurement upon it. The set of all three particles would then be described by the instruction set $(a_x, a_y, b_x, b_y, c_x, c_y)$, with each entry being either $+1$ or $-1$, and each $\sigma_x$ or $\sigma_y$ measurement simply returning the appropriate value. If Alice, Bob, and Charlie all perform the $\sigma_x$ measurement, then the product of their results would be $a_x b_x c_x$. This value can be deduced from $(a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x\, a_y^2 b_y^2 c_y^2 = a_x b_x c_x$, because the square of either $+1$ or $-1$ is $+1$. Each factor in parentheses equals $+1$, so $a_x b_x c_x = +1$, and the product of Alice, Bob, and Charlie's results will be $+1$ with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state $|\psi\rangle$ that the measurement $\sigma_x \otimes \sigma_x \otimes \sigma_x$ will instead yield $-1$ with probability unity. This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game. In it, Alice, Bob, and Charlie receive bits $x$, $y$, $z$ from Victor, promised to always have an even number of ones, that is, $x \oplus y \oplus z = 0$, and send him back bits $a$, $b$, $c$. They win the game if $a$, $b$, $c$ have an odd number of ones for all inputs except $x = y = z = 0$, when they need to have an even number of ones. That is, they win the game iff $a \oplus b \oplus c = x \lor y \lor z$. With local hidden variables the highest probability of victory they can have is 3/4, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy. Kochen–Specker theorem (1967) In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space. Each vector in a basis represents a possible outcome of that measurement. Suppose that a hidden variable $\lambda$ exists, so that knowing the value of $\lambda$ would imply certainty about the outcome of any measurement. Given a value of $\lambda$, each measurement outcome – that is, each vector in the Hilbert space – is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming a hidden variable can be controlling the measurement outcomes. Free will theorem The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen, Heywood and Redhead, Stairs, and Brown and Svetlichny. As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half.
The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it. In other words, by this criterion, a hidden variable must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed. This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem. The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres. Quasiclassical entanglement As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden-variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model. Werner states are bipartite quantum states that are invariant under unitaries of symmetric tensor-product form: In 2004, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality. History Background The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters". The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein. (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.) Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality: Consider a mechanical system constituted of two partial systems A and B which have interaction with each other only during limited time. Let the ψ function before their interaction be given. Then the Schrödinger equation will furnish the ψ function after their interaction has taken place. Let us now determine the physical condition of the partial system A as completely as possible by measurements. Then the quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. 
This determination, however, gives a result which depends upon which of the determining magnitudes specifying the condition of A has been measured (for instance coordinates or momenta). Since there can be only one physical condition of B after the interaction and which can reasonably not be considered as dependent on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated with the physical condition. This coordination of several ψ functions with the same physical condition of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical condition of a unit system. The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B. In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible. By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined. Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality. Bell's publications Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges, in fact paying the authors who published there at the time. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. 
While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian. Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem). However, publication of this theorem was inadvertently delayed until 1966. In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least." Experiments In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test in 1972. This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation. This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000. By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses. In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics. The most prevalent loopholes in real experiments are the detection and locality loopholes. The detection loophole is opened when a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not done with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible. Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015. Regarding these results, Alain Aspect writes that "no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics." These efforts to experimentally validate violations of the Bell inequalities would later result in Clauser, Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics. Interpretations Reactions to Bell's theorem have been many and varied. Maximilian Schlosshauer, Johannes Kofler, and Zeilinger write that Bell inequalities provide "a wonderful example of how we can have a rigorous theoretical result tested by numerous experiments, and yet disagree about the implications." 
The Copenhagen interpretation Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense. For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be". Likewise, Rudolf Peierls took the message of Bell's theorem to be that, because the premise of locality is physically reasonable, "hidden variables cannot be introduced without abandoning some of the results of quantum mechanics". This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), as well as QBism. Many-worlds interpretation of quantum mechanics The Many-worlds interpretation, also known as the Everett interpretation, is dynamically local, meaning that it does not call for action at a distance, and deterministic, because it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the Many-Worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes. The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality. Non-local hidden variables Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. One challenge for non-local hidden variable theories is to explain why this instantaneous communication can exist at the level of the hidden variables, but it cannot be used to send signals. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself. The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local. Superdeterminism A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that it is necessary to do science in the first place. 
A (hypothetical) theory where the choice of measurement is necessarily correlated with the system being measured is known as superdeterministic. A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed.
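To make the Werner-state discussion above (under "Quasiclassical entanglement") concrete for the two-qubit case, the following Python sketch evaluates the largest CHSH value attainable with projective measurements on the one-parameter family ρ_p = p|ψ⁻⟩⟨ψ⁻| + (1 − p)I/4, using the Horodecki singular-value criterion. The parameter name and helper functions are illustrative, not part of the article text:

import numpy as np

# Pauli matrices and the singlet state |psi-> = (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def werner(p):
    # Two-qubit Werner state: p * |psi-><psi-| + (1 - p) * I/4
    return p * np.outer(singlet, singlet.conj()) + (1 - p) * np.eye(4) / 4

def chsh_max(rho):
    # Horodecki criterion: the maximal CHSH value over projective measurements is
    # 2 * sqrt(t1^2 + t2^2), where t1, t2 are the two largest singular values of
    # the correlation matrix T_ij = Tr(rho * sigma_i (x) sigma_j).
    paulis = [sx, sy, sz]
    T = np.array([[np.trace(rho @ np.kron(a, b)).real for b in paulis] for a in paulis])
    s = np.linalg.svd(T, compute_uv=False)
    return 2 * np.sqrt(s[0] ** 2 + s[1] ** 2)

for p in (0.5, 1 / np.sqrt(2), 0.8, 1.0):
    print(f"p = {p:.3f}  CHSH <= {chsh_max(werner(p)):.3f}")

The bound exceeds the classical value 2 only for p > 1/√2, even though the state is entangled for every p > 1/3, illustrating that an entangled state need not violate this Bell inequality.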
Physical sciences
Quantum mechanics
Physics
56398
https://en.wikipedia.org/wiki/Phase%20diagram
Phase diagram
A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium. Overview Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases. Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium (273.16 K, 0.01 °C, and a partial vapor pressure of 611.657 Pa). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question. The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry"). Working fluids are often categorized on the basis of the shape of their phase diagram. Types 2-dimensional diagrams Pressure vs temperature The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas. The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic, correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries. In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, in what is known as a supercritical fluid. In water, the critical point occurs at around Tc = 647.096 K (373.946 °C), pc = 22.064 MPa (217.75 atm) and ρc = 356 kg/m3. The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group.
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces. Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes. Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. Other exceptions include antimony and bismuth. At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid. The value of the slope dP/dT is given by the Clausius–Clapeyron equation for fusion (melting), dP/dT = ΔHfus / (T ΔVfus), where ΔHfus is the heat of fusion which is always positive, and ΔVfus is the volume change for fusion. For most substances ΔVfus is positive so that the slope is positive. However for water and other exceptions, ΔVfus is negative so that the slope is negative. Other thermodynamic properties In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume, specific enthalpy, or specific entropy. For example, single-component graphs of temperature vs. specific entropy (T vs. s) for water/steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle, Rankine cycle, or vapor-compression refrigeration cycle. Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines—curved, straight, or a combination of curved and straight. Each of these iso-lines represents the thermodynamic quantity at a certain constant value. 3-dimensional diagrams It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. For example, for a single component, a 3D Cartesian coordinate type graph can show temperature (T) on one axis, pressure (p) on a second axis, and specific volume (v) on a third. Such a 3D graph is sometimes called a p–v–T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram. An orthographic projection of the 3D p–v–T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram.
When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line. Binary mixtures Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter. One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram, as shown at right. Such a mixture can be either a solid solution, eutectic or peritectic, among others. These two types of mixtures result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i. e. chemical compounds. For two particular volatile components at a certain pressure such as atmospheric pressure, a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis. A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure. A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively. A simple example diagram with hypothetical components 1 and 2 in a non-azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information. In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid, a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid. A complex phase diagram of great technological importance is that of the iron–carbon system for less than 7% carbon (see steel). The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction. A volume-based measure like molarity would be inadvisable. Ternary phase diagrams A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. 
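As a concrete illustration of the ideal-solution assumptions described above for binary boiling-point diagrams (Raoult's law for the liquid and Dalton's law for the vapor), the following Python sketch computes one end of a tie line: the vapor composition in equilibrium with a given liquid composition at a fixed temperature. The numerical vapor pressures are hypothetical values chosen purely for illustration:

import numpy as np

def vapor_composition(x1, p1_sat, p2_sat):
    # x1      : mole fraction of component 1 in the liquid
    # p1_sat  : pure-component vapor pressure of component 1 at this temperature
    # p2_sat  : pure-component vapor pressure of component 2 at this temperature
    # Returns (total_pressure, y1), where y1 is the mole fraction of component 1
    # in the vapor in equilibrium with the liquid (the other end of the tie line).
    p1 = x1 * p1_sat            # partial pressure of component 1 (Raoult's law)
    p2 = (1 - x1) * p2_sat      # partial pressure of component 2 (Raoult's law)
    total = p1 + p2             # Dalton's law of partial pressures
    return total, p1 / total

# Hypothetical vapor pressures (kPa) of two volatile components at one temperature:
for x1 in np.linspace(0, 1, 6):
    p, y1 = vapor_composition(x1, p1_sat=100.0, p2_sat=40.0)
    print(f"x1 = {x1:.1f}  P = {p:5.1f} kPa  y1 = {y1:.2f}")
# The more volatile component (higher vapor pressure) is enriched in the vapor: y1 > x1.

Repeating this at a series of temperatures, and finding the temperature at which the total pressure equals the imposed pressure, traces out the liquid and vapor curves of a boiling-point diagram.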
For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot). The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C. However, the most common methods to present phase equilibria in a ternary system are the following: 1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces; 2) isothermal sections; 3) vertical sections. Crystals Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases. Mesophases Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases. Attention has been directed to mesophases because they enable display devices and have become commercially important through the so-called liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases.
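Returning to the fusion-curve slope discussed earlier, a short numerical sketch (Python) evaluates the Clausius–Clapeyron expression dP/dT = ΔHfus / (T ΔVfus) for water, using approximate handbook values assumed here for illustration:

# Approximate values for water at the normal melting point (assumed for illustration):
enthalpy_fusion = 6010.0            # J/mol
T_melt = 273.15                     # K
molar_mass = 18.02e-3               # kg/mol
rho_ice, rho_water = 916.7, 999.8   # kg/m^3
delta_v = molar_mass / rho_water - molar_mass / rho_ice   # m^3/mol, negative for water

slope = enthalpy_fusion / (T_melt * delta_v)   # Clausius-Clapeyron slope dP/dT, in Pa/K
print(slope / 1e6)   # roughly -13 MPa/K: the melting point drops about 0.007 K per atmosphere

The negative sign reflects the negative ΔVfus of water discussed above; for a substance whose solid is denser than its liquid, the same calculation gives a positive slope.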
Physical sciences
Chemistry: General
null
56434
https://en.wikipedia.org/wiki/Julia%20set
Julia set
In complex dynamics, the Julia set and the Fatou set are two complementary sets (Julia "laces" and Fatou "dusts") defined from a function. Informally, the Fatou set of the function consists of values with the property that all nearby values behave similarly under repeated iteration of the function, and the Julia set consists of values such that an arbitrarily small perturbation can cause drastic changes in the sequence of iterated function values. Thus the behavior of the function on the Fatou set is "regular", while on the Julia set its behavior is "chaotic". The Julia set of a function    is commonly denoted and the Fatou set is denoted These sets are named after the French mathematicians Gaston Julia and Pierre Fatou whose work began the study of complex dynamics during the early 20th century. Formal definition Let be a non-constant meromorphic function from the Riemann sphere onto itself. Such functions are precisely the non-constant complex rational functions, that is, where and are complex polynomials. Assume that p and q have no common roots, and at least one has degree larger than 1. Then there is a finite number of open sets that are left invariant by and are such that: The union of the sets is dense in the plane and behaves in a regular and equal way on each of the sets . The last statement means that the termini of the sequences of iterations generated by the points of are either precisely the same set, which is then a finite cycle, or they are finite cycles of circular or annular shaped sets that are lying concentrically. In the first case the cycle is attracting, in the second case it is neutral. These sets are the Fatou domains of , and their union is the Fatou set of . Each of the Fatou domains contains at least one critical point of , that is, a (finite) point z satisfying , or if the degree of the numerator is at least two larger than the degree of the denominator , or if for some c and a rational function satisfying this condition. The complement of is the Julia set of . If all the critical points are preperiodic, that is they are not periodic but eventually land on a periodic cycle, then is all the sphere. Otherwise, is a nowhere dense set (it is without interior points) and an uncountable set (of the same cardinality as the real numbers). Like , is left invariant by , and on this set the iteration is repelling, meaning that for all w in a neighbourhood of z (within ). This means that behaves chaotically on the Julia set. Although there are points in the Julia set whose sequence of iterations is finite, there are only a countable number of such points (and they make up an infinitesimal part of the Julia set). The sequences generated by points outside this set behave chaotically, a phenomenon called deterministic chaos. There has been extensive research on the Fatou set and Julia set of iterated rational functions, known as rational maps. For example, it is known that the Fatou set of a rational map has either 0, 1, 2 or infinitely many components. Each component of the Fatou set of a rational map can be classified into one of four different classes. Equivalent descriptions of the Julia set is the smallest closed set containing at least three points which is completely invariant under f. is the closure of the set of repelling periodic points. For all but at most two points the Julia set is the set of limit points of the full backwards orbit (This suggests a simple algorithm for plotting Julia sets, see below.) 
If f is an entire function, then is the boundary of the set of points which converge to infinity under iteration. If f is a polynomial, then is the boundary of the filled Julia set; that is, those points whose orbits under iterations of f remain bounded. Properties of the Julia set and Fatou set The Julia set and the Fatou set of f are both completely invariant under iterations of the holomorphic function f: Examples For the Julia set is the unit circle and on this the iteration is given by doubling of angles (an operation that is chaotic on the points whose argument is not a rational fraction of ). There are two Fatou domains: the interior and the exterior of the circle, with iteration towards 0 and ∞, respectively. For the Julia set is the line segment between −2 and 2. There is one Fatou domain: the points not on the line segment iterate towards ∞. (Apart from a shift and scaling of the domain, this iteration is equivalent to on the unit interval, which is commonly used as an example of chaotic system.) The functions f and g are of the form , where c is a complex number. For such an iteration the Julia set is not in general a simple curve, but is a fractal, and for some values of c it can take surprising shapes. See the pictures below. For some functions f(z) we can say beforehand that the Julia set is a fractal and not a simple curve. This is because of the following result on the iterations of a rational function: This means that each point of the Julia set is a point of accumulation for each of the Fatou domains. Therefore, if there are more than two Fatou domains, each point of the Julia set must have points of more than two different open sets infinitely close, and this means that the Julia set cannot be a simple curve. This phenomenon happens, for instance, when f(z) is the Newton iteration for solving the equation : The image on the right shows the case n = 3. Quadratic polynomials A very popular complex dynamical system is given by the family of complex quadratic polynomials, a special case of rational maps. Such quadratic polynomials can be expressed as where c is a complex parameter. Fix some large enough that (For example, if is in the Mandelbrot set, then so we may simply let ) Then the filled Julia set for this system is the subset of the complex plane given by where is the nth iterate of The Julia set of this function is the boundary of . The parameter plane of quadratic polynomials – that is, the plane of possible c values – gives rise to the famous Mandelbrot set. Indeed, the Mandelbrot set is defined as the set of all c such that is connected. For parameters outside the Mandelbrot set, the Julia set is a Cantor space: in this case it is sometimes referred to as Fatou dust. In many cases, the Julia set of c looks like the Mandelbrot set in sufficiently small neighborhoods of c. This is true, in particular, for so-called Misiurewicz parameters, i.e. parameters c for which the critical point is pre-periodic. For instance: At c = i, the shorter, front toe of the forefoot, the Julia set looks like a branched lightning bolt. At c = −2, the tip of the long spiky tail, the Julia set is a straight line segment. In other words, the Julia sets are locally similar around Misiurewicz points. Generalizations The definition of Julia and Fatou sets easily carries over to the case of certain maps whose image contains their domain; most notably transcendental meromorphic functions and Adam Epstein's finite-type maps. 
Julia sets are also commonly defined in the study of dynamics in several complex variables. Pseudocode The below pseudocode implementations hard code the functions for each fractal. Consider implementing complex number operations to allow for more dynamic and reusable code.

Pseudocode for normal Julia sets

R = escape radius  # choose R > 0 such that R**2 - R >= sqrt(cx**2 + cy**2)

for each pixel (x, y) on the screen, do:
{
    zx = scaled x coordinate of pixel;  # (scale to be between -R and R)
    # zx represents the real part of z.
    zy = scaled y coordinate of pixel;  # (scale to be between -R and R)
    # zy represents the imaginary part of z.

    iteration = 0;
    max_iteration = 1000;

    while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
    {
        xtemp = zx * zx - zy * zy;
        zy = 2 * zx * zy + cy;
        zx = xtemp + cx;
        iteration = iteration + 1;
    }

    if (iteration == max_iteration)
        return black;
    else
        return iteration;
}

Pseudocode for multi-Julia sets

R = escape radius  # choose R > 0 such that R**n - R >= sqrt(cx**2 + cy**2)

for each pixel (x, y) on the screen, do:
{
    zx = scaled x coordinate of pixel;  # (scale to be between -R and R)
    zy = scaled y coordinate of pixel;  # (scale to be between -R and R)

    iteration = 0;
    max_iteration = 1000;

    while (zx * zx + zy * zy < R**2 AND iteration < max_iteration)
    {
        xtmp = (zx * zx + zy * zy) ^ (n / 2) * cos(n * atan2(zy, zx)) + cx;
        zy = (zx * zx + zy * zy) ^ (n / 2) * sin(n * atan2(zy, zx)) + cy;
        zx = xtmp;
        iteration = iteration + 1;
    }

    if (iteration == max_iteration)
        return black;
    else
        return iteration;
}

Another recommended option is to reduce color banding between iterations by using a renormalization formula for the iteration. Such formula is given to be, where is the escaping iteration, bounded by some such that and , and is the magnitude of the last iterate before escaping. This can be implemented, very simply, like so:

# simply replace the last 4 lines of code from the last example with these lines of code:
if (iteration == max_iteration)
    return black;
else
    abs_z = zx * zx + zy * zy;
    return iteration + 1 - log(log(abs_z))/log(n);

The difference is shown below with a Julia set defined as where . The potential function and the real iteration number The Julia set for is the unit circle, and on the outer Fatou domain, the potential function φ(z) is defined by φ(z) = log|z|. The equipotential lines for this function are concentric circles. As we have where is the sequence of iteration generated by z. For the more general iteration , it has been proved that if the Julia set is connected (that is, if c belongs to the (usual) Mandelbrot set), then there exist a biholomorphic map ψ between the outer Fatou domain and the outer of the unit circle such that . This means that the potential function on the outer Fatou domain defined by this correspondence is given by: This formula has meaning also if the Julia set is not connected, so that we for all c can define the potential function on the Fatou domain containing ∞ by this formula. For a general rational function f(z) such that ∞ is a critical point and a fixed point, that is, such that the degree m of the numerator is at least two larger than the degree n of the denominator, we define the potential function on the Fatou domain containing ∞ by: where d = m − n is the degree of the rational function. If N is a very large number (e.g.
10100), and if k is the first iteration number such that , we have that for some real number , which should be regarded as the real iteration number, and we have that: where the last number is in the interval [0, 1). For iteration towards a finite attracting cycle of order r, we have that if is a point of the cycle, then (the r-fold composition), and the number is the attraction of the cycle. If w is a point very near and w′ is w iterated r times, we have that Therefore, the number is almost independent of k. We define the potential function on the Fatou domain by: If ε is a very small number and k is the first iteration number such that , we have that for some real number , which should be regarded as the real iteration number, and we have that: If the attraction is ∞, meaning that the cycle is super-attracting, meaning again that one of the points of the cycle is a critical point, we must replace α by where w′ is w iterated r times and the formula for φ(z) by: And now the real iteration number is given by: For the colouring we must have a cyclic scale of colours (constructed mathematically, for instance) and containing H colours numbered from 0 to H−1 (H = 500, for instance). We multiply the real number by a fixed real number determining the density of the colours in the picture, and take the integral part of this number modulo H. The definition of the potential function and our way of colouring presuppose that the cycle is attracting, that is, not neutral. If the cycle is neutral, we cannot colour the Fatou domain in a natural way. As the terminus of the iteration is a revolving movement, we can, for instance, colour by the minimum distance from the cycle left fixed by the iteration. Field lines In each Fatou domain (that is not neutral) there are two systems of lines orthogonal to each other: the equipotential lines (for the potential function or the real iteration number) and the field lines. If we colour the Fatou domain according to the iteration number (and not the real iteration number , as defined in the previous section), the bands of iteration show the course of the equipotential lines. If the iteration is towards ∞ (as is the case with the outer Fatou domain for the usual iteration ), we can easily show the course of the field lines, namely by altering the colour according as the last point in the sequence of iteration is above or below the x-axis (first picture), but in this case (more precisely: when the Fatou domain is super-attracting) we cannot draw the field lines coherently - at least not by the method we describe here. In this case a field line is also called an external ray. Let z be a point in the attracting Fatou domain. If we iterate z a large number of times, the terminus of the sequence of iteration is a finite cycle C, and the Fatou domain is (by definition) the set of points whose sequence of iteration converges towards C. The field lines issue from the points of C and from the (infinite number of) points that iterate into a point of C. And they end on the Julia set in points that are non-chaotic (that is, generating a finite cycle). Let r be the order of the cycle C (its number of points) and let be a point in C. We have (the r-fold composition), and we define the complex number α by If the points of C are , α is the product of the r numbers . The real number 1/|α| is the attraction of the cycle, and our assumption that the cycle is neither neutral nor super-attracting, means that . 
The point is a fixed point for , and near this point the map has (in connection with field lines) character of a rotation with the argument β of α (that is, ). In order to colour the Fatou domain, we have chosen a small number ε and set the sequences of iteration to stop when , and we colour the point z according to the number k (or the real iteration number, if we prefer a smooth colouring). If we choose a direction from given by an angle θ, the field line issuing from in this direction consists of the points z such that the argument ψ of the number satisfies the condition that For if we pass an iteration band in the direction of the field lines (and away from the cycle), the iteration number k is increased by 1 and the number ψ is increased by β, therefore the number is constant along the field line. A colouring of the field lines of the Fatou domain means that we colour the spaces between pairs of field lines: we choose a number of regularly situated directions issuing from , and in each of these directions we choose two directions around this direction. As it can happen that the two field lines of a pair do not end in the same point of the Julia set, our coloured field lines can ramify (endlessly) in their way towards the Julia set. We can colour on the basis of the distance to the center line of the field line, and we can mix this colouring with the usual colouring. Such pictures can be very decorative (second picture). A coloured field line (the domain between two field lines) is divided up by the iteration bands, and such a part can be put into a one-to-one correspondence with the unit square: the one coordinate is (calculated from) the distance from one of the bounding field lines, the other is (calculated from) the distance from the inner of the bounding iteration bands (this number is the non-integral part of the real iteration number). Therefore, we can put pictures into the field lines (third picture). Plotting the Julia set Methods : Distance Estimation Method for Julia set (DEM/J) Inverse Iteration Method (IIM) Using backwards (inverse) iteration (IIM) As mentioned above, the Julia set can be found as the set of limit points of the set of pre-images of (essentially) any given point. So we can try to plot the Julia set of a given function as follows. Start with any point z we know to be in the Julia set, such as a repelling periodic point, and compute all pre-images of z under some high iterate of f. Unfortunately, as the number of iterated pre-images grows exponentially, this is not feasible computationally. However, we can adjust this method, in a similar way as the "random game" method for iterated function systems. That is, in each step, we choose at random one of the inverse images of f. For example, for the quadratic polynomial fc, the backwards iteration is described by At each step, one of the two square roots is selected at random. Note that certain parts of the Julia set are quite difficult to access with the reverse Julia algorithm. For this reason, one must modify IIM/J ( it is called MIIM/J) or use other methods to produce better images. Using DEM/J As a Julia set is infinitely thin we cannot draw it effectively by backwards iteration from the pixels. It will appear fragmented because of the impracticality of examining infinitely many startpoints. Since the iteration count changes vigorously near the Julia set, a partial solution is to imply the outline of the set from the nearest color contours, but the set will tend to look muddy. 
A better way to draw the Julia set in black and white is to estimate the distance of pixels (DEM) from the set and to color every pixel whose center is close to the set. The formula for the distance estimation is derived from the formula for the potential function φ(z). When the equipotential lines for φ(z) lie close, the number is large, and conversely, therefore the equipotential lines for the function should lie approximately regularly. It has been proven that the value found by this formula (up to a constant factor) converges towards the true distance for z converging towards the Julia set. We assume that f(z) is rational, that is, where p(z) and q(z) are complex polynomials of degrees m and n, respectively, and we have to find the derivative of the above expressions for φ(z). And as it is only that varies, we must calculate the derivative of with respect to z. But as (the k-fold composition), is the product of the numbers , and this sequence can be calculated recursively by , starting with (before the calculation of the next iteration ). For iteration towards ∞ (more precisely when , so that ∞ is a super-attracting fixed point), we have () and consequently: For iteration towards a finite attracting cycle (that is not super-attracting) containing the point and having order r, we have and consequently: For a super-attracting cycle, the formula is: We calculate this number when the iteration stops. Note that the distance estimation is independent of the attraction of the cycle. This means that it has meaning for transcendental functions of "degree infinity" (e.g. sin(z) and tan(z)). Besides drawing of the boundary, the distance function can be introduced as a 3rd dimension to create a solid fractal landscape.
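For the quadratic family f(z) = z^2 + c, the distance-estimation approach just described can be sketched in a few lines of Python/NumPy: iterate z together with the derivative of the iterate, and when a point escapes, use |z| log|z| / |z'| as the distance estimate (up to a constant factor). The function and parameter names below are illustrative only:

import numpy as np

def julia_dem(c, width=800, height=800, x_range=(-1.6, 1.6), y_range=(-1.6, 1.6),
              max_iter=512, escape_radius=1e10):
    # Distance-estimation rendering of the Julia set of f(z) = z^2 + c.
    # Returns a boolean image: True where the estimated distance from the pixel
    # centre to the Julia set is below half a pixel width.
    xs = np.linspace(x_range[0], x_range[1], width)
    ys = np.linspace(y_range[0], y_range[1], height)
    z = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    dz = np.ones_like(z)                          # derivative of the k-th iterate with respect to z
    escaped = np.zeros(z.shape, dtype=bool)
    for _ in range(max_iter):
        active = ~escaped
        dz[active] = 2 * z[active] * dz[active]   # chain rule: (f^k)'(z) = 2 z_{k-1} (f^{k-1})'(z)
        z[active] = z[active] ** 2 + c
        escaped |= np.abs(z) > escape_radius
    r = np.abs(z)
    with np.errstate(divide="ignore", invalid="ignore"):
        distance = np.where(escaped, r * np.log(r) / np.abs(dz), np.inf)
    pixel_size = (x_range[1] - x_range[0]) / width
    return distance < pixel_size / 2

image = julia_dem(c=-0.8 + 0.156j)   # e.g. display with matplotlib: plt.imshow(image, cmap="gray")

Marking only the pixels whose estimated distance is below about half a pixel width yields a uniformly thin outline of the Julia set, which is the main advantage of DEM/J over simple escape-time colouring.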
Mathematics
Other
null
56435
https://en.wikipedia.org/wiki/Obesity
Obesity
Obesity is a medical condition, sometimes considered a disease, in which excess body fat has accumulated to such an extent that it can potentially have negative effects on health. People are classified as obese when their body mass index (BMI)—a person's weight divided by the square of the person's height—is over 30 kg/m2; the range 25–30 kg/m2 is defined as overweight. Some East Asian countries use lower values to calculate obesity. Obesity is a major cause of disability and is correlated with various diseases and conditions, particularly cardiovascular diseases, type 2 diabetes, obstructive sleep apnea, certain types of cancer, and osteoarthritis. Obesity has individual, socioeconomic, and environmental causes. Some known causes are diet, low physical activity, automation, urbanization, genetic susceptibility, medications, mental disorders, economic policies, endocrine disorders, and exposure to endocrine-disrupting chemicals. While many people living with obesity attempt to lose weight and are often successful, maintaining weight loss long-term is rare. Obesity prevention requires a complex approach, including interventions at medical, societal, community, family, and individual levels. Changes to diet as well as exercising are the main treatments recommended by health professionals. Diet quality can be improved by reducing the consumption of energy-dense foods, such as those high in fat or sugars, and by increasing the intake of dietary fiber, although the World Health Organization stresses that these improvements are a societal responsibility and that these dietary choices should be made the most available, affordable, and accessible options. Medications can be used, along with a suitable diet, to reduce appetite or decrease fat absorption. If diet, exercise, and medication are not effective, a gastric balloon or surgery may be performed to reduce stomach volume or length of the intestines, leading to feeling full earlier, or a reduced ability to absorb nutrients from food. Metabolic surgery does not act only by reducing intake; it has also been shown to alter gut hormones for a period of time. Obesity is a leading preventable cause of death worldwide, with increasing rates in adults and children. In 2022, over 1 billion people lived with obesity worldwide (879 million adults and 159 million children), more than double the number of adult cases (and four times the number of cases among children) registered in 1990. Obesity is more common in women than in men. Today, obesity is stigmatized in most of the world. Conversely, some cultures, past and present, have a favorable view of obesity, seeing it as a symbol of wealth and fertility. The World Health Organization, the US, Canada, Japan, Portugal, Germany, the European Parliament and medical societies, e.g. the American Medical Association, classify obesity as a disease. Others, such as the UK, do not. Classification Obesity is typically defined as a substantial accumulation of body fat that could impact health. Medical organizations tend to classify people as living with obesity based on body mass index (BMI) – a ratio of a person's weight in kilograms to the square of their height in meters. For adults, the World Health Organization (WHO) defines "overweight" as a BMI 25 or higher, and "obesity" as a BMI 30 or higher. The U.S. Centers for Disease Control and Prevention (CDC) further subdivides obesity based on BMI, with a BMI 30 to 35 called class 1 obesity; 35 to 40, class 2 obesity; and 40+, class 3 obesity.
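The arithmetic behind these adult cut-offs can be stated as a short sketch (Python; the function names are illustrative, and the thresholds are the WHO/CDC values quoted above):

def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by the square of height in metres.
    return weight_kg / height_m ** 2

def who_cdc_category(bmi_value):
    # Adult categories using the WHO/CDC cut-offs quoted above.
    if bmi_value < 25:
        return "not overweight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obesity, class 1"
    if bmi_value < 40:
        return "obesity, class 2"
    return "obesity, class 3"

print(bmi(95, 1.75))                     # about 31.0 kg/m^2
print(who_cdc_category(bmi(95, 1.75)))   # "obesity, class 1"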
For children, obesity measures take age into consideration along with height and weight. For children aged 5–19, the WHO defines obesity as a BMI two standard deviations above the median for their age (a BMI around 18 for a five-year old; around 30 for a 19-year old). For children under five, the WHO defines obesity as a weight three standard deviations above the median for their height. Some modifications to the WHO definitions have been made by particular organizations. The surgical literature breaks down class II and III or only class III obesity into further categories whose exact values are still disputed. Any BMI ≥ 35 or 40 kg/m2 is severe obesity. A BMI of ≥ 35 kg/m2 and experiencing obesity-related health conditions or ≥ 40 or 45 kg/m2 is morbid obesity. A BMI of ≥ 45 or 50 kg/m2 is super obesity. As Asian populations develop negative health consequences at a lower BMI than Caucasians, some nations have redefined obesity; Japan has defined obesity as any BMI greater than 25 kg/m2 while China uses a BMI of greater than 28 kg/m2. The preferred obesity metric in scholarly circles is the body fat percentage (BF%) – the ratio of the total weight of a person's fat to his or her body weight, and BMI is viewed merely as a way to approximate BF%. According to the American Society of Bariatric Physicians, levels in excess of 32% for women and 25% for men are generally considered to indicate obesity. BMI is now viewed as outdated in numerous countries. It ignores variations between individuals in amounts of lean body mass, particularly muscle mass. Individuals involved in heavy physical labor or sports may have high BMI values despite having little fat. For example, more than half of all NFL players are classified as "obese" (BMI ≥ 30), and 1 in 4 are classified as "extremely obese" (BMI ≥ 35), according to the BMI metric. However, their mean body fat percentage, 14%, is well within what is considered a healthy range. Similarly, Sumo wrestlers may be categorized by BMI as "severely obese" or "very severely obese" but many Sumo wrestlers are not categorized as obese when body fat percentage is used instead (having <25% body fat). Some Sumo wrestlers were found to have no more body fat than a non-Sumo comparison group, with high BMI values resulting from their high amounts of lean body mass. Canada uses BMI sparingly within its method of defining levels of obesity, relying instead on the Edmonton Scale (for adult obesity). This scale also introduces factors such as quality of life, mental health, and mobility, among others. In recent years, Canada has allowed both Chile and Ireland to adapt these obesity guidelines to suit their own health systems. In Ireland, obesity is now defined as "a Complex, Chronic & Relapsing Disease". Effects on health Obesity increases a person's risk of developing various metabolic diseases, cardiovascular disease, osteoarthritis, Alzheimer disease, depression, and certain types of cancer. Depending on the degree of obesity and the presence of comorbid disorders, obesity is associated with an estimated 2–20 year shorter life expectancy. High BMI is a marker of risk for, but not a direct cause of, diseases caused by diet and physical activity. Mortality Obesity is one of the leading preventable causes of death worldwide. The mortality risk is lowest at a BMI of 20–25 kg/m2 in non-smokers and at 24–27 kg/m2 in current smokers, with risk increasing along with changes in either direction. This appears to apply in at least four continents.
Other research suggests that the association of BMI and waist circumference with mortality is U- or J-shaped, while the association between waist-to-hip ratio and waist-to-height ratio with mortality is more positive. In Asians the risk of negative health effects begins to increase between 22 and 25 kg/m2. In 2021, the World Health Organization estimated that obesity caused at least 2.8 million deaths annually. On average, obesity reduces life expectancy by six to seven years, a BMI of 30–35 kg/m2 reduces life expectancy by two to four years, while severe obesity (BMI ≥ 40 kg/m2) reduces life expectancy by ten years. Morbidity Obesity increases the risk of many physical and mental conditions. These comorbidities are most commonly shown in metabolic syndrome, a combination of medical disorders which includes: diabetes mellitus type 2, high blood pressure, high blood cholesterol, and high triglyceride levels. A study from the RAK Hospital found that obese people are at a greater risk of developing long COVID. The CDC has found that obesity is the single strongest risk factor for severe COVID-19 illness. Complications are either directly caused by obesity or indirectly related through mechanisms sharing a common cause such as a poor diet or a sedentary lifestyle. The strength of the link between obesity and specific conditions varies. One of the strongest is the link with type 2 diabetes. Excess body fat underlies 64% of cases of diabetes in men and 77% of cases in women. Health consequences fall into two broad categories: those attributable to the effects of increased fat mass (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increased number of fat cells (diabetes, cancer, cardiovascular disease, non-alcoholic fatty liver disease). Increases in body fat alter the body's response to insulin, potentially leading to insulin resistance. Increased fat also creates a proinflammatory state, and a prothrombotic state. Metrics of health Newer research has focused on methods of identifying healthier obese people by clinicians, and not treating obese people as a monolithic group. Obese people who do not experience medical complications from their obesity are sometimes called (metabolically) healthy obese, but the extent to which this group exists (especially among older people) is in dispute. The number of people considered metabolically healthy depends on the definition used, and there is no universally accepted definition. There are numerous obese people who have relatively few metabolic abnormalities, and a minority of obese people have no medical complications. The guidelines of the American Association of Clinical Endocrinologists call for physicians to use risk stratification with obese patients when considering how to assess their risk of developing type 2 diabetes. In 2014, the BioSHaRE–EU Healthy Obese Project (sponsored by Maelstrom Research, a team under the Research Institute of the McGill University Health Centre) came up with two definitions for healthy obesity, one more strict and one less so: To come up with these criteria, BioSHaRE controlled for age and tobacco use, researching how both may effect the metabolic syndrome associated with obesity, but not found to exist in the metabolically healthy obese. Other definitions of metabolically healthy obesity exist, including ones based on waist circumference rather than BMI, which is unreliable in certain individuals. 
Another identification metric for health in obese people is calf strength, which is positively correlated with physical fitness in obese people. Body composition in general is hypothesized to help explain the existence of metabolically healthy obesity—the metabolically healthy obese are often found to have low amounts of ectopic fat (fat stored in tissues other than adipose tissue) despite having overall fat mass equivalent in weight to obese people with metabolic syndrome. Survival paradox Although the negative health consequences of obesity in the general population are well supported by the available research evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox. The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis and has subsequently been found in those with heart failure and peripheral artery disease (PAD). In people with heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal weight. This has been attributed to the fact that people often lose weight as they become progressively more ill. Similar findings have been made in other types of heart disease. People with class I obesity and heart disease do not have greater rates of further heart problems than people of normal weight who also have heart disease. In people with greater degrees of obesity, however, the risk of further cardiovascular events is increased. Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese. One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event. Another study found that if one takes into account chronic obstructive pulmonary disease (COPD) in those with PAD, the benefit of obesity no longer exists. Causes The "a calorie is a calorie" model of obesity posits a combination of excessive food energy intake and a lack of physical activity as the cause of most cases of obesity. A limited number of cases are due primarily to genetics, medical reasons, or psychiatric illness. The satiety value shows that the feeling of satiety per calorie varies between food types. Increasing rates of obesity at a societal level are felt to be due to an easily accessible and palatable diet, increased reliance on cars, and mechanized manufacturing. Some other factors have been proposed as causes towards rising rates of obesity worldwide, including insufficient sleep, endocrine disruptors, increased usage of certain medications (such as atypical antipsychotics), increases in ambient temperature, decreased rates of smoking, demographic changes, increasing maternal age of first-time mothers, changes to epigenetic dysregulation from the environment, increased phenotypic variance via assortative mating, social pressure to diet, among others. According to one study, factors like these may play as big of a role as excessive food energy intake and a lack of physical activity; however, the relative magnitudes of the effects of any proposed cause of obesity is varied and uncertain, as there is a general need for randomized controlled trials on humans before definitive statement can be made. According to the Endocrine Society, there is "growing evidence suggesting that obesity is a disorder of the energy homeostasis system, rather than simply arising from the passive accumulation of excess weight". 
Diet (Map: dietary energy availability per person per day in 1961 and 2001–2003, in calories per person per day (kilojoules per person per day).) Excess appetite for palatable, high-calorie food (especially fat, sugar, and certain animal proteins) is seen as the primary factor driving obesity worldwide, likely because of imbalances in neurotransmitters affecting the drive to eat. Dietary energy supply per capita varies markedly between different regions and countries. It has also changed significantly over time. From the early 1970s to the late 1990s the average food energy available per person per day (the amount of food bought) increased in all parts of the world except Eastern Europe. The United States had the highest availability with per person in 1996. This increased further in 2003 to . During the late 1990s, Europeans had per person, in the developing areas of Asia there were per person, and in sub-Saharan Africa people had per person. Total food energy consumption has been found to be related to obesity. The widespread availability of dietary guidelines has done little to address the problems of overeating and poor dietary choice. From 1971 to 2000, obesity rates in the United States increased from 14.5% to 30.9%. During the same period, an increase occurred in the average amount of food energy consumed. For women, the average increase was per day ( in 1971 and in 2004), while for men the average increase was per day ( in 1971 and in 2004). Most of this extra food energy came from an increase in carbohydrate consumption rather than fat consumption. The primary sources of these extra carbohydrates are sweetened beverages, which now account for almost 25 percent of daily food energy in young adults in America, and potato chips. Consumption of sweetened beverages such as soft drinks, fruit drinks, and iced tea is believed to be contributing to the rising rates of obesity and to an increased risk of metabolic syndrome and type 2 diabetes. Vitamin D deficiency is related to diseases associated with obesity. As societies become increasingly reliant on energy-dense, big-portions, and fast-food meals, the association between fast-food consumption and obesity becomes more concerning. In the United States, consumption of fast-food meals tripled and food energy intake from these meals quadrupled between 1977 and 1995. Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables. Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed. Obese people consistently under-report their food consumption as compared to people of normal weight. This is supported both by tests of people carried out in a calorimeter room and by direct observation. Sedentary lifestyle A sedentary lifestyle may play a significant role in obesity. Worldwide there has been a large shift towards less physically demanding work, and currently at least 30% of the world's population gets insufficient exercise. This is primarily due to increasing use of mechanized transportation and a greater prevalence of labor-saving technology in the home.
In children, there appear to be declines in levels of physical activity (with particularly strong declines in the amount of walking and physical education), likely due to safety concerns, changes in social interaction (such as fewer relationships with neighborhood children), and inadequate urban design (such as too few public spaces for safe physical activity). World trends in active leisure time physical activity are less clear. The World Health Organization indicates people worldwide are taking up less active recreational pursuits, while research from Finland found an increase and research from the United States found leisure-time physical activity has not changed significantly. Physical activity in children may not be a significant contributor. In both children and adults, there is an association between television viewing time and the risk of obesity. Increased media exposure increases the rate of childhood obesity, with rates increasing proportionally to time spent watching television. Genetics Like many other medical conditions, obesity is the result of an interplay between genetic and environmental factors. Polymorphisms in various genes controlling appetite and metabolism predispose to obesity when sufficient food energy is present. As of 2006, more than 41 of these sites on the human genome have been linked to the development of obesity when a favorable environment is present. People with two copies of the FTO gene (fat mass and obesity associated gene) have been found on average to weigh 3–4 kg more and have a 1.67-fold greater risk of obesity compared with those without the risk allele. The differences in BMI between people that are due to genetics varies depending on the population examined from 6% to 85%. Obesity is a major feature in several syndromes, such as Prader–Willi syndrome, Bardet–Biedl syndrome, Cohen syndrome, and MOMO syndrome. (The term "non-syndromic obesity" is sometimes used to exclude these conditions.) In people with early-onset severe obesity (defined by an onset before 10 years of age and body mass index over three standard deviations above normal), 7% harbor a single point DNA mutation. Studies that have focused on inheritance patterns rather than on specific genes have found that 80% of the offspring of two obese parents were also obese, in contrast to less than 10% of the offspring of two parents who were of normal weight. Different people exposed to the same environment have different risks of obesity due to their underlying genetics. The thrifty gene hypothesis postulates that, due to dietary scarcity during human evolution, people are prone to obesity. Their ability to take advantage of rare periods of abundance by storing energy as fat would be advantageous during times of varying food availability, and individuals with greater adipose reserves would be more likely to survive famine. This tendency to store fat, however, would be maladaptive in societies with stable food supplies. This theory has received various criticisms, and other evolutionarily-based theories such as the drifty gene hypothesis and the thrifty phenotype hypothesis have also been proposed. Other illnesses Certain physical and mental illnesses and the pharmaceutical substances used to treat them can increase risk of obesity. 
Medical illnesses that increase obesity risk include several rare genetic syndromes (listed above) as well as some congenital or acquired conditions: hypothyroidism, Cushing's syndrome, growth hormone deficiency, and some eating disorders such as binge eating disorder and night eating syndrome. However, obesity is not regarded as a psychiatric disorder, and therefore is not listed in the DSM-IVR as a psychiatric illness. The risk of overweight and obesity is higher in patients with psychiatric disorders than in persons without psychiatric disorders. Obesity and depression influence each other mutually, with obesity increasing the risk of clinical depression, and also depression leading to a higher chance of developing obesity. Drug-induced obesity Certain medications may cause weight gain or changes in body composition; these include insulin, sulfonylureas, thiazolidinediones, atypical antipsychotics, antidepressants, steroids, certain anticonvulsants (phenytoin and valproate), pizotifen, and some forms of hormonal contraception. Social determinants While genetic influences are important to understanding obesity, they cannot completely explain the dramatic increase seen within specific countries or globally. Though it is accepted that energy consumption in excess of energy expenditure leads to increases in body weight on an individual basis, the cause of the shifts in these two factors on the societal scale is much debated. There are a number of theories as to the cause but most believe it is a combination of various factors. The correlation between social class and BMI varies globally. Research in 1989 found that in developed countries women of a high social class were less likely to be obese. No significant differences were seen among men of different social classes. In the developing world, women, men, and children from high social classes had greater rates of obesity. In 2007 repeating the same research found the same relationships, but they were weaker. The decrease in strength of correlation was felt to be due to the effects of globalization. Among developed countries, levels of adult obesity, and percentage of teenage children who are overweight, are correlated with income inequality. A similar relationship is seen among US states: more adults, even in higher social classes, are obese in more unequal states. Many explanations have been put forth for associations between BMI and social class. It is thought that in developed countries, the wealthy are able to afford more nutritious food, they are under greater social pressure to remain slim, and have more opportunities along with greater expectations for physical fitness. In undeveloped countries the ability to afford food, high energy expenditure with physical labor, and cultural values favoring a larger body size are believed to contribute to the observed patterns. Attitudes toward body weight held by people in one's life may also play a role in obesity. A correlation in BMI changes over time has been found among friends, siblings, and spouses. Stress and perceived low social status appear to increase risk of obesity. Smoking has a significant effect on an individual's weight. Those who quit smoking gain an average of 4.4 kilograms (9.7 lb) for men and 5.0 kilograms (11.0 lb) for women over ten years. However, changing rates of smoking have had little effect on the overall rates of obesity. In the United States, the number of children a person has is related to their risk of obesity. 
A woman's risk increases by 7% per child, while a man's risk increases by 4% per child. This could be partly explained by the fact that having dependent children decreases physical activity in Western parents. In the developing world urbanization is playing a role in increasing rate of obesity. In China overall rates of obesity are below 5%; however, in some cities rates of obesity are greater than 20%. In part, this may be because of urban design issues (such as inadequate public spaces for physical activity). Time spent in motor vehicles, as opposed to active transportation options such as cycling or walking, is correlated with increased risk of obesity. Malnutrition in early life is believed to play a role in the rising rates of obesity in the developing world. Endocrine changes that occur during periods of malnutrition may promote the storage of fat once more food energy becomes available. Gut bacteria The study of the effect of infectious agents on metabolism is still in its early stages. Gut flora has been shown to differ between lean and obese people. There is an indication that gut flora can affect the metabolic potential. This apparent alteration is believed to confer a greater capacity to harvest energy contributing to obesity. Whether these differences are the direct cause or the result of obesity has yet to be determined unequivocally. The use of antibiotics among children has also been associated with obesity later in life. An association between viruses and obesity has been found in humans and several different animal species. The amount that these associations may have contributed to the rising rate of obesity is yet to be determined. Other factors Not getting enough sleep is also associated with obesity. Whether one causes the other is unclear. Even if short sleep does increase weight gain, it is unclear if this is to a meaningful degree or if increasing sleep would be of benefit. Some have proposed that chemical compounds called "obesogens" may play a role in obesity. Certain aspects of personality are associated with being obese. Loneliness, neuroticism, impulsivity, and sensitivity to reward are more common in people who are obese while conscientiousness and self-control are less common in people who are obese. Because most of the studies on this topic are questionnaire-based, it is possible that these findings overestimate the relationships between personality and obesity: people who are obese might be aware of the social stigma of obesity and their questionnaire responses might be biased accordingly. Similarly, the personalities of people who are obese as children might be influenced by obesity stigma, rather than these personality factors acting as risk factors for obesity. In relation to globalization, it is known that trade liberalization is linked to obesity; research, based on data from 175 countries during 1975–2016, showed that obesity prevalence was positively correlated with trade openness, and the correlation was stronger in developing countries. Pathophysiology Two distinct but related processes are considered to be involved in the development of obesity: sustained positive energy balance (energy intake exceeding energy expenditure) and the resetting of the body weight "set point" at an increased value. The second process explains why finding effective obesity treatments has been difficult. While the underlying biology of this process still remains uncertain, research is beginning to clarify the mechanisms. 
At a biological level, there are many possible pathophysiological mechanisms involved in the development and maintenance of obesity. This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. While leptin and ghrelin are produced peripherally, they control appetite through their actions on the central nervous system. In particular, they and other appetite-related hormones act on the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. There are several circuits within the hypothalamus that contribute to its role in integrating appetite, the melanocortin pathway being the most well understood. The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively. The arcuate nucleus contains two distinct groups of neurons. The first group coexpresses neuropeptide Y (NPY) and agouti-related peptide (AgRP) and has stimulatory inputs to the LH and inhibitory inputs to the VMH. The second group coexpresses pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) and has stimulatory inputs to the VMH and inhibitory inputs to the LH. Consequently, NPY/AgRP neurons stimulate feeding and inhibit satiety, while POMC/CART neurons stimulate satiety and inhibit feeding. Both groups of arcuate nucleus neurons are regulated in part by leptin. Leptin inhibits the NPY/AgRP group while stimulating the POMC/CART group. Thus a deficiency in leptin signaling, either via leptin deficiency or leptin resistance, leads to overfeeding and may account for some genetic and acquired forms of obesity. Management The main treatment for obesity consists of weight loss via lifestyle interventions, including prescribed diets and physical exercise. Although it is unclear what diets might support long-term weight loss, and although the effectiveness of low-calorie diets is debated, lifestyle changes that reduce calorie consumption or increase physical exercise over the long term also tend to produce some sustained weight loss, despite slow weight regain over time. Although 87% of participants in the National Weight Control Registry were able to maintain 10% body weight loss for 10 years, the most appropriate dietary approach for long term weight loss maintenance is still unknown. In the US, intensive behavioral interventions combining both dietary changes and exercise are recommended. Intermittent fasting has no additional benefit of weight loss compared to continuous energy restriction. Adherence is a more important factor in weight loss success than whatever kind of diet an individual undertakes. Several hypo-caloric diets are effective. In the short-term low carbohydrate diets appear better than low fat diets for weight loss. In the long term, however, all types of low-carbohydrate and low-fat diets appear equally beneficial. Heart disease and diabetes risks associated with different diets appear to be similar. Promotion of the Mediterranean diets among the obese may lower the risk of heart disease. Decreased intake of sweet drinks is also related to weight-loss. Success rates of long-term weight loss maintenance with lifestyle changes are low, ranging from 2–20%. Dietary and lifestyle changes are effective in limiting excessive weight gain in pregnancy and improve outcomes for both the mother and the child. 
Intensive behavioral counseling is recommended for those who are obese and have other risk factors for heart disease. Health policy Obesity is a complex public health and policy problem because of its prevalence, costs, and health effects. As such, managing it requires changes in the wider societal context and effort by communities, local authorities, and governments. Public health efforts seek to understand and correct the environmental factors responsible for the increasing prevalence of obesity in the population. Solutions look at changing the factors that cause excess food energy consumption and inhibit physical activity. Efforts include federally reimbursed meal programs in schools, limiting direct junk food marketing to children, and decreasing access to sugar-sweetened beverages in schools. The World Health Organization recommends the taxing of sugary drinks. When constructing urban environments, efforts have been made to increase access to parks and to develop pedestrian routes. Mass media campaigns seem to have limited effectiveness in changing behaviors that influence obesity, but may increase knowledge and awareness regarding physical activity and diet, which might lead to changes in the long term. Campaigns might also be able to reduce the amount of time spent sitting or lying down and positively affect the intention to be physically active. Nutritional labelling with energy information on menus might help reduce energy intake while dining in restaurants. Some call for policy against ultra-processed foods. Medical interventions Medication Since the introduction of medicines for the management of obesity in the 1930s, many compounds have been tried. Most of them reduce body weight by small amounts, and several of them are no longer marketed for obesity because of their side effects. Out of 25 anti-obesity medications withdrawn from the market between 1964 and 2009, 23 acted by altering the functions of chemical neurotransmitters in the brain. The most common side effects of these drugs that led to withdrawals were mental disturbances, cardiac side effects, and drug abuse or drug dependence. Deaths were reportedly associated with seven products. Five medications beneficial for long-term use are: orlistat, lorcaserin, liraglutide, phentermine–topiramate, and naltrexone–bupropion. They result in weight loss after one year ranging from 3.0 to 6.7 kg (6.6–14.8 lb) more than placebo. Orlistat, liraglutide, and naltrexone–bupropion are available in both the United States and Europe, while phentermine–topiramate is available only in the United States. European regulatory authorities rejected lorcaserin and phentermine–topiramate, in part because of associations of heart valve problems with lorcaserin and more general heart and blood vessel problems with phentermine–topiramate. Lorcaserin was available in the United States and then removed from the market in 2020 due to its association with cancer. Orlistat use is associated with high rates of gastrointestinal side effects, and concerns have been raised about negative effects on the kidneys. There is no information on how these drugs affect longer-term complications of obesity such as cardiovascular disease or death; however, liraglutide, when used for type 2 diabetes, does reduce cardiovascular events. In 2019 a systematic review compared the effects on weight of various doses of fluoxetine (60 mg/d, 40 mg/d, 20 mg/d, 10 mg/d) in obese adults. 
When compared to placebo, all dosages of fluoxetine appeared to contribute to weight loss but led to an increased risk of experiencing side effects such as dizziness, drowsiness, fatigue, insomnia, and nausea during the treatment period. However, these conclusions were from low certainty evidence. When the same review compared fluoxetine with other anti-obesity agents, omega-3 gel, and no treatment, the authors could not reach conclusive results because of the poor quality of the evidence. Among antipsychotic drugs for treating schizophrenia, clozapine is the most effective, but it also has the highest risk of causing the metabolic syndrome, of which obesity is the main feature. For people who gain weight because of clozapine, taking metformin may reportedly improve three of the five components of the metabolic syndrome: waist circumference, fasting glucose, and fasting triglycerides. Surgery The most effective treatment for obesity is bariatric surgery. The types of procedures include laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, vertical-sleeve gastrectomy, and biliopancreatic diversion. Surgery for severe obesity is associated with long-term weight loss, improvement in obesity-related conditions, and decreased overall mortality; however, improved metabolic health results from the weight loss, not the surgery. One study found a weight loss of between 14% and 25% (depending on the type of procedure performed) at 10 years, and a 29% reduction in all-cause mortality when compared to standard weight loss measures. Complications occur in about 17% of cases and reoperation is needed in 7% of cases. Epidemiology In earlier historical periods obesity was rare and achievable only by a small elite, although already recognised as a problem for health. But as prosperity increased in the Early Modern period, it affected increasingly larger groups of the population. Prior to the 1970s, obesity was a relatively rare condition even in the wealthiest of nations, and when it did exist it tended to occur among the wealthy. Then, a confluence of events started to change the human condition. The average BMI of populations in first-world countries started to increase, and consequently there was a rapid increase in the proportion of people overweight and obese. In 1997, the WHO formally recognized obesity as a global epidemic. As of 2008, the WHO estimates that at least 500 million adults (greater than 10%) are obese, with higher rates among women than men. The global prevalence of obesity more than doubled between 1980 and 2014. In 2014, more than 600 million adults were obese, equal to about 13 percent of the world's adult population, with that figure growing to 16% by 2022, according to the World Health Organization. The percentage of adults affected in the United States as of 2015–2016 is about 39.6% overall (37.9% of males and 41.1% of females). In 2000, the World Health Organization (WHO) stated that overweight and obesity were replacing more traditional public health concerns such as undernutrition and infectious diseases as one of the most significant causes of poor health. The rate of obesity also increases with age at least up to 50 or 60 years old, and severe obesity in the United States, Australia, and Canada is increasing faster than the overall rate of obesity. The OECD has projected an increase in obesity rates until at least 2030, especially in the United States, Mexico, and England, with rates reaching 47%, 39%, and 35%, respectively. 
Once considered a problem only of high-income countries, obesity rates are rising worldwide and affecting both the developed and developing world. These increases have been felt most dramatically in urban settings. Sex- and gender-based differences also influence the prevalence of obesity. Globally there are more obese women than men, but the numbers differ depending on how obesity is measured. History Etymology Obesity is from the Latin obesitas, which means "stout, fat, or plump". Ēsus is the past participle of edere (to eat), with ob (over) added to it. The Oxford English Dictionary documents its first usage in 1611 by Randle Cotgrave. Historical attitudes Ancient Greek medicine recognizes obesity as a medical disorder and records that the Ancient Egyptians saw it in the same way. Hippocrates wrote that "Corpulence is not only a disease itself, but the harbinger of others". The Indian surgeon Sushruta (6th century BCE) related obesity to diabetes and heart disorders. He recommended physical work to help cure it and its side effects. For most of human history, mankind struggled with food scarcity. Obesity has thus historically been viewed as a sign of wealth and prosperity. It was common among high officials in Ancient East Asian civilizations. In the 17th century, English medical author Tobias Venner is credited with being one of the first to refer to the term as a societal disease in a published English language book. With the onset of the Industrial Revolution, it was realized that the military and economic might of nations were dependent on both the body size and strength of their soldiers and workers. Increasing the average body mass index from what is now considered underweight to what is now the normal range played a significant role in the development of industrialized societies. Height and weight thus both increased through the 19th century in the developed world. During the 20th century, as populations reached their genetic potential for height, weight began increasing much more than height, resulting in obesity. In the 1950s, increasing wealth in the developed world decreased child mortality, but as body weight increased, heart and kidney disease became more common. During this time period, insurance companies realized the connection between weight and life expectancy and increased premiums for the obese. Many cultures throughout history have viewed obesity as the result of a character flaw. The obesus or fat character in Ancient Greek comedy was a glutton and figure of mockery. During Christian times, food was viewed as a gateway to the sins of sloth and lust. In modern Western culture, excess weight is often regarded as unattractive, and obesity is commonly associated with various negative stereotypes. People of all ages can face social stigmatization and may be targeted by bullies or shunned by their peers. Public perceptions in Western society regarding healthy body weight differ from those regarding the weight that is considered ideal – and both have changed since the beginning of the 20th century. The weight that is viewed as an ideal has become lower since the 1920s. This is illustrated by the fact that the average height of Miss America pageant winners increased by 2% from 1922 to 1999, while their average weight decreased by 12%. On the other hand, people's views concerning healthy weight have changed in the opposite direction. In Britain, the weight at which people considered themselves to be overweight was significantly higher in 2007 than in 1999. 
These changes are believed to be due to increasing rates of adiposity, which have led to greater acceptance of extra body fat as normal. Obesity is still seen as a sign of wealth and well-being in many parts of Africa. This has become particularly common since the HIV epidemic began. The arts The first sculptural representations of the human body, made 20,000–35,000 years ago, depict obese females. Some attribute the Venus figurines to the tendency to emphasize fertility, while others feel they represent "fatness" in the people of the time. Corpulence is, however, absent in both Greek and Roman art, probably in keeping with their ideals regarding moderation. This continued through much of Christian European history, with only those of low socioeconomic status being depicted as obese. During the Renaissance some of the upper class began flaunting their large size, as can be seen in portraits of Henry VIII of England and Alessandro dal Borro. Rubens (1577–1640) regularly depicted heavyset women in his pictures, from which derives the term Rubenesque. These women, however, still maintained the "hourglass" shape with its relationship to fertility. During the 19th century, views on obesity changed in the Western world. After centuries of obesity being synonymous with wealth and social status, slimness began to be seen as the desirable standard. In his 1819 print, The Belle Alliance, or the Female Reformers of Blackburn!!!, artist George Cruikshank criticised the work of female reformers in Blackburn and used fatness as a means to portray them as unfeminine. Society and culture Economic impact In addition to its health impacts, obesity leads to many problems, including disadvantages in employment and increased business costs. In 2005, the medical costs attributable to obesity in the US were an estimated $190.2 billion or 20.6% of all medical expenditures, while the cost of obesity in Canada was estimated at CA$2 billion in 1997 (2.4% of total health costs). The total annual direct cost of overweight and obesity in Australia in 2005 was A$21 billion. Overweight and obese Australians also received A$35.6 billion in government subsidies. The estimated range for annual expenditures on diet products is $40 billion to $100 billion in the US alone. The Lancet Commission on Obesity in 2019 called for a global treaty—modelled on the WHO Framework Convention on Tobacco Control—committing countries to address obesity and undernutrition, explicitly excluding the food industry from policy development. They estimated the global cost of obesity at $2 trillion a year, about 2.8% of world GDP. Obesity prevention programs have been found to reduce the cost of treating obesity-related disease. However, the longer people live, the more medical costs they incur. Researchers, therefore, conclude that reducing obesity may improve the public's health, but it is unlikely to reduce overall health spending. Sin taxes such as a sugary drink tax have been implemented in certain countries globally to curb dietary and consumer habits, and in an effort to offset the economic toll. Obesity can lead to social stigmatization and disadvantages in employment. When compared to their ideal weight counterparts, workers with obesity on average have higher rates of absenteeism from work and take more disability leave, thus increasing costs for employers and decreasing productivity. 
A study examining Duke University employees found that people with a BMI over 40 kg/m2 filed twice as many workers' compensation claims as those whose BMI was 18.5–24.9 kg/m2. They also had more than 12 times as many lost work days. The most common injuries in this group were due to falls and lifting, thus affecting the lower extremities, wrists or hands, and backs. The Alabama State Employees' Insurance Board approved a controversial plan to charge obese workers $25 a month for health insurance that would otherwise be free unless they take steps to lose weight and improve their health. These measures started in January 2010 and apply to those state workers whose BMI exceeds 35 kg/m2 and who fail to make improvements in their health after one year. This becomes a Catch 22 position as many insurance companies will refuse to pay for treatment methods for workers living with obesity. Some research shows that people with obesity are less likely to be hired for a job and are less likely to be promoted. People with obesity are also paid less than their counterparts who do not live with obesity for an equivalent job; women with obesity on average make 6% less and men with obesity make 3% less. Specific industries, such as the airline, healthcare and food industries, have special concerns. Due to rising rates of obesity, airlines face higher fuel costs and pressures to increase seating width. In 2000, the extra weight of passengers with obesity cost airlines US$275 million. The healthcare industry has had to invest in special facilities for handling patients with class III obesity, including special lifting equipment and bariatric ambulances. Costs for restaurants are increased by litigation accusing them of causing obesity. In 2005, the US Congress discussed legislation to prevent civil lawsuits against the food industry in relation to obesity; however, it did not become law. With the American Medical Association's 2013 classification of obesity as a chronic disease, it is thought that health insurance companies will more likely pay for obesity treatment, counseling and surgery, and the cost of research and development of adipose treatment pills or gene therapy treatments should be more affordable if insurers help to subsidize their cost. The AMA classification is not legally binding, however, so health insurers still have the right to reject coverage for a treatment or procedure. In 2014, The European Court of Justice ruled that morbid obesity is a disability. The Court said that if an employee's obesity prevents them from "full and effective participation of that person in professional life on an equal basis with other workers", then it shall be considered a disability and that firing someone on such grounds is discriminatory. In low-income countries, obesity can be a signal of wealth. A 2023 experimental study found that obese individuals in Uganda were more likely to access credit. Size acceptance The principal goal of the fat acceptance movement is to decrease discrimination against people who are overweight and obese. However, some in the movement are also attempting to challenge the established relationship between obesity and negative health outcomes. A number of organizations exist that promote the acceptance of obesity. They have increased in prominence in the latter half of the 20th century. The US-based National Association to Advance Fat Acceptance (NAAFA) was formed in 1969 and describes itself as a civil rights organization dedicated to ending size discrimination. 
The International Size Acceptance Association (ISAA) is a non-governmental organization (NGO) which was founded in 1997. It has more of a global orientation and describes its mission as promoting size acceptance and helping to end weight-based discrimination. These groups often argue for the recognition of obesity as a disability under the US Americans With Disabilities Act (ADA). The American legal system, however, has decided that the potential public health costs exceed the benefits of extending this anti-discrimination law to cover obesity. Industry influence on research In 2015, the New York Times published an article on the Global Energy Balance Network, a nonprofit founded in 2014 that advocated for people to focus on increasing exercise rather than reducing calorie intake to avoid obesity and to be healthy. The organization was founded with at least $1.5M in funding from the Coca-Cola Company, and the company has provided $4M in research funding to the two founding scientists, Gregory A. Hand and Steven N. Blair, since 2008. Reports Many organizations have published reports pertaining to obesity. In 1998, the first US Federal guidelines were published, titled "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults: The Evidence Report". In 2006, the Canadian Obesity Network, now known as Obesity Canada, published the "Canadian Clinical Practice Guidelines (CPG) on the Management and Prevention of Obesity in Adults and Children". This is a comprehensive evidence-based guideline to address the management and prevention of overweight and obesity in adults and children. In 2004, the United Kingdom Royal College of Physicians, the Faculty of Public Health and the Royal College of Paediatrics and Child Health released the report "Storing up Problems", which highlighted the growing problem of obesity in the UK. The same year, the House of Commons Health Select Committee published its "most comprehensive inquiry [...] ever undertaken" into the impact of obesity on health and society in the UK and possible approaches to the problem. In 2006, the National Institute for Health and Clinical Excellence (NICE) issued a guideline on the diagnosis and management of obesity, as well as policy implications for non-healthcare organizations such as local councils. A 2007 report produced by Derek Wanless for the King's Fund warned that unless further action was taken, obesity had the capacity to debilitate the National Health Service financially. In 2022 the National Institute for Health and Care Research (NIHR) published a comprehensive review of research on what local authorities can do to reduce obesity. The Obesity Policy Action (OPA) framework divides measures into upstream policies, midstream policies, and downstream policies. Upstream policies aim to change society, midstream policies try to alter behaviors believed to contribute to obesity at the individual level, and downstream policies treat people who are currently obese. Childhood obesity The healthy BMI range varies with the age and sex of the child. Obesity in children and adolescents is defined as a BMI greater than the 95th percentile. The reference data that these percentiles are based on is from 1963 to 1994 and thus has not been affected by the recent increases in rates of obesity. Childhood obesity has reached epidemic proportions in the 21st century, with rising rates in both the developed and the developing world. 
Rates of obesity in Canadian boys have increased from 11% in the 1980s to over 30% in the 1990s, while during this same time period rates increased from 4 to 14% in Brazilian children. In the UK, there were 60% more obese children in 2005 compared to 1989. In the US, the percentage of overweight and obese children increased to 16% in 2008, a 300% increase over the prior 30 years. As with obesity in adults, many factors contribute to the rising rates of childhood obesity. Changing diet and decreasing physical activity are believed to be the two most important causes for the recent increase in the incidence of child obesity. Advertising of unhealthy foods to children also contributes, as it increases their consumption of the product. Antibiotics in the first 6 months of life have been associated with excess weight at age seven to twelve years of age. Because childhood obesity often persists into adulthood and is associated with numerous chronic illnesses, children who are obese are often tested for hypertension, diabetes, hyperlipidemia, and fatty liver disease. Treatments used in children are primarily lifestyle interventions and behavioral techniques, although efforts to increase activity in children have had little success. In the United States, medications are not FDA approved for use in this age group. Brief weight management interventions in primary care (e.g. delivered by a physician or nurse practitioner) have only a marginal positive effect in reducing childhood overweight or obesity. Multi-component behaviour change interventions that include changes to dietary and physical activity may reduce BMI in the short term in children aged 6 to 11 years, although the benefits are small and quality of evidence is low. Other animals Obesity in pets is common in many countries. In the United States, 23–41% of dogs are overweight, and about 5.1% are obese. The rate of obesity in cats was slightly higher at 6.4%. In Australia, the rate of obesity among dogs in a veterinary setting has been found to be 7.6%. The risk of obesity in dogs is related to whether or not their owners are obese; however, there is no similar correlation between cats and their owners.
Biology and health sciences
Health, fitness, and medicine
null
56437
https://en.wikipedia.org/wiki/Infrared%20astronomy
Infrared astronomy
Infrared astronomy is a sub-discipline of astronomy which specializes in the observation and analysis of astronomical objects using infrared (IR) radiation. The wavelength of infrared light ranges from 0.75 to 300 micrometers, and falls in between visible radiation, which ranges from 380 to 750 nanometers, and submillimeter waves. Infrared astronomy began in the 1830s, a few decades after the discovery of infrared light by William Herschel in 1800. Early progress was limited, and it was not until the early 20th century that conclusive detections of astronomical objects other than the Sun and Moon were made in infrared light. After a number of discoveries were made in the 1950s and 1960s in radio astronomy, astronomers realized how much information was available outside the visible wavelength range, and modern infrared astronomy was established. Infrared and optical astronomy are often practiced using the same telescopes, as the same mirrors or lenses are usually effective over a wavelength range that includes both visible and infrared light. Both fields also use solid-state detectors, though the specific types of solid-state photodetectors used are different. Infrared light is absorbed at many wavelengths by water vapor in the Earth's atmosphere, so most infrared telescopes are at high elevations in dry places, above as much of the atmosphere as possible. There have also been infrared observatories in space, including the Spitzer Space Telescope, the Herschel Space Observatory, and more recently the James Webb Space Telescope. History The discovery of infrared radiation is attributed to William Herschel, who performed an experiment in 1800 where he placed a thermometer in sunlight of different colors after it passed through a prism. He noticed that the temperature increase induced by sunlight was highest outside the visible spectrum, just beyond the red color. That the temperature increase was highest at infrared wavelengths was due to the spectral response of the prism rather than properties of the Sun, but the fact that there was any temperature increase at all prompted Herschel to deduce that there was invisible radiation from the Sun. He dubbed this radiation "calorific rays", and went on to show that it could be reflected, transmitted, and absorbed just like visible light. Efforts were made starting in the 1830s and continuing through the 19th century to detect infrared radiation from other astronomical sources. Radiation from the Moon was first detected in 1856 by Charles Piazzi Smyth, the Astronomer Royal for Scotland, during an expedition to Tenerife to test his ideas about mountain top astronomy. Ernest Fox Nichols used a modified Crookes radiometer in an attempt to detect infrared radiation from Arcturus and Vega, but Nichols deemed the results inconclusive. Even so, the ratio of flux he reported for the two stars is consistent with the modern value, so George Rieke gives Nichols credit for the first detection of a star other than our own in the infrared. The field of infrared astronomy continued to develop slowly in the early 20th century, as Seth Barnes Nicholson and Edison Pettit developed thermopile detectors capable of accurate infrared photometry and sensitive enough to observe a few hundred stars. The field was mostly neglected by traditional astronomers until the 1960s, with most scientists who practiced infrared astronomy having actually been trained as physicists. 
The success of radio astronomy during the 1950s and 1960s, combined with the improvement of infrared detector technology, prompted more astronomers to take notice, and infrared astronomy became well established as a subfield of astronomy. Infrared space telescopes then entered service. In 1983, IRAS made an all-sky survey. In 1995, the European Space Agency created the Infrared Space Observatory. Before this satellite ran out of liquid helium in 1998, it discovered protostars and detected water in many parts of the universe, including in the atmospheres of Saturn and Uranus. On 25 August 2003, NASA launched the Spitzer Space Telescope, previously known as the Space Infrared Telescope Facility. In 2009, the telescope ran out of liquid helium and lost the ability to observe in the far infrared. It had discovered stars, the Double Helix Nebula, and light from extrasolar planets. It continued working in the 3.6 and 4.5 micrometer bands. Since then, other infrared telescopes have helped find newly forming stars, nebulae, and stellar nurseries. Infrared telescopes have opened up previously hidden parts of the galaxy to observation. They are also useful for observing extremely distant objects, such as quasars. Quasars recede from Earth, and the resulting large redshift makes them difficult targets for an optical telescope; infrared telescopes give much more information about them. In May 2008, an international group of infrared astronomers showed that intergalactic dust greatly dims the light of distant galaxies; in actuality, galaxies are almost twice as bright as they appear. The dust absorbs much of the visible light and re-emits it as infrared light. Modern infrared astronomy Infrared radiation with wavelengths just longer than visible light, known as near-infrared, behaves in a very similar way to visible light, and can be detected using similar solid-state devices (because of this, many quasars, stars, and galaxies were discovered). For this reason, the near-infrared region of the spectrum is commonly incorporated as part of the "optical" spectrum, along with the near ultraviolet. Many optical telescopes, such as those at Keck Observatory, operate effectively in the near infrared as well as at visible wavelengths. The far-infrared extends to submillimeter wavelengths, which are observed by telescopes such as the James Clerk Maxwell Telescope at Mauna Kea Observatory. Like all other forms of electromagnetic radiation, infrared is utilized by astronomers to study the universe. Indeed, infrared measurements taken by the 2MASS and WISE astronomical surveys have been particularly effective at unveiling previously undiscovered star clusters. Examples of such embedded star clusters are FSR 1424, FSR 1432, Camargo 394, Camargo 399, Majaess 30, and Majaess 99. Infrared telescopes, which include most major optical telescopes as well as a few dedicated infrared telescopes, need to be chilled with liquid nitrogen and shielded from warm objects. The reason for this is that objects with temperatures of a few hundred kelvins emit most of their thermal energy at infrared wavelengths. If infrared detectors were not kept cold, the radiation from the detector itself would contribute noise that would dwarf the radiation from any celestial source. This is particularly important in the mid-infrared and far-infrared regions of the spectrum. To achieve higher angular resolution, some infrared telescopes are combined to form astronomical interferometers. 
The effective resolution of an interferometer is set by the distance between the telescopes, rather than the size of the individual telescopes. When used together with adaptive optics, infrared interferometers, such as the two 10-meter telescopes at Keck Observatory or the four 8.2-meter telescopes that make up the Very Large Telescope Interferometer, can achieve high angular resolution. The principal limitation on infrared sensitivity from ground-based telescopes is the Earth's atmosphere. Water vapor absorbs a significant amount of infrared radiation, and the atmosphere itself emits at infrared wavelengths. For this reason, most infrared telescopes are built in very dry places at high altitude, so that they are above most of the water vapor in the atmosphere. Suitable locations on Earth include Mauna Kea Observatory at 4205 meters above sea level, the Paranal Observatory at 2635 meters in Chile, and regions of high-altitude ice desert such as Dome C in Antarctica. Even at high altitudes, the transparency of the Earth's atmosphere is limited except in infrared windows, or wavelengths where the Earth's atmosphere is transparent. The main infrared windows are listed below: As is the case for visible light telescopes, space is the ideal place for infrared telescopes. Telescopes in space can achieve higher resolution, as they do not suffer from blurring caused by the Earth's atmosphere, and are also free from infrared absorption caused by the Earth's atmosphere. Current infrared telescopes in space include the Herschel Space Observatory, the Spitzer Space Telescope, the Wide-field Infrared Survey Explorer and the James Webb Space Telescope. Since putting telescopes in orbit is expensive, there are also airborne observatories, such as the Stratospheric Observatory for Infrared Astronomy and the Kuiper Airborne Observatory. These observatories fly above most, but not all, of the atmosphere, and water vapor in the atmosphere absorbs some of the infrared light from space. Infrared technology Among the most common infrared detector arrays used at research telescopes are HgCdTe arrays. These operate well between 0.6 and 5 micrometre wavelengths. For longer-wavelength observations or higher sensitivity, other detectors may be used, including other narrow-gap semiconductor detectors, low-temperature bolometer arrays, or photon-counting superconducting tunnel junction arrays. Special requirements for infrared astronomy include very low dark currents to allow long integration times, associated low-noise readout circuits, and sometimes very high pixel counts. Low temperature is often achieved by a coolant, which can run out. Space missions have either ended or shifted to "warm" observations when the coolant supply was used up. For example, WISE ran out of coolant in October 2010, about ten months after being launched.
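The point made earlier in this section—that a single telescope's resolution is set by its aperture, while an interferometer's is set by the baseline between telescopes—can be illustrated with the standard small-angle estimates θ ≈ 1.22 λ/D (Rayleigh criterion) and θ ≈ λ/B. The Python sketch below is only a rough illustration; the wavelength, aperture, and baseline are round assumed values (the ~85 m figure is approximately the Keck I–Keck II separation), not the published specification of any instrument.

```python
import math

MAS_PER_RADIAN = math.degrees(1.0) * 3600 * 1000  # milliarcseconds per radian

def rayleigh_limit_mas(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited resolution of a single aperture: theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * MAS_PER_RADIAN

def interferometer_limit_mas(wavelength_m: float, baseline_m: float) -> float:
    """Approximate interferometer resolution, set by the baseline B: theta ~ lambda / B."""
    return wavelength_m / baseline_m * MAS_PER_RADIAN

wavelength = 2.2e-6  # K band, 2.2 micrometres
print(f"single 10 m telescope: {rayleigh_limit_mas(wavelength, 10.0):.0f} mas")
print(f"~85 m baseline:        {interferometer_limit_mas(wavelength, 85.0):.1f} mas")
```

Running the sketch gives roughly 55 milliarcseconds for the single aperture versus about 5 milliarcseconds for the baseline, which is why combining telescopes yields much sharper angular resolution than either telescope alone.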
Physical sciences
Basics
Astronomy
56440
https://en.wikipedia.org/wiki/Orbital%20inclination
Orbital inclination
Orbital inclination measures the tilt of an object's orbit around a celestial body. It is expressed as the angle between a reference plane and the orbital plane or axis of direction of the orbiting object. For a satellite orbiting the Earth directly above the Equator, the plane of the satellite's orbit is the same as the Earth's equatorial plane, and the satellite's orbital inclination is 0°. The general case for a circular orbit is that it is tilted, spending half an orbit over the northern hemisphere and half over the southern. If the orbit swung between 20° north latitude and 20° south latitude, then its orbital inclination would be 20°. Orbits The inclination is one of the six orbital elements describing the shape and orientation of a celestial orbit. It is the angle between the orbital plane and the plane of reference, normally stated in degrees. For a satellite orbiting a planet, the plane of reference is usually the plane containing the planet's equator. For planets in the Solar System, the plane of reference is usually the ecliptic, the plane in which the Earth orbits the Sun. This reference plane is most practical for Earth-based observers. Therefore, Earth's inclination is, by definition, zero. Inclination can instead be measured with respect to another plane, such as the Sun's equator or the invariable plane (the plane that represents the angular momentum of the Solar System, approximately the orbital plane of Jupiter). Natural and artificial satellites The inclination of orbits of natural or artificial satellites is measured relative to the equatorial plane of the body they orbit, if they orbit sufficiently closely. The equatorial plane is the plane perpendicular to the axis of rotation of the central body. An inclination of 30° could also be described using an angle of 150°. The convention is that the normal orbit is prograde, an orbit in the same direction as the planet rotates. Inclinations greater than 90° describe retrograde orbits (backward). Thus: An inclination of 0° means the orbiting body has a prograde orbit in the planet's equatorial plane. An inclination greater than 0° and less than 90° also describes a prograde orbit. An inclination of 63.4° is often called a critical inclination, when describing artificial satellites orbiting the Earth, because they have zero apogee drift. An inclination of exactly 90° is a polar orbit, in which the spacecraft passes over the poles of the planet. An inclination greater than 90° and less than 180° is a retrograde orbit. An inclination of exactly 180° is a retrograde equatorial orbit. For impact-generated moons of terrestrial planets not too far from their star, with a large planet–moon distance, the orbital planes of moons tend to be aligned with the planet's orbit around the star due to tides from the star, but if the planet–moon distance is small, it may be inclined. For gas giants, the orbits of moons tend to be aligned with the giant planet's equator, because these formed in circumplanetary disks. Strictly speaking, this applies only to regular satellites. Captured bodies on distant orbits vary widely in their inclinations, while captured bodies in relatively close orbits tend to have low inclinations owing to tidal effects and perturbations by large regular satellites. Exoplanets and multiple star systems The inclination of exoplanets or members of multi-star star systems is the angle of the plane of the orbit relative to the plane perpendicular to the line of sight from Earth to the object. 
An inclination of 0° is a face-on orbit, meaning the plane of the exoplanet's orbit is perpendicular to the line of sight with Earth. An inclination of 90° is an edge-on orbit, meaning the plane of the exoplanet's orbit is parallel to the line of sight with Earth. Since the word "inclination" is used in exoplanet studies for this line-of-sight inclination, the angle between the planet's orbit and its star's rotational axis is expressed using the term "spin-orbit angle" or "spin-orbit alignment". In most cases the orientation of the star's rotational axis is unknown. Because the radial-velocity method more easily finds planets with orbits closer to edge-on, most exoplanets found by this method have inclinations between 45° and 135°, although in most cases the inclination is not known. Consequently, most exoplanets found by radial velocity have true masses no more than 40% greater than their minimum masses. If the orbit is almost face-on, especially for superjovians detected by radial velocity, then those objects may actually be brown dwarfs or even red dwarfs. One particular example is HD 33636 B, which has a true mass of 142 MJ, corresponding to an M6V star, while its minimum mass was 9.28 MJ. If the orbit is almost edge-on, then the planet can be seen transiting its star. Calculation In astrodynamics, the inclination can be computed from the orbital momentum vector h (or any vector perpendicular to the orbital plane) as i = arccos(hz / |h|), where hz is the z-component of h. Mutual inclination of two orbits may be calculated from their inclinations to another plane using the cosine rule for angles. Observations and theories Most planetary orbits in the Solar System have relatively small inclinations, both in relation to each other and to the Sun's equator. On the other hand, the dwarf planets Pluto and Eris have inclinations to the ecliptic of 17° and 44° respectively, and the large asteroid Pallas is inclined at 34°. In 1966, Peter Goldreich published a classic paper on the evolution of the Moon's orbit and on the orbits of other moons in the Solar System. He showed that, for each planet, there is a distance such that moons closer to the planet than that distance maintain an almost constant orbital inclination with respect to the planet's equator (with an orbital precession mostly due to the tidal influence of the planet), whereas moons farther away maintain an almost constant orbital inclination with respect to the ecliptic (with precession due mostly to the tidal influence of the Sun). The moons in the first category, with the exception of Neptune's moon Triton, orbit near the equatorial plane. He concluded that these moons formed from equatorial accretion disks. But he found that the Moon, although it was once inside the critical distance from the Earth, never had an equatorial orbit as would be expected from various scenarios for its origin. This is called the lunar inclination problem, to which various solutions have since been proposed. Other meaning For planets and other rotating celestial bodies, the angle of the equatorial plane relative to the orbital plane – such as the tilt of the Earth's poles toward or away from the Sun – is sometimes also called inclination, but less ambiguous terms are axial tilt or obliquity.
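As a concrete illustration of the calculation above, the sketch below derives the inclination from a position and velocity vector by first forming the specific orbital angular momentum h = r × v and then taking i = arccos(hz / |h|). The sample state vector is an arbitrary made-up example, not data for any real satellite.

```python
import math

def inclination_deg(r, v):
    """Orbital inclination from position r and velocity v (same frame, with the
    z-axis normal to the reference plane), via h = r x v and i = arccos(h_z / |h|)."""
    hx = r[1] * v[2] - r[2] * v[1]
    hy = r[2] * v[0] - r[0] * v[2]
    hz = r[0] * v[1] - r[1] * v[0]
    h_norm = math.sqrt(hx * hx + hy * hy + hz * hz)
    return math.degrees(math.acos(hz / h_norm))

# Made-up example state vector in an equatorial frame (km and km/s):
r = [7000.0, 0.0, 0.0]
v = [0.0, 6.5, 3.75]   # the out-of-plane velocity component tilts the orbit
print(f"inclination = {inclination_deg(r, v):.1f} degrees")   # about 30.0
```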
Physical sciences
Celestial mechanics
Astronomy
56462
https://en.wikipedia.org/wiki/Carpal%20tunnel%20syndrome
Carpal tunnel syndrome
Carpal tunnel syndrome (CTS) is a nerve compression syndrome associated with the collected signs and symptoms of compression of the median nerve at the carpal tunnel in the wrist. Carpal tunnel syndrome usually has no known cause, but there are environmental and medical risk factors associated with the condition. CTS can affect both wrists. Other conditions can cause CTS such as wrist fracture or rheumatoid arthritis. After fracture, the resulting swelling, bleeding, and deformity compress the median nerve. With rheumatoid arthritis, the enlarged synovial lining of the tendons causes compression. The main symptoms are pain in the hand, numbness, and tingling in the thumb, index finger, middle finger, and the thumb side of the ring finger. Symptoms are typically most troublesome at night. Many people sleep with their wrists bent, and the ensuing symptoms may lead to awakening. Untreated, and over years to decades, CTS causes loss of sensibility, weakness, and shrinkage (atrophy) of the thenar muscles at the base of the thumb. Work-related factors such as vibration, wrist extension or flexion, hand force, and repetitive strain are risk factors for CTS. Other risk factors include being overweight, female, diabetes mellitus, rheumatoid arthritis, thyroid disease, and genetics. Diagnosis can be made with a high probability based on characteristic symptoms and signs. It can also be measured with electrodiagnostic tests. People wake less often at night if they wear a wrist splint. Injection of corticosteroids may or may not alleviate symptoms better than simulated (placebo) injections. There is no evidence that corticosteroid injection sustainably alters the natural history of the disease, which seems to be a gradual progression of neuropathy. Surgery to cut the transverse carpal ligament is the only known disease modifying treatment. Anatomy The carpal tunnel is an anatomical compartment located at the base of the palm. Nine flexor tendons and the median nerve pass through the carpal tunnel that is surrounded on three sides by the carpal bones that form an arch. The median nerve provides feeling or sensation to the thumb, index finger, long finger, and half of the ring finger. At the level of the wrist, the median nerve supplies the muscles at the base of the thumb that allow it to abduct, move away from the other four fingers, as well as move out of the plane of the palm. The carpal tunnel is located at the middle third of the base of the palm, bounded by the bony prominence of the scaphoid tubercle and trapezium at the base of the thumb, and the hamate hook that can be palpated along the axis of the ring finger. From the anatomical position, the carpal tunnel is bordered on the anterior surface by the transverse carpal ligament, also known as the flexor retinaculum. The flexor retinaculum is a strong, fibrous band that attaches to the pisiform and the hamulus of the hamate. The proximal boundary is the distal wrist skin crease, and the distal boundary is approximated by a line known as Kaplan's cardinal line. This line uses surface landmarks, and is drawn between the apex of the skin fold between the thumb and index finger to the palpated hamate hook. Pathophysiology The carpal tunnel is formed by the carpal bones and the transverse carpal ligament. The median nerve passes through this space along with the flexor tendons. Increased compartmental pressure for any reason can squeeze the median nerve. 
Theoretically, increased pressure can interfere with normal intraneural blood flow, eventually causing a cascade of physiological changes in the nerve itself. There is a dose-response relationship such that greater and longer periods of pressure are associated with greater nerve dysfunction. One cause of the compression underlying the symptoms and signs of carpal tunnel syndrome is hypertrophy of the synovial tissue surrounding the flexor tendons, as occurs with rheumatoid arthritis. Prolonged pressure can lead to a cascade of physiological changes in neural tissue. First, the blood-nerve barrier breaks down (increased permeability of the perineurium and of the endothelial cells of endoneurial blood vessels). If the pressure continues, the nerves will start the process of demyelination under the area of compression. This will result in abnormal nerve conduction even when the pressure is relieved, leading to persistent sensory symptoms until remyelination can occur. If the compression continues and is severe enough, axons may be injured and Wallerian degeneration will occur. At this point there may be weakness and muscle atrophy, depending on the extent of axon damage. The critical pressure above which the microcirculatory environment of a nerve becomes compromised depends on diastolic and systolic blood pressure. Higher blood pressure will require higher external pressure on the nerve to disrupt its microvascular environment. The critical pressure necessary to disrupt the blood supply of a nerve is approximately 30 mm Hg below diastolic blood pressure, or 45 mm Hg below mean arterial pressure. For normotensive (normal blood pressure) adults, the average systolic blood pressure is 116 mm Hg and the average diastolic blood pressure is 69 mm Hg. Using these data, the average person would become symptomatic with approximately 39 mm Hg of pressure in the wrist (69 - 30 = 39 by the diastolic rule, and 69 + (116 - 69)/3 - 45 ~ 40 by the mean arterial pressure rule). Carpal tunnel syndrome patients tend to have elevated carpal tunnel pressures (12–31 mm Hg) compared to controls (2.5–13 mm Hg). Applying pressure to the carpal tunnel of normal subjects in a lab can produce mild neurophysiological changes at 30 mm Hg, with a rapid, complete sensory block at 60 mm Hg. Carpal tunnel pressure may be affected by wrist movement and position, with flexion and extension capable of raising the tunnel pressure as high as 111 mm Hg. Many of the activities associated with carpal tunnel symptoms, such as driving or holding a phone, involve flexing the wrist, and the symptoms are likely due to the increase in carpal tunnel pressure during these activities. Nerve compression can result in various stages of nerve injury. The majority of carpal tunnel syndrome patients have a degree I nerve injury (Sunderland classification), also called neurapraxia. This is characterized by a conduction block, segmental demyelination, and intact axons. With no further compression, the nerves will remyelinate and fully recover. Severe carpal tunnel syndrome patients may have degree II/III injuries (Sunderland classification), or axonotmesis, where the axon is injured partially or fully. With axon injury there would be muscle weakness or atrophy, and with no further compression the nerves may only partially recover. While there is evidence that chronic compression is a major cause of carpal tunnel syndrome, it may not be the only cause. Several alternative, potentially speculative, theories describe other forms of nerve entrapment.
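Before turning to those theories, the pressure arithmetic above can be made concrete with a minimal sketch (illustrative only; the function names are invented, and the threshold rules and the 116/69 mm Hg averages are simply the figures quoted in this section):

```python
# Sketch of the critical-pressure arithmetic quoted above. The two rules
# from the text: nerve blood supply is disrupted at roughly 30 mm Hg below
# diastolic pressure, or 45 mm Hg below mean arterial pressure (MAP).

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    # Common clinical estimate: MAP = diastolic + (systolic - diastolic) / 3
    return diastolic + (systolic - diastolic) / 3

def critical_tunnel_pressure(systolic: float, diastolic: float) -> float:
    # Carpal tunnel pressure at which the median nerve's microvascular
    # environment is compromised, by each rule; the lower value is returned.
    by_diastolic = diastolic - 30
    by_map = mean_arterial_pressure(systolic, diastolic) - 45
    return min(by_diastolic, by_map)

# Average adult blood pressure quoted in the text: 116/69 mm Hg.
print(round(mean_arterial_pressure(116, 69), 1))    # 84.7
print(round(critical_tunnel_pressure(116, 69), 1))  # 39.0 (MAP rule gives ~39.7)
```

With those inputs the sketch reproduces the roughly 39–40 mm Hg threshold cited above, against which the reported symptomatic carpal tunnel pressures and the 30/60/111 mm Hg experimental values can be compared.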
One such theory is nerve scarring (specifically, adherence between the mesoneurium and epineurium) preventing the nerve from gliding during wrist and finger movements, causing repetitive traction injuries. Another is the double crush syndrome, in which compression may interfere with axonal transport, and two separate points of compression (e.g., neck and wrist), neither enough to cause local demyelination, may together impair normal nerve function. Epidemiology Carpal tunnel syndrome is estimated to affect one out of ten people during their lifetime and is the most common nerve compression syndrome. There is notable variation in such estimates based on how one defines the problem, in particular whether one studies people presenting with symptoms or people with measurable median neuropathy, whether or not they are seeking care. Carpal tunnel syndrome accounts for about 90% of all nerve compression syndromes. The best data regarding CTS come from population-based studies, which demonstrate no relationship to gender, and increasing prevalence (accumulation) with age. Symptoms The characteristic symptom of CTS is numbness, tingling, or burning sensations in the thumb, index, middle, and radial half of the ring finger. These areas receive sensation through the median nerve. Numbness or tingling is usually worse during sleep. People tend to sleep with their wrists flexed, which increases pressure on the nerve. Ache and discomfort may be reported in the forearm or even the upper arm. Symptoms that are not characteristic of CTS include pain in the wrists or hands, loss of grip strength, minor loss of sleep, and loss of manual dexterity. As the median neuropathy gets worse, there is loss of sensibility in the thumb, index, middle, and thumb side of the ring finger. As the neuropathy progresses, there may be first weakness, then atrophy, of the muscles of the thenar eminence (the flexor pollicis brevis, opponens pollicis, and abductor pollicis brevis). The sensibility of the palm remains normal because the superficial sensory branch of the median nerve branches proximal to the transverse carpal ligament (TCL) and travels superficial to it. Median nerve symptoms may arise from nerve compression at the level of the thoracic outlet or the area where the median nerve passes between the two heads of the pronator teres in the forearm, although this is debated. Signs Severe CTS is associated with measurable loss of sensibility. Diminished threshold sensibility (the ability to distinguish different amounts of pressure) can be measured using Semmes-Weinstein monofilament testing. Diminished discriminant sensibility can be measured by testing two-point discrimination: the number of millimeters by which two points of contact need to be separated before they can be distinguished. A person with idiopathic carpal tunnel syndrome will not have any sensory loss over the thenar eminence (the bulge of muscles in the palm of the hand at the base of the thumb). This is because the palmar branch of the median nerve, which innervates that area of the palm, separates from the median nerve and passes over the carpal tunnel. Severe CTS is also associated with weakness and atrophy of the muscles at the base of the thumb. The ability to palmarly abduct the thumb may be lost. CTS can be detected on examination using one of several maneuvers to provoke paresthesia (a sensation of tingling or "pins and needles" in the median nerve distribution). These so-called provocative signs include the following.
Phalen's maneuver is performed by fully flexing the wrist, then holding this position and awaiting symptoms. A positive test is one that results in paresthesia in the median nerve distribution within sixty seconds. Tinel's sign is performed by lightly tapping the median nerve just proximal to the flexor retinaculum to elicit paresthesia. Durkan's test, or carpal compression test, is performed by applying firm pressure to the palm over the nerve for up to 30 seconds to elicit paresthesia. The hand elevation test is performed by lifting both hands above the head; paresthesia in the median nerve distribution within two minutes is considered positive. Diagnostic performance characteristics such as sensitivity and specificity are reported, but are difficult to interpret because of the lack of a consensus reference standard for CTS. Causes Most presentations of CTS have no known disease cause (idiopathic). The association of other factors with CTS is a source of notable debate. It is important to distinguish factors that provoke symptoms, and factors that are associated with seeking care, from factors that make the neuropathy worse. Genetic factors are believed to be the most important determinants of who develops carpal tunnel syndrome. In other words, one's wrist structure seems programmed at birth to develop CTS later in life. A genome-wide association study (GWAS) of carpal tunnel syndrome identified 50 genomic loci significantly associated with the disease, including several loci previously known to be associated with human height. Some other factors that contribute to carpal tunnel syndrome are conditions such as diabetes, alcoholism, and vitamin deficiency or toxicity, as well as exposure to toxins. Conditions such as these do not necessarily increase the interstitial pressure of the carpal tunnel. One case-control study noted that individuals classified as obese (body mass index, BMI, greater than 29) are 2.5 times more likely than slender individuals (BMI below 20) to be diagnosed with CTS (the cut-offs are sketched below). It is not clear whether this association is due to an alteration of pathophysiology, a variation in symptoms, or a variation in care-seeking. Discrete pathophysiology and CTS Hereditary neuropathy with susceptibility to pressure palsies is a genetic condition that appears to increase the probability of developing CTS. Heterozygous mutations in the gene SH3TC2, associated with Charcot-Marie-Tooth disease, may confer susceptibility to neuropathy, including CTS. Associations with common benign tumors such as lipomas, ganglia, and vascular malformations should be interpreted with care; such tumors are very common, and only some cause pressure on the median nerve. Similarly, the association between transthyretin amyloidosis-associated polyneuropathy and carpal tunnel syndrome is under investigation. Prior carpal tunnel release is often noted in individuals who later present with transthyretin amyloid-associated cardiomyopathy. There is consideration that bilateral carpal tunnel syndrome could be a reason to consider amyloidosis, timely diagnosis of which could improve heart health. Amyloidosis is rare, even among people with carpal tunnel syndrome (0.55% incidence within 10 years of carpal tunnel release). In the absence of other factors associated with a notable probability of amyloidosis, it is not clear that biopsy at the time of carpal tunnel release has a suitable balance between potential harms and potential benefits.
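As noted above, the BMI cut-offs from that case-control study can be made concrete with a small sketch (the weight divided by height squared formula is the standard BMI definition; the function names are invented, and the greater-than-29 and below-20 cut-offs are the study's own, not general clinical categories):

```python
# Sketch of the BMI cut-offs from the case-control study quoted above.
# BMI = weight (kg) / height (m) squared.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def study_category(weight_kg: float, height_m: float) -> str:
    # Cut-offs as quoted: >29 "obese" (2.5x odds of a CTS diagnosis),
    # <20 "slender" (the comparison group).
    value = bmi(weight_kg, height_m)
    if value > 29:
        return "obese (2.5x odds of CTS diagnosis vs. slender)"
    if value < 20:
        return "slender (comparison group)"
    return "intermediate"

print(round(bmi(90, 1.70), 1))   # 31.1
print(study_category(90, 1.70))  # obese (2.5x odds of CTS diagnosis vs. slender)
```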
Other specific pathophysiologies that can cause CTS via pressure include: Rheumatoid arthritis and other diseases that cause inflammation of the flexor tendons. With severe untreated hypothyroidism, generalized myxedema causes deposition of mucopolysaccharides within both the perineurium of the median nerve and the tendons passing through the carpal tunnel. The association of CTS with lesser degrees of hypothyroidism is questioned. Pregnancy may bring out symptoms in genetically predisposed individuals, possibly because temporary hormonal changes and fluid retention increase pressure in the carpal tunnel. High progesterone levels and water retention may increase the size of the synovium. Bleeding and swelling from a fracture or dislocation; this is referred to as acute carpal tunnel syndrome. Acromegaly involves excessive secretion of growth hormone, which causes the soft tissues and bones around the carpal tunnel to grow and compress the median nerve. Other considerations Double crush syndrome is a debated hypothesis that nerve compression or irritation of nerve branches contributing to the median nerve in the neck, or anywhere above the wrist, increases sensitivity of the nerve to compression in the wrist. There is little evidence to support this theory and some concern that it may be used to justify more surgery. CTS and activity Work-related factors that increase the risk of CTS include vibration (odds ratio 5.4), hand force (4.2), and repetition (2.3). Exposure to wrist extension or flexion at work increases the risk of CTS by 2.0 times. A systematic review of studies looking at the relationship between CTS and computer use found the available studies to be inconclusive and contradictory, owing to poor study methods and unaccounted-for confounding variables. The international debate regarding the relationship between CTS and repetitive hand use (at work in particular) is ongoing. The Occupational Safety and Health Administration (OSHA) has adopted rules and regulations regarding so-called "cumulative trauma disorders" based on concerns regarding potential harm from exposure to repetitive tasks, force, posture, and vibration. A review of available scientific data by the National Institute for Occupational Safety and Health (NIOSH) indicated that job tasks involving highly repetitive manual acts or specific wrist postures were associated with symptoms of CTS, but paresthesia (characteristic of CTS) was not clearly distinguished from pain (not characteristic), and causation was not established. The distinction from work-related arm pains that are not carpal tunnel syndrome was unclear. It is proposed that repetitive use of the arm can affect the biomechanics of the upper limb or cause damage to tissues. It has also been proposed that postural, spinal, and ergonomic assessments be considered, based on the observation that addressing these factors improved comfort in some studies, although experimental data are lacking and the perceived benefits may not be specific to those interventions. A 2010 survey by NIOSH showed that two-thirds of the 5 million carpal tunnel cases diagnosed in the US that year were related to work. Women are more likely to be diagnosed with work-related carpal tunnel syndrome than men. Many if not most patients described in published series of carpal tunnel release are older and often not working. Normal pressure of the carpal tunnel has been defined as a range of .
Wrist flexion increases the pressure eight-fold and extension increases it ten-fold. There is speculation that repetitive flexion and extension in the wrist can cause thickening of the synovial tissue that lines the tendons within the carpal tunnel. Associated conditions A variety of patient factors can lead to CTS, including heredity, size of the carpal tunnel, associated local and systemic diseases, and certain habits. Non-traumatic causes generally develop over a period of time, and are not triggered by one certain event. Many of these factors are manifestations of physiologic aging. Diagnosis There is no consensus reference standard for the diagnosis of carpal tunnel syndrome. A combination of characteristic symptoms (how it feels) and signs (what the clinician finds on exam) is associated with a high probability of CTS without electrophysiological testing. Electrodiagnostic (EDX) testing, including electromyography and nerve conduction studies, can objectively measure and verify median neuropathy. Ultrasound can image and measure the cross-sectional diameter of the median nerve, which has some correlation with CTS. The role of ultrasound in diagnosis, just as for electrodiagnostic testing, is a matter of debate. EDX testing cannot fully exclude the diagnosis of CTS because of its limited sensitivity. The role of confirmatory electrodiagnostic testing is debated. The goal of electrodiagnostic testing is to compare the speed of conduction in the median nerve with conduction in other nerves supplying the hand. When the median nerve is compressed, it will conduct more slowly than normal and more slowly than other nerves. Nerve compression results in damage to the myelin sheath and manifests as delayed latencies and slowed conduction velocities. Electrodiagnosis rests upon demonstrating impaired median nerve conduction across the carpal tunnel in the context of normal conduction elsewhere. It is often stated that normal electrodiagnostic studies do not preclude the diagnosis of carpal tunnel syndrome. The rationale for this is that a threshold of neuropathy must be reached before study results become abnormal, and also that threshold values for abnormality vary. Others contend that idiopathic median neuropathy at the carpal tunnel with normal electrodiagnostic tests would represent very mild neuropathy that would be best managed as a normal median nerve. More importantly, notable symptoms with mild disease are strongly associated with unhelpful thoughts and symptoms of worry and despair. This should remind clinicians to always consider the whole person, including mindset and circumstances, in strategies to help people get and stay healthy. A joint report published by the American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM), the American Academy of Physical Medicine and Rehabilitation (AAPM&R), and the American Academy of Neurology defines practice parameters, standards, and guidelines for EDX studies of CTS based on an extensive critical literature review. This joint review concluded that median nerve motor and sensory conduction studies are valid and reproducible in a clinical laboratory setting, and that a clinical diagnosis of CTS can be made with a sensitivity greater than 85% and a specificity greater than 95%. The AANEM has issued evidence-based practice guidelines for the diagnosis of carpal tunnel syndrome, both by electrodiagnostic studies and by neuromuscular ultrasound. Imaging The role of MRI or ultrasound imaging in the diagnosis of CTS is unclear.
Their routine use is not recommended. Morphological MRI has high sensitivity but low specificity for CTS. High signal intensity may suggest accumulation of axonally transported material, myelin sheath degeneration, or edema. However, more recent quantitative MRI techniques, which derive repeatable, reliable, and objective biomarkers from nerves and skeletal muscle, may have utility; these include diffusion-weighted (typically diffusion tensor) MRI, which has established normal values and demonstrable aberrations in carpal tunnel syndrome. Differential diagnosis Cervical radiculopathy can also cause paresthesia and abnormal sensibility in the hands and wrist. The distribution usually follows the nerve root, and the paresthesia may be provoked by neck movement. Electromyography and imaging of the cervical spine can help to differentiate cervical radiculopathy from carpal tunnel syndrome if the diagnosis is unclear. Carpal tunnel syndrome is sometimes applied as a label to anyone with pain, numbness, swelling, or burning in the radial side of the hands or wrists. When pain is the primary symptom, carpal tunnel syndrome is unlikely to be the source of the symptoms. When the symptoms and signs point to atrophy and muscle weakness more than numbness, consider neurodegenerative disorders such as amyotrophic lateral sclerosis or Charcot-Marie-Tooth disease. Prevention There is little or no evidence to support the concept that activity adjustment prevents carpal tunnel syndrome. The evidence for using a wrist rest at a computer keyboard is debated. There is also little research supporting a relationship between ergonomics and carpal tunnel syndrome. Given that biological factors such as genetic predisposition and anthropometric features are more strongly associated with carpal tunnel syndrome than occupational or environmental factors such as hand use, CTS might not be preventable by activity modifications. Some claim that worksite modifications, such as switching from a QWERTY computer keyboard layout to Dvorak, are helpful, but meta-analyses of the available studies note limited supporting evidence. Treatment There are more than 50 types of treatments for CTS, with varied levels of evidence and recommendation across healthcare guidelines; the evidence most strongly supports surgery, steroids, splinting for wrist positioning, and physical or occupational therapy interventions. When selecting treatment, it is important to consider the severity and chronicity of the CTS pathophysiology and to distinguish treatments that can alter the natural history of the pathophysiology (disease-modifying treatments) from treatments that only alleviate symptoms (palliative treatments). The strongest evidence for disease-modifying treatment in chronic or severe CTS cases is for carpal tunnel surgery to change the shape of the carpal tunnel. The American Academy of Orthopedic Surgeons recommends proceeding conservatively, with a course of nonsurgical therapies tried before release surgery is considered. A different treatment should be tried if the current treatment fails to resolve the symptoms within 2 to 7 weeks. Early surgery with carpal tunnel release is indicated where there is evidence of median nerve denervation or when a person elects to proceed directly to surgical treatment. Recommendations may differ when carpal tunnel syndrome is found in association with the following conditions: diabetes mellitus, coexistent cervical radiculopathy, hypothyroidism, polyneuropathy, pregnancy, rheumatoid arthritis, and carpal tunnel syndrome in the workplace.
CTS related to another pathophysiology is addressed by treating that pathology, for instance with disease-modifying medications for rheumatoid arthritis or surgery for traumatic acute carpal tunnel syndrome. There is insufficient evidence to recommend gabapentin, non-steroidal anti-inflammatories (NSAIDs), yoga, acupuncture, low-level laser therapy, magnet therapy, vitamin B6, or other supplements. Splint immobilization Wrist braces (splints) alleviate symptoms by keeping the wrist straight, which avoids the increased pressure in the carpal tunnel associated with wrist flexion or extension. They are used primarily to help people sleep. Many health professionals suggest that, for the best results, one should wear braces at night. When possible, braces can be worn during the activity primarily causing stress on the wrists. The brace should not generally be used during the day, as wrist activity is needed to keep the wrist from becoming stiff and to prevent muscles from weakening. Corticosteroids Corticosteroid injections may provide temporary alleviation of symptoms, although they are not clearly better than placebo. This form of treatment is thought to reduce discomfort in those with CTS due to its ability to decrease median nerve swelling. The use of ultrasound while performing the injection is more expensive but leads to faster resolution of CTS symptoms. The injections are done under local anesthesia. This treatment is not appropriate for extended periods, however. In general, local steroid injections are only used until more definitive treatment options can be pursued. Corticosteroid injections do not appear to slow disease progression. Surgery Release of the transverse carpal ligament is undertaken in carpal tunnel surgery. The purpose of cutting the transverse carpal ligament is to relieve pressure on the median nerve; this is a type of nerve decompression surgery. It is recommended when there is constant (not just intermittent) numbness, muscle weakness, or atrophy, and when night-splinting or other palliative interventions no longer alleviate intermittent symptoms. The surgery may be done with local or regional anesthesia, with or without sedation, or under general anesthesia. In general, milder cases can be controlled without surgery for months to years, but severe cases are unrelenting symptomatically and are likely to result in surgical treatment. Physical and occupational therapy There are many different techniques used in manual therapy for patients with CTS. Some examples are manual and instrumental soft tissue mobilizations, massage therapy, bone mobilizations or manipulations, and neurodynamic techniques, focused on the skeletal system or soft tissue. A randomized controlled trial published in 2017 sought to examine the efficacy of manual therapy techniques for the treatment of carpal tunnel syndrome. The study included a total of 140 individuals diagnosed with carpal tunnel syndrome, who were divided into two groups. One group received treatment that consisted of manual therapy, incorporating specified neurodynamic techniques, functional massage, and carpal bone mobilizations. In cases of epineural tethering in the upper extremity, manual therapy can reduce this dysfunction and can have a positive impact on the gliding of the nerves through the carpal tunnel while moving the elbow, fingers, or wrist. The other group received treatment only through electrophysical modalities.
Both groups completed 20 physical therapy sessions. Results of this study showed that the group treated with manual techniques and mobilizations reported a 290% reduction in overall pain when baseline was compared with pain levels after treatment, with total function improving by 47%. Conversely, the group treated with electrophysical modalities reported a 47% reduction in overall pain with a 9% increase in function. Self-myofascial ligament stretching has been suggested as an effective technique, although a meta-analysis found that this kind of therapy does not show significant improvement in symptoms or function. Tendon and nerve gliding exercises appear to be useful in carpal tunnel syndrome. Alternative medicine A 2018 Cochrane review on acupuncture and related interventions for the treatment of carpal tunnel syndrome concluded that, "Acupuncture and laser acupuncture may have little or no effect in the short term on symptoms of carpal tunnel syndrome (CTS) in comparison with placebo or sham acupuncture." It was also noted that all studies had an unclear or high overall risk of bias and that all evidence was of low or very low quality. Prognosis The natural history of untreated CTS seems to be gradual worsening of the neuropathy. It is difficult to prove that this is always the case, but the supportive evidence is compelling. Atrophy of the thenar muscles, weakness of palmar abduction, and loss of sensibility (constant numbness as opposed to intermittent paresthesia) are signs of advanced neuropathy. Advanced neuropathy is often permanent. The nerve can continue to recover for more than two years after surgery, but the recovery may be incomplete. Paresthesia may increase after release of advanced carpal tunnel syndrome, and people may feel worse than they did prior to surgery for many months. Troublesome recovery seems related to symptoms of anxiety or depression and unhelpful thoughts about symptoms (such as worst-case or catastrophic thinking), as well as to advanced, potentially permanent, neuropathy. Recurrence of carpal tunnel syndrome after successful surgery is rare. Caution is warranted in considering additional surgery for people dissatisfied with the result of carpal tunnel release, as perceived recurrence may more often be due to renewed awareness of persistent symptoms than to worsening pathology. History CTS was first described around 1850, but was infrequently diagnosed until findings were publicized by neurologist W. Russell Brain in 1947. People were often diagnosed with acroparesthesia. Clinicians would often ascribe it to "poor circulation" and not pursue it further. Sir James Paget described median nerve compression at the carpal tunnel in two patients after trauma in 1854. The first was due to an injury where a cord had been wrapped around a man's wrist. The second was related to a distal radial fracture. For the first case Paget performed an amputation of the hand. For the second case Paget recommended a wrist splint. The first to notice the association between carpal ligament pathology and median nerve compression appear to have been Pierre Marie and Charles Foix in 1913. They described the results of a postmortem of an 80-year-old man with bilateral carpal tunnel syndrome. They suggested that division of the carpal ligament would be curative in such cases. Putnam had previously described a series of 37 patients and suggested a vasomotor origin.
The association between thenar muscle atrophy and compression was noted in 1914. The name "carpal tunnel syndrome" appears to have been coined by Moersch in 1938. Physician George S. Phalen of the Cleveland Clinic drew attention to the pathology of compression as the reason for CTS after working with a group of patients in the 1950s and 1960s. Treatment In 1933 Sir James Learmonth outlined a method of decompression of the nerve at the wrist. This procedure appears to have been pioneered by the Canadian surgeons Herbert Galloway and Andrew MacKinnon in Winnipeg in 1924, but their work was not published. Endoscopic release was described in 1988.
Biology and health sciences
Types
Health
56465
https://en.wikipedia.org/wiki/Cassava
Cassava
Manihot esculenta, commonly called cassava, manioc, or yuca (among numerous regional names), is a woody shrub of the spurge family, Euphorbiaceae, native to South America, from Brazil, Paraguay and parts of the Andes. Although a perennial plant, cassava is extensively cultivated in tropical and subtropical regions as an annual crop for its edible starchy tuberous root. Cassava is predominantly consumed in boiled form, but substantial quantities are processed to extract cassava starch, called tapioca, which is used for food, animal feed, and industrial purposes. The Brazilian , and the related garri of West Africa, is an edible coarse flour obtained by grating cassava roots, pressing moisture off the obtained grated pulp, and finally drying it (and roasting in the case of both and garri). Cassava is the third-largest source of carbohydrates in food in the tropics, after rice and maize, making it an important staple; more than 500 million people depend on it. It offers the advantage of being exceptionally drought-tolerant, and able to grow productively on poor soil. The largest producer is Nigeria, while Thailand is the largest exporter of cassava starch. Cassava is grown in sweet and bitter varieties; both contain toxins, but the bitter varieties have them in much larger amounts. Cassava has to be prepared carefully for consumption, as improperly prepared material can contain sufficient cyanide to cause poisoning. The more toxic varieties of cassava have been used in some places as famine food during times of food insecurity. Farmers may, however, choose bitter cultivars to minimize crop losses. Etymology The generic name Manihot and the common name "manioc" both derive from the Guarani (Tupi) name mandioca or manioca for the plant. The specific name esculenta is Latin for 'edible'. The common name "cassava" is a 16th century word from the French or Portuguese cassave, in turn from Taíno caçabi. The common name "yuca" or "yucca" is most likely also from Taíno, via Spanish yuca or juca. Description The harvested part of a cassava plant is the storage root. This is long and tapered, with an easily detached rough brown rind. The white or yellowish flesh is firm and even in texture. Commercial cultivars can be wide at the top, and some long, with a woody vascular bundle running down the middle. The tuberous roots are largely starch, with small amounts of calcium (16 milligrams per 100 grams), phosphorus (27 mg/100 g), and vitamin C (20.6 mg/100 g). Cassava roots contain little protein, whereas the leaves are rich in protein, though low in methionine, an essential amino acid. Genome The complete and haplotype-resolved African cassava (TME204) genome has been reconstructed and made available using Hi-C technology. The genome shows abundant novel gene loci with enriched functionality related to chromatin organization, meristem development, and cell responses. Differentially expressed transcripts of different haplotype origins were enriched for different functionality during tissue development. In each tissue, 20–30% of transcripts showed allele-specific expression differences, with <2% showing direction-shifting. Despite high gene synteny, the HiFi genome assembly revealed extensive chromosome rearrangements and abundant intra-genomic and inter-genomic divergent sequences, with significant structural variations mostly related to long terminal repeat retrotransposons. Although smallholders are otherwise economically inefficient producers, they are vital to productivity at particular times.
Small cassava farmers are no exception. Genetic diversity is vital when productivity has declined due to pests and diseases, and smallholders tend to retain less productive but more diverse gene pools. The molecular genetics of starchy root development in cassava have been analyzed and compared to other root and tuber crops, including possible (unproven) roles for (FT) orthologs. History Wild populations of M. esculenta subspecies flabellifolia, shown to be the progenitor of domesticated cassava, are centered in west-central Brazil, where it was likely first domesticated no more than 10,000 years ago. Forms of the modern domesticated species can also be found growing in the wild in the south of Brazil. By 4600 BC, cassava pollen appears in the Gulf of Mexico lowlands, at the San Andrés archaeological site. The oldest direct evidence of cassava cultivation comes from a 1,400-year-old Maya site, Joya de Cerén, in El Salvador. It became a staple food of the native populations of northern South America, southern Mesoamerica, and the Taino people in the Caribbean islands, who grew it using a high-yielding form of shifting agriculture by the time of European contact in 1492. Cassava was a staple food of pre-Columbian peoples in the Americas and is often portrayed in indigenous art. The Moche people often depicted cassava in their ceramics. Spaniards in their early occupation of the Caribbean islands did not want to eat cassava or maize, which they considered insubstantial, dangerous, and not nutritious. They much preferred foods from Spain, specifically wheat bread, olive oil, red wine, and meat, and considered maize and cassava damaging to Europeans. The cultivation and consumption of cassava were nonetheless continued in both Portuguese and Spanish America. Mass production of cassava bread became the first Cuban industry established by the Spanish. Ships departing to Europe from Cuban ports such as Havana, Santiago, Bayamo, and Baracoa carried goods to Spain, but sailors needed to be provisioned for the voyage. The Spanish also needed to replenish their boats with dried meat, water, fruit, and large amounts of cassava bread. Sailors complained that it caused them digestive problems. Portuguese traders introduced cassava to Africa from Brazil in the 16th century. Around the same period, it was introduced to Asia through the Columbian Exchange by Portuguese and Spanish traders, who planted it in their colonies in Goa, Malacca, Eastern Indonesia, Timor and the Philippines. Cassava has also become an important crop in Asia. While it is a valued food staple in parts of eastern Indonesia, it is primarily cultivated for starch extraction and bio-fuel production in Thailand, Cambodia and Vietnam. Cassava is sometimes described as the "bread of the tropics" but should not be confused with the tropical and equatorial bread tree (Encephalartos), the breadfruit (Artocarpus altilis) or the African breadfruit (Treculia africana). This description certainly holds in Africa and parts of South America; in Asian countries such as Vietnam, fresh cassava barely features in human diets. Cassava was introduced to East Africa around 1850 by Arab and European settlers, who promoted its cultivation as a reliable crop to mitigate the effects of drought and famine. There is a legend that cassava was introduced in 1880–1885 to the South Indian state of Kerala by the King of Travancore, Vishakham Thirunal Maharaja, after a great famine hit the kingdom, as a substitute for rice.
However, cassava was cultivated in the state before that time. Cassava is called kappa or maricheeni in Malayalam, and tapioca in Indian English usage. Cultivation Optimal conditions for cassava cultivation are mean annual temperatures between , annual precipitation between , and an annual growth period of no less than 240 days. Cassava is propagated by cutting the stem into sections of approximately , these being planted prior to the wet season. Cassava growth is favorable under temperatures ranging from , but it can tolerate temperatures as low as and as high as . These conditions are found, among other places, in the northern part of the Gulf Coastal Plain in Mexico. In this part of Mexico the following soil types have been shown to be good for cassava cultivation: phaeozem, regosol, arenosol, andosol and luvisol. Harvesting Before harvest, the leafy stems are removed. The harvest is gathered by pulling up the base of the stem and cutting off the tuberous roots. Handling and storage Cassava deteriorates after harvest, when the tuberous roots are first cut. The healing mechanism produces coumaric acid, which oxidizes and blackens the roots, making them inedible after a few days. This deterioration is related to the accumulation of reactive oxygen species (ROS) initiated by cyanide release during mechanical harvesting. Cassava shelf life may be increased by up to three weeks by overexpressing a cyanide-insensitive alternative oxidase, which suppresses ROS accumulation 10-fold. Post-harvest deterioration is a major obstacle to the export of cassava. Fresh cassava can be preserved like potatoes, using thiabendazole or bleach as a fungicide, then wrapping in plastic, freezing, or applying a wax coating. While alternative methods for controlling post-harvest deterioration have been proposed, such as preventing reactive oxygen species effects by using plastic bags during storage and transport, coating the roots with wax, or freezing roots, such strategies have proved to be economically or technically impractical, leading to breeding of cassava varieties with improved durability after harvest, achieved by different mechanisms. One approach used gamma rays to try to silence a gene involved in triggering deterioration; another strategy selected for plentiful carotenoids, antioxidants which may help to reduce oxidization after harvest. Pests and diseases Cassava is subject to pests from multiple taxonomic groups, including nematodes and insects, as well as diseases caused by viruses, bacteria, and fungi. All cause reductions in yield, and some cause serious losses of crops. Viruses Several viruses cause enough damage to cassava crops to be of economic importance. The African cassava mosaic virus causes the leaves of the cassava plant to wither, limiting the growth of the root. An outbreak of the virus in Africa in the 1920s led to a major famine. The virus is spread by the whitefly and by the transplanting of diseased plants into new fields. Sometime in the late 1980s, a mutation occurred in Uganda that made the virus even more harmful, causing the complete loss of leaves. This mutated virus spread at a rate of per year, and as of 2005 was found throughout Uganda, Rwanda, Burundi, the Democratic Republic of the Congo and the Republic of the Congo. Viruses are a severe production limitation in the tropics. They are the primary reason for the complete lack of yield increases in the 25 years. Cassava brown streak virus disease is a major threat to cultivation worldwide.
Cassava mosaic virus (CMV) is widespread in Africa, causing cassava mosaic disease (CMD). Bredeson et al. (2016) found that the M. esculenta cultivars most widely used on that continent carry M. carthaginensis subsp. glaziovii genes, some of which appear to be CMD resistance genes. Although the ongoing CMD pandemic affects both East and Central Africa, Legg et al. found that these two areas have two distinct subpopulations of the vector, Bemisia tabaci whiteflies. Genetically engineered cassava offers opportunities for the improvement of virus resistance, including CMV and CBSD resistance. Bacteria Among the most serious bacterial pests is Xanthomonas axonopodis pv. manihotis, which causes bacterial blight of cassava. This disease originated in South America and has followed cassava around the world. Bacterial blight has been responsible for near-catastrophic losses and famine in past decades, and its mitigation requires active management practices. Several other bacteria attack cassava, including the related Xanthomonas campestris pv. cassavae, which causes bacterial angular leaf spot. Fungi Several fungi bring about significant crop losses, one of the most serious being cassava root rot; the pathogens involved are species of Phytophthora, the genus that also causes potato blight. Cassava root rot can result in losses of as much as 80 percent of the crop. A major pest is a rust caused by Uromyces manihotis. Superelongation disease, caused by Elsinoë brasiliensis, can cause losses of over 80 percent of young cassava in Latin America and the Caribbean when temperature and rainfall are high. Nematodes Nematode pests of cassava are thought to cause harms ranging from negligible to seriously damaging, making the choice of management methods difficult. A wide range of plant parasitic nematodes have been reported in association with cassava worldwide. These include Pratylenchus brachyurus, Rotylenchulus reniformis, Helicotylenchus spp., Scutellonema spp. and Meloidogyne spp., of which Meloidogyne incognita and Meloidogyne javanica are the most widely reported and economically important. Meloidogyne spp. feeding produces physically damaging galls with eggs inside them. Galls later merge as the females grow and enlarge, and they interfere with water and nutrient supply. Cassava roots become tough with age and restrict the movement of the juveniles and the release of eggs. It is therefore possible that extensive galling can be observed even at low infection densities. Other pests and diseases can gain entry through the physical damage caused by gall formation, leading to rots. Nematodes have not been shown to cause direct damage to the enlarged tuberous roots, but plant height can be reduced if the root system is reduced. Nematicides reduce the number of galls per feeder root and the number of rots in the tuberous roots. The organophosphorus nematicide fenamiphos does not harm crop growth or harvest yield. Nematicide use in cassava does not increase harvested yield significantly, but lower infestation at harvest and lower subsequent storage loss provide a higher effective yield. The use of tolerant and resistant cultivars is the most practical management method in most locales.
Insects Insects such as stem borers and other beetles, moths including Chilomima clarkei, scale insects, fruit flies, shootflies, burrower bugs, grasshoppers, leafhoppers, gall midges, leafcutter ants, and termites contribute to losses of cassava in the field, while others contribute to serious losses, between 19% and 30%, of dried cassava in storage. In Africa, previous issues were the cassava mealybug (Phenacoccus manihoti) and the cassava green mite (Mononychellus tanajoa). These pests can cause up to 80 percent crop loss, which is extremely detrimental to the production of subsistence farmers. These pests were rampant in the 1970s and 1980s but were brought under control following the establishment of the Biological Control Centre for Africa of the International Institute of Tropical Agriculture (IITA) under the leadership of Hans Rudolf Herren. The Centre investigated biological control for cassava pests; two South American natural enemies, Anagyrus lopezi (a parasitoid wasp) and Typhlodromalus aripo (a predatory mite), were found to effectively control the cassava mealybug and the cassava green mite, respectively. Production In 2022, world production of cassava root was 330 million tonnes, led by Nigeria with 18% of the total. Other major growers were the Democratic Republic of the Congo and Thailand. Cassava is the third-largest source of carbohydrates in food in the tropics, after rice and maize. Cassava grows well within 30° of the equator, where it can be produced at up to above sea level, and with of rain per year. These environmental tolerances suit it to conditions across much of South America and Africa. Cassava yields a large amount of food energy per unit area of land per day – , as compared with for rice, for wheat and for maize. Cassava, yams (Dioscorea spp.), and sweet potatoes (Ipomoea batatas) are important sources of food in the tropics. The cassava plant gives the third-highest yield of carbohydrates per cultivated area among crop plants, after sugarcane and sugar beets. Cassava plays a particularly important role in agriculture in developing countries, especially in sub-Saharan Africa, because it does well on poor soils and with low rainfall, and because it is a perennial that can be harvested as required. Its wide harvesting window allows it to act as a famine reserve and is invaluable in managing labor schedules. It offers flexibility to resource-poor farmers because it serves as either a subsistence or a cash crop. Worldwide, 800 million people depend on cassava as their primary food staple. Toxicity Cassava roots, peels and leaves are dangerous to eat raw because they contain linamarin and lotaustralin, which are toxic cyanogenic glycosides. These are decomposed by the cassava enzyme linamarase, releasing poisonous hydrogen cyanide. Cassava varieties are often categorized as either bitter (high in cyanogenic glycosides) or sweet (low in those bitter compounds). Sweet cultivars can contain as little as 20 milligrams of cyanide per kilogram of fresh roots, whereas bitter cultivars may contain as much as 1000 milligrams per kilogram. Cassavas grown during drought are especially high in these toxins. A dose of 25 mg of pure cassava cyanogenic glucoside, which contains 2.5 mg of cyanide, is sufficient to kill a rat.
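To put those figures in perspective, here is a rough sketch (illustrative only; the per-kilogram concentrations and the rat-lethal dose are the values quoted above, the names are invented, and lethal doses for humans scale differently):

```python
# Rough sketch relating the cyanide figures quoted above: sweet cultivars
# as little as 20 mg cyanide per kg of fresh roots, bitter cultivars up to
# 1000 mg/kg, and 2.5 mg of cyanide as a rat-lethal dose.

CYANIDE_MG_PER_KG = {"sweet (low end)": 20, "bitter (high end)": 1000}
RAT_LETHAL_CYANIDE_MG = 2.5

def cyanide_in_raw_root_mg(root_kg: float, cultivar: str) -> float:
    # Cyanide (mg) in a given mass of unprocessed root, per the quoted figures.
    return root_kg * CYANIDE_MG_PER_KG[cultivar]

for cultivar in CYANIDE_MG_PER_KG:
    mg = cyanide_in_raw_root_mg(0.5, cultivar)  # half a kilogram of raw root
    print(f"{cultivar}: {mg:.0f} mg cyanide, "
          f"about {mg / RAT_LETHAL_CYANIDE_MG:.0f}x the rat-lethal dose")
```

The sketch underscores why the bitter varieties in particular demand the careful processing methods described below.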
Excess cyanide residue from improper preparation causes goiters and acute cyanide poisoning, and is linked to konzo, a neurological disorder affecting the ability to walk. It has also been linked to tropical fibrocalcific pancreatitis in humans, leading to chronic pancreatitis. Symptoms of acute cyanide intoxication appear four or more hours after ingesting raw or poorly processed cassava: vertigo, vomiting, and collapse. It can be treated easily with an injection of thiosulfate (which makes sulfur available for the patient's body to detoxify the poisonous cyanide by converting it into thiocyanate). Chronic, low-level exposure to cyanide may contribute to both goiter and neurological disease, including tropical ataxic neuropathy and konzo, which can be fatal. The risk is highest in famines, when as many as 3 percent of the population may be affected. Like many other root and tuber crops, both bitter and sweet varieties of cassava contain antinutritional factors and toxins, with the bitter varieties containing much larger amounts. The more toxic varieties of cassava have been used in some places as famine food during times of food insecurity. For example, during the shortages in Venezuela in the late 2010s, dozens of deaths were reported due to Venezuelans resorting to eating bitter cassava in order to curb starvation. Cases of cassava poisoning were also documented during the famine accompanying the Great Leap Forward (1958–1962) in China. Farmers may select bitter cultivars to reduce crop losses. Societies that traditionally eat cassava generally understand that processing (soaking, cooking, fermentation, etc.) is necessary to avoid getting sick. Brief soaking (four hours) of cassava is not sufficient, but soaking for 18–24 hours can remove up to half the level of cyanide. Drying may not be sufficient, either. For some smaller-rooted, sweet varieties, cooking is sufficient to eliminate all toxicity. The cyanide is carried away in the processing water, and the amounts produced in domestic consumption are too small to have environmental impact. The larger-rooted, bitter varieties used for production of flour or starch must be processed to remove the cyanogenic glucosides. The large roots are peeled and then ground into flour, which is then soaked in water, squeezed dry several times, and toasted. The starch grains that flow with the water during the soaking process are also used in cooking. The flour is used throughout South America and the Caribbean. Industrial production of cassava flour, even at the cottage level, may generate enough cyanide and cyanogenic glycosides in the effluents to have a severe environmental impact. Uses Food and drink There are many ways of cooking cassava. It has to be prepared correctly to remove its toxicity. The root of the sweet variety is mild to the taste, like potatoes; Jewish households sometimes use it in cholent. It can be made into a flour that is used in breads, cakes and cookies. In Brazil, farofa, a dry meal made from cooked powdered cassava, is roasted in butter and eaten as a side dish or sprinkled on other food. In Taiwanese cuisine, which later spread to the United States, cassava "juices" are dried to a fine powder and used to make tapioca, a popular starch used to make the chewy "bubbles" that top bubble tea.
Alcoholic beverages made from cassava include cauim (Brazil), kasiri (Venezuela, Guyana, Suriname), parakari or kari (Venezuela, Guyana, Suriname), and nihamanchi (South America). Preparation of bitter cassava An ancestral method used by the indigenous people of the Caribbean to detoxify cassava is by peeling, grinding, and mashing; filtering the mash through a basket tube (sebucan or tipiti) to remove the hydrogen cyanide; and drying and sieving the mash for flour. The poisonous filtrate water was boiled to release the hydrogen cyanide, and used as a base for stews. A safe processing method known as the "wetting method" is to mix the cassava flour with water into a thick paste, spread it in a thin layer over a basket and then let it stand for five hours at 30 °C in the shade. In that time, about 83% of the cyanogenic glycosides are broken down by linamarase; the resulting hydrogen cyanide escapes to the atmosphere, making the flour safe for consumption the same evening. The traditional method used in West Africa is to peel the roots and put them into water for three days to ferment. The roots are then dried or cooked. In Nigeria and several other West African countries, including Ghana, Cameroon, Benin, Togo, Ivory Coast, and Burkina Faso, they are usually grated and lightly fried in palm oil to preserve them. The result is a foodstuff called garri. Fermentation is also used in other places, such as Indonesia, where fermented cassava is known as tapai. The fermentation process also reduces the level of antinutrients, making the cassava a more nutritious food. The reliance on cassava as a food source and the resulting exposure to the goitrogenic effects of thiocyanate have been responsible for the endemic goiters seen in the Akoko area of southwestern Nigeria. Bioengineering has been applied to grow cassava with lower cyanogenic glycosides, combined with fortification with vitamin A, iron and protein, to improve the nutrition of people in sub-Saharan Africa. In Guyana the traditional cassareep is made from bitter cassava juice. The juice is boiled until it is reduced by half in volume, to the consistency of molasses, and flavored with spices including cloves, cinnamon, salt, sugar, and cayenne pepper. Traditionally, cassareep was boiled in a soft pot, the actual "pepper pot", which would absorb the flavors and also impart them (even if dry) to foods such as rice and chicken cooked in it. The poisonous but volatile hydrogen cyanide is evaporated by heating. Nevertheless, improperly cooked cassava has been blamed for a number of deaths. Amerindians from Guyana reportedly made an antidote by steeping chili peppers in rum. The natives of Guyana traditionally brought the product to town in bottles, and it is available on the US market in bottled form. Nutrition Raw cassava is 60% water, 38% carbohydrates, 1% protein, and has negligible fat. In a reference serving, raw cassava provides of food energy and 23% of the Daily Value (DV) of vitamin C, but otherwise has no micronutrients in significant content (i.e., above 10% of the relevant DV). Biofuel Cassava has been studied as a feedstock to produce ethanol as a biofuel, including efforts to improve the efficiency of conversion from cassava flour and to convert crop residues such as stems and leaves as well as the more easily processed roots. China has created facilities to produce substantial amounts of ethanol fuel from cassava roots. Animal feed Cassava roots and hay are used worldwide as animal feed.
Young cassava hay is harvested at three to four months of age, when it reaches about above ground; it is dried in the sun until its dry matter content approaches 85 percent. The hay contains 20–27 percent protein and 1.5–4 percent tannin. It is valued as a source of roughage for ruminants such as cattle. Laundry starch Cassava is used in laundry products, especially as starch to stiffen shirts and other garments. Folklore In Java, a myth relates that food derives from the body of Dewi Teknowati, who killed herself rather than accept the advances of the god Batara Guru. She was buried, and her lower leg grew into a cassava plant. In Trinidad, folk stories tell of a saapina or snake-woman; the word is related to sabada, meaning 'to pound', referring to what is traditionally a woman's work of pounding cassava. The identity of the Macushi people of Guyana is closely bound up with the growth and processing of cassava in their slash-and-burn subsistence lifestyle. A story tells that the great spirit Makunaima climbed a tree, cutting off pieces with his axe; when they landed on the ground, each piece became a type of animal. The opossum brought the people to the tree, where they found all the types of food, including bitter cassava. A bird told the people how to prepare the cassava safely.
Biology and health sciences
Malpighiales
null
56483
https://en.wikipedia.org/wiki/Tourette%20syndrome
Tourette syndrome
Tourette syndrome or Tourette's syndrome (abbreviated as TS or Tourette's) is a common neurodevelopmental disorder that begins in childhood or adolescence. It is characterized by multiple movement (motor) tics and at least one vocal (phonic) tic. Common tics are blinking, coughing, throat clearing, sniffing, and facial movements. These are typically preceded by an unwanted urge or sensation in the affected muscles known as a premonitory urge, can sometimes be suppressed temporarily, and characteristically change in location, strength, and frequency. Tourette's is at the more severe end of a spectrum of tic disorders. The tics often go unnoticed by casual observers. Tourette's was once regarded as a rare and bizarre syndrome and has popularly been associated with coprolalia (the utterance of obscene words or socially inappropriate and derogatory remarks). It is no longer considered rare; about 1% of school-age children and adolescents are estimated to have Tourette's, though coprolalia occurs only in a minority. There are no specific tests for diagnosing Tourette's; it is not always correctly identified, because most cases are mild, and the severity of tics decreases for most children as they pass through adolescence. Therefore, many go undiagnosed or may never seek medical attention. Extreme Tourette's in adulthood, though sensationalized in the media, is rare, but for a small minority, severely debilitating tics can persist into adulthood. Tourette's does not affect intelligence or life expectancy. There is no cure for Tourette's and no single most effective medication. In most cases, medication for tics is not necessary, and behavioral therapies are the first-line treatment. Education is an important part of any treatment plan, and explanation alone often provides sufficient reassurance that no other treatment is necessary. Other conditions, such as attention deficit hyperactivity disorder (ADHD) and obsessive–compulsive disorder (OCD), are more likely to be present among those who are referred to specialty clinics than they are among the broader population of persons with Tourette's. These co-occurring conditions often cause more impairment to the individual than the tics; hence it is important to correctly distinguish co-occurring conditions and treat them. Tourette syndrome was named by French neurologist Jean-Martin Charcot for his intern, Georges Gilles de la Tourette, who published in 1885 an account of nine patients with a "convulsive tic disorder". While the exact cause is unknown, it is believed to involve a combination of genetic and environmental factors. The mechanism appears to involve dysfunction in neural circuits between the basal ganglia and related structures in the brain. Classification Most published research on Tourette syndrome originates in the United States; in international TS research and clinical practice, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred over the World Health Organization (WHO) classification, which is criticized in the 2021 European Clinical Guidelines. In the fifth version of the DSM (DSM-5), published in 2013, Tourette syndrome is classified as a motor disorder (a disorder of the nervous system that causes abnormal and involuntary movements). It is listed in the neurodevelopmental disorder category. Tourette's is at the more severe end of the spectrum of tic disorders; its diagnosis requires multiple motor tics and at least one vocal tic to be present for more than a year. 
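The tic-count and duration criteria named here, together with the related tic-disorder categories defined in the next paragraph, can be summarized as a simple decision rule. The sketch below is illustrative only, not a diagnostic tool; the function name is invented, and it omits the DSM's other requirements, such as onset before age 18 and exclusion of other causes:

```python
# Sketch of the DSM-5 tic-count/duration distinctions described in this
# section. Illustrative only; real diagnosis involves further criteria.

def tic_disorder_category(motor_tics: int, vocal_tics: int,
                          duration_months: float) -> str:
    if motor_tics == 0 and vocal_tics == 0:
        return "no tic disorder"
    if duration_months < 12:
        return "provisional tic disorder (tics present less than one year)"
    if motor_tics >= 2 and vocal_tics >= 1:
        return "Tourette syndrome (multiple motor tics, at least one vocal tic)"
    if (motor_tics >= 1) != (vocal_tics >= 1):
        return "persistent (chronic) motor or vocal tic disorder (one type only)"
    return "other/unspecified tic disorder"

print(tic_disorder_category(motor_tics=3, vocal_tics=1, duration_months=18))
# Tourette syndrome (multiple motor tics, at least one vocal tic)
```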
Tics are sudden, repetitive, nonrhythmic movements that involve discrete muscle groups, while vocal (phonic) tics involve laryngeal, pharyngeal, oral, nasal or respiratory muscles to produce sounds. The tics must not be explained by other medical conditions or substance use. Other tic disorders include persistent (chronic) motor or vocal tics, in which one type of tic (motor or vocal, but not both) has been present for more than a year; and provisional tic disorder, in which motor or vocal tics have been present for less than one year. The fifth edition of the DSM replaced what had been called transient tic disorder with provisional tic disorder, recognizing that "transient" can only be defined in retrospect. Some experts believe that TS and persistent (chronic) motor or vocal tic disorder should be considered the same condition, because vocal tics are also motor tics in the sense that they are muscular contractions of nasal or respiratory muscles. Tourette syndrome is defined only slightly differently by the WHO; in its ICD-11, the International Statistical Classification of Diseases and Related Health Problems, Tourette syndrome is classified as a disease of the nervous system and a neurodevelopmental disorder, and only one motor tic and one or more vocal tics are required for diagnosis. Older versions of the ICD called it "combined vocal and multiple motor tic disorder [de la Tourette]". Genetic studies indicate that tic disorders cover a spectrum that is not recognized by the clear-cut distinctions in the current diagnostic framework. Since 2008, studies have suggested that Tourette's is not a unitary condition with a distinct mechanism, as described in the existing classification systems. Instead, the studies suggest that subtypes should be recognized to distinguish "pure TS" from TS that is accompanied by attention deficit hyperactivity disorder (ADHD), obsessive–compulsive disorder (OCD) or other disorders, similar to the way that subtypes have been established for other conditions, such as type 1 and type 2 diabetes. Elucidation of these subtypes awaits fuller understanding of the genetic and other causes of tic disorders. Characteristics Tics Tics are movements or sounds that take place "intermittently and unpredictably out of a background of normal motor activity", having the appearance of "normal behaviors gone wrong". The tics associated with Tourette's wax and wane; they change in number, frequency, severity, anatomical location, and complexity; each person experiences a unique pattern of fluctuation in their severity and frequency. Tics may also occur in "bouts of bouts", which also vary among people. The variation in tic severity may occur over hours, days, or weeks. Tics may increase when someone is experiencing stress, fatigue, anxiety, or illness, or when engaged in relaxing activities like watching TV. They sometimes decrease when an individual is engrossed in or focused on an activity like playing a musical instrument. In contrast to the abnormal movements associated with other movement disorders, the tics of Tourette's are nonrhythmic, often preceded by an unwanted urge, and temporarily suppressible. Over time, about 90% of individuals with Tourette's feel an urge preceding the tic, similar to the urge to sneeze or scratch an itch. The urges and sensations that precede the expression of a tic are referred to as premonitory sensory phenomena or premonitory urges. 
People describe the urge to express the tic as a buildup of tension, pressure, or energy which they ultimately choose consciously to release, as if they "had to do it" to relieve the sensation or until it feels "just right". The urge may cause a distressing sensation in the part of the body associated with the resulting tic; the tic is a response that relieves the urge in the anatomical location of the tic. Examples of this urge are the feeling of having something in one's throat, leading to a tic to clear one's throat, or a localized discomfort in the shoulders leading to shrugging the shoulders. The actual tic may be felt as relieving this tension or sensation, similar to scratching an itch or blinking to relieve an uncomfortable feeling in the eye. Some people with Tourette's may not be aware of the premonitory urge associated with tics. Children may be less aware of it than are adults, but their awareness tends to increase with maturity; by the age of ten, most children recognize the premonitory urge. Premonitory urges which precede the tic make suppression of the impending tic possible. Because of the urges that precede them, tics are described as semi-voluntary or "unvoluntary", rather than specifically involuntary; they may be experienced as a voluntary, suppressible response to the unwanted premonitory urge. The ability to suppress tics varies among individuals, and may be more developed in adults than children. People with tics are sometimes able to suppress them for limited periods of time, but doing so often results in tension or mental exhaustion. People with Tourette's may seek a secluded spot to release the suppressed urge, or there may be a marked increase in tics after a period of suppression at school or work. Children may suppress tics while in the doctor's office, so they may need to be observed when not aware of being watched. Complex tics related to speech include coprolalia, echolalia and palilalia. Coprolalia is the spontaneous utterance of socially objectionable or taboo words or phrases. Although it is the most publicized symptom of Tourette's, only about 10% of people with Tourette's exhibit it, and it is not required for a diagnosis. Echolalia (repeating the words of others) and palilalia (repeating one's own words) occur in a minority of cases. Complex motor tics include copropraxia (obscene or forbidden gestures, or inappropriate touching), echopraxia (repetition or imitation of another person's actions) and palipraxia (repeating one's own movements). Onset and progression There is no typical case of Tourette syndrome, but the age of onset and the severity of symptoms follow a fairly reliable course. Although onset may occur anytime before eighteen years, the typical age of onset of tics is from five to seven, and is usually before adolescence. A 1998 study from the Yale Child Study Center showed that tic severity increased with age until it reached its highest point between ages eight and twelve. Severity declines steadily for most children as they pass through adolescence, when half to two-thirds of children see a dramatic decrease in tics. In people with TS, the first tics to appear usually affect the head, face, and shoulders, and include blinking, facial movements, sniffing and throat clearing. Vocal tics often appear months or years after motor tics but can appear first. Among people who experience more severe tics, complex tics may develop, including "arm straightening, touching, tapping, jumping, hopping and twirling". 
There are different movements in contrasting disorders (for example, the autism spectrum disorders), such as self-stimulation and stereotypies. The severity of symptoms varies widely among people with Tourette's, and many cases may be undetected (Hollis C, Pennant M, Cuenca J, et al. (January 2016). "Clinical effectiveness and patient perspectives of different treatment strategies for tics in children and adolescents with Tourette syndrome: a systematic review and qualitative analysis". Health Technology Assessment. Southampton (UK): NIHR Journals Library. 20 (4): 1–450). Most cases are mild and almost unnoticeable; many people with TS may not realize they have tics. Because tics are more commonly expressed in private, Tourette syndrome may go unrecognized, and casual observers might not notice tics. Most studies of TS involve males, who have a higher prevalence of TS than females, and gender-based differences are not well studied; a 2021 review suggested that the characteristics and progression for females, particularly in adulthood, may differ and better studies are needed. Most adults with TS have mild symptoms and do not seek medical attention. While tics subside for the majority after adolescence, some of the "most severe and debilitating forms of tic disorder are encountered" in adults. In some cases, what appear to be adult-onset tics can be childhood tics re-surfacing. Co-occurring conditions Because people with milder symptoms are unlikely to be referred to specialty clinics, studies of Tourette's have an inherent bias towards more severe cases.
Biology and health sciences
Mental disorders
Health
56484
https://en.wikipedia.org/wiki/Low-pass%20filter
Low-pass filter
A low-pass filter is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. In optics, high-pass and low-pass may have different meanings, depending on whether referring to the frequency or wavelength of light, since these variables are inversely related. High-pass frequency filters would act as low-pass wavelength filters, and vice versa. For this reason, it is a good practice to refer to wavelength filters as short-pass and long-pass to avoid confusion, which would correspond to high-pass and low-pass frequencies. Low-pass filters exist in many different forms, including electronic circuits such as a hiss filter used in audio, anti-aliasing filters for conditioning signals before analog-to-digital conversion, digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The moving average operation used in fields such as finance is a particular kind of low-pass filter and can be analyzed with the same signal processing techniques as are used for other low-pass filters. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter. That is a filter with unity bandwidth and impedance. The desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform (that is, low-pass, high-pass, band-pass or band-stop). Examples Examples of low-pass filters occur in acoustics, optics and electronics. A stiff physical barrier tends to reflect higher sound frequencies, acting as an acoustic low-pass filter for transmitting sound. When music is playing in another room, the low notes are easily heard, while the high notes are attenuated. An optical filter with the same function can correctly be called a low-pass filter, but conventionally is called a longpass filter (low frequency is long wavelength), to avoid confusion. In an electronic low-pass RC filter for voltage signals, high frequencies in the input signal are attenuated, but the filter has little attenuation below the cutoff frequency determined by its RC time constant. For current signals, a similar circuit, using a resistor and capacitor in parallel, works in a similar manner. (See current divider discussed in more detail below.) Electronic low-pass filters are used on inputs to subwoofers and other types of loudspeakers, to block high pitches that they cannot efficiently reproduce. Radio transmitters use low-pass filters to block harmonic emissions that might interfere with other communications. The tone knob on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another time constant low-pass filter. Telephone lines fitted with DSL splitters use low-pass filters to separate DSL from POTS signals (and high-pass vice versa), which share the same pair of wires (transmission channel). Low-pass filters also play a significant role in the sculpting of sound created by analogue and virtual analogue synthesisers. See subtractive synthesis. 
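To make the moving-average remark above concrete, the following short Python sketch (an illustrative example added here for clarity, not part of any standard implementation; the window length and the test signal are arbitrary choices) applies a simple moving average to a signal containing a slow trend plus fast ripple, keeping the trend and suppressing the ripple:

    import math

    # A simple moving average acts as a low-pass filter: it suppresses rapid
    # (high-frequency) fluctuations while preserving the slow trend.
    def moving_average(samples, window=5):
        out = []
        for i in range(len(samples)):
            start = max(0, i - window + 1)       # shrink the window near the start
            chunk = samples[start:i + 1]
            out.append(sum(chunk) / len(chunk))  # average of the most recent samples
        return out

    if __name__ == "__main__":
        # Slow trend plus fast ripple: the filter keeps the trend, attenuates the ripple.
        signal = [math.sin(0.05 * n) + 0.3 * math.sin(1.5 * n) for n in range(200)]
        smoothed = moving_average(signal, window=9)
        print(smoothed[:5])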
A low-pass filter is used as an anti-aliasing filter before sampling and for reconstruction in digital-to-analog conversion. Ideal and real filters An ideal low-pass filter completely eliminates all frequencies above the cutoff frequency while passing those below unchanged; its frequency response is a rectangular function and is a brick-wall filter. The transition region present in practical filters does not exist in an ideal filter. An ideal low-pass filter can be realized mathematically (theoretically) by multiplying a signal by the rectangular function in the frequency domain or, equivalently, convolution with its impulse response, a sinc function, in the time domain. However, the ideal filter is impossible to realize without also having signals of infinite extent in time, and so generally needs to be approximated for real ongoing signals, because the sinc function's support region extends to all past and future times. The filter would therefore need to have infinite delay, or knowledge of the infinite future and past, to perform the convolution. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or, more typically, by making the signal repetitive and using Fourier analysis. Real filters for real-time applications approximate the ideal filter by truncating and windowing the infinite impulse response to make a finite impulse response; applying that filter requires delaying the signal for a moderate period of time, allowing the computation to "see" a little bit into the future. This delay is manifested as phase shift. Greater accuracy in approximation requires a longer delay. Truncating an ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon, which can be reduced or worsened by the choice of windowing function. Design and choice of real filters involves understanding and minimizing these artifacts. For example, simple truncation of the sinc function will create severe ringing artifacts, which can be reduced using window functions that drop off more smoothly at the edges. The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations. Time response The time response of a low-pass filter is found by solving the response to the simple low-pass RC filter. Using Kirchhoff's Laws, we arrive at the differential equation vin(t) − vout(t) = RC·dvout(t)/dt. Step input response example If we let vin(t) be a step function of magnitude Vs, then the differential equation has the solution vout(t) = Vs·(1 − e^(−ω0·t)), where ω0 = 1/(RC) is the (angular) cutoff frequency of the filter. Frequency response The most common way to characterize the frequency response of a circuit is to find its Laplace transform transfer function, H(s) = Vout(s)/Vin(s). Taking the Laplace transform of our differential equation and solving for H(s) we get H(s) = Vout(s)/Vin(s) = ω0/(s + ω0). Difference equation through discrete time sampling A discrete difference equation is easily obtained by sampling the step input response above at regular intervals of n·Δt, where n = 0, 1, ... and Δt is the time between samples. Taking the difference between two consecutive samples we have vn − vn−1 = Vs·(1 − e^(−ω0·n·Δt)) − Vs·(1 − e^(−ω0·(n−1)·Δt)). Solving for vn we get the difference equation vn = β·vn−1 + (1 − β)·Vs, where β = e^(−ω0·Δt) and vn = vout(n·Δt) is the sampled value. Error analysis Comparing the reconstructed output signal from the difference equation, vn, to the step input response, vout(t), we find that there is an exact reconstruction (0% error). This is the reconstructed output for a time-invariant input. 
However, if the input is time variant, such as a sinusoid vin(t) = Vs·sin(ωt), this model approximates the input signal as a series of step functions with duration Δt, producing an error in the reconstructed output signal. The error produced from time variant inputs is difficult to quantify but decreases as Δt → 0. Discrete-time realization Many digital filters are designed to give low-pass characteristics. Both infinite impulse response and finite impulse response low-pass filters, as well as filters using Fourier transforms, are widely used. Simple infinite impulse response filter The effect of an infinite impulse response low-pass filter can be simulated on a computer by analyzing an RC filter's behavior in the time domain, and then discretizing the model. From the circuit diagram to the right, according to Kirchhoff's Laws and the definition of capacitance: vin(t) − vout(t) = R·i(t) (equation V), Qc(t) = C·vout(t) (equation Q), and i(t) = dQc(t)/dt (equation I), where Qc(t) is the charge stored in the capacitor at time t. Substituting equation Q into equation I gives i(t) = C·dvout(t)/dt, which can be substituted into equation V so that vin(t) − vout(t) = RC·dvout(t)/dt. This equation can be discretized. For simplicity, assume that samples of the input and output are taken at evenly spaced points in time separated by Δt time. Let the samples of vin be represented by the sequence (x1, x2, ..., xn), and let vout be represented by the sequence (y1, y2, ..., yn), which correspond to the same points in time. Making these substitutions, xi − yi = RC·(yi − yi−1)/Δt. Rearranging terms gives the recurrence relation yi = xi·(Δt/(RC + Δt)) + yi−1·(RC/(RC + Δt)). That is, this discrete-time implementation of a simple RC low-pass filter is the exponentially weighted moving average yi = α·xi + (1 − α)·yi−1, with smoothing factor α = Δt/(RC + Δt). By definition, the smoothing factor α is within the range 0 ≤ α ≤ 1. The expression for α yields the equivalent time constant RC in terms of the sampling period Δt and smoothing factor α: RC = Δt·(1 − α)/α. Recalling that the cutoff frequency is fc = 1/(2πRC), note that α and fc are related by α = 2πΔt·fc/(2πΔt·fc + 1) and fc = α/((1 − α)·2πΔt). If α = 0.5, then the RC time constant equals the sampling period. If α ≪ 0.5, then RC is significantly larger than the sampling interval, and Δt ≈ α·RC. The filter recurrence relation provides a way to determine the output samples in terms of the input samples and the preceding output. The following pseudocode algorithm simulates the effect of a low-pass filter on a series of digital samples:

    // Return RC low-pass filter output samples, given input samples,
    // time interval dt, and time constant RC
    function lowpass(real[1..n] x, real dt, real RC)
        var real[1..n] y
        var real α := dt / (RC + dt)
        y[1] := α * x[1]
        for i from 2 to n
            y[i] := α * x[i] + (1-α) * y[i-1]
        return y

The loop that calculates each of the n outputs can be refactored into the equivalent:

    for i from 2 to n
        y[i] := y[i-1] + α * (x[i] - y[i-1])

That is, the change from one filter output to the next is proportional to the difference between the previous output and the next input. This exponential smoothing property matches the exponential decay seen in the continuous-time system. As expected, as the time constant RC increases, the discrete-time smoothing parameter α decreases, and the output samples respond more slowly to a change in the input samples; the system has more inertia. This filter is an infinite-impulse-response (IIR) single-pole low-pass filter. Finite impulse response Finite-impulse-response filters can be built that approximate the sinc function time-domain response of an ideal sharp-cutoff low-pass filter. For minimum distortion, the finite impulse response filter has an unbounded number of coefficients operating on an unbounded signal. In practice, the time-domain response must be time truncated and is often of a simplified shape; in the simplest case, a running average can be used, giving a square time response. 
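The pseudocode above translates directly into general-purpose code. The following Python sketch is an illustrative port rather than a reference implementation: it applies the single-pole IIR recurrence y[i] = y[i-1] + α·(x[i] − y[i-1]) with α = Δt/(RC + Δt), and, for comparison, the simple running-average FIR mentioned at the end of the section; the signal values and parameters are arbitrary demonstration choices.

    import math

    def lowpass_iir(x, dt, rc):
        """Single-pole IIR low-pass: discrete-time RC filter (exponential smoothing)."""
        alpha = dt / (rc + dt)          # smoothing factor, 0 <= alpha <= 1
        y = [alpha * x[0]]              # seeded as in the pseudocode above
        for i in range(1, len(x)):
            # change in output is proportional to (input - previous output)
            y.append(y[-1] + alpha * (x[i] - y[-1]))
        return y

    def lowpass_fir_running_average(x, taps=5):
        """Simplest FIR low-pass: a running average (square time response)."""
        y = []
        for i in range(len(x)):
            window = x[max(0, i - taps + 1):i + 1]
            y.append(sum(window) / len(window))
        return y

    if __name__ == "__main__":
        dt = 1e-3                                  # 1 kHz sampling (arbitrary)
        rc = 0.05                                  # time constant -> fc = 1/(2*pi*RC) ~ 3.2 Hz
        x = [1.0 + 0.5 * math.sin(2 * math.pi * 100 * n * dt) for n in range(1000)]
        print(lowpass_iir(x, dt, rc)[-1])          # settles near the 1.0 "DC" level
        print(lowpass_fir_running_average(x)[-1])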
Fourier transform For non-realtime filtering, to achieve a low-pass filter, the entire signal is usually taken as a looped signal, the Fourier transform is taken, filtered in the frequency domain, followed by an inverse Fourier transform. Only O(n log(n)) operations are required compared to O(n²) for the time domain filtering algorithm. This can also sometimes be done in real time, where the signal is delayed long enough to perform the Fourier transformation on shorter, overlapping blocks. Continuous-time realization There are many different types of filter circuits, with different responses to changing frequency. The frequency response of a filter is generally represented using a Bode plot, and the filter is characterized by its cutoff frequency and rate of frequency rolloff. In all cases, at the cutoff frequency, the filter attenuates the input power by half or 3 dB. So the order of the filter determines the amount of additional attenuation for frequencies higher than the cutoff frequency. A first-order filter, for example, reduces the signal amplitude by half (so power reduces by a factor of 4, or 6 dB) every time the frequency doubles (goes up one octave); more precisely, the power rolloff approaches 20 dB per decade in the limit of high frequency. The magnitude Bode plot for a first-order filter looks like a horizontal line below the cutoff frequency, and a diagonal line above the cutoff frequency. There is also a "knee curve" at the boundary between the two, smoothly transitioning between the two straight-line regions. If the transfer function of a first-order low-pass filter has a zero as well as a pole, the Bode plot flattens out again, at some maximum attenuation of high frequencies; such an effect is caused for example by a little bit of the input leaking around the one-pole filter; this one-pole–one-zero filter is still a first-order low-pass. See Pole–zero plot and RC circuit. A second-order filter attenuates high frequencies more steeply. The Bode plot for this type of filter resembles that of a first-order filter, except that it falls off more quickly. For example, a second-order Butterworth filter reduces the signal amplitude to one-fourth of its original level every time the frequency doubles (so power decreases by 12 dB per octave, or 40 dB per decade). Other all-pole second-order filters may roll off at different rates initially depending on their Q factor, but approach the same final rate of 12 dB per octave; as with the first-order filters, zeroes in the transfer function can change the high-frequency asymptote. See RLC circuit. Third- and higher-order filters are defined similarly. In general, the final rate of power rolloff for an order-n all-pole filter is 6n dB per octave (20n dB per decade). On any Butterworth filter, if one extends the horizontal line to the right and the diagonal line to the upper-left (the asymptotes of the function), they intersect at exactly the cutoff frequency, 3 dB below the horizontal line. The various types of filters (Butterworth filter, Chebyshev filter, Bessel filter, etc.) all have different-looking knee curves. Many second-order filters have "peaking" or resonance that puts their frequency response above the horizontal line at this peak. The meanings of 'low' and 'high'—that is, the cutoff frequency—depend on the characteristics of the filter. 
The term "low-pass filter" merely refers to the shape of the filter's response; a high-pass filter could be built that cuts off at a lower frequency than any low-pass filter—it is their responses that set them apart. Electronic circuits can be devised for any desired frequency range, right up through microwave frequencies (above 1 GHz) and higher. Laplace notation Continuous-time filters can also be described in terms of the Laplace transform of their impulse response, in a way that lets all characteristics of the filter be easily analyzed by considering the pattern of poles and zeros of the Laplace transform in the complex plane. (In discrete time, one can similarly consider the Z-transform of the impulse response.) For example, a first-order low-pass filter can be described by the continuous time transfer function, in the Laplace domain, as: where H is the transfer function, s is the Laplace transform variable (complex angular frequency), τ is the filter time constant, is the cutoff frequency, and K is the gain of the filter in the passband. The cutoff frequency is related to the time constant by: Electronic low-pass filters First-order passive RC filter One simple low-pass filter circuit consists of a resistor in series with a load, and a capacitor in parallel with the load. The capacitor exhibits reactance, and blocks low-frequency signals, forcing them through the load instead. At higher frequencies, the reactance drops, and the capacitor effectively functions as a short circuit. The combination of resistance and capacitance gives the time constant of the filter (represented by the Greek letter tau). The break frequency, also called the turnover frequency, corner frequency, or cutoff frequency (in hertz), is determined by the time constant: or equivalently (in radians per second): This circuit may be understood by considering the time the capacitor needs to charge or discharge through the resistor: At low frequencies, there is plenty of time for the capacitor to charge up to practically the same voltage as the input voltage. At high frequencies, the capacitor only has time to charge up a small amount before the input switches direction. The output goes up and down only a small fraction of the amount the input goes up and down. At double the frequency, there's only time for it to charge up half the amount. Another way to understand this circuit is through the concept of reactance at a particular frequency: Since direct current (DC) cannot flow through the capacitor, DC input must flow out the path marked (analogous to removing the capacitor). Since alternating current (AC) flows very well through the capacitor, almost as well as it flows through a solid wire, AC input flows out through the capacitor, effectively short circuiting to the ground (analogous to replacing the capacitor with just a wire). The capacitor is not an "on/off" object (like the block or pass fluidic explanation above). The capacitor variably acts between these two extremes. It is the Bode plot and frequency response that show this variability. RL filter A resistor–inductor circuit or RL filter is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor and is the simplest type of RL circuit. A first-order RL circuit is one of the simplest analogue infinite impulse response electronic filters. 
It consists of a resistor and an inductor, either in series driven by a voltage source or in parallel driven by a current source. Second-order passive RLC filter An RLC circuit (the letters R, L, and C can be in a different sequence) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance, and capacitance, respectively. The circuit forms a harmonic oscillator for current and will resonate in a similar way as an LC circuit will. The main difference that the presence of the resistor makes is that any oscillation induced in the circuit will die away over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency somewhat. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a component. An ideal, pure LC circuit is an abstraction for the purpose of theory. There are many applications for this circuit. They are used in many different types of oscillator circuits. Another important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role, the circuit is often called a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter, or high-pass filter. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. Second-order low-pass filter in standard form The transfer function of a second-order low-pass filter can be expressed as a function of frequency as shown in Equation 1, the Second-Order Low-Pass Filter Standard Form. In this equation, f is the frequency variable, fc is the cutoff frequency, FSF is the frequency scaling factor, and Q is the quality factor. Equation 1 describes three regions of operation: below cutoff, in the area of cutoff, and above cutoff. For each area, Equation 1 reduces to: for f ≪ fc, the circuit passes signals multiplied by the gain factor K; for f = fc, signals are phase-shifted 90° and modified by the quality factor Q; and for f ≫ fc, signals are phase-shifted 180° and attenuated by the square of the frequency ratio. This behavior is detailed by Jim Karki in "Active Low-Pass Filter Design" (Texas Instruments, 2023). With attenuation at frequencies above fc increasing by a power of two, the last formula describes a second-order low-pass filter. The frequency scaling factor FSF is used to scale the cutoff frequency of the filter so that it follows the definitions given before. Higher order passive filters Higher-order passive filters can also be constructed (see diagram for a third-order example). First order active An active low-pass filter adds an active device to create an active filter that allows for gain in the passband. In the operational amplifier circuit shown in the figure, the cutoff frequency (in hertz) is defined as fc = 1/(2π·R2·C), or equivalently (in radians per second) ωc = 1/(R2·C). The gain in the passband is −R2/R1, and the stopband drops off at −6 dB per octave (that is −20 dB per decade) as it is a first-order filter.
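As an illustrative numerical check (an editorial sketch; the component values, gain K, and quality factor Q below are arbitrary, and the frequency scaling factor is taken as 1), the following Python code evaluates the second-order low-pass magnitude in the three regions described above and computes the passband gain and cutoff of the first-order active filter:

    import math

    def second_order_lowpass_mag(f, fc, K=1.0, Q=0.707):
        """Magnitude of the second-order low-pass standard form (FSF taken as 1):
        |H(f)| = K / sqrt((1 - (f/fc)^2)^2 + (f/(Q*fc))^2)."""
        r = f / fc
        return K / math.sqrt((1 - r * r) ** 2 + (r / Q) ** 2)

    def first_order_active_lowpass(r1, r2, c):
        """Passband gain and cutoff (Hz) of the inverting first-order active filter."""
        gain = -r2 / r1                     # passband gain  -R2/R1
        fc = 1.0 / (2 * math.pi * r2 * c)   # cutoff set by the feedback network
        return gain, fc

    if __name__ == "__main__":
        fc = 1000.0
        for f in (10.0, fc, 100_000.0):     # well below, at, and well above cutoff
            print(f, second_order_lowpass_mag(f, fc, K=2.0, Q=0.707))
        # e.g. 10 kOhm input, 100 kOhm feedback, 1.6 nF -> gain -10, fc ~ 1 kHz
        print(first_order_active_lowpass(10e3, 100e3, 1.6e-9))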
Technology
Signal processing
null
56486
https://en.wikipedia.org/wiki/High-pass%20filter
High-pass filter
A high-pass filter (HPF) is an electronic filter that passes signals with a frequency higher than a certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut filter in the context of audio engineering. High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-zero average voltages or radio frequency devices. They can also be used in conjunction with a low-pass filter to produce a band-pass filter. In the optical domain filters are often characterised by wavelength rather than frequency. High-pass and low-pass have the opposite meanings, with a "high-pass" filter (more commonly "short-pass") passing only shorter wavelengths (higher frequencies), and vice versa for "low-pass" (more commonly "long-pass"). Description In electronics, a filter is a two-port electronic circuit which removes frequency components from a signal (time-varying voltage or current) applied to its input port. A high-pass filter attenuates frequency components below a certain frequency, called its cutoff frequency, allowing higher frequency components to pass through. This contrasts with a low-pass filter, which attenuates frequencies higher than a certain frequency, and a bandpass filter, which allows a certain band of frequencies through and attenuates frequencies both higher and lower than the band. In optics a high-pass filter is a transparent or translucent window of colored material that allows light shorter than a certain wavelength to pass through and attenuates light of longer wavelengths. Since light is often measured not by frequency but by wavelength, which is inversely related to frequency, a high-pass optical filter, which attenuates light frequencies below a cutoff frequency, is often called a short-pass filter; it attenuates longer wavelengths. Continuous-time circuits First-order passive A resistor and either a capacitor or an inductor can be configured as a first-order high-pass filter. The simple first-order capacitive high-pass filter shown in Figure 1 is implemented by placing an input voltage across the series combination of a capacitor and a resistor and using the voltage across the resistor as an output. The transfer function of this linear time-invariant system is H(s) = s·RC/(1 + s·RC). The product of the resistance and capacitance (R×C) is the time constant (τ); it is inversely proportional to the cutoff frequency fc, that is, fc = 1/(2πτ) = 1/(2πRC), where fc is in hertz, τ is in seconds, R is in ohms, and C is in farads. At the cutoff frequency, the filter's frequency response is −3 dB relative to its response at infinite frequency. First-order active Figure 2 shows an active electronic implementation of a first-order high-pass filter using an operational amplifier. The transfer function of this linear time-invariant system is H(s) = −(R2/R1)·(s·R1·C)/(1 + s·R1·C). In this case, the filter has a passband gain of −R2/R1 and has a cutoff frequency of fc = 1/(2π·R1·C). Because this filter is active, it may have non-unity passband gain. That is, high-frequency signals are inverted and amplified by R2/R1. All of these first-order high-pass filters are called differentiators, because they perform differentiation for signals whose frequency band is well below the filter's cutoff frequency. Higher orders Filters of higher order have steeper slope in the stopband, such that the slope of an nth-order filter equals 20n dB per decade. 
Higher order filters can be achieved simply by cascading these first order filters. While impedance matching and loading must be taken into account when chaining passive filters, active filters can be easily chained because the signal is restored by the output of the op amp at each stage. Various filter topologies and network synthesis filters for higher orders exist, which ease design. Discrete-time realization Discrete-time high-pass filters can also be designed. Discrete-time filter design is beyond the scope of this article; however, a simple example comes from the conversion of the continuous-time high-pass filter above to a discrete-time realization. That is, the continuous-time behavior can be discretized. From the circuit in Figure 1 above, according to Kirchhoff's Laws and the definition of capacitance: Vout(t) = I(t)·R (Equation V), Qc(t) = C·(Vin(t) − Vout(t)) (Equation Q), and I(t) = dQc(t)/dt (Equation I), where Qc(t) is the charge stored in the capacitor at time t. Substituting Equation (Q) into Equation (I) and then Equation (I) into Equation (V) gives: Vout(t) = RC·(dVin(t)/dt − dVout(t)/dt). This equation can be discretized. For simplicity, assume that samples of the input and output are taken at evenly spaced points in time separated by Δt time. Let the samples of Vin be represented by the sequence (x1, x2, ..., xn), and let Vout be represented by the sequence (y1, y2, ..., yn), which correspond to the same points in time. Making these substitutions: yi = RC·((xi − xi−1) − (yi − yi−1))/Δt. And rearranging terms gives the recurrence relation yi = (RC/(RC + Δt))·yi−1 + (RC/(RC + Δt))·(xi − xi−1). That is, this discrete-time implementation of a simple continuous-time RC high-pass filter is yi = α·yi−1 + α·(xi − xi−1), where α = RC/(RC + Δt). By definition, 0 ≤ α ≤ 1. The expression for parameter α yields the equivalent time constant RC in terms of the sampling period Δt and α: RC = Δt·α/(1 − α). Recalling that the cutoff frequency is fc = 1/(2πRC), it follows that α and fc are related by α = 1/(2πΔt·fc + 1) and fc = (1 − α)/(2πΔt·α). If α = 0.5, then the RC time constant is equal to the sampling period. If α ≪ 0.5, then RC is significantly smaller than the sampling interval, and RC ≈ α·Δt. Algorithmic implementation The filter recurrence relation provides a way to determine the output samples in terms of the input samples and the preceding output. The following pseudocode algorithm will simulate the effect of a high-pass filter on a series of digital samples, assuming equally spaced samples:

    // Return RC high-pass filter output samples, given input samples,
    // time interval dt, and time constant RC
    function highpass(real[1..n] x, real dt, real RC)
        var real[1..n] y
        var real α := RC / (RC + dt)
        y[1] := x[1]
        for i from 2 to n
            y[i] := α × y[i−1] + α × (x[i] − x[i−1])
        return y

The loop which calculates each of the outputs can be refactored into the equivalent:

    for i from 2 to n
        y[i] := α × (y[i−1] + x[i] − x[i−1])

However, the earlier form shows how the parameter α changes the impact of the prior output y[i−1] and the current change in input (x[i] − x[i−1]). In particular, a large α implies that the output will decay very slowly but will also be strongly influenced by even small changes in input. By the relationship between parameter α and time constant RC above, a large α corresponds to a large RC and therefore a low corner frequency of the filter. Hence, this case corresponds to a high-pass filter with a very narrow stopband. Because it is excited by small changes and tends to hold its prior output values for a long time, it can pass relatively low frequencies. However, a constant input (i.e., an input with x[i] = x[i−1]) will always decay to zero, as would be expected with a high-pass filter with a large RC. A small α implies that the output will decay quickly and will require large changes in the input (i.e., x[i] − x[i−1] is large) to cause the output to change much. By the relationship between parameter α and time constant above, a small α corresponds to a small RC and therefore a high corner frequency of the filter. 
Hence, this case corresponds to a high-pass filter with a very wide stopband. Because it requires large (i.e., fast) changes and tends to quickly forget its prior output values, it can only pass relatively high frequencies, as would be expected with a high-pass filter with a small RC. Applications Audio High-pass filters have many applications. They are used as part of an audio crossover to direct high frequencies to a tweeter while attenuating bass signals which could interfere with, or damage, the speaker. When such a filter is built into a loudspeaker cabinet it is normally a passive filter that also includes a low-pass filter for the woofer and so often employs both a capacitor and inductor (although very simple high-pass filters for tweeters can consist of a series capacitor and nothing else). As an example, the formula above, applied to a tweeter with a resistance of 10 Ω, will determine the capacitor value for a cut-off frequency of 5 kHz: C = 1/(2π·f·R) = 1/(2π × 5000 Hz × 10 Ω) ≈ 3.2 μF. An alternative, which provides good quality sound without inductors (which are prone to parasitic coupling, are expensive, and may have significant internal resistance) is to employ bi-amplification with active RC filters or active digital filters with separate power amplifiers for each loudspeaker. Such low-current and low-voltage line level crossovers are called active crossovers. Rumble filters are high-pass filters applied to the removal of unwanted sounds near to the lower end of the audible range or below. For example, noises (e.g., footsteps, or motor noises from record players and tape decks) may be removed because they are undesired or may overload the RIAA equalization circuit of the preamp. High-pass filters are also used for AC coupling at the inputs of many audio power amplifiers, for preventing the amplification of DC currents which may harm the amplifier, rob the amplifier of headroom, and generate waste heat at the loudspeaker's voice coil. One amplifier, the professional audio model DC300 made by Crown International beginning in the 1960s, did not have high-pass filtering at all, and could be used to amplify the DC signal of a common 9-volt battery at the input to supply 18 volts DC in an emergency for mixing console power. However, that model's basic design has been superseded by newer designs such as the Crown Macro-Tech series developed in the late 1980s which included 10 Hz high-pass filtering on the inputs and switchable 35 Hz high-pass filtering on the outputs. Another example is the QSC Audio PLX amplifier series which includes an internal 5 Hz high-pass filter which is applied to the inputs whenever the optional 50 and 30 Hz high-pass filters are turned off. Mixing consoles often include high-pass filtering at each channel strip. Some models have fixed-slope, fixed-frequency high-pass filters at 80 or 100 Hz that can be engaged; other models have sweepable high-pass filters, filters of fixed slope that can be set within a specified frequency range, such as from 20 to 400 Hz on the Midas Heritage 3000, or 20 to 20,000 Hz on the Yamaha M7CL digital mixing console. Veteran systems engineer and live sound mixer Bruce Main recommends that high-pass filters be engaged for most mixer input sources, except for those such as kick drum, bass guitar and piano, sources which will have useful low-frequency sounds. 
Main writes that DI unit inputs (as opposed to microphone inputs) do not need high-pass filtering as they are not subject to modulation by low-frequency stage wash—low frequency sounds coming from the subwoofers or the public address system and wrapping around to the stage. Main indicates that high-pass filters are commonly used for directional microphones which have a proximity effect—a low-frequency boost for very close sources. This low-frequency boost commonly causes problems up to 200 or 300 Hz, but Main notes that he has seen microphones that benefit from a 500 Hz high-pass filter setting on the console. Image High-pass and low-pass filters are also used in digital image processing to perform image modifications, enhancements, noise reduction, etc., using designs done in either the spatial domain or the frequency domain. The unsharp masking, or sharpening, operation used in image editing software is a high-boost filter, a generalization of high-pass.
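The first-order relationships discussed in this article can be checked with a short Python sketch. This is an editorial illustration rather than a reference design: it computes the RC cutoff frequency, reproduces the 3.2 μF crossover-capacitor example from the Applications section, and ports the discrete-time pseudocode to show that a constant (DC) input decays toward zero; all numerical values are arbitrary demonstration choices.

    import math

    def rc_highpass_cutoff(r_ohms, c_farads):
        """Cutoff frequency (Hz) of a first-order RC high-pass: fc = 1 / (2*pi*R*C)."""
        return 1.0 / (2 * math.pi * r_ohms * c_farads)

    def crossover_capacitor(f_cutoff_hz, r_ohms):
        """Series capacitor for a simple tweeter high-pass: C = 1 / (2*pi*f*R)."""
        return 1.0 / (2 * math.pi * f_cutoff_hz * r_ohms)

    def highpass_discrete(x, dt, rc):
        """Discrete-time single-pole high-pass (port of the pseudocode earlier):
        y[i] = alpha*(y[i-1] + x[i] - x[i-1]), with alpha = RC / (RC + dt)."""
        alpha = rc / (rc + dt)
        y = [x[0]]                       # seeded as in the pseudocode
        for i in range(1, len(x)):
            y.append(alpha * (y[-1] + x[i] - x[i - 1]))
        return y

    if __name__ == "__main__":
        # Tweeter example from the text: 10 ohm tweeter, 5 kHz cutoff -> about 3.2 uF.
        print(crossover_capacitor(5000, 10))                 # ~3.18e-06 farads
        print(rc_highpass_cutoff(10, 3.2e-6))                # ~5 kHz, the inverse check
        # A constant (DC) input decays toward zero, as a high-pass filter requires.
        dc = [1.0] * 500
        print(highpass_discrete(dc, dt=1e-3, rc=0.01)[-1])   # very close to 0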
Technology
Signal processing
null
56509
https://en.wikipedia.org/wiki/Potash
Potash
Potash includes various mined and manufactured salts that contain potassium in water-soluble form. The name derives from pot ash, plant ashes or wood ash soaked in water in a pot, the primary means of manufacturing potash before the Industrial Era. The word potassium is derived from potash. Potash is produced worldwide in amounts exceeding 71.9 million tonnes (~45.4 million tonnes K2O equivalent) per year as of 2021, with Canada being the largest producer, mostly for use in fertilizer. Various kinds of fertilizer-potash constitute the single greatest industrial use of the element potassium in the world. Potassium was first derived in 1807 by electrolysis of caustic potash (potassium hydroxide). Terminology Potash refers to potassium compounds and potassium-bearing materials, most commonly potassium carbonate. The word "potash" originates from a Middle Dutch term denoting "pot ashes", attested in 1477. The old method of making potassium carbonate (K2CO3) was by collecting or producing wood ash (the occupation of ash burners), leaching the ashes, and then evaporating the resulting solution in large iron pots, which left a white residue denominated "pot ash". Approximately 10% by weight of common wood ash can be recovered as potash. Later, "potash" became widely applied to naturally occurring minerals that contained potassium salts and the commercial product derived from them. A number of potassium compounds have "potash" in their traditional names, for example caustic potash (potassium hydroxide), carbonate of potash (potassium carbonate), muriate of potash (potassium chloride), nitrate of potash or saltpetre (potassium nitrate), and sulfate of potash (potassium sulfate). History Origin of potash ore Most of the world reserves of potassium (K) were deposited as sea water in ancient inland oceans. After the water evaporated, the potassium salts crystallized into beds of potash ore. These are the locations where potash is being mined today. The deposits are a naturally occurring mixture of potassium chloride (KCl) and sodium chloride (NaCl), more commonly known as table salt. Over time, as the surface of the earth changed, these deposits were covered by thousands of feet of earth. Bronze Age Potash (especially potassium carbonate) has been used in bleaching textiles, making glass and ceramics, and making soap since the Bronze Age. Potash was principally obtained by leaching the ashes of land plants. 14th–17th century Potash mining Beginning in the 14th century, potash was mined in Ethiopia. One of the world's largest deposits, 140 to 150 million tons, is located in the Dallol area of the Afar Region. Wood-derived potash Potash was one of the most important industrial chemicals. It was refined from the ashes of broadleaved trees and produced primarily in the forested areas of Europe, Russia, and North America. Although methods for producing artificial alkalis were invented in the late 18th century, these did not become economical until the late 19th century and so the dependence on organic sources of potash remained. Potash became an important international trade commodity in Europe from at least the early 14th century. It is estimated that European imports of potash required 6 or more million cubic metres each year from the early 17th century. Between 1420 and 1620, the primary exporting cities for wood-derived potash were Gdańsk, Königsberg and Riga. In the late 15th century, London was the lead importer due to its position as the centre of soft soap making while the Dutch dominated as suppliers and consumers in the 16th century. From the 1640s, geopolitical disruptions (e.g., the Russo-Polish War of 1654–1667) meant that the centres of export moved from the Baltic to Archangelsk, Russia. 
In 1700, Russian ash was dominant though Gdańsk remained notable for the quality of its potash. 18th century Kelp ash On the Orkney islands, kelp ash provided potash and soda ash, production starting "possibly as early as 1719" and lasting for a century. The products were "eagerly sought after by the glass and soap industries of the time." North America By the 18th century, higher quality American potash was increasingly exported to Britain. In the late 18th and early 19th centuries, potash production provided settlers in North America badly needed cash and credit as they cleared wooded land for crops. To make full use of their land, settlers needed to dispose of excess wood. The easiest way to accomplish this was to burn any wood not needed for fuel or construction. Ashes from hardwood trees could then be used to make lye, which could either be used to make soap or boiled down to produce valuable potash. Hardwood could generate ashes at the rate of 60 to 100 bushels per acre (500 to 900 m3/km2). In 1790, the sale of ashes could generate $3.25 to $6.25 per acre ($800 to $1,500/km2) in rural New York State – nearly the same rate as hiring a laborer to clear the same area. Potash making became a major industry in British North America. Great Britain was always the most important market. The American potash industry followed the woodsman's ax across the country. The first US patent The first US patent of any kind was issued in 1790 to Samuel Hopkins for an improvement "in the making of Pot ash and Pearl ash by a new Apparatus and Process". Pearl ash was a purer quality made by calcination of potash in a reverberatory furnace or kiln. Potash pits were once used in England to produce potash that was used in making soap for the preparation of wool for yarn production. 19th century After about 1820, New York replaced New England as the most important source; by 1840 the center was in Ohio. Potash production was always a by-product industry, following from the need to clear land for agriculture. Canada From 1767, potash from wood ashes was exported from Canada. By 1811, 70% of the total 19.6 million lbs of potash imports to Britain came from Canada. Exports of potash and pearl ash reached 43,958 barrels in 1865. There were 519 asheries in operation in 1871. 20th century industrialization The wood-ash industry declined in the late 19th century when large-scale production of potash from mineral salts was established in Germany. In the early 20th century, the potash industry was dominated by a cartel in which Germany had the dominant role. WWI saw a brief resurgence of American asheries, with their product typically consisting of 66% hydroxide, 17% carbonate, 16% sulfate and other impurities. Later in the century, the cartel ended as new potash producers emerged in the USSR and Canada. In 1943, potash was discovered in Saskatchewan, Canada, during oil drilling. Active exploration began in 1951. In 1958, the Potash Company of America became the first potash producer in Canada with the commissioning of an underground potash mine at Patience Lake. As numerous potash producers in Canada developed, the Saskatchewan government became increasingly involved in the industry, leading to the creation of Canpotex in the 1970s. In 1964 the Canadian company Kalium Chemicals established the first potash mine using the solution process. The discovery was made during oil reserve exploration. The mine was developed near Regina, Saskatchewan. The mine reached depths greater than 1500 meters. 
It is now the Mosaic Corporation's Belle Plaine unit. The USSR's potash production had largely been for domestic use and use in the Council for Mutual Economic Assistance countries. After the dissolution of the USSR, Russian and Belarusian potash producers entered into direct competition with producers elsewhere in the world for the first time. At the beginning of the 20th century, potash deposits were found in the Dallol Depression in the Musely and Crescent localities near the Ethiopian–Eritrean border. The estimated reserves in Musely and Crescent are 173 and 12 million tonnes respectively. The latter is particularly suitable for surface mining. It was explored in the 1960s but the works stopped due to flooding in 1967. Attempts to continue mining in the 1990s were halted by the Eritrean–Ethiopian War and have not resumed as of 2009. Mining Shaft mining and strip mining All commercial potash deposits come originally from evaporite deposits and are often buried deep below the earth's surface. Potash ores are typically rich in potassium chloride (KCl), sodium chloride (NaCl) and other salts and clays, and are typically obtained by conventional shaft mining with the extracted ore ground into a powder. Most potash mines today are deep shaft mines as much as 4,400 feet (1,400 m) underground. Others are mined as strip mines, having been laid down in horizontal layers as sedimentary rock. In above-ground processing plants, the KCl is separated from the mixture to produce a high-analysis potassium fertilizer. Other potassium salts can be separated by various procedures, resulting in potassium sulfate and potassium-magnesium sulfate. Dissolution mining and evaporation methods Other methods include dissolution mining and evaporation methods from brines. In the evaporation method, hot water is injected into the potash, which is dissolved and then pumped to the surface where it is concentrated by solar-induced evaporation. Amine reagents are then added to either the mined or evaporated solutions. The amine coats the KCl but not NaCl. Air bubbles cling to the amine + KCl and float it to the surface while the NaCl and clay sink to the bottom. The surface is skimmed for the amine + KCl, which is then dried and packaged for use as a K-rich fertilizer; KCl dissolves readily in water and is available quickly for plant nutrition. Recovery of potassium fertilizer salts from sea water has been studied in India. During extraction of salt from seawater by evaporation, potassium salts get concentrated in bittern, an effluent from the salt industry. Production Potash deposits are distributed unevenly throughout the world. Deposits are being mined in Canada, Russia, China, Belarus, Israel, Germany, Chile, the United States, Jordan, Spain, the United Kingdom, Uzbekistan and Brazil, with the most significant deposits found at great depth in the Prairie Evaporite Formation in Saskatchewan, Canada. Canada and Russia are the countries where the bulk of potash is produced; Belarus is also a major producer. The Permian Basin deposits range from the major mines outside of Carlsbad, New Mexico, to the world's purest potash deposit in Lea County, New Mexico (near the Carlsbad deposits), which is believed to be roughly 80% pure. (Osceola County, Michigan, has deposits 90+% pure; the only mine there was converted to salt production, however.) Canada is the largest producer, followed by Russia and Belarus. 
The most significant reserve of Canada's potash is located in the province of Saskatchewan and is mined by The Mosaic Company, Nutrien and K+S. In China, most potash deposits are concentrated in the deserts and salt flats of the endorheic basins of its western provinces, particularly Qinghai. Geological expeditions discovered the reserves in the 1950s but commercial exploitation lagged until Deng Xiaoping's Reform and Opening Up Policy in the 1980s. The 1989 opening of the Qinghai Potash Fertilizer Factory in the remote Qarhan Playa increased China's production of potassium chloride sixfold, from less than a year at Haixi and Tanggu to just under a year. In 2013, almost 70% of potash production was controlled by Canpotex, an exporting and marketing firm, and the Belarusian Potash Company. The latter was a joint venture between Belaruskali and Uralkali, but on July 30, 2013, Uralkali announced that it had ended the venture. Potash is water soluble and transporting it requires special transportation infrastructure. Occupational hazards Excessive respiratory disease due to environmental hazards, such as radon and asbestos, has been a concern for potash miners throughout history. Potash miners are liable to develop silicosis. Based on a study conducted between 1977 and 1987 of cardiovascular disease among potash workers, the overall mortality rates were low, but a noticeable difference in above-ground workers was documented. Consumption Fertilizers Potassium is the third major plant and crop nutrient after nitrogen and phosphorus. It has been used since antiquity as a soil fertilizer (about 90% of current use). Fertilizer use is the main driver behind potash consumption, especially for its use in fertilizing crops that contribute to high-protein diets. As of at least 2010, more than 95% of potash is mined for use in agricultural purposes. Elemental potassium does not occur in nature because it reacts violently with water. As part of various compounds, potassium makes up about 2.6% of the Earth's crust by mass and is the seventh most abundant element, similar in abundance to sodium at approximately 1.8% of the crust. Potash is important for agriculture because it improves water retention, yield, nutrient value, taste, color, texture and disease resistance of food crops. It has wide application to fruit and vegetables, rice, wheat and other grains, sugar, corn, soybeans, palm oil and cotton, all of which benefit from the nutrient's quality-enhancing properties. Demand for food and animal feed has been on the rise since 2000. The United States Department of Agriculture's Economic Research Service (ERS) attributes the trend to average annual population increases of 75 million people around the world. Geographically, economic growth in Asia and Latin America greatly contributed to the increased use of potash-based fertilizer. Rising incomes in developing countries also were a factor in the growing potash and fertilizer use. With more money in the household budget, consumers added more meat and dairy products to their diets. This shift in eating patterns required more acres to be planted, more fertilizer to be applied and more animals to be fed—all requiring more potash. After years of trending upward, fertilizer use slowed in 2008. The worldwide economic downturn is the primary reason for the declining fertilizer use, dropping prices, and mounting inventories. The world's largest consumers of potash are China, the United States, Brazil, and India. Brazil imports 90% of the potash it needs. 
Potash consumption for fertilizers is expected to increase to about 37.8 million tonnes by 2022. Potash imports and exports are often reported in K2O equivalent, although fertilizer never contains potassium oxide, per se, because potassium oxide is caustic and hygroscopic. Pricing At the beginning of 2008, potash prices started a meteoric climb from less than US$200 a tonne to a high of US$875 in February 2009. These subsequently dropped dramatically to an April 2010 low of US$310 a tonne, before recovering in 2011–12, and relapsing again in 2013. For reference, prices in November 2011 were about US$470 per tonne, but as of May 2013 were stable at US$393. After the surprise breakup of the world's largest potash cartel at the end of July 2013, potash prices were poised to drop some 20 percent. At the end of December 2015, potash traded for US$295 a tonne. In April 2016 its price was US$269. In May 2017, prices had stabilised at around US$216 a tonne, down 18% from the previous year. By January 2018, prices had recovered to around US$225 a tonne. World potash demand tends to be price inelastic in the short run and even in the long run. Other uses In addition to its use as a fertilizer, potassium chloride is important in many industrialized economies, where it is used in aluminium recycling, by the chloralkali industry to produce potassium hydroxide, in metal electroplating, oil-well drilling fluid, snow and ice melting, steel heat-treating, in medicine as a treatment for hypokalemia, and water softening. Potassium hydroxide is used for industrial water treatment and is the precursor of potassium carbonate, several forms of potassium phosphate, and many other potassic chemicals, and is used in soap manufacturing. Potassium carbonate is used to produce animal feed supplements, cement, fire extinguishers, food products, photographic chemicals, and textiles. It is also used in brewing beer, pharmaceutical preparations, and as a catalyst for synthetic rubber manufacturing. It is also combined with silica sand to produce potassium silicate, sometimes known as waterglass, for use in paints and arc welding electrodes. These non-fertilizer uses have accounted for about 15% of annual potash consumption in the United States. Substitutes No substitutes exist for potassium as an essential plant nutrient and as an essential nutritional requirement for animals and humans. Manure and glauconite (greensand) are low-potassium-content sources that can be profitably transported only short distances to crop fields.
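The K2O-equivalent reporting convention mentioned above can be illustrated with a short calculation. The Python sketch below is an editorial example based only on molar masses (the tonnage shown is arbitrary): it converts a quantity of muriate of potash (KCl) to its K2O equivalent, a factor of roughly 0.63.

    # Convert a potash (KCl) tonnage to its K2O equivalent, the convention used
    # in trade statistics. Molar masses: K = 39.098, O = 15.999, Cl = 35.453 g/mol.
    M_K, M_O, M_CL = 39.098, 15.999, 35.453
    M_K2O = 2 * M_K + M_O          # ~94.20 g/mol
    M_KCL = M_K + M_CL             # ~74.55 g/mol

    def kcl_to_k2o_equivalent(tonnes_kcl):
        """Two moles of KCl carry the same potassium as one mole of K2O."""
        return tonnes_kcl * M_K2O / (2 * M_KCL)   # factor of about 0.632

    if __name__ == "__main__":
        # Example: 10 million tonnes of KCl is roughly 6.3 million tonnes K2O equivalent.
        print(kcl_to_k2o_equivalent(10_000_000))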
Physical sciences
Salts and ions: General
Chemistry
56511
https://en.wikipedia.org/wiki/Kidney%20dialysis
Kidney dialysis
Kidney dialysis (from Greek διάλυσις, dialysis, 'dissolution'; from διά, dia, 'through', and λύσις, lysis, 'loosening or splitting') is the process of removing excess water, solutes, and toxins from the blood in people whose kidneys can no longer perform these functions naturally. Along with kidney transplantation, it is a type of renal replacement therapy. Dialysis may need to be initiated when there is a sudden rapid loss of kidney function, known as acute kidney injury (previously called acute renal failure), or when a gradual decline in kidney function, chronic kidney failure, reaches stage 5. Stage 5 chronic renal failure is reached when the glomerular filtration rate is less than 15% of normal, creatinine clearance is less than 10 mL per minute, and uremia is present. Dialysis is used as a temporary measure in either acute kidney injury or in those awaiting kidney transplant, and as a permanent measure in those for whom a transplant is not indicated or not possible. In Western European countries, Australia, Canada, the United Kingdom, and the United States, dialysis is paid for by the government for those who are eligible. The first successful dialysis was performed in 1943. Background The kidneys have an important role in maintaining health. When the person is healthy, the kidneys maintain the body's internal equilibrium of water and minerals (sodium, potassium, chloride, calcium, phosphorus, magnesium, sulphate). The acidic metabolism end-products that the body cannot get rid of via respiration are also excreted through the kidneys. The kidneys also function as a part of the endocrine system, producing erythropoietin, calcitriol and renin. Erythropoietin is involved in the production of red blood cells and calcitriol plays a role in bone formation. Dialysis is an imperfect treatment to replace kidney function because it does not correct the compromised endocrine functions of the kidney. Dialysis treatments replace some of these functions through diffusion (waste removal) and ultrafiltration (fluid removal). Dialysis uses highly purified (also known as "ultrapure") water. Principle Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semipermeable membrane. Diffusion is a property of substances in water: they tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semipermeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus. The two main types of dialysis, hemodialysis and peritoneal dialysis, remove wastes and excess water from the blood in different ways. Hemodialysis removes wastes and water by circulating blood outside the body through an external filter, called a dialyzer, that contains a semipermeable membrane. The blood flows in one direction and the dialysate flows in the opposite direction. The counter-current flow of the blood and dialysate maximizes the concentration gradient of solutes between the blood and dialysate, which helps to remove more urea and creatinine from the blood.
The concentrations of solutes normally found in the urine (for example potassium, phosphorus and urea) are undesirably high in the blood, but low or absent in the dialysis solution, and constant replacement of the dialysate ensures that the concentration of undesired solutes is kept low on this side of the membrane. The dialysis solution has levels of minerals like potassium and calcium that are similar to their natural concentration in healthy blood. For another solute, bicarbonate, dialysis solution level is set at a slightly higher level than in normal blood, to encourage the diffusion of bicarbonate into the blood, to act as a pH buffer to neutralize the metabolic acidosis that is often present in these patients. The levels of the components of dialysate are typically prescribed by a nephrologist according to the needs of the individual patient. In peritoneal dialysis, wastes and water are removed from the blood inside the body using the peritoneum as a natural semipermeable membrane. Waste and excess water move from the blood, across the visceral peritoneum due to its large surface area and into a special dialysis solution, called dialysate, in the peritoneal cavity within the abdomen. Types There are three primary and two secondary types of dialysis: hemodialysis (primary), peritoneal dialysis (primary), hemofiltration (primary), hemodiafiltration (secondary) and intestinal dialysis (secondary). Hemodialysis In hemodialysis, the patient's blood is pumped through the blood compartment of a dialyzer, exposing it to a partially permeable membrane. The dialyzer is composed of thousands of tiny hollow synthetic fibers. The fiber wall acts as the semipermeable membrane. Blood flows through the fibers, dialysis solution flows around the outside of the fibers, and water and wastes move between these two solutions. The cleansed blood is then returned via the circuit back to the body. Ultrafiltration occurs by increasing the hydrostatic pressure across the dialyzer membrane. This usually is done by applying a negative pressure to the dialysate compartment of the dialyzer. This pressure gradient causes water and dissolved solutes to move from blood to dialysate and allows the removal of several litres of excess fluid during a typical 4-hour treatment. In the United States, hemodialysis treatments are typically given in a dialysis center three times per week (due in the United States to Medicare reimbursement rules); however, as of 2005 over 2,500 people in the United States are dialyzing at home more frequently for various treatment lengths. Studies have demonstrated the clinical benefits of dialyzing 5 to 7 times a week, for 6 to 8 hours. This type of hemodialysis is usually called nocturnal daily hemodialysis and a study has shown it provides a significant improvement in both small and large molecular weight clearance and decreases the need for phosphate binders. These frequent long treatments are often done at home while sleeping, but home dialysis is a flexible modality and schedules can be changed day to day, week to week. In general, studies show that both increased treatment length and frequency are clinically beneficial. Hemo-dialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population). 
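The adequacy of a hemodialysis session is often summarized by the dimensionless dose Kt/V (dialyzer clearance K multiplied by treatment time t, divided by the urea distribution volume V), a measure mentioned later in this article. The sketch below is a minimal illustration using a simplified fixed-volume, single-pool model that ignores urea generation and ultrafiltration; the numbers are assumed example values, not figures from this article.

```python
import math

def kt_over_v(clearance_ml_min: float, time_min: float, volume_l: float) -> float:
    """Dialysis dose Kt/V: clearance (mL/min) x time (min) / urea volume (mL)."""
    return (clearance_ml_min * time_min) / (volume_l * 1000.0)

def urea_reduction(ktv: float) -> float:
    """Fraction of urea removed under a fixed-volume, single-pool model with
    no urea generation: post/pre concentration = exp(-Kt/V)."""
    return 1.0 - math.exp(-ktv)

if __name__ == "__main__":
    # Assumed example: 250 mL/min clearance, 4-hour session, 40 L urea volume.
    dose = kt_over_v(250.0, 240.0, 40.0)
    print(f"Kt/V = {dose:.2f}")                            # ~1.50
    print(f"Urea reduction = {urea_reduction(dose):.0%}")  # ~78%
```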
Peritoneal dialysis In peritoneal dialysis, a sterile solution containing glucose (called dialysate) is run through a tube into the peritoneal cavity, the abdominal body cavity around the intestine, where the peritoneal membrane acts as a partially permeable membrane. This exchange is repeated 4–5 times per day; automatic systems can run more frequent exchange cycles overnight. Peritoneal dialysis is less efficient than hemodialysis, but because it is carried out for a longer period of time the net effect in terms of removal of waste products, salt, and water is similar to that of hemodialysis. Peritoneal dialysis is carried out at home by the patient, often without help. This frees patients from the routine of having to go to a dialysis clinic on a fixed schedule multiple times per week. Peritoneal dialysis can be performed with little to no specialized equipment (other than bags of fresh dialysate). Hemofiltration Hemofiltration is a similar treatment to hemodialysis, but it makes use of a different principle. The blood is pumped through a dialyzer or "hemofilter" as in dialysis, but no dialysate is used. A pressure gradient is applied; as a result, water moves across the very permeable membrane rapidly, "dragging" along with it many dissolved substances, including ones with large molecular weights, which are not cleared as well by hemodialysis. Salts and water lost from the blood during this process are replaced with a "substitution fluid" that is infused into the extracorporeal circuit during the treatment. Hemodiafiltration Hemodiafiltration combines hemodialysis and hemofiltration; it is used to purify the blood of toxins when the kidneys are not working normally and also to treat acute kidney injury (AKI). Intestinal dialysis In intestinal dialysis, the diet is supplemented with soluble fibres such as acacia fibre, which is digested by bacteria in the colon. This bacterial growth increases the amount of nitrogen that is eliminated in fecal waste. An alternative approach utilizes the ingestion of 1 to 1.5 liters of non-absorbable solutions of polyethylene glycol or mannitol every fourth hour. Indications The decision to initiate dialysis or hemofiltration in patients with kidney failure depends on several factors. These can be divided into acute or chronic indications. Symptoms of depression and of kidney failure can resemble each other, so open communication between the dialysis team and the patient is important and supports a better quality of life. Knowing a patient's needs allows the dialysis team to offer more options, such as a change of dialysis type (for example, home dialysis, which lets patients remain more active) or changes in eating habits to limit the build-up of waste products. Acute indications Indications for dialysis in a patient with acute kidney injury are summarized with the vowel mnemonic of "AEIOU": Acidemia from metabolic acidosis in situations in which correction with sodium bicarbonate is impractical or may result in fluid overload. Electrolyte abnormality, such as severe hyperkalemia, especially when combined with AKI. Intoxication, that is, acute poisoning with a dialyzable substance. These substances can be represented by the mnemonic SLIME: salicylic acid, lithium, isopropanol, magnesium-containing laxatives and ethylene glycol. Overload of fluid not expected to respond to treatment with diuretics. Uremia complications, such as pericarditis, encephalopathy, or gastrointestinal bleeding.
Chronic indications Chronic dialysis may be indicated when a patient has symptomatic kidney failure and a low glomerular filtration rate (GFR < 15 mL/min). Between 1996 and 2008, there was a trend to initiate dialysis at progressively higher estimated GFR (eGFR). A review of the evidence shows no benefit, and potential harm, with early dialysis initiation, which has been defined as the start of dialysis at an estimated GFR of greater than 10 mL/min/1.73 m2. Observational data from large registries of dialysis patients suggests that early start of dialysis may be harmful. The most recent published guidelines from Canada, for when to initiate dialysis, recommend an intent to defer dialysis until a patient has definite kidney failure symptoms, which may occur at an estimated GFR of 5–9 mL/min/1.73 m2. Impact Effectiveness Even though it is not a cure for kidney failure, dialysis is a very effective treatment. Survival in kidney failure is generally longer with dialysis than without (with only conservative kidney management). However, from the age of 80, and in elderly patients with comorbidities, there is no difference in survival between the two groups. Quality of life Dialysis is an intensive treatment that has a serious impact on those treated with it. Being on dialysis usually leads to a poor quality of life. However, there are strategies that can make it more tolerable. Receiving dialysis at home might improve people's quality of life and autonomy. Scheduling and adherence Dialysis is typically on a regular schedule of three times a week. Given that dialysis patients have little or no capacity to filter solutes and regulate their fluid volume due to kidney dysfunction, missing dialysis is potentially lethal. These patients can become hyperkalaemic, leading to cardiac dysrhythmias and potential cardiac arrest, and can develop fluid in the alveoli of their lungs, which can impair breathing. Some medications can be used in the short term to decrease serum potassium and stabilise the cardiac muscle so as to facilitate stabilisation of acute patients in the setting of missed dialysis. Salbutamol and insulin can decrease serum potassium by up to 1.0 mmol/L each by shifting potassium from the extracellular space into the intracellular spaces within skeletal muscle cells, and calcium gluconate is used to stabilise the myocardium in hyperkalaemic patients, in an attempt to reduce the likelihood of lethal arrhythmias arising from a high serum potassium. Survival without dialysis People who decide against dialysis treatment when reaching end-stage chronic kidney disease could survive several years and experience improvements in their mental well-being in addition to sustained physical well-being and overall quality of life until late in their illness course. However, use of acute care services in these cases is common, and the intensity of end-of-life care is highly variable among people opting out of dialysis. Cost The average annual total cost per dialysis patient varies between countries: for example, 19,812 USD in South Korea, 26,479 USD in New Zealand and 89,958 USD in the Netherlands, according to a 2021 article. Pediatric dialysis Over the past 20 years, children have benefited from major improvements in both technology and clinical management of dialysis. Morbidity during dialysis sessions has decreased, with seizures now exceptional and hypotensive episodes rare. Pain and discomfort have been reduced with the use of chronic internal jugular venous catheters and anesthetic creams for fistula puncture.
Non-invasive technologies to assess patient target dry weight and access flow can significantly reduce patient morbidity and health care costs. Mortality in paediatric and young adult patients on chronic hemodialysis is associated with multifactorial markers of nutrition, inflammation, anaemia and dialysis dose, which highlights the importance of multimodal intervention strategies besides adequate hemodialysis treatment as determined by Kt/V alone. Biocompatible synthetic membranes, small-sized dialyzers and new tubing with low extracorporeal volume have been developed for young infants. Arterial and venous tubing is kept to a minimum length and diameter: tubing sets with volumes of less than 80 mL to less than 110 mL are designed for pediatric patients, and sets of more than 130 mL to less than 224 mL for adult patients, regardless of the blood pump segment size, which can be 6.4 mm for normal dialysis or 8.0 mm for high-flux dialysis in all patients. All dialysis machine manufacturers design their machines to support pediatric dialysis. In pediatric patients the pump speed should be kept low, according to the patient's blood output capacity, and clotting and the heparin dose should be carefully monitored. High-flux dialysis (see below) is not recommended for pediatric patients. In children, hemodialysis must be individualized and viewed as an "integrated therapy" that considers their long-term exposure to chronic renal failure treatment. Dialysis is seen only as a temporary measure for children compared with renal transplantation, because transplantation enables the best chance of rehabilitation in terms of educational and psychosocial functioning. For long-term chronic dialysis, however, the highest standards should be applied to these children to preserve their future "cardiovascular life", which might include more dialysis time and online hemodiafiltration (HDF) with synthetic high-flux membranes with a surface area of 0.2 m2 to 0.8 m2 and blood tubing lines with a low volume yet a large blood pump segment of 6.4/8.0 mm, if the rather restricted concept of small-solute urea dialysis clearance can be improved upon. Dialyzable substances Characteristics Dialyzable substances (substances removable with dialysis) have these properties: low molecular mass, high water solubility, low protein binding capacity, prolonged elimination (a long half-life), and a small volume of distribution. Substances Dialyzable substances include ethylene glycol, procainamide, methanol, isopropyl alcohol, barbiturates, lithium, bromide, sotalol, chloral hydrate, ethanol, acetone, atenolol, theophylline, salicylates and baclofen. Dialysis in different countries United Kingdom The National Health Service provides dialysis in the United Kingdom. In 2022, there were more than 30,000 people on dialysis in the UK. For people who need to travel to dialysis centres, patient transport services are generally provided without charge. In 2018, Cornwall Clinical Commissioning Group proposed to restrict this provision to people who did not have specific medical or financial reasons, but changed its mind after a campaign led by Kidney Care UK and decided to fund transport for people requiring dialysis at least three times a week, or six times a month, for a minimum of three months. Home dialysis UK clinical guidelines recommend offering people a choice regarding where they get their dialysis. Research in the UK found that receiving dialysis at home can lead to better quality of life and is less costly than receiving dialysis in hospital.
However, many people in the UK prefer to receive dialysis in hospital: in 2022, only 1 in 6 chose to receive it at home. There are various reasons why people do not choose home dialysis. Among these are preferring hospital visits as a way of getting regular social contact, concern about the changes needed to their homes, and concern about family members becoming carers. Other reasons include a lack of motivation, doubts about their ability to self-manage treatment, and not having suitable housing or support at home. Hospital dialysis is also often presented as the norm by healthcare professionals. Encouraging people to have dialysis at home could reduce the impact of dialysis on people's social and professional lives. Some ways to help are offering peer support from other people on home dialysis, better education materials, and professionals being more familiar with home dialysis and its impact. Choosing home dialysis is more likely at kidney centres which have better organisational culture, leadership and attitude. United States Since 1972, the United States government has covered the cost of dialysis and transplants for all citizens through Medicare. By 2014, more than 460,000 Americans were undergoing treatment, the costs of which amount to six percent of the entire Medicare budget. Kidney disease is the ninth leading cause of death, and the U.S. has one of the highest mortality rates for dialysis care in the industrialized world. The rate of patients getting kidney transplants has been lower than expected. These outcomes have been blamed on a new for-profit dialysis industry responding to government payment policies. A 1999 study concluded that "patients treated in for-profit dialysis facilities have higher mortality rates and are less likely to be placed on the waiting list for a renal transplant than are patients who are treated in not-for-profit facilities", possibly because transplantation removes a constant stream of revenue from the facility. The insurance industry has complained about kickbacks and problematic relationships between charities and providers. China The Government of China provides the funding for dialysis treatment. It is a challenge to reach everyone who needs dialysis treatment because of the unequal distribution of health care resources and dialysis centers. There are 395,121 individuals who receive hemodialysis or peritoneal dialysis in China per year. The percentage of the Chinese population with chronic kidney disease is 10.8%. The Chinese Government is trying to increase the amount of peritoneal dialysis taking place to meet the needs of the nation's individuals with chronic kidney disease. Australia Dialysis is provided without cost to all patients through Medicare, with 75% of all dialysis being administered as haemodialysis to patients three times per week in a dialysis facility. The Northern Territory has the highest incidence rate per population of haemodialysis, with Indigenous Australians having higher rates of chronic kidney disease and lower rates of functional kidney transplants than the broader population. The remote Central Australian town of Alice Springs, despite having a population of approximately 25,000, has the largest dialysis unit in the Southern Hemisphere. Many people must move to Alice Springs from remote Indigenous communities to access health services such as haemodialysis, which results in housing shortages, overcrowding, and poor living conditions.
History In 1913, Leonard Rowntree and John Jacob Abel of Johns Hopkins Hospital developed the first dialysis system which they successfully tested in animals. A Dutch doctor, Willem Johan Kolff, constructed the first working dialyzer in 1943 during the Nazi occupation of the Netherlands. Due to the scarcity of available resources, Kolff had to improvise and build the initial machine using sausage casings, beverage cans, a washing machine and various other items that were available at the time. Over the following two years (1944–1945), Kolff used his machine to treat 16 patients with acute kidney failure, but the results were unsuccessful. Then, in 1945, a 67-year-old comatose woman regained consciousness following 11 hours of hemodialysis with the dialyzer and lived for another seven years before dying from an unrelated condition. She was the first-ever patient successfully treated with dialysis. Gordon Murray of the University of Toronto independently developed a dialysis machine in 1945. Unlike Kolff's rotating drum, Murray's machine used fixed flat plates, more like modern designs. Like Kolff, Murray's initial success was in patients with acute renal failure. Nils Alwall of Lund University in Sweden modified a similar construction to the Kolff dialysis machine by enclosing it inside a stainless steel canister. This allowed the removal of fluids, by applying a negative pressure to the outside canister, thus making it the first truly practical device for hemodialysis. Alwall treated his first patient in acute kidney failure on 3 September 1946.
Technology
Techniques
null
56512
https://en.wikipedia.org/wiki/Capillary
Capillary
A capillary is a small blood vessel, from 5 to 10 micrometres in diameter, and is part of the microcirculation system. Capillaries are microvessels and the smallest blood vessels in the body. They are composed of only the tunica intima (the innermost layer of an artery or vein), consisting of a thin wall of simple squamous endothelial cells. They are the site of the exchange of many substances from the surrounding interstitial fluid, and they convey blood from the smallest branches of the arteries (arterioles) to those of the veins (venules). Other substances which cross capillaries include water, oxygen, carbon dioxide, urea, glucose, uric acid, lactic acid and creatinine. Lymph capillaries connect with larger lymph vessels to drain lymphatic fluid collected in the microcirculation. Etymology Capillary comes from the Latin word capillaris, meaning "of or resembling hair", with use in English beginning in the mid-17th century. The meaning stems from the tiny, hairlike diameter of a capillary. While capillary is usually used as a noun, the word also is used as an adjective, as in "capillary action", in which a liquid flows without influence of external forces, such as gravity. Structure Blood flows from the heart through arteries, which branch and narrow into arterioles, and then branch further into capillaries where nutrients and wastes are exchanged. The capillaries then join and widen to become venules, which in turn widen and converge to become veins, which then return blood back to the heart through the venae cavae. In the mesentery, metarterioles form an additional stage between arterioles and capillaries. Individual capillaries are part of the capillary bed, an interweaving network of capillaries supplying tissues and organs. The more metabolically active a tissue is, the more capillaries are required to supply nutrients and carry away products of metabolism. There are two types of capillaries: true capillaries, which branch from arterioles and provide exchange between tissue and the capillary blood, and sinusoids, a type of open-pore capillary found in the liver, bone marrow, anterior pituitary gland, and brain circumventricular organs. Capillaries and sinusoids are short vessels that directly connect the arterioles and venules at opposite ends of the beds. Metarterioles are found primarily in the mesenteric microcirculation. Lymphatic capillaries are slightly larger in diameter than blood capillaries, and have closed ends (unlike blood capillaries, which are open at one end to the arterioles and at the other end to the venules). This structure permits interstitial fluid to flow into them but not out. Lymph capillaries have a greater internal oncotic pressure than blood capillaries, due to the greater concentration of plasma proteins in the lymph. Types Blood capillaries are categorized into three types: continuous, fenestrated, and sinusoidal (also known as discontinuous). Continuous Continuous capillaries are continuous in the sense that the endothelial cells provide an uninterrupted lining, and they only allow smaller molecules, such as water and ions, to pass through their intercellular clefts. Lipid-soluble molecules can passively diffuse through the endothelial cell membranes along concentration gradients. Continuous capillaries can be further divided into two subtypes: Those with numerous transport vesicles, which are found primarily in skeletal muscles, fingers, gonads, and skin. Those with few vesicles, which are primarily found in the central nervous system.
These capillaries are a constituent of the blood–brain barrier. Fenestrated Fenestrated capillaries have pores known as fenestrae (Latin for "windows") in the endothelial cells that are 60–80 nanometres (nm) in diameter. They are spanned by a diaphragm of radially oriented fibrils that allows small molecules and limited amounts of protein to diffuse. In the renal glomerulus the capillaries are wrapped in podocyte foot processes or pedicels, which have slit pores with a function analogous to the diaphragm of the capillaries. Both of these types of blood vessels have continuous basal laminae and are primarily located in the endocrine glands, intestines, pancreas, and the glomeruli of the kidney. Sinusoidal Sinusoidal capillaries or discontinuous capillaries are a special type of open-pore capillary, also known as a sinusoid, that have wider fenestrations that are 30–40 micrometres (μm) in diameter, with wider openings in the endothelium. Fenestrated capillaries have diaphragms that cover the pores whereas sinusoids lack a diaphragm and just have an open pore. These types of blood vessels allow red and white blood cells (7.5 μm – 25 μm diameter) and various serum proteins to pass, aided by a discontinuous basal lamina. These capillaries lack pinocytotic vesicles, and therefore use gaps present in cell junctions to permit transfer between endothelial cells, and hence across the membrane. Sinusoids are irregular spaces filled with blood and are mainly found in the liver, bone marrow, spleen, and brain circumventricular organs. Development During early embryonic development, new capillaries are formed through vasculogenesis, the process of blood vessel formation that occurs through a novel production of endothelial cells that then form vascular tubes. The term angiogenesis denotes the formation of new capillaries from pre-existing blood vessels and already-present endothelium which divides. The small capillaries lengthen and interconnect to establish a network of vessels, a primitive vascular network that vascularises the entire yolk sac, connecting stalk, and chorionic villi. Function The capillary wall performs an important function by allowing nutrients and waste substances to pass across it. Molecules larger than 3 nm such as albumin and other large proteins pass through transcellular transport carried inside vesicles, a process which requires them to go through the cells that form the wall. Molecules smaller than 3 nm such as water and gases cross the capillary wall through the space between cells in a process known as paracellular transport. These transport mechanisms allow bidirectional exchange of substances depending on osmotic gradients. Capillaries that form part of the blood–brain barrier only allow for transcellular transport as tight junctions between endothelial cells seal the paracellular space. Capillary beds may control their blood flow via autoregulation. This allows an organ to maintain constant flow despite a change in central blood pressure. This is achieved by myogenic response, and in the kidney by tubuloglomerular feedback. When blood pressure increases, arterioles are stretched and subsequently constrict (a phenomenon known as the Bayliss effect) to counteract the increased tendency for high pressure to increase blood flow. In the lungs, special mechanisms have been adapted to meet the needs of increased necessity of blood flow during exercise. 
When the heart rate increases and more blood must flow through the lungs, capillaries are recruited and are also distended to make room for increased blood flow. This allows blood flow to increase while resistance decreases. Extreme exercise can make capillaries vulnerable, with a breaking point similar to that of collagen. Capillary permeability can be increased by the release of certain cytokines, anaphylatoxins, or other mediators (such as leukotrienes, prostaglandins, histamine, bradykinin, etc.) highly influenced by the immune system. Starling equation The transport mechanisms can be further quantified by the Starling equation. The Starling equation defines the forces across a semipermeable membrane and allows calculation of the net flux: Jv = Kf([Pc − Pi] − σ[πc − πi]), where ([Pc − Pi] − σ[πc − πi]) is the net driving force, Kf is the proportionality constant, and Jv is the net fluid movement between compartments. By convention, outward force is defined as positive, and inward force is defined as negative. The solution to the equation is known as the net filtration or net fluid movement (Jv). If positive, fluid will tend to leave the capillary (filtration). If negative, fluid will tend to enter the capillary (absorption). This equation has a number of important physiologic implications, especially when pathologic processes grossly alter one or more of the variables. According to Starling's equation, the movement of fluid depends on six variables: Capillary hydrostatic pressure (Pc) Interstitial hydrostatic pressure (Pi) Capillary oncotic pressure (πc) Interstitial oncotic pressure (πi) Filtration coefficient (Kf) Reflection coefficient (σ) Clinical significance Disorders of capillary formation, whether as a developmental defect or an acquired condition, are a feature of many common and serious diseases. Within a wide range of cellular factors and cytokines, issues with normal genetic expression and bioactivity of the vascular growth and permeability factor vascular endothelial growth factor (VEGF) appear to play a major role in many of the disorders. Cellular factors include a reduced number and function of bone-marrow-derived endothelial progenitor cells and a reduced ability of those cells to form blood vessels. Formation of additional capillaries and larger blood vessels (angiogenesis) is a major mechanism by which a cancer may help to enhance its own growth. Disorders of retinal capillaries contribute to the pathogenesis of age-related macular degeneration. Reduced capillary density (capillary rarefaction) occurs in association with cardiovascular risk factors and in patients with coronary heart disease. Therapeutics Major diseases where altering capillary formation could be helpful include conditions where there is excessive or abnormal capillary formation, such as cancer and disorders harming eyesight, and medical conditions in which there is reduced capillary formation either for familial or genetic reasons, or as an acquired problem. In patients with the retinal disorder neovascular age-related macular degeneration, local anti-VEGF therapy to limit the bio-activity of vascular endothelial growth factor has been shown to protect vision by limiting progression. In a wide range of cancers, treatment approaches have been studied, or are in development, aimed at decreasing tumour growth by reducing angiogenesis. Blood sampling Capillary blood sampling can be used to test for blood glucose (such as in blood glucose monitoring), hemoglobin, pH and lactate.
It is generally performed by creating a small cut using a blood lancet, followed by sampling by capillary action on the cut with a test strip or small pipette. It is also used to test for sexually transmitted infections that are present in the blood stream, such as HIV, syphilis, and hepatitis B and C, where a finger is lanced and a small amount of blood is sampled into a test tube. History A 13th century manuscript by Ibn Nafis contains the earliest known description of capillaries. The manuscript records Ibn Nafis' prediction of the existence of the capillaries which he described as perceptible passages (manafidh) between pulmonary artery and pulmonary vein. These passages would later be identified by Marcello Malpighi as capillaries. He further states that the heart's two main chambers (right and left ventricles) are separate and that blood cannot pass through the (interventricular) septum. William Harvey did not explicitly predict the existence of capillaries, but he saw the need for some sort of connection between the arterial and venous systems. In 1653, he wrote, "...the blood doth enter into every member through the arteries, and does return by the veins, and that the veins are the vessels and ways by which the blood is returned to the heart itself; and that the blood in the members and extremities does pass from the arteries into the veins (either mediately by an anastomosis, or immediately through the porosities of the flesh, or both ways) as before it did in the heart and thorax out of the veins, into the arteries..." Marcello Malpighi was the first to observe directly and correctly describe capillaries, discovering them in a frog's lung 8 years later, in 1661. August Krogh discovered how capillaries provide nutrients to animal tissue. For his work he was awarded the 1920 Nobel Prize in Physiology or Medicine. His 1922 estimate that total length of capillaries in a human body is as long as 100,000 km, had been widely adopted by textbooks and other secondary sources. This estimate was based on figures he gathered from "an extraordinarily large person". More recent estimates give a number between 9,000 and 19,000 km.
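The Starling relationship described above can be illustrated with a short numerical sketch. The values below are assumed, textbook-style figures rather than numbers from this article, and the filtration coefficient Kf is left symbolic, so the sketch only reports the net driving force and its direction.

```python
def starling_net_driving_force(p_c: float, p_i: float, pi_c: float, pi_i: float,
                               sigma: float = 1.0) -> float:
    """Net driving force (mmHg) across the capillary wall:
    (Pc - Pi) - sigma * (pi_c - pi_i). Positive values favour filtration
    (fluid leaving the capillary); negative values favour absorption."""
    return (p_c - p_i) - sigma * (pi_c - pi_i)

if __name__ == "__main__":
    # Assumed illustrative values (mmHg), roughly the arteriolar end of a capillary.
    force = starling_net_driving_force(p_c=35.0, p_i=0.0, pi_c=25.0, pi_i=3.0)
    direction = "filtration (outward)" if force > 0 else "absorption (inward)"
    print(f"Net driving force = {force:.0f} mmHg -> {direction}")
    # The actual volume flux Jv = Kf * force scales with the filtration
    # coefficient Kf, which depends on the tissue.
```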
Biology and health sciences
Circulatory system
null
56527
https://en.wikipedia.org/wiki/Galactose
Galactose
Galactose (, galacto- + -ose, "milk sugar"), sometimes abbreviated Gal, is a monosaccharide sugar that is about as sweet as glucose, and about 65% as sweet as sucrose. It is an aldohexose and a C-4 epimer of glucose. A galactose molecule linked with a glucose molecule forms a lactose molecule. Galactan is a polymeric form of galactose found in hemicellulose, and forming the core of the galactans, a class of natural polymeric carbohydrates. D-Galactose is also known as brain sugar since it is a component of glycoproteins (oligosaccharide-protein compounds) found in nerve tissue. Etymology The word galactose was coined by Charles Weissman in the mid-19th century and is derived from Greek γαλακτος, galaktos, (of milk) and the generic chemical suffix for sugars -ose. The etymology is comparable to that of the word lactose in that both contain roots meaning "milk sugar". Lactose is a disaccharide of galactose plus glucose. Structure and isomerism Galactose exists in both open-chain and cyclic form. The open-chain form has a carbonyl at the end of the chain. Four isomers are cyclic, two of them with a pyranose (six-membered) ring, two with a furanose (five-membered) ring. Galactofuranose occurs in bacteria, fungi and protozoa, and is recognized by a putative chordate immune lectin intelectin through its exocyclic 1,2-diol. In the cyclic form there are two anomers, named alpha and beta, since the transition from the open-chain form to the cyclic form involves the creation of a new stereocenter at the site of the open-chain carbonyl. The IR spectra for galactose shows a broad, strong stretch from roughly wavenumber 2500 cm−1 to wavenumber 3700 cm−1. The Proton NMR spectra for galactose includes peaks at 4.7 ppm (D2O), 4.15 ppm (−CH2OH), 3.75, 3.61, 3.48 and 3.20 ppm (−CH2 of ring), 2.79–1.90 ppm (−OH). Relationship to lactose Galactose is a monosaccharide. When combined with glucose (another monosaccharide) through a condensation reaction, the result is a disaccharide called lactose. The hydrolysis of lactose to glucose and galactose is catalyzed by the enzymes lactase and β-galactosidase. The latter is produced by the lac operon in Escherichia coli. In nature, lactose is found primarily in milk and milk products. Consequently, various food products made with dairy-derived ingredients can contain lactose. Galactose metabolism, which converts galactose into glucose, is carried out by the three principal enzymes in a mechanism known as the Leloir pathway. The enzymes are listed in the order of the metabolic pathway: galactokinase (GALK), galactose-1-phosphate uridyltransferase (GALT), and UDP-galactose-4’-epimerase (GALE). In human lactation, galactose is required in a 1 to 1 ratio with glucose to enable the mammary glands to synthesize and secrete lactose. In a study where women were fed a diet containing galactose, 69 ± 6% of glucose and 54 ± 4% of galactose in the lactose they produced were derived directly from plasma glucose, while 7 ± 2% of the glucose and 12 ± 2% of the galactose in the lactose, were derived directly from plasma galactose. 25 ± 8% of the glucose and 35 ± 6% of the galactose was synthesized from smaller molecules such as glycerol or acetate in a process referred to in the paper as hexoneogenesis. This suggests that the synthesis of galactose is supplemented by direct uptake and of use of plasma galactose when present. 
Metabolism Glucose is more stable than galactose and is less susceptible to the formation of nonspecific glycoconjugates, molecules with at least one sugar attached to a protein or lipid. Many speculate that it is for this reason that a pathway for rapid conversion from galactose to glucose has been highly conserved among many species. The main pathway of galactose metabolism is the Leloir pathway; humans and other species, however, have been noted to contain several alternate pathways, such as the De Ley Doudoroff Pathway. The Leloir pathway consists of the latter stage of a two-part process that converts β-D-galactose to UDP-glucose. The initial stage is the conversion of β-D-galactose to α-D-galactose by the enzyme, mutarotase (GALM). The Leloir pathway then carries out the conversion of α-D-galactose to UDP-glucose via three principal enzymes: Galactokinase (GALK) phosphorylates α-D-galactose to galactose-1-phosphate, or Gal-1-P; Galactose-1-phosphate uridyltransferase (GALT) transfers a UMP group from UDP-glucose to Gal-1-P to form UDP-galactose; and finally, UDP galactose-4’-epimerase (GALE) interconverts UDP-galactose and UDP-glucose, thereby completing the pathway. The above mechanisms for galactose metabolism are necessary because the human body cannot directly convert galactose into energy, and must first go through one of these processes in order to utilize the sugar. Galactosemia is an inability to properly break down galactose due to a genetically inherited mutation in one of the enzymes in the Leloir pathway. As a result, the consumption of even small quantities is harmful to galactosemics. Sources Galactose is found in dairy products, avocados, sugar beets, other gums and mucilages. It is also synthesized by the body, where it forms part of glycolipids and glycoproteins in several tissues; and is a by-product from the third-generation ethanol production process (from macroalgae). Clinical significance Chronic systemic exposure of mice, rats, and Drosophila to D-galactose causes the acceleration of senescence (aging). It has been reported that high dose exposure of D-galactose (120 mg/kg) can cause reduced sperm concentration and sperm motility in rodents and has been extensively used as an aging model when administered subcutaneously. Two studies have suggested a possible link between galactose in milk and ovarian cancer. Other studies show no correlation, even in the presence of defective galactose metabolism. More recently, pooled analysis done by the Harvard School of Public Health showed no specific correlation between lactose-containing foods and ovarian cancer, and showed statistically insignificant increases in risk for consumption of lactose at 30 g/day. More research is necessary to ascertain possible risks. Some ongoing studies suggest galactose may have a role in treatment of focal segmental glomerulosclerosis (a kidney disease resulting in kidney failure and proteinuria). This effect is likely to be a result of binding of galactose to FSGS factor. Galactose is a component of the antigens (chemical markers) present on blood cells that distinguish blood type within the ABO blood group system. In O and A antigens, there are two monomers of galactose on the antigens, whereas in the B antigens there are three monomers of galactose. A disaccharide composed of two units of galactose, galactose-alpha-1,3-galactose (alpha-gal), has been recognized as a potential allergen present in mammal meat. Alpha-gal allergy may be triggered by lone star tick bites. 
Galactose in sodium saccharin solution has also been found to cause conditioned flavor avoidance in adult female rats within a laboratory setting when combined with intragastric injections. The reason for this flavor avoidance is still unknown, however it is possible that a decrease in the levels of the enzymes required to convert galactose to glucose in the liver of the rats could be responsible. History In 1855, E. O. Erdmann noted that hydrolysis of lactose produced a substance besides glucose. Galactose was first isolated and studied by Louis Pasteur in 1856 and he called it "lactose". In 1860, Berthelot renamed it "galactose" or "glucose lactique". In 1894, Emil Fischer and Robert Morrell determined the configuration of galactose.
Biology and health sciences
Carbohydrates
Biology
56552
https://en.wikipedia.org/wiki/Jet%20lag
Jet lag
Jet lag is a temporary physiological condition that occurs when a person's circadian rhythm is out of sync with the time zone they are in, and it typically results from travelling rapidly across multiple time zones (east–west or west–east). For example, someone travelling from New York to London, i.e. from west to east, feels as if the time were five hours earlier than local time, and someone travelling from London to New York, i.e. from east to west, feels as if the time were five hours later than local time. The phase shift when travelling from east to west is referred to as phase-delay of the circadian cycle, whereas going west to east is phase-advance of the cycle. Most travellers find that it is harder to adjust time zones when travelling east. Jet lag was previously classified as a circadian rhythm sleep disorder. The condition may last several days before a traveller becomes fully adjusted to a new time zone; it takes on average one day per time zone crossed to reach circadian reentrainment. Jet lag is especially an issue for airline pilots, aircraft crew, and frequent travellers. Airlines have regulations aimed at combating pilot fatigue caused by jet lag. The term jet lag came into use after the arrival of jet aircraft, because prior to that it was uncommon to travel far and fast enough to cause the condition. Discovery According to a 1969 study by the Federal Aviation Administration, aviator Wiley Post was the first to write about the effects of flying across time zones in his 1931 co-authored book, Around the World in Eight Days. Signs and symptoms The symptoms of jet lag can be quite varied, depending on the amount of time zone alteration, time of day, and individual differences. Sleep disturbance occurs, with poor sleep upon arrival or sleep disruptions such as trouble falling asleep (when flying east), early awakening (when flying west), and trouble remaining asleep. Cognitive effects include poorer performance on mental tasks and concentration; dizziness, nausea, insomnia, confusion, anxiety, increased fatigue, headaches, and irritability; and problems with digestion, including indigestion, changes in the frequency and consistency of bowel movements, and reduced appetite. The symptoms are caused by a circadian rhythm that is out of sync with the day–night cycle of the destination, as well as the possibility of internal desynchronisation. Jet lag has been measured with simple analogue scales, but a study has shown that these are relatively blunt for assessing all the problems associated with jet lag. The Liverpool Jet Lag Questionnaire was developed to measure all the symptoms of jet lag at several times of day, and has been used to assess jet lag in athletes. Jet lag may require a change of three time zones or more to occur, though some individuals can be affected by as little as a single time zone or the single-hour shift to or from daylight saving time. Symptoms and consequences of jet lag can be a significant concern for athletes travelling east or west to competitions, as performance is often dependent on a combination of physical and mental characteristics that are affected by jet lag. This is a common concern at international sporting events such as the Olympics and the FIFA World Cup; however, many athletes arrive at least 2–4 weeks ahead of these events to help adjust to any jet lag issues.
Travel fatigue Travel fatigue is general fatigue, disorientation, and headache caused by a disruption in routine, time spent in a cramped space with little chance to move around, a low-oxygen environment, and dehydration caused by dry air and limited food and drink. It does not necessarily involve the shift in circadian rhythms that causes jet lag. Travel fatigue can occur without crossing time zones, and it often disappears after one day accompanied by a night of good-quality sleep. Cause Jet lag is a chronobiological problem, similar to issues often induced by shift work and circadian rhythm sleep disorders. When travelling across a number of time zones, a person's body clock (circadian rhythm) will be out of synchronisation with the destination time, as it experiences daylight and darkness contrary to the rhythms to which it was accustomed. The body's natural pattern is disturbed, as the rhythms that dictate times for eating, sleeping, hormone regulation, body temperature variation, and other functions no longer correspond to the environment, nor to each other in some cases. To the degree that the body cannot immediately realign these rhythms, it is jet lagged. The speed at which the body adjusts to a new rhythm depends on the individual as well as the direction of travel; some people may require several days to adjust to a new time zone, while others experience little disruption. Crossing the International Date Line does not in itself contribute to jet lag, as the guide for calculating jet lag is the number of time zones crossed, with a maximum possible time difference of plus or minus 12 hours. If the absolute time difference between two locations is greater than 12 hours, one must subtract 24 from or add 24 to that number. For example, the time zone UTC+14 will be at the same time of day as UTC−10, though the former is one day ahead of the latter. Jet lag is linked only to the distance travelled along the east–west axis. A ten-hour flight between Europe and southern Africa does not cause jet lag, as the direction of travel is primarily north–south. A four-hour flight between Miami, Florida, and Phoenix, Arizona, in the United States may result in jet lag, as the direction of travel is primarily east–west. Double desynchronisation There are two separate processes related to biological timing: circadian oscillators and homeostasis. The circadian system is located in the suprachiasmatic nucleus (SCN) in the hypothalamus of the brain. The other process is homeostatic sleep propensity, which is a function of the amount of time elapsed since the last adequate sleep episode. The human body has a master clock in the SCN and peripheral oscillators in tissues. The SCN's role is to send signals to the peripheral oscillators, which synchronise them for physiological functions. The SCN responds to light information sent from the retina. It is hypothesised that peripheral oscillators respond to internal signals such as hormones, food intake, and "nervous stimuli". The implication of independent internal clocks may explain some of the symptoms of jet lag. People who travel across several time zones can, within a few days, adapt their sleep–wake cycles with light from the environment. However, their skeletal muscles, liver, lungs, and other organs will adapt at different rates.
This internal biological desynchronisation is exacerbated as the body is not in sync with the environment, a "double desynchronisation", which has implications for health and mood. Delayed sleep phase disorder Delayed sleep phase disorder is a medical disorder characterised by delayed sleeping time and a proportionately delayed waking time due to a phase delay in the internal biological master clock. Specific genotypes underlie this disorder. If allowed to sleep as dictated by their endogenous clock, these individuals will not have any ill effects as a result of their phase-shifted sleeping time. Management Light exposure Light is the strongest stimulus, or zeitgeber, for realigning a person's circadian cycle, and the key to quick adaptation is therefore timed light exposure based on the traveller's sleep pattern, chronotype, and plans. Timed light exposure can be effective to help people match their circadian rhythms with the expected cycle at their destination; it requires strict adherence to timing. Light therapy is a popular method used by professional athletes to reduce jet lag. Timed correctly, the light may contribute to an advance or delay of the circadian phase to match the destination. The US Centers for Disease Control and Prevention (CDC) recommends mobile apps for the correct timing of light exposure and avoidance, when to use caffeine, and when to sleep. Melatonin administration In addition to timed light exposure, the right type and dose of melatonin, at the right time, can help travellers shift faster and sleep better as they are transitioning between time zones. There are issues regarding the appropriate timing of melatonin use, in addition to the legality of the substance in certain countries. For athletes, anti-doping agencies may prohibit or limit its use. Melatonin can be considered to be a darkness signal, with effects on circadian timing that are the opposite of the effects of exposure to light. Melatonin receptors are situated on the suprachiasmatic nucleus, which is the anatomical site of the circadian clock. The results of a few field studies of melatonin administration, monitoring circadian phase, have provided evidence for a correlation between the reduction of jet lag symptoms and the accelerated realignment of the circadian clock. Short duration trips In the case of short duration trips, jet lag may be minimised by maintaining a sleep-wake schedule based on the originating time zone after arriving at the destination, but this strategy is often impractical in regard to desired social activities or work obligations. Shifting one's sleep schedule before departure by 1–2 hours to match the destination time zone may also shorten the duration of jet lag. Symptoms can be further reduced through a combination of artificial exposure to light and rescheduling, as these have been shown to augment phase-shifting. Pharmacotherapy The short-term use of hypnotic medication has shown efficacy in reducing insomnia related to jet lag. In a study, zolpidem improved sleep quality and reduced awakenings for people travelling across five to nine time zones. The potential adverse effects of hypnotic agents, like amnesia and confusion, have led some doctors to advise patients to test such medications prior to using them for treating jet lag. Several case reports of triazolam taken to promote sleep during a flight described dramatic global amnesia. Mental health implications Jet lag may affect the mental health of vulnerable individuals.
When travelling across time zones, there is a "phase-shift of body temperature, rapid-eye-movement sleep, melatonin production, and other circadian rhythms". A 2002 study found that relapse of bipolar and psychotic disorders occurred more frequently when seven or more time zones had been crossed in the past week than when three or fewer had been crossed. Although significant circadian rhythm disruption has been documented as affecting individuals with bipolar disorder, an Australian team studied suicide statistics from 1971 to 2001 to determine whether the one-hour shifts involved in daylight saving time had an effect. They found increased incidence of male suicide after the commencement of daylight saving time but not after returning to standard time.
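The wrapping rule described under Cause (a time-zone difference greater than 12 hours is reduced by adding or subtracting 24, so the effective shift is at most 12 hours in magnitude) can be expressed as a short sketch. This is purely illustrative; the function name and sign convention are assumptions, not part of the article.

```python
def circadian_shift_hours(origin_utc_offset: float, destination_utc_offset: float) -> float:
    """Effective east-west shift in hours, wrapped to at most 12 hours in magnitude.
    Positive values mean the destination clock is ahead (eastward travel)."""
    diff = destination_utc_offset - origin_utc_offset
    return ((diff + 12) % 24) - 12

if __name__ == "__main__":
    print(circadian_shift_hours(-5, 0))    # New York -> London: +5 hours
    print(circadian_shift_hours(0, -5))    # London -> New York: -5 hours
    print(circadian_shift_hours(14, -10))  # UTC+14 -> UTC-10: 0 (same time of day)
```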
Biology and health sciences
Mental disorders
Health
56558
https://en.wikipedia.org/wiki/Blood%20pressure
Blood pressure
Blood pressure (BP) is the pressure of circulating blood against the walls of blood vessels. Most of this pressure results from the heart pumping blood through the circulatory system. When used without qualification, the term "blood pressure" refers to the pressure in a brachial artery, where it is most commonly measured. Blood pressure is usually expressed in terms of the systolic pressure (maximum pressure during one heartbeat) over diastolic pressure (minimum pressure between two heartbeats) in the cardiac cycle. It is measured in millimeters of mercury (mmHg) above the surrounding atmospheric pressure, or in kilopascals (kPa). The difference between the systolic and diastolic pressures is known as pulse pressure, while the average pressure during a cardiac cycle is known as mean arterial pressure. Blood pressure is one of the vital signs—together with respiratory rate, heart rate, oxygen saturation, and body temperature—that healthcare professionals use in evaluating a patient's health. Normal resting blood pressure in an adult is approximately 120 mmHg systolic over 80 mmHg diastolic, denoted as "120/80 mmHg". Globally, the average blood pressure, age standardized, has remained about the same since 1975 to the present, at approximately 127/79 mmHg in men and 122/77 mmHg in women, although these average data mask significantly diverging regional trends. Traditionally, a health-care worker measured blood pressure non-invasively by auscultation (listening) through a stethoscope for sounds in one arm's artery as the artery is squeezed, closer to the heart, by an aneroid gauge or a mercury-tube sphygmomanometer. Auscultation is still generally considered to be the gold standard of accuracy for non-invasive blood pressure readings in clinic. However, semi-automated methods have become common, largely due to concerns about potential mercury toxicity, although cost, ease of use and applicability to ambulatory blood pressure or home blood pressure measurements have also influenced this trend. Early automated alternatives to mercury-tube sphygmomanometers were often seriously inaccurate, but modern devices validated to international standards achieve an average difference between two standardized reading methods of 5 mm Hg or less, and a standard deviation of less than 8 mm Hg. Most of these semi-automated methods measure blood pressure using oscillometry (measurement by a pressure transducer in the cuff of the device of small oscillations of intra-cuff pressure accompanying heartbeat-induced changes in the volume of each pulse). Blood pressure is influenced by cardiac output, systemic vascular resistance, blood volume and arterial stiffness, and varies depending on a person's situation, emotional state, activity and relative health or disease state. In the short term, blood pressure is regulated by baroreceptors, which act via the brain to influence the nervous and the endocrine systems. Blood pressure that is too low is called hypotension, pressure that is consistently too high is called hypertension, and normal pressure is called normotension. Both hypertension and hypotension have many causes and may be of sudden onset or of long duration. Long-term hypertension is a risk factor for many diseases, including stroke, heart disease, and kidney failure. Long-term hypertension is more common than long-term hypotension. Classification, normal and abnormal values Systemic arterial pressure Blood pressure measurements can be influenced by circumstances of measurement.
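Pulse pressure and mean arterial pressure, defined earlier in this article, can be estimated from a routine cuff reading. The sketch below uses the common one-third rule of thumb for mean arterial pressure at resting heart rates; this approximation is an assumption of the sketch, not something stated in the article.

```python
def pulse_pressure(systolic: float, diastolic: float) -> float:
    """Pulse pressure (mmHg): systolic minus diastolic."""
    return systolic - diastolic

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Rough estimate of mean arterial pressure (mmHg) at resting heart rates:
    diastolic plus one third of the pulse pressure."""
    return diastolic + pulse_pressure(systolic, diastolic) / 3.0

if __name__ == "__main__":
    # Example reading of 120/80 mmHg.
    print(f"Pulse pressure: {pulse_pressure(120, 80):.0f} mmHg")                  # 40
    print(f"Mean arterial pressure: {mean_arterial_pressure(120, 80):.0f} mmHg")  # ~93
```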
Guidelines use different thresholds for office (also known as clinic), home (when the person measures their own blood pressure at home), and ambulatory blood pressure (using an automated device over a 24-hour period). The risk of cardiovascular disease increases progressively above 90 mmHg, especially among women. Observational studies demonstrate that people who maintain arterial pressures at the low end of these pressure ranges have much better long-term cardiovascular health. There is an ongoing medical debate over the optimal level of blood pressure to target when using drugs to lower blood pressure in people with hypertension, particularly in older people. Blood pressure fluctuates from minute to minute and normally shows a circadian rhythm over a 24-hour period, with highest readings in the early morning and evenings and lowest readings at night. Loss of the normal fall in blood pressure at night is associated with a greater future risk of cardiovascular disease, and there is evidence that night-time blood pressure is a stronger predictor of cardiovascular events than day-time blood pressure. Blood pressure varies over longer time periods (months to years) and this variability predicts adverse outcomes. Blood pressure also changes in response to temperature, noise, emotional stress, consumption of food or liquid, dietary factors, physical activity, changes in posture (such as standing up), drugs, and disease. The variability in blood pressure and the better predictive value of ambulatory blood pressure measurements have led some authorities, such as the National Institute for Health and Care Excellence (NICE) in the UK, to advocate for the use of ambulatory blood pressure as the preferred method for diagnosis of hypertension. Various other factors, such as age and sex, also influence a person's blood pressure. Differences between left-arm and right-arm blood pressure measurements tend to be small. However, occasionally there is a consistent difference greater than 10 mmHg, which may need further investigation, e.g. for peripheral arterial disease, obstructive arterial disease or aortic dissection. There is no accepted diagnostic standard for hypotension, although pressures less than 90/60 mmHg are commonly regarded as hypotensive. In practice blood pressure is considered too low only if symptoms are present.
Systemic arterial pressure and age
Fetal blood pressure
In pregnancy, it is the fetal heart and not the mother's heart that builds up the fetal blood pressure to drive blood through the fetal circulation. The blood pressure in the fetal aorta is approximately 30 mmHg at 20 weeks of gestation, and increases to approximately 45 mmHg at 40 weeks of gestation. The average blood pressure for full-term infants is 65–95 mmHg systolic and 30–60 mmHg diastolic.
Childhood
In children the normal ranges for blood pressure are lower than for adults and depend on height. Reference blood pressure values have been developed for children in different countries, based on the distribution of blood pressure in children of these countries.
Aging adults
In adults in most societies, systolic blood pressure tends to rise from early adulthood onward, up to at least age 70; diastolic pressure tends to begin rising at the same time but starts to fall earlier, in mid-life at approximately age 55. Mean blood pressure rises from early adulthood, plateauing in mid-life, while pulse pressure rises quite markedly after the age of 40. 
Consequently, in many older people, systolic blood pressure often exceeds the normal adult range; if the diastolic pressure is in the normal range, this is termed isolated systolic hypertension. The rise in pulse pressure with age is attributed to increased stiffness of the arteries. An age-related rise in blood pressure is not considered healthy and is not observed in some isolated unacculturated communities.
Systemic venous pressure
Blood pressure generally refers to the arterial pressure in the systemic circulation. However, measurement of pressures in the venous system and the pulmonary vessels plays an important role in intensive care medicine; it requires invasive measurement of pressure using a catheter. Venous pressure is the vascular pressure in a vein or in the atria of the heart. It is much lower than arterial pressure, with common values of 5 mmHg in the right atrium and 8 mmHg in the left atrium. Variants of venous pressure include:
Central venous pressure, which is a good approximation of right atrial pressure, which is a major determinant of right ventricular end diastolic volume. (However, there can be exceptions in some cases.)
The jugular venous pressure (JVP), the indirectly observed pressure over the venous system. It can be useful in the differentiation of different forms of heart and lung disease.
The portal venous pressure, the blood pressure in the portal vein. It is normally 5–10 mmHg.
Pulmonary pressure
Normally, the pressure in the pulmonary artery is about 15 mmHg at rest. Increased blood pressure in the capillaries of the lung causes pulmonary hypertension, leading to interstitial edema if the pressure increases to above 20 mmHg, and to pulmonary edema at pressures above 25 mmHg.
Aortic pressure
Aortic pressure, also called central aortic blood pressure, or central blood pressure, is the blood pressure at the root of the aorta. Elevated aortic pressure has been found to be a more accurate predictor of both cardiovascular events and mortality, as well as structural changes in the heart, than has peripheral blood pressure (such as measured through the brachial artery). Traditionally, measuring aortic pressure required an invasive procedure, but there are now non-invasive methods of measuring it indirectly without a significant margin of error. Certain researchers have argued for physicians to begin using aortic pressure, as opposed to peripheral blood pressure, as a guide for clinical decisions. The way antihypertensive drugs impact peripheral blood pressure can often be very different from the way they impact central aortic pressure.
Mean systemic pressure
If the heart is stopped, blood pressure falls, but it does not fall to zero. The remaining pressure measured after cessation of the heartbeat and redistribution of blood throughout the circulation is termed the mean systemic pressure or mean circulatory filling pressure; typically this is approximately 7 mmHg.
Disorders of blood pressure
Disorders of blood pressure control include high blood pressure, low blood pressure, and blood pressure that shows excessive or maladaptive fluctuation.
High blood pressure
Arterial hypertension can be an indicator of other problems and may have long-term adverse effects. Sometimes it can be an acute problem, such as in a hypertensive emergency when blood pressure is more than 180/120 mmHg. Levels of arterial pressure put mechanical stress on the arterial walls. 
Higher pressures increase heart workload and progression of unhealthy tissue growth (atheroma) that develops within the walls of arteries. The higher the pressure, the greater the stress, the more the atheroma tend to progress, and the more the heart muscle tends to thicken, enlarge and become weaker over time. Persistent hypertension is one of the risk factors for strokes, heart attacks, heart failure, and arterial aneurysms, and is the leading cause of chronic kidney failure. Even moderate elevation of arterial pressure leads to shortened life expectancy. At severely high pressures, mean arterial pressures 50% or more above average, a person can expect to live no more than a few years unless appropriately treated. For people with high blood pressure, higher heart rate variability (HRV) is a risk factor for atrial fibrillation. Both high systolic pressure and high pulse pressure (the numerical difference between systolic and diastolic pressures) are risk factors. Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. In some cases, it appears that a decrease in excessive diastolic pressure can actually increase risk, probably due to the increased difference between systolic and diastolic pressures (i.e., a widened pulse pressure). If systolic blood pressure is elevated (>140 mmHg) with a normal diastolic blood pressure (<90 mmHg), it is called isolated systolic hypertension and may present a health concern. The 2017 American Heart Association blood pressure guidelines state that a systolic blood pressure of 130–139 mmHg with a diastolic pressure of 80–89 mmHg is "stage one hypertension". For those with heart valve regurgitation, a change in its severity may be associated with a change in diastolic pressure. In a study of people with heart valve regurgitation that compared measurements two weeks apart for each person, there was an increased severity of aortic and mitral regurgitation when diastolic blood pressure increased, whereas when diastolic blood pressure decreased, there was a decreased severity.
Low blood pressure
Blood pressure that is too low is known as hypotension. This is a medical concern if it causes signs or symptoms, such as dizziness, fainting, or, in extreme cases, circulatory shock, which is a medical emergency. Causes of low arterial pressure include sepsis, hypovolemia, bleeding, cardiogenic shock, reflex syncope, hormonal abnormalities such as Addison's disease, and eating disorders, particularly anorexia nervosa and bulimia.
Orthostatic hypotension
A large fall in blood pressure upon standing (typically a systolic/diastolic blood pressure decrease of >20/10 mmHg) is termed orthostatic hypotension (postural hypotension) and represents a failure of the body to compensate for the effect of gravity on the circulation. Standing results in an increased hydrostatic pressure in the blood vessels of the lower limbs. The consequent distension of the veins below the diaphragm (venous pooling) causes ~500 ml of blood to be relocated from the chest and upper body. This results in a rapid decrease in central blood volume and a reduction of ventricular preload, which in turn reduces stroke volume and mean arterial pressure. 
Normally this is compensated for by multiple mechanisms, including activation of the autonomic nervous system which increases heart rate, myocardial contractility and systemic arterial vasoconstriction to preserve blood pressure and elicits venous vasoconstriction to decrease venous compliance. Decreased venous compliance also results from an intrinsic myogenic increase in venous smooth muscle tone in response to the elevated pressure in the veins of the lower body. Other compensatory mechanisms include the veno-arteriolar axon reflex, the 'skeletal muscle pump' and 'respiratory pump'. Together these mechanisms normally stabilize blood pressure within a minute or less. If these compensatory mechanisms fail and arterial pressure and blood flow decrease beyond a certain point, the perfusion of the brain becomes critically compromised (i.e., the blood supply is not sufficient), causing lightheadedness, dizziness, weakness or fainting. Usually this failure of compensation is due to disease, or drugs that affect the sympathetic nervous system. A similar effect is observed following the experience of excessive gravitational forces (G-loading), such as routinely experienced by aerobatic or combat pilots 'pulling Gs' where the extreme hydrostatic pressures exceed the ability of the body's compensatory mechanisms. Variable or fluctuating blood pressure Some fluctuation or variation in blood pressure is normal. Variation in blood pressure that is significantly greater than the norm is known as labile hypertension and is associated with increased risk of cardiovascular disease brain small vessel disease, and dementia independent of the average blood pressure level. Recent evidence from clinical trials has also linked variation in blood pressure to mortality, stroke, heart failure, and cardiac changes that may give rise to heart failure. These data have prompted discussion of whether excessive variation in blood pressure should be treated, even among normotensive older adults. Older individuals and those who had received blood pressure medications are more likely to exhibit larger fluctuations in pressure, and there is some evidence that different antihypertensive agents have different effects on blood pressure variability; whether these differences translate to benefits in outcome is uncertain. Physiology During each heartbeat, blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. The blood pressure in the circulation is principally due to the pumping action of the heart. However, blood pressure is also regulated by neural regulation from the brain (see Hypertension and the brain), as well as osmotic regulation from the kidney. Differences in mean blood pressure drive the flow of blood around the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. In the absence of hydrostatic effects (e.g. standing), mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Pulsatility also diminishes in the smaller elements of the arterial circulation, although some transmitted pulsatility is observed in capillaries. 
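As a concrete illustration of the numerical definition given above (a sustained systolic/diastolic fall of more than 20/10 mmHg on standing), the check can be written out in a few lines. The following is a minimal Python sketch for illustration only, not a clinical tool; the function name and the example readings are invented here.

def is_orthostatic_hypotension(supine, standing):
    """Return True if the fall on standing exceeds 20 mmHg systolic or 10 mmHg diastolic.

    supine and standing are (systolic, diastolic) readings in mmHg.
    """
    systolic_fall = supine[0] - standing[0]
    diastolic_fall = supine[1] - standing[1]
    return systolic_fall > 20 or diastolic_fall > 10

# Invented example: 135/85 mmHg lying down, 110/78 mmHg after standing.
print(is_orthostatic_hypotension((135, 85), (110, 78)))  # True: a 25 mmHg systolic fall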
Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure, particularly in veins.
Hemodynamics
A simple view of the hemodynamics of systemic arterial pressure is based around mean arterial pressure (MAP) and pulse pressure. Most influences on blood pressure can be understood in terms of their effect on cardiac output, systemic vascular resistance, or arterial stiffness (the inverse of arterial compliance). Cardiac output is the product of stroke volume and heart rate. Stroke volume is influenced by 1) the end-diastolic volume or filling pressure of the ventricle acting via the Frank–Starling mechanism—this is influenced by blood volume; 2) cardiac contractility; and 3) afterload, the impedance to blood flow presented by the circulation. In the short term, the greater the blood volume, the higher the cardiac output. This has been proposed as an explanation of the relationship between high dietary salt intake and increased blood pressure; however, responses to increased dietary sodium intake vary between individuals and are highly dependent on autonomic nervous system responses and the renin–angiotensin system; changes in plasma osmolarity may also be important. In the longer term the relationship between volume and blood pressure is more complex. In simple terms, systemic vascular resistance is mainly determined by the caliber of small arteries and arterioles. The resistance attributable to a blood vessel depends on its radius as described by the Hagen–Poiseuille equation (resistance ∝ 1/radius⁴). Hence, the smaller the radius, the higher the resistance; halving a vessel's radius, for example, increases its resistance to flow sixteen-fold. Other physical factors that affect resistance include: vessel length (the longer the vessel, the higher the resistance), blood viscosity (the higher the viscosity, the higher the resistance) and the number of vessels, particularly the smaller, more numerous arterioles and capillaries. The presence of a severe arterial stenosis increases resistance to flow; however, this increase in resistance rarely increases systemic blood pressure because its contribution to total systemic resistance is small, although it may profoundly decrease downstream flow. Substances called vasoconstrictors reduce the caliber of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the caliber of blood vessels, thereby decreasing arterial pressure. In the longer term a process termed remodeling also contributes to changing the caliber of small blood vessels and influencing resistance and reactivity to vasoactive agents. Reductions in capillary density, termed capillary rarefaction, may also contribute to increased resistance in some circumstances. In practice, each individual's autonomic nervous system and other systems regulating blood pressure, notably the kidney, respond to and regulate all these factors so that, although the above issues are important, they rarely act in isolation and the actual arterial pressure response of a given individual can vary widely in the short and long term.
Pulse pressure
The pulse pressure is the difference between the measured systolic and diastolic pressures. It is a consequence of the pulsatile nature of the cardiac output, i.e. the heartbeat. 
The magnitude of the pulse pressure is usually attributed to the interaction of the stroke volume of the heart, the compliance (ability to expand) of the arterial system—largely attributable to the aorta and large elastic arteries—and the resistance to flow in the arterial tree.
Clinical significance of pulse pressure
A healthy pulse pressure is around 40 mmHg. A pulse pressure that is consistently 60 mmHg or greater is likely to be associated with disease, and a pulse pressure of 50 mmHg or more increases the risk of cardiovascular disease as well as other complications such as eye and kidney disease. Pulse pressure is considered low if it is less than 25% of the systolic pressure. (For example, if the systolic pressure is 120 mmHg, then the pulse pressure would be considered low if it is less than 30 mmHg, since 30 is 25% of 120.) A very low pulse pressure can be a symptom of disorders such as congestive heart failure. Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. This increased risk exists for both men and women and even when no other cardiovascular risk factors are present. The increased risk also exists even in cases in which diastolic pressure decreases over time while systolic remains steady. A meta-analysis in 2000 showed that a 10 mmHg increase in pulse pressure was associated with a 20% increased risk of cardiovascular mortality, and a 13% increase in risk for all coronary end points. The study authors also noted that, while risks of cardiovascular end points do increase with higher systolic pressures, at any given systolic blood pressure the risk of major cardiovascular end points increases, rather than decreases, with lower diastolic levels. This suggests that interventions that lower diastolic pressure without also lowering systolic pressure (and thus lowering pulse pressure) could actually be counterproductive. There are no drugs currently approved to lower pulse pressure, although some antihypertensive drugs may modestly lower pulse pressure, while in some cases a drug that lowers overall blood pressure may actually have the counterproductive side effect of raising pulse pressure. Pulse pressure can either widen or narrow in people with sepsis depending on the degree of hemodynamic compromise. A pulse pressure of over 70 mmHg in sepsis is correlated with an increased chance of survival and a more positive response to IV fluids.
Mean arterial pressure
Mean arterial pressure (MAP) is the average of blood pressure over a cardiac cycle and is determined by the cardiac output (CO), systemic vascular resistance (SVR), and central venous pressure (CVP): MAP = (CO × SVR) + CVP. In practice, the contribution of CVP (which is small) is generally ignored, and so MAP is often estimated from measurements of the systolic pressure (SP) and the diastolic pressure (DP) using the equation MAP ≈ DP + k(SP − DP), where k = 0.333, although other values for k have been advocated. (For a typical reading of 120/80 mmHg, this estimate gives a MAP of approximately 93 mmHg.)
Regulation of blood pressure
The endogenous, homeostatic regulation of arterial pressure is not completely understood, but the following mechanisms of regulating arterial pressure have been well-characterized:
Baroreceptor reflex: Baroreceptors in the high pressure receptor zones detect changes in arterial pressure. These baroreceptors send signals ultimately to the medulla of the brain stem, specifically to the rostral ventrolateral medulla (RVLM). 
The medulla, by way of the autonomic nervous system, adjusts the mean arterial pressure by altering both the force and speed of the heart's contractions, as well as the systemic vascular resistance. The most important arterial baroreceptors are located in the left and right carotid sinuses and in the aortic arch. Renin–angiotensin system (RAS): This system is generally known for its long-term adjustment of arterial pressure. This system allows the kidney to compensate for loss in blood volume or drops in arterial pressure by activating an endogenous vasoconstrictor known as angiotensin II. Aldosterone release: This steroid hormone is released from the adrenal cortex in response to activation of the renin-angiotensin system, high serum potassium levels, or elevated adrenocorticotropic hormone (ACTH). Renin converts angiotensinogen to angiotensin I, which is converted by angiotensin converting enzyme to angiotensin II. Angiotensin II then signals to the adrenal cortex to release aldosterone. Aldosterone stimulates sodium retention and potassium excretion by the kidneys and the consequent salt and water retention increases plasma volume, and indirectly, arterial pressure. Aldosterone may also exert direct pressor effects on vascular smooth muscle and central effects on sympathetic nervous system activity. Baroreceptors in low pressure receptor zones (mainly in the venae cavae and the pulmonary veins, and in the atria) result in feedback by regulating the secretion of antidiuretic hormone (ADH/vasopressin), renin and aldosterone. The resultant increase in blood volume results in an increased cardiac output by the Frank–Starling law of the heart, in turn increasing arterial blood pressure. These different mechanisms are not necessarily independent of each other, as indicated by the link between the RAS and aldosterone release. When blood pressure falls many physiological cascades commence in order to return the blood pressure to a more appropriate level. The blood pressure fall is detected by a decrease in blood flow and thus a decrease in glomerular filtration rate (GFR). Decrease in GFR is sensed as a decrease in Na+ levels by the macula densa. The macula densa causes an increase in Na+ reabsorption, which causes water to follow in via osmosis and leads to an ultimate increase in plasma volume. Further, the macula densa releases adenosine which causes constriction of the afferent arterioles. At the same time, the juxtaglomerular cells sense the decrease in blood pressure and release renin. Renin converts angiotensinogen (inactive form) to angiotensin I (active form). Angiotensin I flows in the bloodstream until it reaches the capillaries of the lungs where angiotensin-converting enzyme (ACE) acts on it to convert it into angiotensin II. Angiotensin II is a vasoconstrictor that will increase blood flow to the heart and subsequently the preload, ultimately increasing the cardiac output. Angiotensin II also causes an increase in the release of aldosterone from the adrenal glands. Aldosterone further increases the Na+ and H2O reabsorption in the distal convoluted tubule of the nephron. The RAS is targeted pharmacologically by ACE inhibitors and angiotensin II receptor antagonists (also known as angiotensin receptor blockers; ARB). The aldosterone system is directly targeted by aldosterone antagonists. The fluid retention may be targeted by diuretics; the antihypertensive effect of diuretics is due to its effect on blood volume. 
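Because the renin–angiotensin–aldosterone cascade described above involves several conversions, it can help to see the steps laid out in order. Purely as a reading aid, here is the same sequence written as an ordered Python list; the step descriptions only paraphrase the text above and add nothing new.

# The renin–angiotensin–aldosterone cascade as described above,
# written as (precursor, step, product) tuples.
RAS_CASCADE = [
    ("angiotensinogen", "renin (released by juxtaglomerular cells)", "angiotensin I"),
    ("angiotensin I", "angiotensin-converting enzyme in the lung capillaries", "angiotensin II"),
    ("angiotensin II", "signals the adrenal cortex", "aldosterone release"),
    ("aldosterone", "acts on the distal convoluted tubule", "Na+ and H2O reabsorption"),
    ("Na+ and H2O reabsorption", "expands plasma volume", "increased arterial pressure"),
]

for precursor, step, product in RAS_CASCADE:
    print(f"{precursor} --[{step}]--> {product}")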
Generally, the baroreceptor reflex is not targeted in hypertension because, if blocked, individuals may experience orthostatic hypotension and fainting.
Measurement
Arterial pressure is most commonly measured via a sphygmomanometer, which uses the height of a column of mercury, or an aneroid gauge, to reflect the blood pressure by auscultation. The most common automated blood pressure measurement technique is based on the oscillometric method. Fully automated oscillometric measurement has been available since 1981. This principle has recently been used to measure blood pressure with a smartphone. Measuring pressure invasively, by penetrating the arterial wall to take the measurement, is much less common and usually restricted to a hospital setting. Novel methods to measure blood pressure without penetrating the arterial wall, and without applying any pressure to the patient's body, are being explored, for example cuffless measurements that use only optical sensors. In office blood pressure measurement, terminal digit preference is common. According to one study, approximately 40% of recorded measurements ended with the digit zero, whereas "without bias, 10%–20% of measurements are expected to end in zero".
In animals
Blood pressure levels in non-human mammals may vary depending on the species. Heart rate differs markedly, largely depending on the size of the animal (larger animals have slower heart rates). The giraffe has a distinctly high arterial pressure of about 190 mm Hg, enabling blood perfusion through its long neck to the head. In other species subject to orthostatic changes in blood pressure, such as arboreal snakes, blood pressure is higher than in non-arboreal snakes. A heart near to the head (short heart-to-head distance) and a long tail with tight integument favor blood perfusion to the head. As in humans, blood pressure in animals differs by age, sex, time of day, and environmental circumstances: measurements made in laboratories or under anesthesia may not be representative of values under free-living conditions. Rats, mice, dogs and rabbits have been used extensively to study the regulation of blood pressure.
Hypertension in cats and dogs
Hypertension in cats and dogs is generally diagnosed if the blood pressure is greater than 150 mm Hg (systolic), although sight hounds have higher blood pressures than most other dog breeds; a systolic pressure greater than 180 mmHg is considered abnormal in these dogs.
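The terminal digit preference mentioned in the measurement discussion above is straightforward to check for in a set of recorded readings. The following is a minimal Python sketch; the readings are invented for illustration, and the 10%–20% expectation quoted above is the only benchmark assumed.

from collections import Counter

def terminal_digit_distribution(readings_mmhg):
    """Fraction of recorded values ending in each final digit (0-9)."""
    counts = Counter(int(value) % 10 for value in readings_mmhg)
    total = len(readings_mmhg)
    return {digit: counts.get(digit, 0) / total for digit in range(10)}

# Invented office systolic readings; a heavy excess of final zeros
# (well above the expected 10-20%) suggests digit preference.
readings = [120, 130, 142, 128, 150, 124, 136, 110, 140, 130]
dist = terminal_digit_distribution(readings)
print(f"Share ending in 0: {dist[0]:.0%}")  # 60% in this made-up sample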
Biology and health sciences
Medical procedures
null
56561
https://en.wikipedia.org/wiki/Anesthesia
Anesthesia
Anesthesia (American English) or anaesthesia (British English) is a state of controlled, temporary loss of sensation or awareness that is induced for medical or veterinary purposes. It may include some or all of analgesia (relief from or prevention of pain), paralysis (muscle relaxation), amnesia (loss of memory), and unconsciousness. An individual under the effects of anesthetic drugs is referred to as being anesthetized. Anesthesia enables the painless performance of procedures that would otherwise require physical restraint in a non-anesthetized individual, or would otherwise be technically unfeasible. Three broad categories of anesthesia exist: General anesthesia suppresses central nervous system activity and results in unconsciousness and total lack of sensation, using either injected or inhaled drugs. Sedation suppresses the central nervous system to a lesser degree, inhibiting both anxiety and creation of long-term memories without resulting in unconsciousness. Regional and local anesthesia block transmission of nerve impulses from a specific part of the body. Depending on the situation, this may be used either on its own (in which case the individual remains fully conscious), or in combination with general anesthesia or sedation. Local anesthesia is simple infiltration by the clinician directly onto the region of interest (e.g. numbing a tooth for dental work). Peripheral nerve blocks use drugs targeted at peripheral nerves to anesthetize an isolated part of the body, such as an entire limb. Neuraxial blockade, mainly epidural and spinal anesthesia, can be performed in the region of the central nervous system itself, suppressing all incoming sensation from nerves supplying the area of the block. In preparing for a medical or veterinary procedure, the clinician chooses one or more drugs to achieve the types and degree of anesthesia characteristics appropriate for the type of procedure and the particular patient. The types of drugs used include general anesthetics, local anesthetics, hypnotics, dissociatives, sedatives, adjuncts, neuromuscular-blocking drugs, narcotics, and analgesics. The risks of complications during or after anesthesia are often difficult to separate from those of the procedure for which anesthesia is being given, but in the main they are related to three factors: the health of the individual, the complexity and stress of the procedure itself, and the anaesthetic technique. Of these factors, the individual's health has the greatest impact. Major perioperative risks can include death, heart attack, and pulmonary embolism whereas minor risks can include postoperative nausea and vomiting and hospital readmission. Some conditions, like local anesthetic toxicity, airway trauma or malignant hyperthermia, can be more directly attributed to specific anesthetic drugs and techniques. Medical uses The purpose of anesthesia can be distilled down to three basic goals or endpoints: hypnosis (a temporary loss of consciousness and with it a loss of memory. In a pharmacological context, the word hypnosis usually has this technical meaning, in contrast to its more familiar lay or psychological meaning of an altered state of consciousness not necessarily caused by drugs—see hypnosis). analgesia (lack of sensation which also blunts autonomic reflexes) muscle relaxation Different types of anesthesia affect the endpoints differently. 
Regional anesthesia, for instance, affects analgesia; benzodiazepine-type sedatives (used for sedation, or "twilight anesthesia") favor amnesia; and general anesthetics can affect all of the endpoints. The goal of anesthesia is to achieve the endpoints required for the given surgical procedure with the least risk to the subject. To achieve the goals of anesthesia, drugs act on different but interconnected parts of the nervous system. Hypnosis, for instance, is generated through actions on the nuclei in the brain and is similar to the activation of sleep. The effect is to make people less aware and less reactive to noxious stimuli. Loss of memory (amnesia) is created by action of drugs on multiple (but specific) regions of the brain. Memories are created as either declarative or non-declarative memories in several stages (short-term, long-term, long-lasting), the strength of which is determined by the strength of connections between neurons, termed synaptic plasticity. Each anesthetic produces amnesia through unique effects on memory formation at variable doses. Inhalational anesthetics will reliably produce amnesia through general suppression of the nuclei at doses below those required for loss of consciousness. Drugs like midazolam produce amnesia through different pathways by blocking the formation of long-term memories. Nevertheless, a person can dream under anesthesia or be conscious of the procedure despite giving no indication of this during it. An estimated 22% of people do dream under general anesthesia, and one or two cases in a thousand have some consciousness, termed "anesthesia awareness". It is not known whether animals dream while under general anesthesia.
Techniques
Anesthesia is unique in that it is not a direct means of treatment; rather, it allows the clinician to do things that may treat, diagnose, or cure an ailment which would otherwise be painful or complicated. The best anesthetic, therefore, is the one with the lowest risk to the patient that still achieves the endpoints required to complete the procedure. The first stage in anesthesia is the pre-operative risk assessment, consisting of the medical history, physical examination and lab tests. Diagnosing the patient's pre-operative physical status allows the clinician to minimize anesthetic risks. A well-completed medical history will arrive at the correct diagnosis 56% of the time, a figure which increases to 73% with a physical examination. Lab tests help in diagnosis but only in 3% of cases, underscoring the need for a full history and physical examination prior to anesthesia. Incorrect pre-operative assessments or preparations are the root cause of 11% of all adverse anesthetic events. Safe anesthesia care depends greatly on well-functioning teams of highly trained healthcare workers. The medical specialty centered around anesthesia is called anesthesiology, and doctors specialized in the field are termed anesthesiologists. Additional healthcare professionals involved in anesthesia provision have varying titles and roles depending on the jurisdiction, and include anesthetic nurses, nurse anesthetists, anesthesiologist assistants, anaesthetic technicians, anaesthesia associates, operating department practitioners and anesthesia technologists. 
International standards for the safe practice of anesthesia, jointly endorsed by the World Health Organization and the World Federation of Societies of Anaesthesiologists, highly recommend that anesthesia should be provided, overseen or led by anesthesiologists, with the exception of minimal sedation or superficial procedures performed under local anesthesia. A trained, vigilant anesthesia provider should continually care for the patient; where the provider is not an anesthesiologist, they should be locally directed and supervised by an anesthesiologist, and in countries or settings where this is not feasible, care should be led by the most qualified local individual within a regional or national anesthesiologist-led framework. The same minimum standards for patient safety apply regardless of the provider, including continuous clinical and biometric monitoring of tissue oxygenation, perfusion and blood pressure; confirmation of correct placement of airway management devices by auscultation and carbon dioxide detection; use of the WHO Surgical Safety Checklist; and safe onward transfer of the patient's care following the procedure. One part of the risk assessment is based on the patient's health. The American Society of Anesthesiologists has developed a six-tier scale that stratifies the patient's pre-operative physical state. It is called the ASA physical status classification. The scale assesses risk as the patient's general health relates to an anesthetic. The more detailed pre-operative medical history aims to discover genetic disorders (such as malignant hyperthermia or pseudocholinesterase deficiency), habits (tobacco, drug and alcohol use), physical attributes (such as obesity or a difficult airway) and any coexisting diseases (especially cardiac and respiratory diseases) that might impact the anesthetic. The physical examination helps quantify the impact of anything found in the medical history in addition to lab tests. Aside from the generalities of the patient's health assessment, an evaluation of specific factors as they relate to the surgery also need to be considered for anesthesia. For instance, anesthesia during childbirth must consider not only the mother but the baby. Cancers and tumors that occupy the lungs or throat create special challenges to general anesthesia. After determining the health of the patient undergoing anesthesia and the endpoints that are required to complete the procedure, the type of anesthetic can be selected. Choice of surgical method and anesthetic technique aims to reduce risk of complications, shorten time needed for recovery and minimize the surgical stress response. General anesthesia Anesthesia is a combination of the endpoints (discussed above) that are reached by drugs acting on different but overlapping sites in the central nervous system. General anesthesia (as opposed to sedation or regional anesthesia) has three main goals: lack of movement (paralysis), unconsciousness, and blunting of the stress response. In the early days of anesthesia, anesthetics could reliably achieve the first two, allowing surgeons to perform necessary procedures, but many patients died because the extremes of blood pressure and pulse caused by the surgical insult were ultimately harmful. Eventually, the need for blunting of the surgical stress response was identified by Harvey Cushing, who injected local anesthetic prior to hernia repairs. This led to the development of other drugs that could blunt the response, leading to lower surgical mortality rates. 
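For reference, the six tiers of the ASA physical status classification mentioned above are commonly summarized as in the sketch below. The one-line descriptions are the usual shorthand for each class, included here only as a memory aid; they are not quoted from the classification itself, and the enum is purely illustrative.

from enum import Enum

class ASAPhysicalStatus(Enum):
    """The six ASA physical status classes, as commonly summarized."""
    ASA_I = "Normal healthy patient"
    ASA_II = "Patient with mild systemic disease"
    ASA_III = "Patient with severe systemic disease"
    ASA_IV = "Patient with severe systemic disease that is a constant threat to life"
    ASA_V = "Moribund patient not expected to survive without the operation"
    ASA_VI = "Declared brain-dead patient whose organs are being removed for donation"

for status in ASAPhysicalStatus:
    print(f"{status.name}: {status.value}")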
The most common approach to reach the endpoints of general anesthesia is through the use of inhaled general anesthetics. Each anesthetic has its own potency, which is correlated with its solubility in oil. This relationship exists because the drugs bind directly to cavities in proteins of the central nervous system, although several theories of general anesthetic action have been described. Inhalational anesthetics are thought to exert their effects on different parts of the central nervous system. For instance, the immobilizing effect of inhaled anesthetics results from an effect on the spinal cord whereas sedation, hypnosis and amnesia involve sites in the brain. The potency of an inhalational anesthetic is quantified by its minimum alveolar concentration (MAC). The MAC is the percentage dose of anesthetic that will prevent a response to a painful stimulus in 50% of subjects. The higher the MAC, generally, the less potent the anesthetic. The ideal anesthetic drug would provide hypnosis, amnesia, analgesia, and muscle relaxation without undesirable changes in blood pressure, pulse or breathing. In the 1930s, physicians started to augment inhaled general anesthetics with intravenous general anesthetics. The drugs used in combination offered a better risk profile to the subject under anesthesia and a quicker recovery. A combination of drugs was later shown to result in lower odds of dying in the first seven days after an anesthetic. For instance, propofol (injection) might be used to start the anesthetic, fentanyl (injection) used to blunt the stress response, midazolam (injection) given to ensure amnesia and sevoflurane (inhaled) during the procedure to maintain the effects. More recently, several intravenous drugs have been developed which, if desired, allow inhaled general anesthetics to be avoided completely.
Equipment
The core instrument in an inhalational anesthetic delivery system is an anesthetic machine. It has vaporizers, ventilators, an anesthetic breathing circuit, a waste gas scavenging system and pressure gauges. The purpose of the anesthetic machine is to provide anesthetic gas at a constant pressure and oxygen for breathing, and to remove carbon dioxide and other waste anesthetic gases. Since inhalational anesthetics are flammable, various checklists have been developed to confirm that the machine is ready for use, that the safety features are active and the electrical hazards are removed. Intravenous anesthetic is delivered either by bolus doses or an infusion pump. There are also many smaller instruments used in airway management and monitoring the patient. The common thread to modern machinery in this field is the use of fail-safe systems that decrease the odds of catastrophic misuse of the machine.
Monitoring
Patients under general anesthesia must undergo continuous physiological monitoring to ensure safety. In the US, the American Society of Anesthesiologists (ASA) has established minimum monitoring guidelines for patients receiving general anesthesia, regional anesthesia, or sedation. These include electrocardiography (ECG), heart rate, blood pressure, inspired and expired gases, oxygen saturation of the blood (pulse oximetry), and temperature. In the UK, the Association of Anaesthetists (AAGBI) have set minimum monitoring guidelines for general and regional anesthesia. For minor surgery, this generally includes monitoring of heart rate, oxygen saturation, blood pressure, and inspired and expired concentrations for oxygen, carbon dioxide, and inhalational anesthetic agents. 
For more invasive surgery, monitoring may also include temperature, urine output, blood pressure, central venous pressure, pulmonary artery pressure and pulmonary artery occlusion pressure, cardiac output, cerebral activity, and neuromuscular function. In addition, the operating room environment must be monitored for ambient temperature and humidity, as well as for accumulation of exhaled inhalational anesthetic agents, which might be deleterious to the health of operating room personnel. Sedation Sedation (also referred to as dissociative anesthesia or twilight anesthesia) creates hypnotic, sedative, anxiolytic, amnesic, anticonvulsant, and centrally produced muscle-relaxing properties. From the perspective of the person giving the sedation, the patient appears sleepy, relaxed and forgetful, allowing unpleasant procedures to be more easily completed. Sedatives such as benzodiazepines are usually given with pain relievers (such as narcotics, or local anesthetics or both) because they do not, by themselves, provide significant pain relief. From the perspective of the subject receiving a sedative, the effect is a feeling of general relaxation, amnesia (loss of memory) and time passing quickly. Many drugs can produce a sedative effect including benzodiazepines, propofol, thiopental, ketamine and inhaled general anesthetics. The advantage of sedation over a general anesthetic is that it generally does not require support of the airway or breathing (no tracheal intubation or mechanical ventilation) and can have less of an effect on the cardiovascular system which may add to a greater margin of safety in some patients. Regional anesthesia When pain is blocked from a part of the body using local anesthetics, it is generally referred to as regional anesthesia. There are many types of regional anesthesia either by injecting into the tissue itself, a vein that feeds the area or around a nerve trunk that supplies sensation to the area. The latter are called nerve blocks and are divided into peripheral or central nerve blocks. The following are the types of regional anesthesia: Infiltrative anesthesia: a small amount of local anesthetic is injected in a small area to stop any sensation (such as during the closure of a laceration, as a continuous infusion or "freezing" a tooth). The effect is almost immediate. Peripheral nerve block: local anesthetic is injected near a nerve that provides sensation to particular portion of the body. There is significant variation in the speed of onset and duration of anesthesia depending on the potency of the drug (e.g. Mandibular block, Fascia Iliaca Compartment Block). Intravenous regional anesthesia (also called a Bier block): dilute local anesthetic is infused to a limb through a vein with a tourniquet placed to prevent the drug from diffusing out of the limb. Central nerve block: Local anesthetic is injected or infused in or around a portion of the central nervous system (discussed in more detail below in spinal, epidural and caudal anesthesia). Topical anesthesia: local anesthetics that are specially formulated to diffuse through the mucous membranes or skin to give a thin layer of analgesia to an area (e.g. EMLA patches). Tumescent anesthesia: a large amount of very dilute local anesthetics are injected into the subcutaneous tissues during liposuction. Systemic local anesthetics: local anesthetics are given systemically (orally or intravenous) to relieve neuropathic pain. 
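Several of the techniques just listed are distinguished by how dilute the local anesthetic solution is. As a purely arithmetical aside, a weight/volume percentage converts to milligrams per millilitre as in the minimal Python sketch below; the 1% and 0.1% concentrations and the 20 mL volume are arbitrary illustrative numbers, not dosing guidance.

def mg_per_ml(percent_concentration: float) -> float:
    """A w/v percentage is grams per 100 mL, so a 1% solution is 10 mg/mL."""
    return percent_concentration * 10.0

def total_drug_mg(percent_concentration: float, volume_ml: float) -> float:
    """Total drug mass contained in a given volume of solution."""
    return mg_per_ml(percent_concentration) * volume_ml

# Example: 20 mL of a 1% solution contains 200 mg of drug,
# while the same volume of a very dilute 0.1% solution contains 20 mg.
print(total_drug_mg(1.0, 20.0))   # 200.0
print(total_drug_mg(0.1, 20.0))   # 20.0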
A 2018 Cochrane review found moderate quality evidence that regional anesthesia may reduce the frequency of persistent postoperative pain (PPP) from 3 to 18 months following thoracotomy and 3 to 12 months following caesarean. Low quality evidence was found 3 to 12 months following breast cancer surgery. This review acknowledges certain limitations that impact its applicability beyond the surgeries and regional anesthesia techniques reviewed. Nerve blocks When local anesthetic is injected around a larger diameter nerve that transmits sensation from an entire region it is referred to as a nerve block or regional nerve blockade. Nerve blocks are commonly used in dentistry, when the mandibular nerve is blocked for procedures on the lower teeth. With larger diameter nerves (such as the interscalene block for upper limbs or psoas compartment block for lower limbs) the nerve and position of the needle is localized with ultrasound or electrical stimulation. Evidence supports the use of ultrasound guidance alone, or in combination with peripheral nerve stimulation, as superior for improved sensory and motor block, a reduction in the need for supplementation and fewer complications. Because of the large amount of local anesthetic required to affect the nerve, the maximum dose of local anesthetic has to be considered. Nerve blocks are also used as a continuous infusion, following major surgery such as knee, hip and shoulder replacement surgery, and may be associated with lower complications. Nerve blocks are also associated with a lower risk of neurologic complications compared to the more central epidural or spinal neuraxial blocks. Spinal, epidural and caudal anesthesia Central neuraxial anesthesia is the injection of local anesthetic around the spinal cord to provide analgesia in the abdomen, pelvis or lower extremities. It is divided into either spinal (injection into the subarachnoid space), epidural (injection outside of the subarachnoid space into the epidural space) and caudal (injection into the cauda equina or tail end of the spinal cord). Spinal and epidural are the most commonly used forms of central neuraxial blockade. Spinal anesthesia is a "one-shot" injection that provides rapid onset and profound sensory anesthesia with lower doses of anesthetic, and is usually associated with neuromuscular blockade (loss of muscle control). Epidural anesthesia uses larger doses of anesthetic infused through an indwelling catheter which allows the anesthetic to be augmented should the effects begin to dissipate. Epidural anesthesia does not typically affect muscle control. Because central neuraxial blockade causes arterial and venous vasodilation, a drop in blood pressure is common. This drop is largely dictated by the venous side of the circulatory system which holds 75% of the circulating blood volume. The physiologic effects are much greater when the block is placed above the 5th thoracic vertebra. An ineffective block is most often due to inadequate anxiolysis or sedation rather than a failure of the block itself. Acute pain management Nociception (pain sensation) is not hard-wired into the body. Instead, it is a dynamic process wherein persistent painful stimuli can sensitize the system and either make pain management difficult or promote the development of chronic pain. For this reason, preemptive acute pain management may reduce both acute and chronic pain and is tailored to the surgery, the environment in which it is given (in-patient/out-patient) and the individual. 
Pain management is classified into either pre-emptive or on-demand. On-demand pain medications typically include either opioid or non-steroidal anti-inflammatory drugs but can also make use of novel approaches such as inhaled nitrous oxide or ketamine. On demand drugs can be administered by a clinician ("as needed drug orders") or by the patient using patient-controlled analgesia (PCA). PCA has been shown to provide slightly better pain control and increased patient satisfaction when compared with conventional methods. Common preemptive approaches include epidural neuraxial blockade or nerve blocks. One review which looked at pain control after abdominal aortic surgery found that epidural blockade provides better pain relief (especially during movement) in the period up to three postoperative days. It reduces the duration of postoperative tracheal intubation by roughly half. The occurrence of prolonged postoperative mechanical ventilation and myocardial infarction is also reduced by epidural analgesia. Risks and complications Risks and complications as they relate to anesthesia are classified as either morbidity (a disease or disorder that results from anesthesia) or mortality (death that results from anesthesia). Quantifying how anesthesia contributes to morbidity and mortality can be difficult because the patient's health prior to surgery and the complexity of the surgical procedure can also contribute to the risks. Prior to the introduction of anesthesia in the early 19th century, the physiologic stress from surgery caused significant complications and many deaths from shock. The faster the surgery was, the lower the rate of complications (leading to reports of very quick amputations). The advent of anesthesia allowed more complicated and life-saving surgery to be completed, decreased the physiologic stress of the surgery, but added an element of risk. It was two years after the introduction of ether anesthetics that the first death directly related to the use of anesthesia was reported. Morbidity can be major (myocardial infarction, pneumonia, pulmonary embolism, kidney failure/chronic kidney disease, postoperative cognitive dysfunction and allergy) or minor (minor nausea, vomiting, readmission). There is usually overlap in the contributing factors that lead to morbidity and mortality between the health of the patients, the type of surgery being performed and the anesthetic. To understand the relative risk of each contributing factor, consider that the rate of deaths totally attributed to the patient's health is 1:870. Compare that to the rate of deaths totally attributed to surgical factors (1:2860) or anesthesia alone (1:185,056) illustrating that the single greatest factor in anesthetic mortality is the health of the patient. These statistics can also be compared to the first such study on mortality in anesthesia from 1954, which reported a rate of death from all causes at 1:75 and a rate attributed to anesthesia alone at 1:2680. Direct comparisons between mortality statistics cannot reliably be made over time and across countries because of differences in the stratification of risk factors, however, there is evidence that anesthetics have made a significant improvement in safety but to what degree is uncertain. Rather than stating a flat rate of morbidity or mortality, many factors are reported as contributing to the relative risk of the procedure and anesthetic combined. 
For instance, an operation on a person between the ages of 60 and 79 places the patient at 2.3 times greater risk than someone under 60. Having an ASA score of 3, 4 or 5 places the person at 10.7 times greater risk than someone with an ASA score of 1 or 2. Other variables include age greater than 80 (3.3 times risk compared to those under 60), gender (females have a lower relative risk of 0.8), urgency of the procedure (emergencies have a 4.4 times greater risk), experience of the person completing the procedure (less than 8 years' experience and/or less than 600 cases have a 1.1 times greater risk) and the type of anesthetic (regional anesthetics are lower risk than general anesthetics). Obstetric patients, the very young and the very old are all at greater risk of complication, so extra precautions may need to be taken. On 14 December 2016, the Food and Drug Administration issued a Public Safety Communication warning that "repeated or lengthy use of general anesthetic and sedation drugs during surgeries or procedures in children younger than 3 years or in pregnant women during their third trimester may affect the development of children's brains." The warning was criticized by the American College of Obstetricians and Gynecologists, which pointed out the absence of direct evidence regarding use in pregnant women and the possibility that "this warning could inappropriately dissuade providers from providing medically indicated care during pregnancy." Patient advocates noted that a randomized clinical trial would be unethical, that the mechanism of injury is well-established in animals, and that studies had shown exposure to multiple uses of anesthetic significantly increased the risk of developing learning disabilities in young children, with a hazard ratio of 2.12 (95% confidence interval, 1.26–3.54).
Recovery
The immediate time after anesthesia is called emergence. Emergence from general anesthesia or sedation requires careful monitoring because there is still a risk of complication. Nausea and vomiting are reported at 9.8% but will vary with the type of anesthetic and procedure. Airway support is needed in 6.8% of cases; urinary retention (more common in those over 50 years of age) and hypotension (in 2.7%) can also occur. Hypothermia, shivering and confusion are also common in the immediate post-operative period because of the lack of muscle movement (and subsequent lack of heat production) during the procedure. Furthermore, a rare manifestation in the post-anesthetic period may be functional neurological symptom disorder (FNSD). Postoperative cognitive dysfunction (also known as POCD and post-anesthetic confusion) is a disturbance in cognition after surgery. It may also be variably used to describe emergence delirium (immediate post-operative confusion) and early cognitive dysfunction (diminished cognitive function in the first post-operative week). Although the three entities (delirium, early POCD and long-term POCD) are separate, the presence of delirium post-operatively predicts the presence of early POCD. There does not appear to be an association between delirium or early POCD and long-term POCD. According to a recent study conducted at the David Geffen School of Medicine at UCLA, the brain navigates its way through a series of activity clusters, or "hubs", on its way back to consciousness. 
Andrew Hudson, an assistant professor in anesthesiology states, "Recovery from anesthesia is not simply the result of the anesthetic 'wearing off,' but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience. Put simply, the brain reboots itself." Long-term POCD is a subtle deterioration in cognitive function, that can last for weeks, months, or longer. Most commonly, relatives of the person report a lack of attention, memory and loss of interest in activities previously dear to the person (such as crosswords). In a similar way, people in the workforce may report an inability to complete tasks at the same speed they could previously. There is good evidence that POCD occurs after cardiac surgery and the major reason for its occurrence is the formation of microemboli. POCD also appears to occur in non-cardiac surgery. Its causes in non-cardiac surgery are less clear but older age is a risk factor for its occurrence. History The first attempts at general anesthesia were probably herbal remedies administered in prehistory. Alcohol is one of the oldest known sedatives and it was used in ancient Mesopotamia thousands of years ago. The Sumerians are said to have cultivated and harvested the opium poppy (Papaver somniferum) in lower Mesopotamia as early as 3400 BCE. The ancient Egyptians had some surgical instruments, as well as crude analgesics and sedatives, including possibly an extract prepared from the mandrake fruit. In China, Bian Que (Chinese: 扁鹊, Wade–Giles: Pien Ch'iao, ) was a legendary Chinese internist and surgeon who reportedly used general anesthesia for surgical procedures. Despite this, it was the Chinese physician Hua Tuo whom historians considered the first verifiable historical figure to develop a type of mixture of anesthesia, though his recipe has yet to be fully discovered. Throughout Europe, Asia, and the Americas, a variety of Solanum species containing potent tropane alkaloids was used for anesthesia. In 13th-century Italy, Theodoric Borgognoni used similar mixtures along with opiates to induce unconsciousness, and treatment with the combined alkaloids proved a mainstay of anesthesia until the 19th century. Local anesthetics were used in Inca civilization where shamans chewed coca leaves and performed operations on the skull while spitting into the wounds they had inflicted to anesthetize. Cocaine was later isolated and became the first effective local anesthetic. It was first used in eye surgery in 1884 by Karl Koller, at the suggestion of Sigmund Freud. German surgeon August Bier (1861–1949) was the first to use cocaine for intrathecal anesthesia in 1898. Romanian surgeon Nicolae Racoviceanu-Piteşti (1860–1942) was the first to use opioids for intrathecal analgesia; he presented his experience in Paris in 1901. The "soporific sponge" ("sleep sponge") used by Arabic physicians was introduced to Europe by the Salerno school of medicine in the late 12th century and by Ugo Borgognoni (1180–1258) in the 13th century. The sponge was promoted and described by Ugo's son and fellow surgeon, Theodoric Borgognoni (1205–1298). In this anesthetic method, a sponge was soaked in a dissolved solution of opium, mandragora, hemlock juice, and other substances. The sponge was then dried and stored; just before surgery the sponge was moistened and then held under the patient's nose. When all went well, the fumes rendered the individual unconscious. 
The most famous anesthetic, ether, may have been synthesized as early as the 8th century, but it took many centuries for its anesthetic importance to be appreciated, even though the 16th century physician and polymath Paracelsus noted that chickens made to breathe it not only fell asleep but also felt no pain. By the early 19th century, ether was being used by humans, but only as a recreational drug. Meanwhile, in 1772, English scientist Joseph Priestley discovered the gas nitrous oxide. Initially, people thought this gas to be lethal, even in small doses, like some other nitrogen oxides. However, in 1799, British chemist and inventor Humphry Davy decided to find out by experimenting on himself. To his astonishment he found that nitrous oxide made him laugh, so he nicknamed it "laughing gas". In 1800 Davy wrote about the potential anesthetic properties of nitrous oxide in relieving pain during surgery, but nobody at that time pursued the matter any further. On 14 November 1804, Hanaoka Seishū, a Japanese doctor, became the first person to successfully perform surgery using general anesthesia. Hanaoka learned traditional Japanese medicine as well as Dutch-imported European surgery and Chinese medicine. After years of research and experimentation, he finally developed a formula which he named tsūsensan (also known as mafutsu-san), which combined Korean morning glory and other herbs. Hanaoka's success in performing this painless operation soon became widely known, and patients began to arrive from all parts of Japan. Hanaoka went on to perform many operations using tsūsensan, including resection of malignant tumors, extraction of bladder stones, and extremity amputations. Before his death in 1835, Hanaoka performed more than 150 operations for breast cancer. However, this finding did not benefit the rest of the world until 1854 as the national isolation policy of the Tokugawa shogunate prevented Hanaoka's achievements from being publicized until after the isolation ended. Nearly forty years would pass before Crawford Long, who is titled as the inventor of modern anesthetics in the West, used general anesthesia in Jefferson, Georgia. Long noticed that his friends felt no pain when they injured themselves while staggering around under the influence of diethyl ether. He immediately thought of its potential in surgery. Conveniently, a participant in one of those "ether frolics", a student named James Venable, had two small tumors he wanted excised. But fearing the pain of surgery, Venable kept putting the operation off. Hence, Long suggested that he have his operation while under the influence of ether. Venable agreed, and on 30 March 1842 he underwent a painless operation. However, Long did not announce his discovery until 1849. Horace Wells conducted the first public demonstration of the inhalational anesthetic at the Massachusetts General Hospital in Boston in 1845. However, the nitrous oxide was improperly administered and the person cried out in pain. On 16 October 1846, Boston dentist William Thomas Green Morton gave a successful demonstration using diethyl ether to medical students at the same venue. Morton, who was unaware of Long's previous work, was invited to the Massachusetts General Hospital to demonstrate his new technique for painless surgery. After Morton had induced anesthesia, surgeon John Collins Warren removed a tumor from the neck of Edward Gilbert Abbott. This occurred in the surgical amphitheater now called the Ether Dome. 
The previously skeptical Warren was impressed and stated, "Gentlemen, this is no humbug." In a letter to Morton shortly thereafter, physician and writer Oliver Wendell Holmes Sr. proposed naming the state produced "anesthesia", and the procedure an "anesthetic". Morton at first attempted to hide the actual nature of his anesthetic substance, referring to it as Letheon. He received a US patent for his substance, but news of the successful anesthetic spread quickly by late 1846. Respected surgeons in Europe including Liston, Dieffenbach, Pirogov, and Syme quickly undertook numerous operations with ether. An American-born physician, Boott, encouraged London dentist James Robinson to perform a dental procedure on a Miss Lonsdale. This was the first case of an operator-anesthetist. On the same day, 19 December 1846, in Dumfries Royal Infirmary, Scotland, a Dr. Scott used ether for a surgical procedure. The first use of anesthesia in the Southern Hemisphere took place in Launceston, Tasmania, that same year. Drawbacks with ether such as excessive vomiting and its explosive flammability led to its replacement in England with chloroform. Discovered in 1831 by an American physician Samuel Guthrie (1782–1848), and independently a few months later by Frenchman Eugène Soubeiran (1797–1859) and Justus von Liebig (1803–1873) in Germany, chloroform was named and chemically characterized in 1834 by Jean-Baptiste Dumas (1800–1884). In 1842, Dr Robert Mortimer Glover in London discovered the anaesthetic qualities of chloroform on laboratory animals. In 1847, Scottish obstetrician James Young Simpson was the first to demonstrate the anesthetic properties of chloroform on humans and helped to popularize the drug for use in medicine. This first supply came from local pharmacists, James Duncan and William Flockhart, and its use spread quickly, with 750,000 doses weekly in Britain by 1895. Simpson arranged for Flockhart to supply Florence Nightingale. Chloroform gained royal approval in 1853 when John Snow administered it to Queen Victoria when she was in labor with Prince Leopold. For the experience of child birth itself, chloroform met all the Queen's expectations; she stated it was "delightful beyond measure". Chloroform was not without fault though. The first fatality directly attributed to chloroform administration was recorded on 28 January 1848 after the death of Hannah Greener. This was the first of many deaths to follow from the untrained handling of chloroform. Surgeons began to appreciate the need for a trained anesthetist. The need, as Thatcher writes, was for an anesthetist to "(1) Be satisfied with the subordinate role that the work would require, (2) Make anesthesia their one absorbing interest, (3) not look at the situation of anesthetist as one that put them in a position to watch and learn from the surgeons technique (4) accept the comparatively low pay and (5) have the natural aptitude and intelligence to develop a high level of skill in providing the smooth anesthesia and relaxation that the surgeon demanded" These qualities of an anesthetist were often found in submissive medical students and even members of the public. More often, surgeons sought out nurses to provide anesthesia. By the time of the Civil War, many nurses had been professionally trained with the support of surgeons. John Snow of London published articles from May 1848 onwards "On Narcotism by the Inhalation of Vapours" in the London Medical Gazette. 
Snow also involved himself in the production of equipment needed for the administration of inhalational anesthetics, the forerunner of today's anesthesia machines. Alice Magaw, born in November 1860, is often referred to as "The Mother of Anesthesia". Her renown as the personal anesthesia provider for William and Charles Mayo was solidified by Mayo's own words in his 1905 article in which he described his satisfaction with and reliance on nurse anesthetists: "The question of anaesthesia is a most important one. We have regular anaesthetists [on] whom we can depend so that I can devote my entire attention to the surgical work." Magaw kept thorough records of her cases and recorded these anesthetics. In her publication reviewing more than 14,000 surgical anesthetics, Magaw indicates she successfully provided anesthesia without an anesthetic-related death. Magaw describes in another article, "We have administered an anesthetic 1,092 times; ether alone 674 times; chloroform 245 times; ether and chloroform combined 173 times. I can report that out of this number, 1,092 cases, we have not had an accident". Magaw's records and outcomes created a legacy defining that the delivery of anesthesia by nurses would serve the surgical community without increasing the risks to patients. In fact, Magaw's outcomes would eclipse those of practitioners today. The first comprehensive medical textbook on the subject, Anesthesia, was authored in 1914 by anesthesiologist Dr. James Tayloe Gwathmey and the chemist Dr. Charles Baskerville. This book served as the standard reference for the specialty for decades and included details on the history of anesthesia as well as the physiology and techniques of inhalation, rectal, intravenous, and spinal anesthesia. Of these first famous anesthetics, only nitrous oxide is still widely used today, with chloroform and ether having been replaced by safer but sometimes more expensive general anesthetics, and cocaine by more effective local anesthetics with less abuse potential. Society and culture Almost all healthcare providers use anesthetic drugs to some degree, but most health professions have their own field of specialists in the field including medicine, nursing and dentistry. Doctors specializing in anaesthesiology, including perioperative care, development of an anesthetic plan, and the administration of anesthetics are known in the US as anesthesiologists and in the UK, Canada, Australia, and NZ as anaesthetists or anaesthesiologists. All anesthetics in the UK, Australia, New Zealand, Hong Kong and Japan are administered by doctors. Nurse anesthetists also administer anesthesia in 109 nations. In the US, 35% of anesthetics are provided by physicians in solo practice, about 55% are provided by anesthesia care teams (ACTs) with anesthesiologists medically directing certified registered nurse anesthetists (CRNAs) or anesthesiologist assistants, and about 10% are provided by CRNAs in solo practice. There can also be anesthesiologist assistants (US) or physicians' assistants (anaesthesia) (UK) who assist with anesthesia. Special populations There are many circumstances when anesthesia needs to be altered for special circumstances due to the procedure (such as in cardiac surgery, cardiothoracic anesthesiology or neurosurgery), the patient (such as in pediatric anesthesia, geriatric, bariatric or obstetrical anesthesia) or special circumstances (such as in trauma, prehospital care, robotic surgery or extreme environments).
Biology and health sciences
Drugs and medication
null
56565
https://en.wikipedia.org/wiki/Circadian%20rhythm
Circadian rhythm
A circadian rhythm, or circadian cycle, is a natural oscillation that repeats roughly every 24 hours. A circadian rhythm can refer to any process that originates within an organism (i.e., is endogenous) and responds to the environment (is entrained by the environment). Circadian rhythms are regulated by a circadian clock whose primary function is to rhythmically co-ordinate biological processes so they occur at the correct time to maximize the fitness of an individual. Circadian rhythms have been widely observed in animals, plants, fungi and cyanobacteria, and there is evidence that they evolved independently in each of these kingdoms of life. The term circadian comes from the Latin circa, meaning "around", and dies, meaning "day". Processes with 24-hour cycles are more generally called diurnal rhythms; diurnal rhythms should not be called circadian rhythms unless they can be confirmed as endogenous, and not environmental. Although circadian rhythms are endogenous, they are adjusted to the local environment by external cues called zeitgebers (from German Zeitgeber, literally "time giver"), which include light, temperature and redox cycles. In clinical settings, an abnormal circadian rhythm in humans is known as a circadian rhythm sleep disorder. History The earliest recorded account of a circadian process is credited to Theophrastus, dating from the 4th century BC, probably provided to him by a report from Androsthenes, a ship's captain serving under Alexander the Great. In his book 'Περὶ φυτῶν ἱστορία', or 'Enquiry into plants', Theophrastus describes a "tree with many leaves like the rose, and that this closes at night, but opens at sunrise, and by noon is completely unfolded; and at evening again it closes by degrees and remains shut at night, and the natives say that it goes to sleep." The tree he mentioned was much later identified as the tamarind tree by the botanist H. Bretzl in his book on the botanical findings of the Alexandrian campaigns. The observation of a circadian or diurnal process in humans is mentioned in Chinese medical texts dated to around the 13th century, including the Noon and Midnight Manual and the Mnemonic Rhyme to Aid in the Selection of Acu-points According to the Diurnal Cycle, the Day of the Month and the Season of the Year. In 1729, French scientist Jean-Jacques d'Ortous de Mairan conducted the first experiment designed to distinguish an endogenous clock from responses to daily stimuli. He noted that 24-hour patterns in the movement of the leaves of the plant Mimosa pudica persisted even when the plants were kept in constant darkness. In 1896, Patrick and Gilbert observed that during a prolonged period of sleep deprivation, sleepiness increases and decreases with a period of approximately 24 hours. In 1918, J.S. Szymanski showed that animals are capable of maintaining 24-hour activity patterns in the absence of external cues such as light and changes in temperature. In the early 20th century, circadian rhythms were noticed in the rhythmic feeding times of bees. Auguste Forel, Ingeborg Beling, and Oskar Wahl conducted numerous experiments to determine whether this rhythm was attributable to an endogenous clock. The existence of a circadian rhythm was independently discovered in fruit flies in 1935 by two German zoologists, Hans Kalmus and Erwin Bünning. In 1954, an important experiment reported by Colin Pittendrigh demonstrated that eclosion (the process of a pupa turning into an adult) in Drosophila pseudoobscura was a circadian behaviour.
He demonstrated that while temperature played a vital role in the eclosion rhythm, the period of eclosion was delayed but not stopped when temperature was decreased. The term circadian was coined by Franz Halberg in 1959, and in 1977 the International Committee on Nomenclature of the International Society for Chronobiology formally adopted a definition of the term. Ron Konopka and Seymour Benzer identified the first clock mutation in Drosophila in 1971, naming it the "period" (per) gene, the first discovered genetic determinant of behavioral rhythmicity. The per gene was isolated in 1984 by two teams of researchers. Konopka, Jeffrey Hall, Michael Rosbash and their team showed that the per locus is the centre of the circadian rhythm, and that loss of per stops circadian activity. At the same time, Michael W. Young's team reported similar effects of per, and that the gene covers a 7.1-kilobase (kb) interval on the X chromosome and encodes a 4.5-kb poly(A)+ RNA. They went on to discover the key genes and neurones of the Drosophila circadian system, for which Hall, Rosbash and Young received the Nobel Prize in Physiology or Medicine in 2017. Joseph Takahashi discovered the first mammalian circadian clock mutation (clockΔ19) using mice in 1994. However, recent studies show that deletion of clock does not lead to a behavioral phenotype (the animals still have normal circadian rhythms), which questions its importance in rhythm generation. The first human clock mutation was identified in an extended Utah family by Chris Jones, and genetically characterized by Ying-Hui Fu and Louis Ptacek. Affected individuals are extreme 'morning larks' with 4-hour advanced sleep and other rhythms. This form of familial advanced sleep phase syndrome is caused by a single amino acid change, S662➔G, in the human PER2 protein. Criteria To be called circadian, a biological rhythm must meet these three general criteria: (1) The rhythm has an endogenously derived free-running period that lasts approximately 24 hours. The rhythm persists in constant conditions, i.e. constant darkness, with a period of about 24 hours. The period of the rhythm in constant conditions is called the free-running period and is denoted by the Greek letter τ (tau). The rationale for this criterion is to distinguish circadian rhythms from simple responses to daily external cues. A rhythm cannot be said to be endogenous unless it has been tested and persists in conditions without external periodic input. In diurnal animals (active during daylight hours), τ is in general slightly greater than 24 hours, whereas in nocturnal animals (active at night), τ is in general shorter than 24 hours. (2) The rhythms are entrainable. The rhythm can be reset by exposure to external stimuli (such as light and heat), a process called entrainment. The external stimulus used to entrain a rhythm is called the zeitgeber, or "time giver". Travel across time zones illustrates the ability of the human biological clock to adjust to the local time; a person will usually experience jet lag before entrainment of their circadian clock has brought it into sync with local time. (3) The rhythms exhibit temperature compensation. In other words, they maintain circadian periodicity over a range of physiological temperatures. Many organisms live at a broad range of temperatures, and differences in thermal energy will affect the kinetics of all molecular processes in their cells.
In order to keep track of time, the organism's circadian clock must maintain roughly a 24-hour periodicity despite the changing kinetics, a property known as temperature compensation. The Q10 temperature coefficient is a measure of this compensating effect. If the Q10 coefficient remains approximately 1 as temperature increases, the rhythm is considered to be temperature-compensated. Origin Circadian rhythms allow organisms to anticipate and prepare for precise and regular environmental changes. They thus enable organisms to make better use of environmental resources (e.g. light and food) compared to those that cannot predict such availability. It has therefore been suggested that circadian rhythms put organisms at a selective advantage in evolutionary terms. However, rhythmicity appears to be as important in regulating and coordinating internal metabolic processes, as in coordinating with the environment. This is suggested by the maintenance (heritability) of circadian rhythms in fruit flies after several hundred generations in constant laboratory conditions, as well as in creatures in constant darkness in the wild, and by the experimental elimination of behavioral—but not physiological—circadian rhythms in quail. What drove circadian rhythms to evolve has been an enigmatic question. Previous hypotheses emphasized that photosensitive proteins and circadian rhythms may have originated together in the earliest cells, with the purpose of protecting replicating DNA from high levels of damaging ultraviolet radiation during the daytime. As a result, replication was relegated to the dark. However, evidence for this is lacking: in fact the simplest organisms with a circadian rhythm, the cyanobacteria, do the opposite of this: they divide more in the daytime. Recent studies instead highlight the importance of co-evolution of redox proteins with circadian oscillators in all three domains of life following the Great Oxidation Event approximately 2.3 billion years ago. The current view is that circadian changes in environmental oxygen levels and the production of reactive oxygen species (ROS) in the presence of daylight are likely to have driven a need to evolve circadian rhythms to preempt, and therefore counteract, damaging redox reactions on a daily basis. The simplest known circadian clocks are bacterial circadian rhythms, exemplified by the prokaryote cyanobacteria. Recent research has demonstrated that the circadian clock of Synechococcus elongatus can be reconstituted in vitro with just the three proteins (KaiA, KaiB, KaiC) of their central oscillator. This clock has been shown to sustain a 22-hour rhythm over several days upon the addition of ATP. Previous explanations of the prokaryotic circadian timekeeper were dependent upon a DNA transcription/translation feedback mechanism. A defect in the human homologue of the Drosophila "period" gene was identified as a cause of the sleep disorder FASPS (Familial advanced sleep phase syndrome), underscoring the conserved nature of the molecular circadian clock through evolution. Many more genetic components of the biological clock are now known. Their interactions result in an interlocked feedback loop of gene products resulting in periodic fluctuations that the cells of the body interpret as a specific time of the day. It is now known that the molecular circadian clock can function within a single cell. That is, it is cell-autonomous. This was shown by Gene Block in isolated mollusk basal retinal neurons (BRNs). 
At the same time, different cells may communicate with each other resulting in a synchronized output of electrical signaling. These may interface with endocrine glands of the brain to result in periodic release of hormones. The receptors for these hormones may be located far across the body and synchronize the peripheral clocks of various organs. Thus, the information of the time of the day as relayed by the eyes travels to the clock in the brain, and, through that, clocks in the rest of the body may be synchronized. This is how the timing of, for example, sleep/wake, body temperature, thirst, and appetite are coordinately controlled by the biological clock. Importance in animals Circadian rhythmicity is present in the sleeping and feeding patterns of animals, including human beings. There are also clear patterns of core body temperature, brain wave activity, hormone production, cell regeneration, and other biological activities. In addition, photoperiodism, the physiological reaction of organisms to the length of day or night, is vital to both plants and animals, and the circadian system plays a role in the measurement and interpretation of day length. Timely prediction of seasonal periods of weather conditions, food availability, or predator activity is crucial for survival of many species. Although not the only parameter, the changing length of the photoperiod (day length) is the most predictive environmental cue for the seasonal timing of physiology and behavior, most notably for timing of migration, hibernation, and reproduction. Effect of circadian disruption Mutations or deletions of clock genes in mice have demonstrated the importance of body clocks to ensure the proper timing of cellular/metabolic events; clock-mutant mice are hyperphagic and obese, and have altered glucose metabolism. In mice, deletion of the Rev-ErbA alpha clock gene can result in diet-induced obesity and changes the balance between glucose and lipid utilization, predisposing to diabetes. However, it is not clear whether there is a strong association between clock gene polymorphisms in humans and the susceptibility to develop the metabolic syndrome. Effect of light–dark cycle The rhythm is linked to the light–dark cycle. Animals, including humans, kept in total darkness for extended periods eventually function with a free-running rhythm. Their sleep cycle is pushed back or forward each "day", depending on whether their "day", their endogenous period, is shorter or longer than 24 hours. The environmental cues that reset the rhythms each day are called zeitgebers. Totally blind subterranean mammals (e.g., blind mole rat Spalax sp.) are able to maintain their endogenous clocks in the apparent absence of external stimuli. Although they lack image-forming eyes, their photoreceptors (which detect light) are still functional; they do surface periodically as well. Free-running organisms that normally have one or two consolidated sleep episodes will still have them when in an environment shielded from external cues, but the rhythm is not entrained to the 24-hour light–dark cycle in nature. The sleep–wake rhythm may, in these circumstances, become out of phase with other circadian or ultradian rhythms such as metabolic, hormonal, CNS electrical, or neurotransmitter rhythms. Recent research has influenced the design of spacecraft environments, as systems that mimic the light–dark cycle have been found to be highly beneficial to astronauts. Light therapy has been trialed as a treatment for sleep disorders. 
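Returning to the free-running rhythms described above, the steady daily drift produced by an endogenous period that differs from 24 hours is easy to make concrete. The following short Python sketch uses purely hypothetical period values (they are illustrative, not measurements) to show how quickly a free-running clock moves relative to local clock time.

```python
# Minimal sketch (hypothetical numbers): drift of a free-running clock
# whose endogenous period (tau) differs from the 24-hour solar day.

def cumulative_drift_hours(tau_hours: float, days: int) -> float:
    """Total phase drift relative to a 24-hour day after `days` cycles."""
    return (tau_hours - 24.0) * days

for tau in (23.5, 24.2, 25.0):  # assumed example periods, not measured values
    drift = cumulative_drift_hours(tau, days=14)
    direction = "later" if drift > 0 else "earlier"
    print(f"tau = {tau:4.1f} h: after 14 days the cycle falls "
          f"{abs(drift):4.1f} h {direction} relative to local time")
```

A clock with a period shorter than 24 hours drifts earlier each day and one with a longer period drifts later, which is exactly the pattern described for animals kept in constant darkness.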
Arctic animals Norwegian researchers at the University of Tromsø have shown that some Arctic animals (e.g., ptarmigan, reindeer) show circadian rhythms only in the parts of the year that have daily sunrises and sunsets. In one study of reindeer, animals at 70 degrees North showed circadian rhythms in the autumn, winter and spring, but not in the summer. Reindeer on Svalbard at 78 degrees North showed such rhythms only in autumn and spring. The researchers suspect that other Arctic animals as well may not show circadian rhythms in the constant light of summer and the constant dark of winter. A 2006 study in northern Alaska found that day-living ground squirrels and nocturnal porcupines strictly maintain their circadian rhythms through 82 days and nights of sunshine. The researchers speculate that these two rodents notice that the apparent distance between the sun and the horizon is shortest once a day, and thus have a sufficient signal to entrain (adjust) by. Butterflies and moths The navigation of the fall migration of the Eastern North American monarch butterfly (Danaus plexippus) to its overwintering grounds in central Mexico uses a time-compensated sun compass that depends upon a circadian clock in the antennae. Circadian rhythm is also known to control mating behavior in certain moth species such as Spodoptera littoralis, where females produce a specific pheromone that attracts and resets the male circadian rhythm to induce mating at night. In plants Plant circadian rhythms tell the plant what season it is and when to flower for the best chance of attracting pollinators. Behaviors showing rhythms include leaf movement (nyctinasty), growth, germination, stomatal/gas exchange, enzyme activity, photosynthetic activity, and fragrance emission, among others. Circadian rhythms occur as a plant entrains to synchronize with the light cycle of its surrounding environment. These rhythms are endogenously generated and self-sustaining, and are relatively constant over a range of ambient temperatures. Important features include two interacting transcription-translation feedback loops: proteins containing PAS domains, which facilitate protein-protein interactions; and several photoreceptors that fine-tune the clock to different light conditions. Anticipation of changes in the environment allows appropriate changes in a plant's physiological state, conferring an adaptive advantage. A better understanding of plant circadian rhythms has applications in agriculture, such as helping farmers stagger crop harvests to extend crop availability and securing against massive losses due to weather. Light is the signal by which plants synchronize their internal clocks to their environment and is sensed by a wide variety of photoreceptors. Red and blue light are absorbed through several phytochromes and cryptochromes. Phytochrome A (phyA) is light labile and allows germination and de-etiolation when light is scarce. Phytochromes B–E are more stable, with phyB being the main phytochrome in seedlings grown in the light. The cryptochrome (cry) gene is also a light-sensitive component of the circadian clock and is thought to be involved both as a photoreceptor and as part of the clock's endogenous pacemaker mechanism. Cryptochromes 1–2 (involved in blue–UVA) help to maintain the period length in the clock through a whole range of light conditions. The central oscillator generates a self-sustaining rhythm and is driven by two interacting feedback loops that are active at different times of day.
The morning loop consists of CCA1 (Circadian and Clock-Associated 1) and LHY (Late Elongated Hypocotyl), which encode closely related MYB transcription factors that regulate circadian rhythms in Arabidopsis, as well as PRR 7 and 9 (Pseudo-Response Regulators.) The evening loop consists of GI (Gigantea) and ELF4, both involved in regulation of flowering time genes. When CCA1 and LHY are overexpressed (under constant light or dark conditions), plants become arrhythmic, and mRNA signals reduce, contributing to a negative feedback loop. Gene expression of CCA1 and LHY oscillates and peaks in the early morning, whereas TOC1 gene expression oscillates and peaks in the early evening. While it was previously hypothesised that these three genes model a negative feedback loop in which over-expressed CCA1 and LHY repress TOC1 and over-expressed TOC1 is a positive regulator of CCA1 and LHY, it was shown in 2012 by Andrew Millar and others that TOC1, in fact, serves as a repressor not only of CCA1, LHY, and PRR7 and 9 in the morning loop but also of GI and ELF4 in the evening loop. This finding and further computational modeling of TOC1 gene functions and interactions suggest a reframing of the plant circadian clock as a triple negative-component repressilator model rather than the positive/negative-element feedback loop characterizing the clock in mammals. In 2018, researchers found that the expression of PRR5 and TOC1 hnRNA nascent transcripts follows the same oscillatory pattern as processed mRNA transcripts rhythmically in A. thaliana. LNKs binds to the 5'region of PRR5 and TOC1 and interacts with RNAP II and other transcription factors. Moreover, RVE8-LNKs interaction enables a permissive histone-methylation pattern (H3K4me3) to be modified and the histone-modification itself parallels the oscillation of clock gene expression. It has previously been found that matching a plant's circadian rhythm to its external environment's light and dark cycles has the potential to positively affect the plant. Researchers came to this conclusion by performing experiments on three different varieties of Arabidopsis thaliana. One of these varieties had a normal 24-hour circadian cycle. The other two varieties were mutated, one to have a circadian cycle of more than 27 hours, and one to have a shorter than normal circadian cycle of 20 hours. The Arabidopsis with the 24-hour circadian cycle was grown in three different environments. One of these environments had a 20-hour light and dark cycle (10 hours of light and 10 hours of dark), the other had a 24-hour light and dark cycle (12 hours of light and 12 hours of dark),and the final environment had a 28-hour light and dark cycle (14 hours of light and 14 hours of dark). The two mutated plants were grown in both an environment that had a 20-hour light and dark cycle and in an environment that had a 28-hour light and dark cycle. It was found that the variety of Arabidopsis with a 24-hour circadian rhythm cycle grew best in an environment that also had a 24-hour light and dark cycle. Overall, it was found that all the varieties of Arabidopsis thaliana had greater levels of chlorophyll and increased growth in environments whose light and dark cycles matched their circadian rhythm. Researchers suggested that a reason for this could be that matching an Arabidopsis circadian rhythm to its environment could allow the plant to be better prepared for dawn and dusk, and thus be able to better synchronize its processes. 
In this study, it was also found that the genes that help to control chlorophyll peaked a few hours after dawn. This appears to be consistent with the proposed phenomenon known as metabolic dawn. According to the metabolic dawn hypothesis, sugars produced by photosynthesis have the potential to help regulate the circadian rhythm and certain photosynthetic and metabolic pathways. As the sun rises, more light becomes available, which normally allows more photosynthesis to occur. The sugars produced by photosynthesis repress PRR7. This repression of PRR7 then leads to the increased expression of CCA1. On the other hand, decreased photosynthetic sugar levels increase PRR7 expression and decrease CCA1 expression. This feedback loop between CCA1 and PRR7 is what is proposed to cause metabolic dawn. In Drosophila The molecular mechanisms of circadian rhythm and light perception are best understood in Drosophila. Clock genes were first discovered in Drosophila, and they act together with the clock neurones. There are two unique rhythms, one during the process of hatching (called eclosion) from the pupa, and the other during mating. The clock neurones are located in distinct clusters in the central brain. The best-understood clock neurones are the large and small lateral ventral neurons (l-LNvs and s-LNvs) of the optic lobe. These neurones produce pigment dispersing factor (PDF), a neuropeptide that acts as a circadian neuromodulator between different clock neurones. The Drosophila circadian rhythm operates through a transcription-translation feedback loop. The core clock mechanism consists of two interdependent feedback loops, namely the PER/TIM loop and the CLK/CYC loop. The CLK/CYC loop occurs during the day and initiates the transcription of the per and tim genes. However, their protein levels remain low until dusk, because daylight also activates the doubletime (dbt) gene. DBT protein causes phosphorylation and turnover of monomeric PER proteins. TIM is also phosphorylated by shaggy until sunset. After sunset, DBT disappears, so that PER molecules stably bind to TIM. The PER/TIM dimer enters the nucleus several times at night and binds to CLK/CYC dimers. Bound PER completely stops the transcriptional activity of CLK and CYC. In the early morning, light activates the cry gene, and its protein CRY causes the breakdown of TIM. Thus the PER/TIM dimer dissociates, and the unbound PER becomes unstable. PER undergoes progressive phosphorylation and ultimately degradation. The absence of PER and TIM allows activation of the clk and cyc genes. Thus, the clock is reset to start the next circadian cycle. PER-TIM model This protein model was developed based on the oscillations of the PER and TIM proteins in Drosophila. It is based on its predecessor, the PER model, which explained how the PER gene and its protein influence the biological clock. The model includes the formation of a nuclear PER-TIM complex which influences the transcription of the PER and TIM genes (by providing negative feedback) and the multiple phosphorylation of these two proteins. The circadian oscillations of these two proteins seem to synchronise with the light-dark cycle even if they are not necessarily dependent on it. Both PER and TIM proteins are phosphorylated, and after they form the PER-TIM nuclear complex they return inside the nucleus to stop the expression of the PER and TIM mRNA. This inhibition lasts as long as the protein or the mRNA is not degraded; once degradation occurs, the complex releases the inhibition.
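At its core, the mechanism just described is a delayed negative feedback loop: a gene product accumulates, is processed with a delay, and then represses its own production. A minimal sketch of such a loop is given below; it is a toy model with invented parameters and an arbitrary period, not a calibrated model of the Drosophila clock, but it shows how a delay plus self-repression is enough to produce sustained oscillation.

```python
# Toy delayed negative feedback loop (all parameters invented): a protein
# represses its own production, but repression acts only after a fixed
# processing delay, which is the qualitative skeleton of the PER/TIM loop.
dt = 0.1                 # hours per integration step
delay_h = 6.0            # assumed processing / nuclear-entry delay (hours)
steps = int(240 / dt)    # simulate 240 hours

history = [0.0] * int(delay_h / dt)   # protein levels used for the delay term
protein = 0.0
for i in range(steps):
    delayed = history[0]                       # level delay_h hours ago
    production = 1.0 / (1.0 + delayed ** 4)    # repressed by the delayed level
    degradation = 0.2 * protein
    protein += dt * (production - degradation)
    history = history[1:] + [protein]
    if i % int(12 / dt) == 0:
        print(f"t = {i * dt:5.1f} h  protein = {protein:.2f}")
```

With these made-up numbers the protein level rises while repression is still weak, then falls once the delayed signal catches up, and the cycle repeats; light input in the real system acts by accelerating degradation of one of the loop components (TIM), which shifts the phase of the oscillation.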
Here can also be mentioned that the degradation of the TIM protein is sped up by light. In mammals The primary circadian clock in mammals is located in the suprachiasmatic nucleus (or nuclei) (SCN), a pair of distinct groups of cells located in the hypothalamus. Destruction of the SCN results in the complete absence of a regular sleep–wake rhythm. The SCN receives information about illumination through the eyes. The retina of the eye contains "classical" photoreceptors ("rods" and "cones"), which are used for conventional vision. But the retina also contains specialized ganglion cells that are directly photosensitive, and project directly to the SCN, where they help in the entrainment (synchronization) of this master circadian clock. The proteins involved in the SCN clock are homologous to those found in the fruit fly. These cells contain the photopigment melanopsin and their signals follow a pathway called the retinohypothalamic tract, leading to the SCN. If cells from the SCN are removed and cultured, they maintain their own rhythm in the absence of external cues. The SCN takes the information on the lengths of the day and night from the retina, interprets it, and passes it on to the pineal gland, a tiny structure shaped like a pine cone and located on the epithalamus. In response, the pineal secretes the hormone melatonin. Secretion of melatonin peaks at night and ebbs during the day and its presence provides information about night-length. Several studies have indicated that pineal melatonin feeds back on SCN rhythmicity to modulate circadian patterns of activity and other processes. However, the nature and system-level significance of this feedback are unknown. The circadian rhythms of humans can be entrained to slightly shorter and longer periods than the Earth's 24 hours. Researchers at Harvard have shown that human subjects can at least be entrained to a 23.5-hour cycle and a 24.65-hour cycle. Humans Early research into circadian rhythms suggested that most people preferred a day closer to 25 hours when isolated from external stimuli like daylight and timekeeping. However, this research was faulty because it failed to shield the participants from artificial light. Although subjects were shielded from time cues (like clocks) and daylight, the researchers were not aware of the phase-delaying effects of indoor electric lights. The subjects were allowed to turn on light when they were awake and to turn it off when they wanted to sleep. Electric light in the evening delayed their circadian phase. A more stringent study conducted in 1999 by Harvard University estimated the natural human rhythm to be closer to 24 hours and 11 minutes: much closer to the solar day. Consistent with this research was a more recent study from 2010, which also identified sex differences, with the circadian period for women being slightly shorter (24.09 hours) than for men (24.19 hours). In this study, women tended to wake up earlier than men and exhibit a greater preference for morning activities than men, although the underlying biological mechanisms for these differences are unknown. Biological markers and effects The classic phase markers for measuring the timing of a mammal's circadian rhythm are: melatonin secretion by the pineal gland, core body temperature minimum, and plasma level of cortisol. For temperature studies, subjects must remain awake but calm and semi-reclined in near darkness while their rectal temperatures are taken continuously. 
Though variation is great among normal chronotypes, the average human adult's temperature reaches its minimum at about 5:00 a.m., about two hours before habitual wake time. Baehr et al. found that, in young adults, the daily body temperature minimum occurred at about 04:00 (4 a.m.) for morning types, but at about 06:00 (6 a.m.) for evening types. This minimum occurred at approximately the middle of the eight-hour sleep period for morning types, but closer to waking in evening types. Melatonin is absent from the system or undetectably low during daytime. Its onset in dim light, dim-light melatonin onset (DLMO), at roughly 21:00 (9 p.m.) can be measured in the blood or the saliva. Its major metabolite can also be measured in morning urine. Both DLMO and the midpoint (in time) of the presence of the hormone in the blood or saliva have been used as circadian markers. However, newer research indicates that the melatonin offset may be the more reliable marker. Benloucif et al. found that melatonin phase markers were more stable and more highly correlated with the timing of sleep than the core temperature minimum. They found that both sleep offset and melatonin offset are more strongly correlated with phase markers than the onset of sleep. In addition, the declining phase of the melatonin levels is more reliable and stable than the termination of melatonin synthesis. Other physiological changes that occur according to a circadian rhythm include heart rate and many cellular processes "including oxidative stress, cell metabolism, immune and inflammatory responses, epigenetic modification, hypoxia/hyperoxia response pathways, endoplasmic reticular stress, autophagy, and regulation of the stem cell environment." In a study of young men, it was found that the heart rate reaches its lowest average rate during sleep, and its highest average rate shortly after waking. In contradiction to previous studies, it has been found that there is no effect of body temperature on performance on psychological tests. This is likely due to evolutionary pressures for higher cognitive function compared to the other areas of function examined in previous studies. Outside the "master clock" More-or-less independent circadian rhythms are found in many organs and cells in the body outside the suprachiasmatic nuclei (SCN), the "master clock". Indeed, neuroscientist Joseph Takahashi and colleagues stated in a 2013 article that "almost every cell in the body contains a circadian clock". For example, these clocks, called peripheral oscillators, have been found in the adrenal gland, oesophagus, lungs, liver, pancreas, spleen, thymus, and skin. There is also some evidence that the olfactory bulb and prostate may experience oscillations, at least when cultured. Though oscillators in the skin respond to light, a systemic influence has not been proven. In addition, many oscillators, such as liver cells, for example, have been shown to respond to inputs other than light, such as feeding. Light and the biological clock Light resets the biological clock in accordance with the phase response curve (PRC). Depending on the timing, light can advance or delay the circadian rhythm. Both the PRC and the required illuminance vary from species to species, and lower light levels are required to reset the clocks in nocturnal rodents than in humans. 
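The phase response curve mentioned above can be made concrete with a toy example. The sketch below uses an entirely hypothetical sinusoidal PRC (its shape and amplitude are invented for illustration and do not correspond to a measured human curve) to show how the same light pulse can advance or delay the clock depending on when it arrives.

```python
# Toy phase response curve (hypothetical shape and amplitude): in this toy,
# light in the first half of the subjective night delays the clock and light
# in the second half advances it, echoing the qualitative shape of real PRCs.
import math

def toy_prc(circadian_time_h: float) -> float:
    """Phase shift (hours) from a light pulse at the given circadian time."""
    return 2.0 * math.sin(2 * math.pi * (circadian_time_h - 16.0) / 24.0)

for ct in (13, 15, 18, 21, 23):
    shift = toy_prc(ct)
    kind = "advance" if shift > 0 else "delay"
    print(f"light pulse at CT {ct:2d} h -> {abs(shift):.1f} h {kind}")
```

In a real organism, the curve's shape, its dead zone, and the light level needed to produce a given shift all differ between species, which is the point made above about nocturnal rodents needing far less light than humans.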
Enforced longer or shorter cycles Various studies on humans have made use of enforced sleep/wake cycles strongly different from 24 hours, such as those conducted by Nathaniel Kleitman in 1938 (28 hours) and Derk-Jan Dijk and Charles Czeisler in the 1990s (20 hours). Because people with a normal (typical) circadian clock cannot entrain to such abnormal day/night rhythms, this is referred to as a forced desynchrony protocol. Under such a protocol, sleep and wake episodes are uncoupled from the body's endogenous circadian period, which allows researchers to assess the effects of circadian phase (i.e., the relative timing of the circadian cycle) on aspects of sleep and wakefulness, including sleep latency and other physiological, behavioral, and cognitive functions. Studies also show that the spider Cyclosa turbinata is unusual in that its locomotor and web-building activity patterns indicate an exceptionally short-period circadian clock, of about 19 hours. When C. turbinata spiders were placed into chambers with periods of 19, 24, or 29 hours of evenly split light and dark, none of the spiders exhibited decreased longevity, even when the imposed cycle did not match their own circadian clock. These findings suggest that C. turbinata does not bear the same costs of extreme desynchronization as other species of animals do. Human health Foundation of circadian medicine The leading edge of circadian biology research is the translation of basic body clock mechanisms into clinical tools, and this is especially relevant to the treatment of cardiovascular disease. Timing medical treatment in coordination with the body clock, known as chronotherapeutics, may also benefit patients with hypertension (high blood pressure) by significantly increasing efficacy and reducing drug toxicity or adverse reactions. "Circadian pharmacology", that is, drugs targeting the circadian clock mechanism, has been shown experimentally in rodent models to significantly reduce the damage due to heart attacks and prevent heart failure. Importantly, for rational translation of the most promising circadian medicine therapies to clinical practice, it is imperative to understand how these therapies treat disease in both biological sexes. Causes of disruption to circadian rhythms Indoor lighting Lighting requirements for circadian regulation are not simply the same as those for vision; planning of indoor lighting in offices and institutions is beginning to take this into account. Animal studies on the effects of light in laboratory conditions have until recently considered light intensity (irradiance) but not color, which can be shown to "act as an essential regulator of biological timing in more natural settings". Blue LED lighting suppresses melatonin production five times more than the orange-yellow high-pressure sodium (HPS) light; a metal halide lamp, which is white light, suppresses melatonin at a rate more than three times greater than HPS. Depression symptoms from long-term nighttime light exposure can be undone by returning to a normal light–dark cycle. Airline pilots and cabin crew Because airline pilots often cross several time zones and regions of sunlight and darkness in one day, and spend many hours awake both day and night, they are often unable to maintain sleep patterns that correspond to the natural human circadian rhythm; this situation can easily lead to fatigue. The NTSB cites this as contributing to many accidents, and has conducted several research studies in order to find methods of combating fatigue in pilots.
Effect of drugs Studies conducted on both animals and humans show major bidirectional relationships between the circadian system and abusive drugs. It is indicated that these abusive drugs affect the central circadian pacemaker. Individuals with substance use disorder display disrupted rhythms. These disrupted rhythms can increase the risk for substance abuse and relapse. It is possible that genetic and/or environmental disturbances to the normal sleep and wake cycle can increase the susceptibility to addiction. It is difficult to determine if a disturbance in the circadian rhythm is at fault for an increase in prevalence for substance abuse—or if other environmental factors such as stress are to blame. Changes to the circadian rhythm and sleep occur once an individual begins abusing drugs and alcohol. Once an individual stops using drugs and alcohol, the circadian rhythm continues to be disrupted. Alcohol consumption disrupts circadian rhythms, with acute intake causing dose-dependent alterations in melatonin and cortisol levels, as well as core body temperature, which normalize the following morning, while chronic alcohol use leads to more severe and persistent disruptions that are associated with alcohol use disorders (AUD) and withdrawal symptoms. The stabilization of sleep and the circadian rhythm might possibly help to reduce the vulnerability to addiction and reduce the chances of relapse. Circadian rhythms and clock genes expressed in brain regions outside the suprachiasmatic nucleus may significantly influence the effects produced by drugs such as cocaine. Moreover, genetic manipulations of clock genes profoundly affect cocaine's actions. Consequences of disruption to circadian rhythms Disruption Disruption to rhythms usually has a negative effect. Many travelers have experienced the condition known as jet lag, with its associated symptoms of fatigue, disorientation and insomnia. A number of other disorders, such as bipolar disorder, depression, and some sleep disorders such as delayed sleep phase disorder (DSPD), are associated with irregular or pathological functioning of circadian rhythms. Disruption to rhythms in the longer term is believed to have significant adverse health consequences for peripheral organs outside the brain, in particular in the development or exacerbation of cardiovascular disease. Studies have shown that maintaining normal sleep and circadian rhythms is important for many aspects of brain and health. A number of studies have also indicated that a power-nap, a short period of sleep during the day, can reduce stress and may improve productivity without any measurable effect on normal circadian rhythms. Circadian rhythms also play a part in the reticular activating system, which is crucial for maintaining a state of consciousness. A reversal in the sleep–wake cycle may be a sign or complication of uremia, azotemia or acute kidney injury. Studies have also helped elucidate how light has a direct effect on human health through its influence on the circadian biology. Relationship with cardiovascular disease One of the first studies to determine how disruption of circadian rhythms causes cardiovascular disease was performed in the Tau hamsters, which have a genetic defect in their circadian clock mechanism. 
When maintained in a 24-hour light-dark cycle that was "out of sync" with their normal 22-hour circadian mechanism, they developed profound cardiovascular and renal disease; however, when the Tau animals were raised for their entire lifespan on a 22-hour daily light-dark cycle, they had a healthy cardiovascular system. The adverse effects of circadian misalignment on human physiology have been studied in the laboratory using a misalignment protocol, and by studying shift workers. Circadian misalignment is associated with many risk factors of cardiovascular disease. High levels of the atherosclerosis biomarker resistin have been reported in shift workers, indicating a link between circadian misalignment and plaque build-up in arteries. Additionally, elevated triacylglyceride levels (molecules used to store excess fatty acids) were observed and contribute to the hardening of arteries, which is associated with cardiovascular diseases including heart attack, stroke and heart disease. Shift work and the resulting circadian misalignment are also associated with hypertension. Obesity and diabetes Obesity and diabetes are associated with lifestyle and genetic factors. Among those factors, disruption of the circadian clockwork and/or misalignment of the circadian timing system with the external environment (e.g., the light–dark cycle) can play a role in the development of metabolic disorders. Shift work or chronic jet lag has profound consequences for circadian and metabolic events in the body. Animals that are forced to eat during their resting period show increased body mass and altered expression of clock and metabolic genes. In humans, shift work that favours irregular eating times is associated with altered insulin sensitivity, diabetes and higher body mass. Cognitive effects Reduced cognitive function has been associated with circadian misalignment. Chronic shift workers display increased rates of operational error and impaired visual-motor performance and processing efficacy, which can lead to both a reduction in performance and potential safety issues. An increased risk of dementia is associated with chronic night-shift work compared to day-shift work, particularly for individuals over 50 years old. Society and culture In 2017, Jeffrey C. Hall, Michael W. Young, and Michael Rosbash were awarded the Nobel Prize in Physiology or Medicine "for their discoveries of molecular mechanisms controlling the circadian rhythm". Circadian rhythms have been taken as an example of scientific knowledge being transferred into the public sphere.
Biology and health sciences
Basics_3
null
56567
https://en.wikipedia.org/wiki/Hyperbolic%20functions
Hyperbolic functions
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and −sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively. Hyperbolic functions occur in the calculation of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. The basic hyperbolic functions are: hyperbolic sine "sinh" and hyperbolic cosine "cosh", from which are derived: hyperbolic tangent "tanh", hyperbolic cotangent "coth", hyperbolic secant "sech", and hyperbolic cosecant "csch" or "cosech", corresponding to the derived trigonometric functions. The inverse hyperbolic functions are: inverse hyperbolic sine "arsinh" (also denoted "sinh^−1", "asinh" or sometimes "arcsinh"), inverse hyperbolic cosine "arcosh" (also denoted "cosh^−1", "acosh" or sometimes "arccosh"), inverse hyperbolic tangent "artanh" (also denoted "tanh^−1", "atanh" or sometimes "arctanh"), inverse hyperbolic cotangent "arcoth" (also denoted "coth^−1", "acoth" or sometimes "arccoth"), inverse hyperbolic secant "arsech" (also denoted "sech^−1", "asech" or sometimes "arcsech"), and inverse hyperbolic cosecant "arcsch" (also denoted "arcosech", "csch^−1", "cosech^−1", "acsch", "acosech", or sometimes "arccsch" or "arccosech"). The hyperbolic functions take a real argument called a hyperbolic angle. The magnitude of a hyperbolic angle is the area of its hyperbolic sector of the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector. In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane. By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument. Hyperbolic functions were introduced in the 1760s independently by Vincenzo Riccati and Johann Heinrich Lambert. Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions. Lambert adopted the names, but altered the abbreviations to those used today. The abbreviations sh, ch, th, cth are also currently used, depending on personal preference. Notation Definitions There are various equivalent ways to define the hyperbolic functions. Exponential definitions In terms of the exponential function: Hyperbolic sine: the odd part of the exponential function, that is, sinh x = (e^x − e^−x)/2. Hyperbolic cosine: the even part of the exponential function, that is, cosh x = (e^x + e^−x)/2. Hyperbolic tangent: tanh x = sinh x / cosh x = (e^x − e^−x)/(e^x + e^−x). Hyperbolic cotangent: coth x = cosh x / sinh x = (e^x + e^−x)/(e^x − e^−x) for x ≠ 0. Hyperbolic secant: sech x = 1/cosh x = 2/(e^x + e^−x). Hyperbolic cosecant: csch x = 1/sinh x = 2/(e^x − e^−x) for x ≠ 0. Differential equation definitions The hyperbolic functions may be defined as solutions of differential equations: the hyperbolic sine and cosine are the solution (s, c) of the system c′ = s and s′ = c, with the initial conditions s(0) = 0 and c(0) = 1. The initial conditions make the solution unique; without them any pair of functions of the form s = a e^x − b e^−x and c = a e^x + b e^−x would be a solution. cosh x and sinh x are also the unique solutions of the equation f″(x) = f(x), such that f(0) = 1, f′(0) = 0 for the hyperbolic cosine, and f(0) = 0, f′(0) = 1 for the hyperbolic sine.
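Since the exponential definitions above are the ones used most often in practice, a quick numerical sanity check can be helpful. The short Python sketch below simply compares the exponential expressions with the standard library's built-in hyperbolic functions and verifies the fundamental identity cosh²x − sinh²x = 1 at a few sample points.

```python
# Numerical check of the exponential definitions and of the identity
# cosh(x)^2 - sinh(x)^2 = 1, using only Python's standard math module.
import math

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    sinh_exp = (math.exp(x) - math.exp(-x)) / 2   # odd part of e^x
    cosh_exp = (math.exp(x) + math.exp(-x)) / 2   # even part of e^x
    assert math.isclose(sinh_exp, math.sinh(x), abs_tol=1e-12)
    assert math.isclose(cosh_exp, math.cosh(x), rel_tol=1e-12)
    assert math.isclose(cosh_exp**2 - sinh_exp**2, 1.0, rel_tol=1e-9)
    print(f"x = {x:5.1f}: sinh = {sinh_exp: .6f}, cosh = {cosh_exp: .6f}")
```

The same decomposition into odd and even parts is what makes sinh x + cosh x reproduce the exponential function itself.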
Complex trigonometric definitions Hyperbolic functions may also be deduced from trigonometric functions with complex arguments: Hyperbolic sine: Hyperbolic cosine: Hyperbolic tangent: Hyperbolic cotangent: Hyperbolic secant: Hyperbolic cosecant: where is the imaginary unit with . The above definitions are related to the exponential definitions via Euler's formula (See below). Characterizing properties Hyperbolic cosine It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval: Hyperbolic tangent The hyperbolic tangent is the (unique) solution to the differential equation , with . Useful relations The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for , , or and into a hyperbolic identity, by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions: Hence: Thus, and are even functions; the others are odd functions. Hyperbolic sine and cosine satisfy: the last of which is similar to the Pythagorean trigonometric identity. One also has for the other functions. Sums of arguments particularly Also: Subtraction formulas Also: Half argument formulas where is the sign function. If , then Square formulas Inequalities The following inequality is useful in statistics: It can be proved by comparing the Taylor series of the two functions term by term. Inverse functions as logarithms Derivatives Second derivatives Each of the functions and is equal to its second derivative, that is: All functions with this property are linear combinations of and , in particular the exponential functions and . Standard integrals The following integrals can be proved using hyperbolic substitution: where C is the constant of integration. Taylor series expressions It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions. This series is convergent for every complex value of . Since the function is odd, only odd exponents for occur in its Taylor series. This series is convergent for every complex value of . Since the function is even, only even exponents for occur in its Taylor series. The sum of the sinh and cosh series is the infinite series expression of the exponential function. The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function. where: is the nth Bernoulli number is the nth Euler number Infinite products and continued fractions The following expansions are valid in the whole complex plane: Comparison with circular functions The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle. Since the area of a circular sector with radius and angle (in radians) is , it will be equal to when . In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude. 
The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions. The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the function y = a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity. Relationship to the exponential function The decomposition of the exponential function in its even and odd parts gives the identities e^x = cosh x + sinh x and e^(−x) = cosh x − sinh x. Combined with Euler's formula e^(ix) = cos x + i sin x, this gives e^(x + iy) = (cosh x + sinh x)(cos y + i sin y) for the general complex exponential function. Additionally, e^x = (1 + tanh(x/2))/(1 − tanh(x/2)). Hyperbolic functions for complex numbers Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers: e^(ix) = cos x + i sin x and e^(−ix) = cos x − i sin x, so: cosh(ix) = cos x, sinh(ix) = i sin x, tanh(ix) = i tan x, cosh(x + iy) = cosh x cos y + i sinh x sin y, and sinh(x + iy) = sinh x cos y + i cosh x sin y. Thus, hyperbolic functions are periodic with respect to the imaginary component, with period 2πi (πi for hyperbolic tangent and cotangent).
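The complex-argument relations above can be checked with Python's cmath module. The sketch below verifies cosh(ix) = cos x and sinh(ix) = i sin x, along with the periodicity in the imaginary direction; the specific sample points are arbitrary.

```python
import cmath, math

x = 0.8
# cosh(ix) = cos(x) and sinh(ix) = i*sin(x)
print(cmath.cosh(1j * x), math.cos(x))
print(cmath.sinh(1j * x), 1j * math.sin(x))

# Periodicity in the imaginary direction: cosh(z + 2*pi*i) = cosh(z)
z = 0.3 + 0.4j
print(abs(cmath.cosh(z + 2j * math.pi) - cmath.cosh(z)))  # ~0
print(abs(cmath.tanh(z + 1j * math.pi) - cmath.tanh(z)))  # ~0 (period pi*i for tanh)
```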
Mathematics
Geometry
null
56568
https://en.wikipedia.org/wiki/Pleiades
Pleiades
The Pleiades (), also known as Seven Sisters and Messier 45 (M45), is an asterism of an open star cluster containing young B-type stars in the northwest of the constellation Taurus. At a distance of about 444 light-years, it is among the nearest star clusters to Earth and the nearest Messier object to Earth, being the most obvious star cluster to the naked eye in the night sky. It is also observed to house the reflection nebula NGC 1432, an HII region. Around 2330 BC it marked the vernal point. The cluster is dominated by hot blue luminous stars that have formed within the last 100 million years. Reflection nebulae around the brightest stars were once thought to be leftover material from their formation, but are now considered likely to be an unrelated dust cloud in the interstellar medium through which the stars are currently passing. This dust cloud is estimated to be moving at a speed of approximately 18 km/s relative to the stars in the cluster. Computer simulations have shown that the Pleiades were probably formed from a compact configuration that once resembled the Orion Nebula. Astronomers estimate that the cluster will survive for approximately another 250 million years, after which the clustering will be lost due to gravitational interactions with the galactic neighborhood. Together with the open star cluster of the Hyades, the Pleiades form the Golden Gate of the Ecliptic. Origin of name The name, Pleiades, comes from . It probably derives from ( 'to sail') because of the cluster's importance in delimiting the sailing season in the Mediterranean Sea: "the season of navigation began with their heliacal rising". In Classical Greek mythology the name was used for seven divine sisters called the Pleiades. In time, the name was said to be derived from that of a mythical mother, Pleione, effectively meaning "daughters of Pleione". In reality, the ancient name of the star cluster related to sailing almost certainly came first in the culture, naming of a relationship to the sister deities followed, and eventually appearing in later myths, to interpret the group name, a mother, Pleione. Astronomical role of M45 in antiquity The M45 group played an important role in ancient times for the establishment of many calendars thanks to the combination of two remarkable elements. The first, which is still valid, is its unique and easily identifiable appearance on the celestial vault near the ecliptic. The second, essential for the ancients, is that in the middle of the third millennium BC, this asterism (a prominent pattern or group of stars that is smaller than a constellation) marked the vernal point. (2330 BC with ecliptic latitude about +3.5° according to Stellarium) The importance of this asterism is also evident in northern Europe. The Pleiades cluster is displayed on the Nebra sky disc that was found in Germany and is dated to around 1600 BC. On the disk the cluster is represented in a high position between the Sun and the Moon. This asterism also marks the beginning of several ancient calendars: In ancient India, it constitutes, in the Atharvaveda, compiled around 1200-1000 BC, the first (Sanskrit name for lunar stations), which is called (), a revealing name since it literally means 'the Cuttings', i.e. "Those that mark the break of the year". This is so before the classic list lowers this to third place, henceforth giving the first to the star couple β Arietis and γ Arietis, which, notably in Hipparchus, at that time, marks the equinox. 
In Mesopotamia, the MUL.APIN compendium, the first known Mesopotamian astronomy treatise, discovered at Nineveh in the library of Assurbanipal and dating from no later than 627 BC, presents a list of deities [holders of stars] who stand on "the path of the Moon", a list which begins with mul.MUL. In Greece, the () are a group whose name is probably functional before having a mythological meaning, as André Lebœuffle points out, who has his preference for the explanation by the Indo-European root that expresses the idea of 'multiplicity, crowd, assembly'. Similarly, the Ancient Arabs begin their old parapegma type calendar, that of the , with M45 under the name of (). And this before their classic calendar, that of the or 'lunar stations', also begins with the star couple β Arietis and γ Arietis whose name, (), is literally "the Two Marks [of entering the equinox]" Although M45 is no longer at the vernal point, the asterism still remains important, both functionally and symbolically. In addition to the changes in the calendars based on the lunar stations among the Indians and the Arabs, consider the case of an ancient Yemeni calendar in which the months are designated according to an astronomical criterion that caused it to be named Calendar of the Pleiades: the month of , literally 'five', is that during which the Sun and , i.e. the Pleiades, deviate from each other by five movements of the Moon, i.e. five times the path that the Moon travels on average in one day and one night, to use the terminology of . Nomenclature and mythology The Pleiades are a prominent sight in winter in the Northern Hemisphere, and are easily visible from mid-southern latitudes. They have been known since antiquity to cultures all around the world, including the Celts (, ); pre-colonial Filipinos (who called it , or , among other names), for whom it indicated the beginning of the year; Hawaiians (who call them ), Māori (who call them ); Indigenous Australians (from several traditions); the Achaemenid Empire, whence in Persians (who called them or ); the Arabs (who call them ; ); the Chinese (who called them ; ); the Quechua (who call them Qullqa or the storehouse); the Japanese (who call them ; , ); the Maya; the Aztec; the Sioux; the Kiowa; and the Cherokee. In Hinduism, the Pleiades are known as and are scripturally associated with the war deity and are also identified or associated with the (Seven Mothers). Hindus celebrate the first day (new moon) of the month of Kartik (month) as Diwali, a festival of abundance and lamps. The Pleiades are also mentioned three times in the Bible. The earliest known depiction of the Pleiades is likely a Northern German Bronze Age artifact known as the Nebra sky disk, dated to approximately 1600 BC. The Babylonian star catalogues name the Pleiades (), meaning 'stars' (literally 'star star'), and they head the list of stars along the ecliptic, reflecting the fact that they were close to the point of the vernal equinox around the twenty-third century BC. The Ancient Egyptians may have used the names "Followers" and "Ennead" in the prognosis texts of the Calendar of Lucky and Unlucky Days of papyrus Cairo 86637. Some Greek astronomers considered them to be a distinct constellation, and they are mentioned by Hesiod's Works and Days, Homer's Iliad and Odyssey, and the Geoponica. The Pleiades was the most well-known "star" among pre-Islamic Arabs and so often referred to simply as "the Star" (; ). 
Some scholars of Islam suggested that the Pleiades are the "star" mentioned in ('The Star') in the Quran. On numerous cylinder seals from the beginning of the first millennium BC, M45 is represented by seven points, while the Seven Gods appear, on low-reliefs of Neo-Assyrian royal palaces, wearing long open robes and large cylindrical headdresses surmounted by short feathers and adorned with three frontal rows of horns and a crown of feathers, while carrying both an ax and a knife, as well as a bow and a quiver. As noted by scholar Stith Thompson, the constellation was "nearly always imagined" as a group of seven sisters, and their myths explain why there are only six. Some scientists suggest that these may come from observations back when Pleione was farther from Atlas and more visible as a separate star as far back as 100,000 BC. Subaru In Japan, the cluster is mentioned under the name ("six stars") in the eighth-century Kojiki. The cluster is now known in Japan as Subaru. The name was chosen for that of the Subaru Telescope, the flagship telescope of the National Astronomical Observatory of Japan, located at the Mauna Kea Observatory on the island of Hawaii. It had the largest monolithic primary mirror in the world from its commissioning in 1998 until 2005. It also was chosen as the brand name of Subaru automobiles to reflect the origins of the firm as the joining of five companies, and is depicted in the firm's six-star logo. Tolkien's Legendarium In J. R. R. Tolkien's legendarium, where The Lord of the Rings is set, Pleiades is referred to as Remmirath, the netted star, as are several other celestial bodies, such as the constellation Orion as Menelvagor, swordsman of the Sky. Observational history Galileo Galilei was the first astronomer to view the Pleiades through a telescope. He thereby discovered that the cluster contains many stars too dim to be seen with the naked eye. He published his observations, including a sketch of the Pleiades showing 36 stars, in his treatise Sidereus Nuncius in March 1610. The Pleiades have long been known to be a physically related group of stars rather than any chance alignment. John Michell calculated in 1767 that the probability of a chance alignment of so many bright stars was only 1 in 500,000, and so surmised that the Pleiades and many other clusters must consist of physically related stars. When studies were first made of the proper motions of the stars, it was found that they are all moving in the same direction across the sky, at the same rate, further demonstrating that they were related. Charles Messier measured the position of the cluster and included it as "M45" in his catalogue of comet-like objects, published in 1771. Along with the Orion Nebula and the Praesepe cluster, Messier's inclusion of the Pleiades has been noted as curious, as most of Messier's objects were much fainter and more easily confused with comets—something that seems scarcely possible for the Pleiades. One possibility is that Messier simply wanted to have a larger catalogue than his scientific rival Lacaille, whose 1755 catalogue contained 42 objects, and so he added some bright, well-known objects to boost the number on his list. Edme-Sébastien Jeaurat then drew in 1782 a map of 64 stars of the Pleiades from his observations in 1779, which he published in 1786. Distance The distance to the Pleiades can be used as a key first step to calibrate the cosmic distance ladder. 
As the cluster is relatively close to the Earth, the distance should be relatively easy to measure and has been estimated by many methods. Accurate knowledge of the distance allows astronomers to plot a Hertzsprung–Russell diagram for the cluster, which, when compared with those plotted for clusters whose distance is not known, allows their distances to be estimated. Other methods may then extend the distance scale from open clusters to galaxies and clusters of galaxies, and a cosmic distance ladder may be constructed. Ultimately astronomers' understanding of the age and future evolution of the universe is influenced by their knowledge of the distance to the Pleiades. Yet some authors argue that the controversy over the distance to the Pleiades discussed below is a red herring, since the cosmic distance ladder can (presently) rely on a suite of other nearby clusters where consensus exists regarding the distances as established by the Hipparcos satellite and independent means (e.g., the Hyades, the Coma Berenices cluster, etc.). Measurements of the distance have elicited much controversy. Results prior to the launch of the Hipparcos satellite generally found that the Pleiades were approximately 135 parsecs (pc) away from Earth. Data from Hipparcos yielded a surprising result, namely a distance of only 118 pc, by measuring the parallax of stars in the cluster—a technique that should yield the most direct and accurate results. Later work consistently argued that the Hipparcos distance measurement for the Pleiades was erroneous: In particular, distances derived to the cluster via the Hubble Space Telescope and infrared color–magnitude diagram fitting (so-called "spectroscopic parallax") favor a distance between 135 and 140 pc; a dynamical distance from optical interferometric observations of the inner pair of stars within Atlas (a bright triple star in the Pleiades) favors a distance of 133 to 137 pc. However, the author of the 2007–2009 catalog of revised Hipparcos parallaxes reasserted that the distance to the Pleiades is ~120 pc and challenged the dissenting evidence. In 2012, Francis and Anderson proposed that a systematic effect on Hipparcos parallax errors for stars in clusters would bias calculation using the weighted mean; they gave a Hipparcos parallax distance of 126 pc and photometric distance of 132 pc based on stars in the AB Doradus, Tucana-Horologium and Beta Pictoris moving groups, which are all similar in age and composition to the Pleiades. Those authors note that the difference between these results may be attributed to random error. More recent results using very-long-baseline interferometry (VLBI) (August 2014), and preliminary solutions using Gaia Data Release 1 (September 2016) and Gaia Data Release 2 (August 2018), determine distances of 136.2 ± 1.2 pc, 134 ± 6 pc and 136.2 ± 5.0 pc, respectively. The Gaia Data Release 1 team were cautious about their result, and the VLBI authors assert "that the Hipparcos-measured distance to the Pleiades cluster is in error". The most recent distance estimate of the distance to the Pleiades based on the Gaia Data Release 3 is . Composition The cluster core radius is approximately 8 light-years and tidal radius is approximately 43 light-years. The cluster contains more than 1,000 statistically confirmed members, not counting the number that would be added if all binary stars could be resolved. 
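The parallax measurements discussed above relate to distance through the simple reciprocal rule d (in parsecs) = 1 / p (in arcseconds). The sketch below is illustrative only: the parallax values are back-computed from the distances quoted in this section rather than taken from the Hipparcos, VLBI, or Gaia catalogues.

```python
def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Convert an annual parallax in milliarcseconds to a distance in parsecs (d = 1000 / p)."""
    return 1000.0 / parallax_mas

# Illustrative values only: parallaxes implied by the distances quoted in the text.
for label, p_mas in [("Hipparcos (~118 pc)", 8.5), ("VLBI / Gaia (~136 pc)", 7.35)]:
    print(f"{label}: {parallax_to_distance_pc(p_mas):.1f} pc")

# Converting to light-years (1 pc is about 3.2616 ly) recovers the ~444 ly figure in the lead.
print(parallax_to_distance_pc(7.35) * 3.2616)  # roughly 444 light-years
```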
Its light is dominated by young, hot blue stars, up to 14 of which may be seen with the naked eye, depending on local observing conditions and visual acuity of the observer. The brightest stars form a shape somewhat similar to that of Ursa Major and Ursa Minor. The total mass contained in the cluster is estimated to be approximately 800 solar masses and is dominated by fainter and redder stars. An estimate of the frequency of binary stars in the Pleiades is approximately 57%. The cluster contains many brown dwarfs, such as Teide 1. These are objects with less than approximately 8% of the Sun's mass, insufficient for nuclear fusion reactions to start in their cores and become proper stars. They may constitute up to 25% of the total population of the cluster, although they contribute less than 2% of the total mass. Astronomers have made great efforts to find and analyze brown dwarfs in the Pleiades and other young clusters, because they are still relatively bright and observable, while brown dwarfs in older clusters have faded and are much more difficult to study. Brightest stars The brightest stars of the cluster are named the Seven Sisters in early Greek mythology: Sterope, Merope, Electra, Maia, Taygeta, Celaeno, and Alcyone. Later, they were assigned parents, Pleione and Atlas. As daughters of Atlas, the Hyades were sisters of the Pleiades. The following table gives details of the brightest stars in the cluster: Age and future evolution Ages for star clusters may be estimated by comparing the Hertzsprung–Russell diagram for the cluster with theoretical models of stellar evolution. Using this technique, ages for the Pleiades of between 75 and 150 million years have been estimated. The wide spread in estimated ages is a result of uncertainties in stellar evolution models, which include factors such as convective overshoot, in which a convective zone within a star penetrates an otherwise non-convective zone, resulting in higher apparent ages. Another way of estimating the age of the cluster is by looking at the lowest-mass objects. In normal main-sequence stars, lithium is rapidly destroyed in nuclear fusion reactions. Brown dwarfs can retain their lithium, however. Due to lithium's very low ignition temperature of 2.5 × 106 K, the highest-mass brown dwarfs will burn it eventually, and so determining the highest mass of brown dwarfs still containing lithium in the cluster may give an idea of its age. Applying this technique to the Pleiades gives an age of about 115 million years. The cluster is slowly moving in the direction of the feet of what is currently the constellation of Orion. Like most open clusters, the Pleiades will not stay gravitationally bound forever. Some component stars will be ejected after close encounters with other stars; others will be stripped by tidal gravitational fields. Calculations suggest that the cluster will take approximately 250 million years to disperse, because of gravitational interactions with giant molecular clouds and the spiral arms of our galaxy hastening its demise. Reflection nebulosity With larger amateur telescopes, the nebulosity around some of the stars may be easily seen, especially when long-exposure photographs are taken. Under ideal observing conditions, some hint of nebulosity around the cluster may be seen even with small telescopes or average binoculars. It is a reflection nebula, caused by dust reflecting the blue light of the hot, young stars. 
It was formerly thought that the dust was left over from the formation of the cluster, but at the age of approximately 100 million years generally accepted for the cluster, almost all the dust originally present would have been dispersed by radiation pressure. Instead, it seems that the cluster is simply passing through a particularly dusty region of the interstellar medium. Studies show that the dust responsible for the nebulosity is not uniformly distributed, but is concentrated mainly in two layers along the line of sight to the cluster. These layers may have been formed by deceleration due to radiation pressure as the dust has moved toward the stars. Possible planets Analyzing deep-infrared images obtained by the Spitzer Space Telescope and Gemini North telescope, astronomers discovered that one of the stars in the cluster, HD 23514, which has a mass and luminosity a bit greater than that of the Sun, is surrounded by an extraordinary number of hot dust particles. This could be evidence for planet formation around HD 23514. Videos Gallery
Physical sciences
Other notable objects
null
56590
https://en.wikipedia.org/wiki/New%20General%20Catalogue
New General Catalogue
The New General Catalogue of Nebulae and Clusters of Stars (abbreviated NGC) is an astronomical catalogue of deep-sky objects compiled by John Louis Emil Dreyer in 1888. The NGC contains 7,840 objects, including galaxies, star clusters and emission nebulae. Dreyer published two supplements to the NGC in 1895 and 1908, known as the Index Catalogues (abbreviated IC), describing a further 5,386 astronomical objects. Thousands of these objects are best known by their NGC or IC numbers, which remain in widespread use. The NGC expanded and consolidated the cataloguing work of William and Caroline Herschel, and John Herschel's General Catalogue of Nebulae and Clusters of Stars. Objects south of the celestial equator are catalogued somewhat less thoroughly, but many were included based on observation by John Herschel or James Dunlop. The NGC contained multiple errors, but attempts to eliminate them were made by the Revised New General Catalogue (RNGC) by Jack W. Sulentic and William G. Tifft in 1973, NGC2000.0 by Roger W. Sinnott in 1988, and the NGC/IC Project in 1993. A Revised New General Catalogue and Index Catalogue (abbreviated as RNGC/IC) was compiled in 2009 by Wolfgang Steinicke and updated in 2019 with 13,957 objects. Original catalogue The original New General Catalogue was compiled during the 1880s by John Louis Emil Dreyer using observations from William Herschel and his son John, among others. Dreyer had already published a supplement to Herschel's General Catalogue of Nebulae and Clusters (GC), containing about 1,000 new objects. In 1886, he suggested building a second supplement to the General Catalogue, but the Royal Astronomical Society asked Dreyer to compile a new version instead. This led to the publication of the New General Catalogue in the Memoirs of the Royal Astronomical Society in 1888. Assembling the NGC was a challenge, as Dreyer had to deal with many contradictory and unclear reports made with a variety of telescopes with apertures ranging from 2 to 72 inches. While he did check some himself, the sheer number of objects meant Dreyer had to accept them as published by others for the purpose of his compilation. The catalogue contained several errors, mostly relating to position and descriptions, but Dreyer referenced the catalogue, which allowed later astronomers to review the original references and publish corrections to the original NGC. Index Catalogue The first major update to the NGC is the Index Catalogue of Nebulae and Clusters of Stars (abbreviated as IC), published in two parts by Dreyer in 1895 (IC I, containing 1,520 objects) and 1908 (IC II, containing 3,866 objects). It serves as a supplement to the NGC, and contains an additional 5,386 objects, collectively known as the IC objects. It summarizes the discoveries of galaxies, clusters and nebulae between 1888 and 1907, most of them made possible by photography. A list of corrections to the IC was published in 1912. Revised New General Catalogue The Revised New Catalogue of Nonstellar Astronomical Objects (abbreviated as RNGC) was compiled by Sulentic and Tifft in the early 1970s, and was published in 1973, as an update to the NGC. The work did not incorporate several previously published corrections to the NGC data (including corrections published by Dreyer himself), and introduced some new errors. For example, the well-known compact galaxy group Copeland Septet in the Leo constellation appears as non-existent in the RNGC. Nearly 800 objects are listed as "non-existent" in the RNGC. 
The designation is applied to objects which are duplicate catalogue entries, those which were not detected in subsequent observations, and a number of objects catalogued as star clusters which in subsequent studies were regarded as coincidental groupings. A 1993 monograph considered the 229 star clusters called non-existent in the RNGC. They had been "misidentified or have not been located since their discovery in the 18th and 19th centuries". It found that one of the 229—NGC 1498—was not actually in the sky. Five others were duplicates of other entries, 99 existed "in some form", and the other 124 required additional research to resolve. As another example, reflection nebula NGC 2163 in Orion was classified "non-existent" due to a transcription error by Dreyer. Dreyer corrected his own mistake in the Index Catalogues, but the RNGC preserved the original error, and additionally reversed the sign of the declination, resulting in NGC 2163 being classified as non-existent. Revised New General Catalogue and Index Catalogue The Revised New General Catalogue and Index Catalogue (abbreviated as RNGC/IC) is a compilation made by Wolfgang Steinicke in 2009. It is a comprehensive and authoritative treatment of the NGC and IC catalogues. The number of objects with status of "not found" in this catalogue is 301 objects (2.3%). The brightest star in this catalogue is NGC 771 with magnitude of 4.0. NGC 2000.0 NGC 2000.0 (also known as the Complete New General Catalog and Index Catalog of Nebulae and Star Clusters) is a 1988 compilation of the NGC and IC made by Roger W. Sinnott, using the J2000.0 coordinates. It incorporates several corrections and errata made by astronomers over the years. NGC/IC Project The NGC/IC Project was a collaboration among professional and amateur astronomers formed by Steve Gottlieb in 1990, although Steve Gottlieb already started to observe and record NGC objects as early as 1979. Other primary team members were Harold G. Corwin Jr., Malcolm Thomson, Robert E. Erdmann and Jeffrey Corder. The project was completed by 2017. This project identified all NGC and IC objects, corrected mistakes, collected images and basic astronomical data and checked all historical data related to the objects.
Physical sciences
Surveys and Catalogs
Astronomy
56637
https://en.wikipedia.org/wiki/Ammonium%20perchlorate
Ammonium perchlorate
Ammonium perchlorate ("AP") is an inorganic compound with the formula . It is a colorless or white solid that is soluble in water. It is a powerful oxidizer. Combined with a fuel, it can be used as a rocket propellant called ammonium perchlorate composite propellant. Its instability has involved it in a number of accidents, such as the PEPCON disaster. Production Ammonium perchlorate (AP) is produced by reaction between ammonia and perchloric acid. This process is the main outlet for the industrial production of perchloric acid. The salt also can be produced by salt metathesis reaction of ammonium salts with sodium perchlorate. This process exploits the relatively low solubility of NH4ClO4, which is about 10% of that for sodium perchlorate. AP crystallises as colorless rhombohedra. Decomposition Like most ammonium salts, ammonium perchlorate decomposes before melting. Mild heating results in production of hydrogen chloride, nitrogen, oxygen, and water. 4 NH4ClO4 → 4 HCl + 2 N2 + 5 O2 + 6 H2O The combustion of AP is quite complex and is widely studied. AP crystals decompose before melting, even though a thin liquid layer has been observed on crystal surfaces during high-pressure combustion processes. Strong heating may lead to explosions. Complete reactions leave no residue. Pure crystals cannot sustain a flame below the pressure of 2 MPa. AP is a Class 4 oxidizer (can undergo an explosive reaction) for particle sizes over 15 micrometres and is classified as an explosive for particle sizes less than 15 micrometres. Applications During World War I England and France used mixtures featuring ammonium perchlorate (such as "balstine") as a substitute high explosive. The primary use of ammonium perchlorate is in making solid rocket propellants. When AP is mixed with a fuel (like a powdered aluminium and/or with an elastomeric binder), it can generate self-sustained combustion at pressures far below atmospheric pressure. It is an important oxidizer with a decades-long history of use in solid rocket propellants – space launch (including the Space Shuttle Solid Rocket Booster), military, amateur, and hobby high-power rockets, as well as in some fireworks. Some "breakable" epoxy adhesives contain suspensions of AP. Upon heating to 300°C, the AP degrades the organic adhesive, breaking the cemented joint. Toxicity Perchlorate itself confers little acute toxicity. For example, sodium perchlorate has an of 2–4g/kg and is eliminated rapidly after ingestion. However, chronic exposure to perchlorates, even in low concentrations, has been shown to cause various thyroid problems, as it is taken up in place of iodine.
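As a rough illustration of the decomposition equation given above, the following Python sketch converts 1 kg of ammonium perchlorate into product masses using approximate molar masses; the numbers are for orientation only and ignore the more complex high-temperature chemistry.

```python
# Mass balance for the mild-decomposition equation given above:
#   4 NH4ClO4 -> 4 HCl + 2 N2 + 5 O2 + 6 H2O
# Approximate molar masses in g/mol.
M = {"NH4ClO4": 117.49, "HCl": 36.46, "N2": 28.01, "O2": 32.00, "H2O": 18.02}
coeff = {"NH4ClO4": 4, "HCl": 4, "N2": 2, "O2": 5, "H2O": 6}

basis_kg = 1.0                                   # decompose 1 kg of ammonium perchlorate
mol_ap = basis_kg * 1000 / M["NH4ClO4"]          # about 8.5 mol

for product in ("HCl", "N2", "O2", "H2O"):
    mol = mol_ap * coeff[product] / coeff["NH4ClO4"]
    print(f"{product}: {mol * M[product] / 1000:.3f} kg")

# The product masses sum back to ~1 kg, confirming the equation is balanced.
```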
Physical sciences
Halide oxyanions
Chemistry
56654
https://en.wikipedia.org/wiki/Perchloric%20acid
Perchloric acid
Perchloric acid is a mineral acid with the formula HClO4. It is an oxoacid of chlorine. Usually found as an aqueous solution, this colorless compound is a stronger acid than sulfuric acid, nitric acid and hydrochloric acid. It is a powerful oxidizer when hot, but aqueous solutions up to approximately 70% by weight at room temperature are generally safe, only showing strong acid features and no oxidizing properties. Perchloric acid is useful for preparing perchlorate salts, especially ammonium perchlorate, an important rocket fuel component. Perchloric acid is dangerously corrosive and readily forms potentially explosive mixtures. History Perchloric acid was first synthesized (together with potassium perchlorate) by Austrian chemist and called "oxygenated chloric acid" in mid-1810s. French pharmacist Georges-Simon Serullas introduced the modern designation along with discovering its solid monohydrate (which he, however, mistook for an anhydride). Production Perchloric acid is produced industrially by two routes. The traditional method exploits the high aqueous solubility of sodium perchlorate (209 g/100 ml of water at room temperature). Treatment of such solutions with hydrochloric acid gives perchloric acid, precipitating solid sodium chloride: NaClO4 + HCl → NaCl + HClO4 The concentrated acid can be purified by distillation. The alternative route, which is more direct and avoids salts, entails anodic oxidation of aqueous chlorine at a platinum electrode. Laboratory preparations It can be distilled from a solution of potassium perchlorate in sulfuric acid. Treatment of barium perchlorate with sulfuric acid precipitates barium sulfate, leaving perchloric acid. It can also be made by mixing nitric acid with ammonium perchlorate and boiling while adding hydrochloric acid. The reaction gives nitrous oxide and perchloric acid due to a concurrent reaction involving the ammonium ion and can be concentrated and purified significantly by boiling off the remaining nitric and hydrochloric acids. Properties Anhydrous perchloric acid is an unstable oily liquid at room temperature. It forms at least five hydrates, several of which have been characterized crystallographically. These solids consist of the perchlorate anion linked via hydrogen bonds to H2O and H3O+ centers. An example is hydronium perchlorate. Perchloric acid forms an azeotrope with water, consisting of about 72.5% perchloric acid. This form of the acid is stable indefinitely and is commercially available. Such solutions are hygroscopic. Thus, if left open to the air, concentrated perchloric acid dilutes itself by absorbing water from the air. Dehydration of perchloric acid gives the anhydride dichlorine heptoxide: 2 HClO4 + P4O10 → Cl2O7 + H2P4O11 Uses Perchloric acid is mainly produced as a precursor to ammonium perchlorate, which is used in rocket propellant. The growth in rocketry has led to increased production of perchloric acid. Several million kilograms are produced annually. Perchloric acid is one of the most proven materials for etching of liquid crystal displays and critical electronics applications as well as ore extraction and has unique properties in analytical chemistry. Additionally it is a useful component in etching of chrome. As an acid Perchloric acid, a superacid, is one of the strongest Brønsted–Lowry acids. That its pKa is lower than −9 is evidenced by the fact that its monohydrate contains discrete hydronium ions and can be isolated as a stable, crystalline solid, formulated as [H3O+][]. 
The most recent estimate of its aqueous pKa is . It provides strong acidity with minimal interference because perchlorate is weakly nucleophilic (explaining the high acidity of HClO4). Other acids of noncoordinating anions, such as fluoroboric acid and hexafluorophosphoric acid, are susceptible to hydrolysis, whereas perchloric acid is not. Despite hazards associated with the explosiveness of its salts, the acid is often preferred in certain syntheses. For similar reasons, it is a useful eluent in ion-exchange chromatography. It is also used in electropolishing or the etching of aluminium, molybdenum, and other metals. In geochemistry, perchloric acid aids in the digestion of silicate mineral samples for analysis, and also in the complete digestion of organic matter. Safety Given its strong oxidizing properties, perchloric acid is subject to extensive regulations, as it can react violently with metals and flammable substances such as wood, plastics, and oils. Work with perchloric acid must be conducted in fume hoods with a wash-down capability to prevent accumulation of oxidisers in the ductwork. On February 20, 1947, in Los Angeles, California, 17 people were killed and 150 injured in the O'Connor Plating Works disaster. A bath consisting of over 1,000 litres of 75% perchloric acid and 25% acetic anhydride by volume was being used to electro-polish aluminium furniture. Organic compounds were added to the overheating bath when an iron rack was replaced with one coated with cellulose acetobutyrate (Tenit-2 plastic). A few minutes later the bath exploded. The O'Connor Electro-Plating plant, 25 other buildings, and 40 automobiles were destroyed, and 250 nearby homes were damaged.
Physical sciences
Specific acids
Chemistry
56668
https://en.wikipedia.org/wiki/Apricot
Apricot
An apricot (, ) is a fruit, or the tree that bears the fruit, of several species in the genus Prunus. Usually an apricot is from the species P. armeniaca, but the fruits of the other species in Prunus sect. Armeniaca are also called apricots. In 2022, world production of apricots was 3.9 million tonnes, led by Turkey with 21% of the total. Etymology Apricot first appeared in English in the 16th century as abrecock from the Middle French aubercot or later abricot, from Spanish albaricoque and Catalan a(l)bercoc, in turn from Arabic الْبَرْقُوق (al-barqūq, "the plums"), from Byzantine Greek βερικοκκίᾱ (berikokkíā, "apricot tree"), derived from late Greek πραικόκιον (praikókion, "apricot") from Latin [persica ("peach")] praecocia (praecoquus, "early ripening"). Description The apricot is a small tree, tall, with a trunk up to in diameter and a dense, spreading canopy. The leaves are ovate, long, and wide, with a rounded base, a pointed tip, and a finely serrated margin. The flowers are in diameter, with five white to pinkish petals; they are produced singly or in pairs in early spring before the leaves. The fruit is a drupe (stonefruit) similar to a small peach, diameter (larger in some modern cultivars), from yellow to orange, often tinged red on the side most exposed to the sun; its surface can be smooth (botanically described as: glabrous) or velvety with very short hairs (botanically: pubescent). The flesh is usually succulent, but dry in some species such as P. sibirica. Its taste can range from sweet to tart. The single seed or "kernel" is enclosed in a hard shell, often called a "stone", with a grainy, smooth texture except for three ridges running down one side. Phytochemistry Apricots contain various phytochemicals, such as provitamin A beta-carotene and polyphenols, including catechins and chlorogenic acid. Taste and aroma compounds include sucrose, glucose, organic acids, terpenes, aldehydes and lactones. Species Apricots are species belonging to Prunus sect. Armeniaca. The taxonomic position of P. brigantina is disputed. It is grouped with plum species according to chloroplast DNA sequences, but more closely related to apricot species according to nuclear DNA sequences. Prunus armeniaca – common apricot, widely cultivated for its edible fruit and kernel Prunus brigantina – Briançon apricot, native to Europe, cultivated for its edible fruit and oil-producing kernel Prunus cathayana – native to Hebei Prunus × dasycarpa – purple apricot, cultivated in Central Asia and adjacent areas for its edible fruit Prunus hongpingensis – Hongping apricot, native to Shennongjia, cultivated for its edible fruit Prunus hypotrichodes – native to Chongqing Prunus limeixing – cultivated in northern China for its edible fruit Prunus mandshurica – Manchurian apricot, native to Northeast Asia, cultivated for its kernel, the fruits of some cultivars edible Prunus mume – Japanese apricot, native to southern China, widely cultivated for its beautiful blossom and edible fruit Prunus sibirica – Siberian apricot, native to Siberia, Mongolia, northern China, and Korea, cultivated for its kernel Prunus zhengheensis – Zhenghe apricot, native to Fujian Cultivation Origin and domestication Prunus armeniaca The most commonly cultivated apricot P. armeniaca was known in Armenia during ancient times, and has been cultivated there for so long that it was previously thought to have originated there, hence the epithet of its scientific name. 
However, this is not supported by genetic studies, which instead confirm the hypothesis proposed by Nikolai Vavilov that domestication of P. armeniaca occurred in Central Asia and China. The domesticated apricot then diffused south to South Asia, west to West Asia (including Armenia), Europe and North Africa, and east to Japan. Prunus mume Japanese apricot P. mume is another widely cultivated apricot species, usually for ornamental uses. Despite the common name, it originated from China, and was introduced to Japan in ancient times. Cultivation practices Apricots have a chilling requirement of 300 to 900 chilling units. A dry climate is good for fruit maturation. The tree is slightly more cold-hardy than the peach, tolerating winter temperatures as cold as or lower if healthy. However, large differences are observed between cultivars in frost resistance. They are hardy in USDA zones 5 through 8. A limiting factor in apricot culture is spring frosts: the trees tend to flower very early (in early March in western Europe), and spring frost can kill the flowers or flower buds at various stages of development. Furthermore, the trees are sensitive to temperature changes during the winter season. In China, winters can be very cold, but temperatures tend to be more stable than in Europe and especially North America, where large temperature swings can occur in winter. Hybridization with the closely related Prunus sibirica (Siberian apricot; hardy to but with less palatable fruit) offers options for breeding more cold-tolerant plants. They prefer well-drained soils with a pH of 6.0 to 7.0. Apricot cultivars are usually grafted onto plum or peach rootstocks. The cultivar scion provides the fruit characteristics, such as flavor and size, but the rootstock provides the growth characteristics of the plant. Some of the more popular US apricot cultivars are 'Blenheim', 'Wenatchee Moorpark', 'Tilton', and 'Perfection'. Some apricot cultivars are self-compatible, so do not require pollinizer trees; others are not: 'Moongold' and 'Sungold', for example, must be planted in pairs so they can pollinate each other. Hybridisers have created what is known as a "black apricot" or "purple apricot" (Prunus dasycarpa), a hybrid of an apricot and the cherry plum (Prunus cerasifera). Other apricot–plum hybrids are variously called plumcots, apriplums, pluots, or apriums. Pests and diseases Apricots are susceptible to various diseases whose relative importance differs in the major production regions as a consequence of their climatic differences. For example, hot weather as experienced in California's Central Valley often causes pit burn, a condition of soft and brown fruit around the pit. Bacterial diseases include bacterial spot and crown gall. Fungal diseases include brown rot caused by Monilinia fructicola: infection of the blossom by rainfall leads to "blossom wilt", whereby the blossoms and young shoots turn brown and die; the twigs die back in a severe attack; brown rot of the fruit is due to Monilinia infection later in the season. Dieback of branches in the summer is attributed to the fungus Eutypa lata, where examination of the base of the dead branch reveals a canker surrounding a pruning wound. Other fungal diseases are black knot, Alternaria spot and fruit rot, and powdery mildew.
Unlike peaches, apricots are not affected by leaf curl, and bacterial canker (causing sunken patches in the bark, which then spread and kill the affected branch or tree) and silver leaf are not serious threats, which means that pruning in late winter is considered safe. Kernel Due to their natural amygdalin content, culinary uses for the kernel are limited. Oil made from apricot kernels is safe for human consumption without treatment because amygdalin is not oil soluble. Ground up shells are used in cosmetics as an exfoliant. As an exfoliant, it provides an alternative to plastic microbeads. Production In 2022, world production of apricots was 3.86 million tonnes, led by Turkey with 21% of the total (table). Other major producers (in descending order) were Uzbekistan, Iran, Italy, and Algeria. Malatya is the center of Turkey's apricot industry. Toxicity Apricot kernels (seeds) contain amygdalin, a poisonous compound. On average, bitter apricot kernels contain about 5% amygdalin and sweet kernels about 0.9% amygdalin. These values correspond to 0.3% and 0.05% of cyanide. Since a typical apricot kernel weighs 600 mg, bitter and sweet varieties contain, respectively, 1.8 and 0.3 mg of cyanide. Uses Apricot kernels can be made into a plant milk. Apricots are commonly consumed either as raw fruit or after dehydration as a dried fruit. Nutrition In a reference amount of , raw apricots supply 48 Calories and are composed of 11% carbohydrates, 1% protein, less than 1% fat, and 86% water (table). Raw apricots are a moderate source of vitamin A and vitamin C (11% of the Daily Value each). Dried apricots Dried apricots are a type of traditional dried fruit. Dried apricots are 63% carbohydrates, 31% water, 4% protein, and contain negligible fat. When apricots are dried, the relative concentration of micronutrients is increased, with vitamin A, vitamin E, and potassium having rich contents (Daily Values above 20%, table). In culture The apricot is the national fruit of Armenia, mostly growing in the Ararat plain. It is often depicted on souvenirs. The Chinese associate the apricot with education and medicine. For instance, the classical word 杏 壇 (literally: "apricot altar") (xìng tán 杏坛) which means "educational circle", is still widely used in written language. Zhuangzi, a Chinese philosopher in the fourth century BC, told a story that Confucius taught his students in a forum surrounded by the wood of apricot trees. The association with medicine in turn comes from the common use of apricot kernels as a component in traditional Chinese medicine, and from the story of Dong Feng (董奉), a physician during the Three Kingdoms period, who required no payment from his patients except that they plant apricot trees in his orchard upon recovering from their illnesses, resulting in a large grove of apricot trees and a steady supply of medicinal ingredients. The term "expert of the apricot grove" (杏林高手) is still used as a poetic reference to physicians. The fact that apricot season is short and unreliable in Egypt has given rise to the common Egyptian Arabic and Palestinian Arabic expression filmishmish ("in apricot [season]") or bukra filmishmish ("tomorrow in apricot [season]"), generally uttered as a riposte to an unlikely prediction, or as a rash promise to fulfill a request. This adynaton has the same sense as the English expression "when pigs fly". 
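The kernel figures quoted in the Toxicity section above can be reproduced with a small calculation. The sketch below assumes, as an approximation, that each amygdalin molecule (molar mass roughly 457 g/mol) can release one molecule of hydrogen cyanide (roughly 27 g/mol); the kernel mass and amygdalin percentages are those given in the text.

```python
# Rough check of the kernel figures quoted above. Molar masses are approximate:
# amygdalin (C20H27NO11, ~457 g/mol) releases one HCN (~27 g/mol) per molecule.
M_AMYGDALIN = 457.4
M_HCN = 27.0
KERNEL_MG = 600.0  # typical kernel mass used in the text

def cyanide_mg(amygdalin_fraction: float) -> float:
    amygdalin_mg = KERNEL_MG * amygdalin_fraction
    return amygdalin_mg * M_HCN / M_AMYGDALIN

print(f"bitter kernel (~5% amygdalin): {cyanide_mg(0.05):.2f} mg cyanide")    # ~1.8 mg
print(f"sweet kernel (~0.9% amygdalin): {cyanide_mg(0.009):.2f} mg cyanide")  # ~0.3 mg
```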
In Middle Eastern and North African cuisines, apricots are used to make Qamar al-Din ( "Moon of the faith"), a thick apricot drink that is a popular fixture at Iftar during Ramadan. Qamar al-Din is believed to originate in Damascus, Syria, where the variety of apricots most suitable for the drink was first grown. In Jewish culture, apricots are commonly eaten as part of the Tu Bishvat seder. The Turkish idiom bundan iyisi Şam'da kayısı (literally, "the only thing better than this is an apricot in Damascus") means "it doesn't get any better than this". In the U.S. Marines it is considered exceptionally bad luck to eat or possess apricots, especially near tanks. This superstition has been documented since at least the Vietnam War and is often cited as originating in World War II. Even calling them by their name is considered unlucky, so they are instead called "cots", "Forbidden fruit" or "A-fruit". American astronauts ate dried apricot on the Apollo 15 and Apollo 17 missions to the moon. Gallery
Biology and health sciences
Rosales
null
56778
https://en.wikipedia.org/wiki/Azeotrope
Azeotrope
An azeotrope () or a constant boiling point mixture is a mixture of two or more liquids whose proportions cannot be changed by simple distillation. This happens because when an azeotrope is boiled, the vapour has the same proportions of constituents as the unboiled mixture. Knowing an azeotrope's behavior is important for distillation. Each azeotrope has a characteristic boiling point. The boiling point of an azeotrope is either less than the boiling points of any of its constituents (a positive azeotrope), or greater than the boiling point of any of its constituents (a negative azeotrope). For both positive and negative azeotropes, it is not possible to separate the components by fractional distillation, and azeotropic distillation is usually used instead. For technical applications, the pressure-temperature-composition behavior of a mixture is the most important, but other important thermophysical properties are also strongly influenced by azeotropy, including the surface tension and transport properties. Etymology The term azeotrope is derived from the Greek words ζέειν (boil) and τρόπος (turning) with the prefix α- (no) to give the overall meaning, "no change on boiling". The term was coined in 1911 by the English chemists John Wade and Richard William Merriman. Because their composition is unchanged by distillation, azeotropes are also called (especially in older texts) constant boiling point mixtures. Types Positive azeotropes A solution that shows a large positive deviation from Raoult's law forms a minimum boiling azeotrope at a specific composition. In general, a positive azeotrope boils at a lower temperature than any other ratio of its constituents. Positive azeotropes are also called minimum boiling mixtures or pressure maximum azeotropes. A well-known example of a positive azeotrope is an ethanol–water mixture (obtained by fermentation of sugars) consisting of 95.63% ethanol and 4.37% water (by mass), which boils at 78.2 °C. Ethanol boils at 78.4 °C, water boils at 100 °C, but the azeotrope boils at 78.2 °C, which is lower than either of its constituents. Indeed, 78.2 °C is the minimum temperature at which any ethanol/water solution can boil at atmospheric pressure. Once this composition has been achieved, the liquid and vapour have the same composition, and no further separation occurs. The boiling and recondensation of a mixture of two solvents are changes of state; as such, they are best illustrated with a phase diagram. If the pressure is held constant, the two variable parameters are the temperature and the composition. The adjacent diagram shows a positive azeotrope of hypothetical constituents, X and Y. The bottom trace illustrates the boiling temperature of various compositions. Below the bottom trace, only the liquid phase is in equilibrium. The top trace illustrates the vapor composition above the liquid at a given temperature. Above the top trace, only the vapor is in equilibrium. Between the two traces, liquid and vapor phases exist simultaneously in equilibrium: for example, heating a 25% X : 75% Y mixture to temperature AB would generate vapor of composition B over liquid of composition A. The azeotrope is the point on the diagram where the two curves touch. The horizontal and vertical steps show the path of repeated distillations. Point A is the boiling point of a nonazeotropic mixture. The vapor that separates at that temperature has composition B.
The shape of the curves requires that the vapor at B be richer in constituent X than the liquid at point A. The vapor is physically separated from the VLE (vapor-liquid equilibrium) system and is cooled to point C, where it condenses. The resulting liquid (point C) is now richer in X than it was at point A. If the collected liquid is boiled again, it progresses to point D, and so on. The stepwise progression shows how repeated distillation can never produce a distillate that is richer in constituent X than the azeotrope. Note that starting to the right of the azeotrope point results in the same stepwise process closing in on the azeotrope point from the other direction. Negative azeotropes A solution that shows large negative deviation from Raoult's law forms a maximum boiling azeotrope at a specific composition. Nitric acid and water is an example of this class of azeotrope. This azeotrope has an approximate composition of 68% nitric acid and 32% water by mass, with a boiling point of . In general, a negative azeotrope boils at a higher temperature than any other ratio of its constituents. Negative azeotropes are also called maximum boiling mixtures or pressure minimum azeotropes. An example of a negative azeotrope is hydrochloric acid at a concentration of 20.2% and 79.8% water (by mass). Hydrogen chloride boils at −85 °C and water at 100 °C, but the azeotrope boils at 110 °C, which is higher than either of its constituents. The maximum boiling point of any hydrochloric acid solution is 110 °C. Other examples: hydrofluoric acid (35.6%) / water, boils at 111.35 °C nitric acid (68%) / water, boils at 120.2 °C at 1 atm perchloric acid (71.6%) / water, boils at 203 °C sulfuric acid (98.3%) / water, boils at 338 °C The adjacent diagram shows a negative azeotrope of ideal constituents, X and Y. Again the bottom trace illustrates the boiling temperature at various compositions, and again, below the bottom trace the mixture must be entirely liquid phase. The top trace again illustrates the condensation temperature of various compositions, and again, above the top trace the mixture must be entirely vapor phase. The point, A, shown here is a boiling point with a composition chosen very near to the azeotrope. The vapor is collected at the same temperature at point B. That vapor is cooled, condensed, and collected at point C. Because this example is a negative azeotrope rather than a positive one, the distillate is farther from the azeotrope than the original liquid mixture at point A was. So the distillate is poorer in constituent X and richer in constituent Y than the original mixture. Because this process has removed a greater fraction of Y from the liquid than it had originally, the residue must be poorer in Y and richer in X after distillation than before. If the point, A had been chosen to the right of the azeotrope rather than to the left, the distillate at point C would be farther to the right than A, which is to say that the distillate would be richer in X and poorer in Y than the original mixture. So in this case too, the distillate moves away from the azeotrope and the residue moves toward it. This is characteristic of negative azeotropes. No amount of distillation, however, can make either the distillate or the residue arrive on the opposite side of the azeotrope from the original mixture. This is characteristic of all azeotropes. Double azeotropes Also more complex azeotropes exist, which comprise both a minimum-boiling and a maximum-boiling point. 
Such a system is called a double azeotrope, and will have two azeotropic compositions and boiling points. An example is water and N-methylethylenediamine as well as benzene and hexafluorobenzene. Complex systems Some azeotropes fit into neither the positive nor negative categories. The best known of these is the ternary azeotrope formed by 30% acetone, 47% chloroform, and 23% methanol, which boils at 57.5 °C. Each pair of these constituents forms a binary azeotrope, but chloroform/methanol and acetone/methanol both form positive azeotropes while chloroform/acetone forms a negative azeotrope. The resulting ternary azeotrope is neither positive nor negative. Its boiling point falls between the boiling points of acetone and chloroform, so it is neither a maximum nor a minimum boiling point. This type of system is called a saddle azeotrope. Only systems of three or more constituents can form saddle azeotropes. Miscibility and zeotropy If the constituents of a mixture are completely miscible in all proportions with each other, the type of azeotrope is called a homogeneous azeotrope. Homogeneous azeotropes can be of the low-boiling or high-boiling azeotropic type. For example, any amount of ethanol can be mixed with any amount of water to form a homogeneous solution. If the components of a mixture are not completely miscible, an azeotrope can be found inside the miscibility gap. This type of azeotrope is called a heterogeneous azeotrope or heteroazeotrope. A heteroazeotropic distillation will have two liquid phases. Heterogeneous azeotropes are only known in combination with temperature-minimum azeotropic behavior. For example, if equal volumes of chloroform (water solubility 0.8 g/100 ml at 20 °C) and water are shaken together and then left to stand, the liquid will separate into two layers. Analysis of the layers shows that the top layer is mostly water with a small amount of chloroform dissolved in it, and the bottom layer is mostly chloroform with a small amount of water dissolved in it. If the two layers are heated together, the system of layers will boil at 53.3 °C, which is lower than either the boiling point of chloroform (61.2 °C) or the boiling point of water (100 °C). The vapor will consist of 97.0% chloroform and 3.0% water regardless of how much of each liquid layer is present provided both layers are indeed present. If the vapor is re-condensed, the layers will reform in the condensate, and will do so in a fixed ratio, which in this case is 4.4% of the volume in the top layer and 95.6% in the bottom layer. Combinations of solvents that do not form an azeotrope when mixed in any proportion are said to be zeotropic. Azeotropes are useful in separating zeotropic mixtures. An example is zeotropic acetic acid and water. It is very difficult to separate out pure acetic acid (boiling point: 118.1 °C): progressive distillations produce drier solutions, but each further distillation becomes less effective at removing the remaining water. Distilling the solution to dry acetic acid is therefore economically impractical. But ethyl acetate forms an azeotrope with water that boils at 70.4 °C. By adding ethyl acetate as an entrainer, it is possible to distill away the azeotrope and leave nearly pure acetic acid as the residue. Number of constituents Azeotropes consisting of two constituents are called binary azeotropes such as diethyl ether (33%) / halothane (66%) a mixture once commonly used in anesthesia. Azeotropes consisting of three constituents are called ternary azeotropes, e.g. 
acetone / methanol / chloroform. Azeotropes of more than three constituents are also known. Condition of existence The condition relates activity coefficients in liquid phase to total pressure and the vapour pressures of pure components. Azeotropes can form only when a mixture deviates from Raoult's law, the equality of compositions in liquid phase and vapor phases, in vapour-liquid equilibrium and Dalton's law the equality of pressures for total pressure being equal to the sum of the partial pressures in real mixtures. In other words: Raoult's law predicts the vapor pressures of ideal mixtures as a function of composition ratio. More simply: per Raoult's law molecules of the constituents stick to each other to the same degree as they do to themselves. For example, if the constituents are X and Y, then X sticks to Y with roughly equal energy as X does with X and Y does with Y. A positive deviation from Raoult's law results when the constituents have a disaffinity for each other – that is X sticks to X and Y to Y better than X sticks to Y. Because this results in the mixture having less total affinity of the molecules than the pure constituents, they more readily escape from the stuck-together phase, which is to say the liquid phase, and into the vapor phase. When X sticks to Y more aggressively than X does to X and Y does to Y, the result is a negative deviation from Raoult's law. In this case because the molecules in the mixture are sticking together more than in the pure constituents, they are more reluctant to escape the stuck-together liquid phase. When the deviation is great enough to cause a local maxima or minima in the vapor pressure versus mole fraction graph (i.e. for some mole fraction of X in the solution), It is a mathematical consequence of the Gibbs–Duhem equation that at that point, the vapor above the solution will have the same composition as that of the liquid, resulting in an azeotrope. The adjacent diagram illustrates total vapor pressure of three hypothetical mixtures of constituents, X, and Y. The temperature throughout the plot is assumed to be constant. The center trace is a straight line, which is what Raoult's law predicts for an ideal mixture. In general solely mixtures of chemically similar solvents, such as n-hexane with n-heptane, form nearly ideal mixtures that come close to obeying Raoult's law. The top trace illustrates a nonideal mixture that has a positive deviation from Raoult's law, where the total combined vapor pressure of constituents, X and Y, is greater than what is predicted by Raoult's law. The top trace deviates sufficiently that there is a point on the curve where its tangent is horizontal. Whenever a mixture has a positive deviation and has a point at which the tangent is horizontal, the composition at that point is a positive azeotrope. At that point the total vapor pressure is at a maximum. Likewise the bottom trace illustrates a nonideal mixture that has a negative deviation from Raoult's law, and at the composition where tangent to the trace is horizontal there is a negative azeotrope. This is also the point where total vapor pressure is minimum. Separation If the two solvents can form a negative azeotrope, then distillation of any mixture of those constituents will result in the residue being closer to the composition at the azeotrope than the original mixture. 
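Before the separation examples continue below, here is a minimal numerical sketch of the condition just described: when the positive deviation from Raoult's law is strong enough, the total-vapor-pressure curve acquires an interior maximum (a horizontal tangent), and that composition is the azeotrope. The pure-component vapor pressures and the one-parameter Margules activity model used here are hypothetical, not data for any particular pair of solvents.

```python
import math

# Modified Raoult's law  P = x1*g1*P1_sat + x2*g2*P2_sat  with a one-parameter
# (two-suffix Margules) activity model. All numbers are illustrative.
P1_SAT, P2_SAT = 60.0, 45.0   # pure-component vapor pressures (kPa), hypothetical
A = 1.4                       # Margules parameter > 0: positive deviation from Raoult's law

def total_pressure(x1: float) -> float:
    x2 = 1.0 - x1
    g1 = math.exp(A * x2 ** 2)   # activity coefficients
    g2 = math.exp(A * x1 ** 2)
    return x1 * g1 * P1_SAT + x2 * g2 * P2_SAT

# Scan compositions; the azeotrope sits where the total pressure is maximal
# (equivalently, where the vapor and liquid compositions coincide).
best_x = max((i / 1000 for i in range(1001)), key=total_pressure)
print(f"pressure-maximum (positive) azeotrope near x1 = {best_x:.3f}, "
      f"P = {total_pressure(best_x):.1f} kPa")
```

With these made-up parameters the maximum lands near x1 = 0.60, at a pressure above both pure-component values, which is the pressure-maximum signature of a positive azeotrope described above.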
For example, if a hydrochloric acid solution contains less than 20.2% hydrogen chloride, boiling the mixture will leave behind a solution that is richer in hydrogen chloride than the original. If the solution initially contains more than 20.2% hydrogen chloride, then boiling will leave behind a solution that is poorer in hydrogen chloride than the original. Boiling any hydrochloric acid solution long enough will cause the solution left behind to approach the azeotropic ratio. On the other hand, if two solvents can form a positive azeotrope, then distillation of any mixture of those constituents will result in the distillate being closer in composition to the azeotrope than the original mixture, and the residue being farther from it. For example, if a 50/50 mixture of ethanol and water is distilled once, the distillate will be 80% ethanol and 20% water, which is closer to the azeotropic mixture than the original, which means the solution left behind will be poorer in ethanol. Distilling the 80/20% mixture produces a distillate that is 87% ethanol and 13% water. Further repeated distillations will produce mixtures that are progressively closer to the azeotropic ratio of 95.5/4.5%. No number of distillations will ever result in a distillate that exceeds the azeotropic ratio. Likewise, when distilling a mixture of ethanol and water that is richer in ethanol than the azeotrope, the distillate (contrary to intuition) will be poorer in ethanol than the original but still richer than the azeotrope. Distillation is one of the primary tools that chemists and chemical engineers use to separate mixtures into their constituents. Because distillation cannot separate the constituents of an azeotrope, the separation of azeotropic mixtures (also called azeotrope breaking) is a topic of considerable interest. Indeed, this difficulty led some early investigators to believe that azeotropes were actually compounds of their constituents. But there are two reasons for believing that this is not the case. One is that the molar ratio of the constituents of an azeotrope is not generally the ratio of small integers. For example, the azeotrope formed by water and acetonitrile contains 2.253 moles (or 9/4 with a relative error of just 2%) of acetonitrile for each mole of water. A more compelling reason for believing that azeotropes are not compounds is, as discussed in the last section, that the composition of an azeotrope can be affected by pressure. Contrast that with a true compound, carbon dioxide for example, which is two moles of oxygen for each mole of carbon no matter what pressure the gas is observed at. That azeotropic composition can be affected by pressure suggests a means by which such a mixture can be separated. Pressure swing distillation A hypothetical azeotrope of constituents X and Y is shown in the adjacent diagram. The diagram shows two sets of curves on a phase diagram: one at an arbitrarily chosen low pressure and another at an arbitrarily chosen, but higher, pressure. The composition of the azeotrope is substantially different between the high- and low-pressure plots: higher in X for the high-pressure system. The goal is to separate X in as high a concentration as possible starting from point A. At the low pressure, it is possible by progressive distillation to reach a distillate at point B, which is on the same side of the azeotrope as A. Successive distillation steps near the azeotropic composition exhibit very little difference in boiling temperature. If this distillate is now exposed to the high pressure, it boils at point C.
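The way repeated simple distillations creep toward a positive azeotrope from either side, without ever crossing it, can be sketched with the same kind of toy model used above. The Margules parameter and vapour pressures below are hypothetical, so the numbers illustrate the behaviour rather than the real ethanol/water system.

```python
import math

# Sketch of why repeated simple distillation converges to a positive azeotrope
# but never crosses it.  Two-suffix Margules activity coefficients with
# made-up constants, isothermal; purely illustrative.

P1_SAT, P2_SAT, A = 100.0, 80.0, 1.2   # assumed illustrative values

def vapor_composition(x1):
    """Equilibrium vapor mole fraction y1 over a liquid of composition x1."""
    x2 = 1.0 - x1
    g1 = math.exp(A * x2 ** 2)
    g2 = math.exp(A * x1 ** 2)
    p1 = x1 * g1 * P1_SAT
    p2 = x2 * g2 * P2_SAT
    return p1 / (p1 + p2)

def repeated_distillation(x1, steps=8):
    """Take the vapor of each step as the liquid charged to the next step."""
    path = [x1]
    for _ in range(steps):
        x1 = vapor_composition(x1)
        path.append(x1)
    return path

# Starting below and above the azeotrope (about x1 = 0.593 for these parameters):
print([round(v, 3) for v in repeated_distillation(0.30)])
print([round(v, 3) for v in repeated_distillation(0.90)])
# Both sequences approach ~0.593 from their own side and never cross it, which is
# why no number of distillations yields a distillate that passes the azeotrope.
```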
From C, by progressive distillation it is possible to reach a distillate at the point D, which is on the same side of the high-pressure azeotrope as C. If that distillate is then exposed again to the low pressure, it boils at point E, which is on the opposite side of the low-pressure azeotrope to A. So, by means of the pressure swing, it is possible to cross over the low-pressure azeotrope. When the solution at point E is boiled, the distillate is poorer in X than the residue, so progressive distillation can produce a residue as rich in X as is required. In summary: low-pressure rectification (A to B); high-pressure rectification (C to D); low-pressure stripping (E to target purity). Rectification: the distillate, or "tops", is retained and exhibits an increasingly lower boiling point. Stripping: the residue, or "bottoms", is retained and exhibits an increasingly higher boiling point. A mixture of 5% water with 95% tetrahydrofuran is an example of an azeotrope that can be economically separated using a pressure swing: a swing in this case between 1 atm and 8 atm. By contrast, the composition of the water/ethanol azeotrope discussed earlier is not affected enough by pressure to be easily separated using pressure swings; instead, an entrainer may be added that either modifies the azeotropic composition and exhibits immiscibility with one of the components, or extractive distillation may be used. Azeotropic distillation Other methods of separation involve introducing an additional agent, called an entrainer, that will affect the volatility of one of the azeotrope constituents more than another. When an entrainer is added to a binary azeotrope to form a ternary azeotrope, and the resulting mixture distilled, the method is called azeotropic distillation. The best known example is adding benzene or cyclohexane to the water/ethanol azeotrope. With cyclohexane as the entrainer, the ternary azeotrope is 7% water, 17% ethanol, and 76% cyclohexane, and boils at 62.1 °C. Just enough cyclohexane is added to the water/ethanol azeotrope to engage all of the water into the ternary azeotrope. When the mixture is then boiled, the ternary azeotrope vaporizes, leaving a residue composed almost entirely of the excess ethanol. Chemical action separation Another type of entrainer is one that has a strong chemical affinity for one of the constituents. Using again the example of the water/ethanol azeotrope, the liquid can be shaken with calcium oxide, which reacts strongly with water to form the nonvolatile compound calcium hydroxide. Nearly all of the calcium hydroxide can be separated by filtration and the filtrate redistilled to obtain 100% pure ethanol. A more extreme example is the azeotrope of 1.2% water with 98.8% diethyl ether. Ether holds the last bit of water so tenaciously that only a very powerful desiccant such as sodium metal added to the liquid phase can result in completely dry ether. Anhydrous calcium chloride is used as a desiccant for drying a wide variety of solvents since it is inexpensive and does not react with most nonaqueous solvents. Chloroform is an example of a solvent that can be effectively dried using calcium chloride. Distillation using a dissolved salt When a salt is dissolved in a solvent, it always has the effect of raising the boiling point of that solvent – that is, it decreases the volatility of the solvent.
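As a rough illustration of the boiling-point elevation just described, the sketch below applies the standard dilute-solution ebullioscopic relation. The salt loading and van 't Hoff factor are assumed values, and a real salt-effect distillation involves much stronger non-ideality than this simple relation captures.

```python
# Illustrative ebullioscopic calculation: how much a dissolved salt raises the
# boiling point of the solvent it dissolves in (and so lowers its volatility).
# The salt loading and the van 't Hoff factor below are assumed values.

K_B_WATER = 0.512              # ebullioscopic constant of water, K*kg/mol

def boiling_point_elevation(moles_salt, kg_solvent, van_t_hoff=2, k_b=K_B_WATER):
    """Return dT = i * Kb * b for an ideal dilute solution (b = molality)."""
    molality = moles_salt / kg_solvent
    return van_t_hoff * k_b * molality

# e.g. 2 mol of a fully dissociating 1:1 salt such as potassium acetate
# per kg of water (hypothetical loading):
dT = boiling_point_elevation(moles_salt=2.0, kg_solvent=1.0, van_t_hoff=2)
print(f"boiling point raised by about {dT:.2f} K")   # ~2.05 K for these inputs
```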
When the salt is readily soluble in one constituent of a mixture but not in another, the volatility of the constituent in which it is soluble is decreased and the other constituent is unaffected. In this way, for example, it is possible to break the water/ethanol azeotrope by dissolving potassium acetate in it and distilling the result. Extractive distillation Extractive distillation is similar to azeotropic distillation, except in this case the entrainer is less volatile than any of the azeotrope's constituents. For example, the azeotrope of 20% acetone with 80% chloroform can be broken by adding water and distilling the result. The water forms a separate layer in which the acetone preferentially dissolves. The result is that the distillate is richer in chloroform than the original azeotrope. Pervaporation and other membrane methods The pervaporation method uses a membrane that is more permeable to one constituent than to another to separate the constituents of an azeotrope as the mixture passes from the liquid to the vapor phase. The membrane is rigged to lie between the liquid and vapor phases. Another membrane method is vapor permeation, where the constituents pass through the membrane entirely in the vapor phase. In all membrane methods, the membrane separates the fluid passing through it into a permeate (that which passes through) and a retentate (that which is left behind). When the membrane is chosen so that it is more permeable to one constituent than another, then the permeate will be richer in that first constituent than the retentate.
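As a rough sketch of the permeate/retentate bookkeeping described above, the following solves a single, perfectly mixed membrane stage with an assumed ideal separation factor and stage cut. The numbers are illustrative only and do not describe any particular pervaporation or vapor-permeation membrane.

```python
# Minimal sketch of a single-stage membrane split (perfect-mixing idealization).
# Feed composition, separation factor and stage cut are assumed numbers.

def permeate_comp(x_retentate, alpha):
    """Permeate mole fraction of the faster-permeating component, given the
    retentate composition and an ideal separation factor alpha."""
    x = x_retentate
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def single_stage(x_feed, alpha, theta, tol=1e-10):
    """Solve the component balance x_feed = theta*y + (1-theta)*x for the
    retentate composition x by bisection; return (permeate, retentate)."""
    lo, hi = 0.0, x_feed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        y = permeate_comp(mid, alpha)
        balance = theta * y + (1.0 - theta) * mid - x_feed
        if balance > 0.0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    return permeate_comp(x, alpha), x

# Illustration: 5 mol% of the "fast" component in the feed, a membrane 50 times
# more permeable to it, and 10% of the feed taken off as permeate.
y_perm, x_ret = single_stage(x_feed=0.05, alpha=50.0, theta=0.10)
print(f"permeate: {y_perm:.3f}   retentate: {x_ret:.4f}")
# The permeate is strongly enriched in the fast component and the retentate depleted,
# which is the behaviour the text describes qualitatively.
```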
Physical sciences
Phase separations
Chemistry
56825
https://en.wikipedia.org/wiki/Eating%20disorder
Eating disorder
An eating disorder is a mental disorder defined by abnormal eating behaviors that adversely affect a person's physical or mental health. These behaviors may include eating either too much or too little. Types of eating disorders include binge eating disorder, where the patient keeps eating large amounts in a short period of time typically while not being hungry; anorexia nervosa, where the person has an intense fear of gaining weight and restricts food or overexercises to manage this fear; bulimia nervosa, where individuals eat a large quantity (binging) then try to rid themselves of the food (purging); pica, where the patient eats non-food items; rumination syndrome, where the patient regurgitates undigested or minimally digested food; avoidant/restrictive food intake disorder (ARFID), where people have a reduced or selective food intake due to some psychological reasons; and a group of other specified feeding or eating disorders. Anxiety disorders, depression and substance abuse are common among people with eating disorders. These disorders do not include obesity. People often experience comorbidity between an eating disorder and OCD. It is estimated 20–60% of patients with an ED have a history of OCD. The causes of eating disorders are not clear, although both biological and environmental factors appear to play a role. Cultural idealization of thinness is believed to contribute to some eating disorders. Individuals who have experienced sexual abuse are also more likely to develop eating disorders. Some disorders such as pica and rumination disorder occur more often in people with intellectual disabilities. Treatment can be effective for many eating disorders. Treatment varies by disorder and may involve counseling, dietary advice, reducing excessive exercise, and the reduction of efforts to eliminate food. Medications may be used to help with some of the associated symptoms. Hospitalization may be needed in more serious cases. About 70% of people with anorexia and 50% of people with bulimia recover within five years. Only 10% of people with eating disorders receive treatment, and of those, approximately 80% do not receive the proper care. Many are sent home weeks earlier than the recommended stay and are not provided with the necessary treatment. Recovery from binge eating disorder is less clear and estimated at 20% to 60%. Both anorexia and bulimia increase the risk of death. When people experience comorbidity with an eating disorder and OCD, certain aspects of treatment can be negatively impacted. OCD can make it harder to recover from obsession over weight and shape, body dissatisfaction, and body checking. This is in part because ED cognitions serve a similar purpose to OCD obsessions and compulsions (e.g., safety behaviors as temporary relief from anxiety). Research shows OCD does not have an impact on the BMI of patients during treatment. Estimates of the prevalence of eating disorders vary widely, reflecting differences in gender, age, and culture as well as methods used for diagnosis and measurement. In the developed world, anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year. Binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. According to one analysis, the percent of women who will have anorexia at some point in their lives may be up to 4%, or up to 2% for bulimia and binge eating disorders. Rates of eating disorders appear to be lower in less developed countries. 
Anorexia and bulimia occur nearly ten times more often in females than males. The typical onset of eating disorders is in late childhood to early adulthood. Rates of other eating disorders are not clear. Classification ICD and DSM diagnoses These eating disorders are specified as mental disorders in standard medical manuals, including the ICD-10 and the DSM-5. Anorexia nervosa (AN) is the restriction of energy intake relative to requirements, leading to significantly low body weight in the context of age, sex, developmental trajectory, and physical health. It is accompanied by an intense fear of gaining weight or becoming fat, as well as a disturbance in the way one experiences and appraises their body weight or shape. There are two subtypes of AN: the restricting type, and the binge-eating/purging type. The restricting type describes presentations in which weight loss is attained through dieting, fasting, and/or excessive exercise, with an absence of binge/purge behaviors. The binge-eating/purging type describes presentations in which the individual with the condition has engaged in recurrent episodes of binge-eating and purging behavior, such as self-induced vomiting, misuse of laxatives, and diuretics. Pubertal and post-pubertal females with anorexia often experience amenorrhea, that is the loss of menstrual periods, due to the extreme weight loss these individuals face. Although amenorrhea was a required criterion for a diagnosis of anorexia in the DSM-IV, it was dropped in the DSM-5 due to its exclusive nature, as male, post-menopause women, or individuals who do not menstruate for other reasons would fail to meet this criterion. Females with bulimia may also experience amenorrhea, although the cause is not clear. Bulimia nervosa (BN) is characterized by recurrent binge eating followed by compensatory behaviors such as purging (self-induced vomiting, eating to the point of vomiting, excessive use of laxatives/diuretics, or excessive exercise). Fasting may also be used as a method of purging following a binge. However, unlike anorexia nervosa, body weight is maintained at or above a minimally normal level. Severity of BN is determined by the number of episodes of inappropriate compensatory behaviors per week. Binge eating disorder (BED) is characterized by recurrent episodes of binge eating without use of inappropriate compensatory behaviors that are present in BN and AN binge-eating/purging subtype. Binge eating episodes are associated with eating much more rapidly than normal, eating until feeling uncomfortably full, eating large amounts of food when not feeling physically hungry, eating alone because of feeling embarrassed by how much one is eating, and/or feeling disgusted with oneself, depressed or very guilty after eating. For a BED diagnosis to be given, marked distress regarding binge eating must be present, and the binge eating must occur an average of once a week for 3 months. Severity of BED is determined by the number of binge eating episodes per week. Pica is the persistent eating of nonnutritive, nonfood substances in a way that is not developmentally appropriate or culturally supported. Although substances consumed vary with age and availability, paper, soap, hair, chalk, paint, and clay are among the most commonly consumed in those with a pica diagnosis. 
There are multiple causes for the onset of pica, including iron-deficiency anemia, malnutrition, and pregnancy, and pica often occurs in tandem with other mental health disorders associated with impaired function, such as intellectual disability, autism spectrum disorder, and schizophrenia. In order for a diagnosis of pica to be warranted, behaviors must last for at least one month. Rumination disorder encompasses the repeated regurgitation of food, which may be re-chewed, re-swallowed, or spit out. For this diagnosis to be warranted, behaviors must persist for at least one month, and regurgitation of food cannot be attributed to another medical condition. Additionally, rumination disorder is distinct from AN, BN, BED, and ARFID, and thus cannot occur during the course of one of these illnesses. Avoidant/restrictive food intake disorder (ARFID) is a feeding or eating disturbance, such as a lack of interest in eating food, avoidance based on sensory characteristics of food, or concern about aversive consequences of eating, that prevents one from meeting nutritional energy needs. It is frequently associated with weight loss, nutritional deficiency, or failure to meet growth trajectories. Notably, ARFID is distinguishable from AN and BN in that there is no evidence of a disturbance in the way in which one's body weight or shape is experienced. The disorder is not better explained by lack of available food, cultural practices, a concurrent medical condition, or another mental disorder. Other Specified Feeding or Eating Disorder (OSFED) is an eating or feeding disorder that does not meet full DSM-5 criteria for AN, BN, or BED. Examples of otherwise-specified eating disorders include individuals with atypical anorexia nervosa, who meet all criteria for AN except being underweight despite substantial weight loss; atypical bulimia nervosa, who meet all criteria for BN except that bulimic behaviors are less frequent or have not been ongoing for long enough; purging disorder; and night eating syndrome. Unspecified Feeding or Eating Disorder (USFED) describes feeding or eating disturbances that cause marked distress and impairment in important areas of functioning but that do not meet the full criteria for any of the other diagnoses. The specific reason the presentation does not meet criteria for a specified disorder is not given. For example, an USFED diagnosis may be given when there is insufficient information to make a more specific diagnosis, such as in an emergency room setting. Other Compulsive overeating, which may include habitual "grazing" of food or episodes of binge eating without feelings of guilt. Diabulimia, which is characterized by the deliberate manipulation of insulin levels by diabetics in an effort to control their weight. Drunkorexia, which is commonly characterized by purposely restricting food intake in order to reserve food calories for alcoholic calories, exercising excessively in order to burn calories from drinking, and over-drinking alcohol in order to purge previously consumed food. Food maintenance, which is characterized by a set of aberrant eating behaviors of children in foster care. Night eating syndrome, which is characterized by nocturnal hyperphagia (consumption of 25% or more of the total daily calories after the evening meal) with nocturnal ingestions, insomnia, loss of morning appetite and depression. 
Nocturnal sleep-related eating disorder, which is a parasomnia characterized by eating, habitually out-of-control, while in a state of NREM sleep, with no memory of this the next morning. Gourmand syndrome, a rare condition occurring after damage to the frontal lobe. Individuals develop an obsessive focus on fine foods. Orthorexia nervosa, a term used by Steven Bratman to describe an obsession with a "pure" diet, in which a person develops an obsession with avoiding unhealthy foods to the point where it interferes with the person's life. Klüver-Bucy syndrome, caused by bilateral lesions of the medial temporal lobe, includes compulsive eating, hypersexuality, hyperorality, visual agnosia, and docility. Prader-Willi syndrome, a genetic disorder associated with insatiable appetite and morbid obesity. Pregorexia, which is characterized by extreme dieting and over-exercising in order to control pregnancy weight gain. Prenatal undernutrition is associated with low birth weight, coronary heart disease, type 2 diabetes, stroke, hypertension, cardiovascular disease risk, and depression. Muscle dysmorphia is characterized by appearance preoccupation that one's own body is too small, too skinny, insufficiently muscular, or insufficiently lean. Muscle dysmorphia affects mostly males. Purging disorder. Recurrent purging behavior to influence weight or shape in the absence of binge eating. It is more properly a disorder of elimination rather than eating disorder. Symptoms and long-term effects Symptoms and complications vary according to the nature and severity of the eating disorder: Associated physical symptoms of eating disorders include weakness, fatigue, sensitivity to cold, reduced beard growth in men, reduction in waking erections, reduced libido, weight loss and growth failure. Frequent vomiting, which may cause acid reflux or entry of acidic gastric material into the laryngoesophageal tract, can lead to unexplained hoarseness. As such, individuals who induce vomiting as part of their eating disorder, such as those with anorexia nervosa, binge eating-purging type or those with purging-type bulimia nervosa, are at risk for acid reflux. Polycystic ovary syndrome (PCOS) is the most common endocrine disorder to affect women. Though often associated with obesity it can occur in normal weight individuals. PCOS has been associated with binge eating and bulimic behavior. Other possible manifestations are dry lips, burning tongue, parotid gland swelling, and temporomandibular disorders. Psychopathology The psychopathology of eating disorders centers around body image disturbance, such as concerns with weight and shape; self-worth being too dependent on weight and shape; fear of gaining weight even when underweight; denial of how severe the symptoms are and a distortion in the way the body is experienced. The main psychopathological features of anorexia were outlined in 1982 as problems in body perception, emotion processing and interpersonal relationships. Women with eating disorders have greater body dissatisfaction. This impairment of body perception involves vision, proprioception, interoception and tactile perception. There is an alteration in integration of signals in which body parts are experienced as dissociated from the body as a whole. Bruch once theorized that difficult early relationships were related to the cause of anorexia and how primary caregivers can contribute to the onset of the illness. A prominent feature of bulimia is dissatisfaction with body shape. 
However, dissatisfaction with body shape is not of diagnostic significance, as it is sometimes present in individuals with no eating disorder. This highly labile feature can fluctuate depending on changes in shape and weight, the degree of control over eating, and mood. In contrast, a necessary diagnostic feature for anorexia nervosa and bulimia nervosa is having overvalued ideas about shape and weight, which are relatively stable and partially related to the patients' low self-esteem. Pro-ana subculture Pro-ana refers to the promotion of behaviors related to the eating disorder anorexia nervosa. Several websites promote eating disorders, and can provide a means for individuals to communicate in order to maintain eating disorders. Members of these websites typically feel that their eating disorder is the only aspect of a chaotic life that they can control. These websites are often interactive and have discussion boards where individuals can share strategies, ideas, and experiences, such as diet and exercise plans that achieve extremely low weights. A study comparing the personal web-blogs that were pro-eating disorder with those focused on recovery found that the pro-eating disorder blogs contained language reflecting lower cognitive processing, used a more closed-minded writing style, contained less emotional expression and fewer social references, and focused more on eating-related content than did the recovery blogs. Causes There is no single cause of eating disorders. Many people with eating disorders also have body image disturbance and a comorbid body dysmorphic disorder (BDD), leading them to an altered perception of their body. Studies have found that a high proportion of individuals diagnosed with body dysmorphic disorder also had some type of eating disorder, with 15% of individuals having either anorexia nervosa or bulimia nervosa. This link between body dysmorphic disorder and anorexia stems from the fact that both BDD and anorexia nervosa are characterized by a preoccupation with physical appearance and a distortion of body image. There are also many other possibilities, such as environmental, social and interpersonal issues, that could promote and sustain these illnesses. The media are also often blamed for the rise in the incidence of eating disorders because media images of the idealized slim physical shape of people such as models and celebrities motivate or even pressure people to attempt to achieve slimness themselves. The media are accused of distorting reality, in the sense that people portrayed in the media are either naturally thin and thus unrepresentative of normality, or unnaturally thin, forcing their bodies to look like the ideal image by putting excessive pressure on themselves to look a certain way. While past findings have described eating disorders as primarily psychological, environmental, and sociocultural, further studies have uncovered evidence that there is a genetic component. Genetics Numerous studies show a genetic predisposition toward eating disorders. Twin studies have found slight genetic variance when considering the different criteria of both anorexia nervosa and bulimia nervosa as endophenotypes contributing to the disorders as a whole. A genetic link has been found on chromosome 1 in multiple family members of an individual with anorexia nervosa. An individual who is a first-degree relative of someone who has had or currently has an eating disorder is seven to twelve times more likely to have an eating disorder themselves.
Twin studies also show that at least a portion of the vulnerability to develop eating disorders can be inherited, and there is evidence of a genetic locus associated with susceptibility to anorexia nervosa. About 50% of eating disorder cases are attributable to genetics. Other cases are due to external reasons or developmental problems. There are also other neurobiological factors at play tied to emotional reactivity and impulsivity that could lead to binging and purging behaviors. Epigenetic mechanisms are means by which environmental effects alter gene expression via methods such as DNA methylation; these are independent of and do not alter the underlying DNA sequence. They are heritable, but also may occur throughout the lifespan, and are potentially reversible. Dysregulation of dopaminergic neurotransmission due to epigenetic mechanisms has been implicated in various eating disorders. Other candidate genes for epigenetic studies in eating disorders include leptin, pro-opiomelanocortin (POMC) and brain-derived neurotrophic factor (BDNF). A genetic correlation has been found between anorexia nervosa and OCD, suggesting a shared etiology. First- and second-degree relatives of probands with OCD have a greater chance of developing anorexia nervosa as genetic relatedness increases. Psychological Eating disorders are classified as Axis I disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) published by the American Psychiatric Association. There are various other psychological issues that may factor into eating disorders; some fulfill the criteria for a separate Axis I diagnosis or a personality disorder, which is coded on Axis II and is thus considered comorbid to the diagnosed eating disorder. Axis II disorders are subtyped into 3 "clusters": A, B and C. The causality between personality disorders and eating disorders has yet to be fully established. Some people have a previous disorder which may increase their vulnerability to developing an eating disorder. Some develop them afterwards. The severity and type of eating disorder symptoms have been shown to affect comorbidity. There has been controversy over various editions of the DSM diagnostic criteria, including the latest edition, DSM-5, published in 2013. Cognitive attentional bias Attentional bias may have an effect on eating disorders. Attentional bias is the preferential attention toward certain types of information in the environment while simultaneously ignoring others. Individuals with eating disorders can be thought to have schemas, knowledge structures, which are dysfunctional as they may bias judgement, thought, and behaviour in a manner that is self-destructive or maladaptive. They may have developed a disordered schema which focuses on body size and eating. Thus, this information is given the highest level of importance and overvalued among other cognitive structures. Researchers have found that people who have eating disorders tend to pay more attention to stimuli related to food. For people struggling to recover from an eating disorder or addiction, this tendency to pay attention to certain signals while discounting others can make recovery that much more difficult. Studies have utilized the Stroop task to assess the probable effect of attentional bias on eating disorders. This may involve separating food and eating words from body shape and weight words. Such studies have found that anorexic subjects were slower to colour-name food-related words than control subjects.
Other studies have noted that individuals with eating disorders have significant attentional biases associated with eating and weight stimuli. Personality traits There are various childhood personality traits associated with the development of eating disorders, such as perfectionism and neuroticism. These personality traits are found to link eating disorders and OCD. During adolescence these traits may become intensified due to a variety of physiological and cultural influences such as the hormonal changes associated with puberty, stress related to the approaching demands of maturity, and socio-cultural influences and perceived expectations, especially in areas that concern body image. Eating disorders have been associated with a fragile sense of self and with disordered mentalization. Many personality traits have a genetic component and are highly heritable. Maladaptive levels of certain traits may be acquired as a result of anoxic or traumatic brain injury, neurodegenerative diseases such as Parkinson's disease, neurotoxicity such as lead exposure, bacterial infection such as Lyme disease or parasitic infection such as Toxoplasma gondii, as well as hormonal influences. While studies are still continuing via the use of various imaging techniques such as fMRI, these traits have been shown to originate in various regions of the brain such as the amygdala and the prefrontal cortex. Disorders in the prefrontal cortex and the executive functioning system have been shown to affect eating behavior. Celiac disease People with gastrointestinal disorders may be at greater risk of developing disordered eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to gastrointestinal disease diagnosis may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome or inflammatory bowel disease who are not conscious of the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion and eating disorders because of concerns around cross contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of an unrecognized celiac disease in all people with an eating disorder, especially if they present any gastrointestinal symptom (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea or constipation), weight loss, or growth failure; and also routinely ask celiac patients about weight or body shape concerns, dieting or vomiting for weight control, to evaluate the possible presence of eating disorders, especially in women. Environmental influences Child maltreatment Child abuse, which encompasses physical, psychological, and sexual abuse as well as neglect, has been shown to approximately triple the risk of an eating disorder. Sexual abuse appears to double the risk of bulimia; however, the association is less clear for anorexia.
The risk for individuals developing eating disorders increases if the individual grew up in an invalidating environment where displays of emotions were often punished. Abuse that has also occurred in childhood produces intolerable difficult emotions that cannot be expressed in a healthy manner. Eating disorders come in as an escape coping mechanism, as a means to control and avoid overwhelming negative emotions and feelings. Those who report physical or sexual maltreatment as a child are at an increased risk of developing an eating disorder. Social isolation Social isolation has been shown to have a deleterious effect on an individual's physical and emotional well-being. Those that are socially isolated have a higher mortality rate in general as compared to individuals that have established social relationships. This effect on mortality is markedly increased in those with pre-existing medical or psychiatric conditions, and has been especially noted in cases of coronary heart disease. "The magnitude of risk associated with social isolation is comparable with that of cigarette smoking and other major biomedical and psychosocial risk factors." (Brummett et al.) Social isolation can be inherently stressful, depressing and anxiety-provoking. In an attempt to ameliorate these distressful feelings an individual may engage in emotional eating in which food serves as a source of comfort. The loneliness of social isolation and the inherent stressors thus associated have been implicated as triggering factors in binge eating as well. Waller, Kennerley and Ohanian (2007) argued that both bingeing–vomiting and restriction are emotion suppression strategies, but they are just utilized at different times. For example, restriction is used to pre-empt any emotion activation, while bingeing–vomiting is used after an emotion has been activated. Parental influence Parental influence has been shown to be an intrinsic component in the development of eating behaviors of children. This influence is manifested and shaped by a variety of diverse factors such as familial genetic predisposition, dietary choices as dictated by cultural or ethnic preferences, the parents' own body shape, how they talk about their own body, and eating patterns, the degree of involvement and expectations of their children's eating behavior as well as the interpersonal relationship of parent and child. It is also influenced by the general psychosocial climate of the home and whether a nurturing stable environment is present. It has been shown that maladaptive parental behavior has an important role in the development of eating disorders. As to the more subtle aspects of parental influence, it has been shown that eating patterns are established in early childhood and that children should be allowed to decide when their appetite is satisfied as early as the age of two. A direct link has been shown between obesity and parental pressure to eat more. Coercive tactics in regard to diet have not been proven to be efficacious in controlling a child's eating behavior. Affection and attention have been shown to affect the degree of a child's finickiness and their acceptance of a more varied diet. Adams and Crane (1980), have shown that parents are influenced by stereotypes that influence their perception of their child's body. The conveyance of these negative stereotypes also affects the child's own body image and satisfaction. 
Hilde Bruch, a pioneer in the field of studying eating disorders, asserts that anorexia nervosa often occurs in girls who are high achievers, obedient, and always trying to please their parents. Their parents have a tendency to be over-controlling and fail to encourage the expression of emotions, inhibiting daughters from accepting their own feelings and desires. Adolescent females in these overbearing families lack the ability to be independent from their families, yet realize the need to be, often resulting in rebellion. Controlling their food intake may make them feel better, as it provides them with a sense of control. Negative parental body-talk, meaning a parent's comments on their own weight, shape or size, is strongly correlated with disordered eating in their children. Children whose parents frequently engage in negative self-talk about their weight are three times as likely to practice extreme weight-control behaviors, such as disordered eating, as children who do not overhear negative parental body-talk. Additionally, negative body-talk from mothers is explicitly correlated with disordered eating in adolescent girls. Peer pressure In various studies, such as one conducted by The McKnight Investigators, peer pressure was shown to be a significant contributor to body image concerns and attitudes toward eating among subjects in their teens and early twenties. Eleanor Mackey and co-author Annette M. La Greca of the University of Miami studied 236 teen girls from public high schools in southeast Florida. "Teen girls' concerns about their own weight, about how they appear to others and their perceptions that their peers want them to be thin are significantly related to weight-control behavior", says psychologist Eleanor Mackey of the Children's National Medical Center in Washington and lead author of the study. "Those are really important." According to one study, 40% of 9- and 10-year-old girls are already trying to lose weight. Such dieting is reported to be influenced by peer behavior, with many of those individuals on a diet reporting that their friends also were dieting. The number of friends dieting and the number of friends who pressured them to diet also played a significant role in their own choices. Elite athletes have a significantly higher rate of eating disorders. Female athletes in sports such as gymnastics, ballet, and diving are found to be at the highest risk among all athletes. Women are more likely than men to acquire an eating disorder between the ages of 13 and 25. About 0–15% of those with bulimia and anorexia are men. Other psychological problems that can contribute to an eating disorder such as anorexia nervosa are depression and low self-esteem. Depression is a state of mind in which emotions are unstable, causing a person's eating habits to change due to sadness and a lack of interest in doing anything. According to PSYCOM, "Studies show that a high percentage of people with an eating disorder will experience depression." Depression can become a state of mind in which people take refuge and from which they struggle to escape, and it can strongly affect eating habits, particularly among teenagers. Teenagers are especially vulnerable to anorexia because the teenage years bring many physical changes and new ways of thinking. According to Life Works, an article about eating disorders, "People of any age can be affected by pressure from their peers, the media and even their families but it is worse when you're a teenager at school."
Teenagers can develop eating disorders such as anorexia due to peer pressure, which can also lead to depression. Many teens start down this path by feeling pressure to look a certain way or from feeling different. This can lead them to eat less and less, eventually resulting in anorexia, which can seriously harm their physical health. Cultural pressure Western perspective There is a cultural emphasis on thinness which is especially pervasive in western society. A child's perception of external pressure to achieve the ideal body that is represented by the media predicts the child's body image dissatisfaction, body dysmorphic disorder and an eating disorder. "The cultural pressure on men and women to be 'perfect' is an important predisposing factor for the development of eating disorders". Further, when women of all races base their self-evaluation upon what is considered the culturally ideal body, the incidence of eating disorders increases. Socioeconomic status (SES) has been viewed as a risk factor for eating disorders, the presumption being that possessing more resources allows an individual to actively choose to diet and reduce body weight. Some studies have also shown a relationship between increasing body dissatisfaction and increasing SES. However, once high socioeconomic status has been achieved, this relationship weakens and, in some cases, no longer exists. The media plays a major role in the way in which people view themselves. Countless magazine ads and commercials depict thin celebrities. Society has taught people that being accepted by others is necessary at all costs. This has led to the belief that in order to fit in one must look a certain way. Televised beauty competitions such as the Miss America Competition contribute to the idea of what it means to be beautiful because competitors are evaluated largely on their appearance. In addition to socioeconomic status, the world of sports is also considered a cultural risk factor. Athletics and eating disorders tend to go hand in hand, especially in sports where weight is a competitive factor. Gymnastics, horseback riding, wrestling, bodybuilding, and dancing are just a few sports that fall into this category of weight-dependent sports. Eating disorders among individuals who participate in competitive activities, especially women, often lead to physical and biological changes related to their weight that often mimic prepubescent stages. Often, as women's bodies change they lose their competitive edge, which leads them to take extreme measures to maintain their younger body shape. Men often struggle with binge eating followed by excessive exercise while focusing on building muscle rather than losing fat, but this goal of gaining muscle is just as much an eating disorder as obsessing over thinness. The following statistics, taken from Susan Nolen-Hoeksema's book (ab)normal psychology, show the estimated percentage of athletes who struggle with eating disorders by category of sport: aesthetic sports (dance, figure skating, gymnastics) – 35%; weight-dependent sports (judo, wrestling) – 29%; endurance sports (cycling, swimming, running) – 20%; technical sports (golf, high jumping) – 14%; ball game sports (volleyball, soccer) – 12%. Although most of these athletes develop eating disorders to keep their competitive edge, others use exercise as a way to maintain their weight and figure. This is just as serious as regulating food intake for competition.
Even though there is mixed evidence showing at what point athletes are challenged with eating disorders, studies show that regardless of competition level all athletes are at higher risk for developing eating disorders than non-athletes, especially those who participate in sports where thinness is a factor. Pressure from society is also seen within the homosexual community. Gay men are at greater risk of eating disorder symptoms than heterosexual men. Within the gay culture, muscularity gives the advantages of both social and sexual desirability and also power. These pressures, and the idea that another homosexual male may desire a mate who is thinner or more muscular, can possibly lead to eating disorders. The higher the eating disorder symptom score reported, the greater the concern about how others perceive them and the more frequent and excessive the exercise sessions. High levels of body dissatisfaction are also linked to external motivation for working out and to older age; however, having a thin and muscular body occurs more often among younger homosexual males than older ones. Most of the cross-cultural studies use definitions from the DSM-IV-TR, which has been criticized as reflecting a Western cultural bias. Thus, assessments and questionnaires may not be constructed to detect some of the cultural differences associated with different disorders. Also, when looking at individuals in areas potentially influenced by Western culture, few studies have attempted to measure how much an individual has adopted the mainstream culture or retained the traditional cultural values of the area. Lastly, the majority of the cross-cultural studies on eating disorders and body image disturbances occurred in Western nations and not in the countries or regions being examined. While there are many influences on how an individual processes their body image, the media does play a major role. Along with the media, parental influence, peer influence, and self-efficacy beliefs also play a large role in an individual's view of themselves. The way the media presents images can have a lasting effect on an individual's perception of their body image. Eating disorders are a worldwide issue, and while women are more likely to be affected by an eating disorder, they affect both genders (Schwitzer 2012). Because the media influence eating disorders whether they are shown in a positive or negative light, they have a responsibility to use caution when promoting images that project an ideal that many turn to eating disorders to attain. To try to address unhealthy body image in the fashion world, in 2015, France passed a law requiring models to be declared healthy by a doctor to participate in fashion shows. It also requires re-touched images to be marked as such in magazines. There is a relationship between "thin ideal" social media content and body dissatisfaction and eating disorders among young adult women, especially in the Western hemisphere. New research points to an "internalization" of distorted images online, as well as negative comparisons among young adult women. Most studies have been based in the U.S., the U.K., and Australia, places where the thin ideal is strong among women, as is the striving for the "perfect" body. In addition to mere media exposure, there is an online "pro-eating disorder" community. Through personal blogs and Twitter, this community promotes eating disorders as a "lifestyle", and continuously posts pictures of emaciated bodies, and tips on how to stay thin.
The hashtag "#proana" (pro-anorexia), is a product of this community, as well as images promoting weight loss, tagged with the term "thinspiration". According to social comparison theory, young women have a tendency to compare their appearance to others, which can result in a negative view of their own bodies and altering of eating behaviors, that in turn can develop disordered eating behaviors. When body parts are isolated and displayed in the media as objects to be looked at, it is called objectification, and women are affected most by this phenomenon. Objectification increases self-objectification, where women judge their own body parts as a mean of praise and pleasure for others. There is a significant link between self-objectification, body dissatisfaction, and disordered eating, as the beauty ideal is altered through social media. Although eating disorders are typically under diagnosed in people of color, they still experience eating disorders in great numbers. It is thought that the stress that those of color face in the United States from being multiply marginalized may contribute to their rates of eating disorders. Eating disorders, for these women, may be a response to environmental stressors such as racism, abuse and poverty. African perspective In the majority of many African communities, thinness is generally not seen as an ideal body type and most pressure to attain a slim figure may stem from influence or exposure to Western culture and ideology. Traditional African cultural ideals are reflected in the practice of some health professionals; in Ghana, pharmacists sell appetite stimulants to women who desire to, as Ghanaians stated, "grow fat". Girls are told that if they wish to find a partner and birth children they must gain weight. On the contrary, there are certain taboos surrounding a slim body image, specifically in West Africa. Lack of body fat is linked to poverty and HIV/AIDS. However, the emergence of Western and European influence, specifically with the introduction of such fashion and modelling shows and competitions, is changing certain views among body acceptance, and the prevalence of eating disorders has consequently increased. This acculturation is also related to how South Africa is concurrently undergoing rapid, intense urbanization. Such modern development is leading to cultural changes, and professionals cite rates of eating disorders in this region will increase with urbanization, specifically with changes in identity, body image, and cultural issues. Further, exposure to Western values through private Caucasian schools or caretakers is another possible factor related to acculturation which may be associated with the onset of eating disorders. Other factors which are cited to be related to the increasing prevalence of eating disorders in African communities can be related to sexual conflicts, such as psychosexual guilt, first sexual intercourse, and pregnancy. Traumatic events which are related to both family (i.e. parental separation) and eating related issues are also cited as possible effectors. Religious fasting, particularly around times of stress, and feelings of self-control are also cited as determinants in the onset of eating disorders. Asian perspective The West plays a role in Asia's economic development via foreign investments, advanced technologies joining financial markets, and the arrival of American and European companies in Asia, especially through outsourcing manufacturing operations. 
This exposure to Western culture, especially the media, imparts Western body ideals to Asian society, termed Westernization. In part, Westernization fosters eating disorders among Asian populations. However, there are also country-specific influences on the occurrence of eating disorders in Asia. China In China as well as other Asian countries, Westernization, migration from rural to urban areas, after-effects of sociocultural events, and disruptions of social and emotional support are implicated in the emergence of eating disorders. In particular, risk factors for eating disorders include higher socioeconomic status, preference for a thin body ideal, history of child abuse, high anxiety levels, hostile parental relationships, jealousy towards media idols, and above-average scores on the body dissatisfaction and interoceptive awareness sections of the Eating Disorder Inventory. Similarly to the West, researchers have identified the media as a primary source of pressures relating to physical appearance, which may even predict body change behaviors in males and females. Fiji While colonised by the British in 1874, Fiji kept a large degree of linguistic and cultural diversity which characterised the ethnic Fijian population. Though gaining independence in 1970, Fiji has rejected Western, capitalist values which challenged its mutual trusts, bonds, kinships and identity as a nation. Similar to studies conducted on Polynesian groups, ethnic Fijian traditional aesthetic ideals reflected a preference for a robust body shape; thus, the prevailing 'pressure to be slim,' thought to be associated with diet and disordered eating in many Western societies was absent in traditional Fiji. Additionally, traditional Fijian values would encourage a robust appetite and a widespread vigilance for and social response to weight loss. Individual efforts to reshape the body by dieting or exercise, thus traditionally was discouraged. However, studies conducted in 1995 and 1998 both demonstrated a link between the introduction of television in the country, and the emergence of eating disorders in young adolescent ethnic Fijian girls. Through the quantitative data collected in these studies there was found to be a significant increase in the prevalence of two key indicators of disordered eating: self-induced vomiting and high Eating Attitudes Test- 26. These results were recorded following prolonged television exposure in the community, and an associated increase in the percentage of households owning television sets. Additionally, qualitative data linked changing attitudes about dieting, weight loss and aesthetic ideas in the peer environment to Western media images. The impact of television was especially profound given the longstanding social and cultural traditions that had previously rejected the notions of dieting, purging and body dissatisfaction in Fiji. Additional studies in 2011 found that social network media exposure, independent of direct media and other cultural exposures, was also associated with eating pathology. Hong Kong From the early- to-mid- 1990s, a variant form of anorexia nervosa was identified in Hong Kong. This variant form did not share features of anorexia in the West, notably "fat-phobia" and distorted body image. Patients attributed their restrictive food intake to somatic complaints, such as epigastric bloating, abdominal or stomach pain, or a lack of hunger or appetite. 
Compared to Western patients, individuals with this variant anorexia demonstrated bulimic symptoms less frequently and tended to have lower pre-morbid body mass index. This form disapproves the assumption that a "fear of fatness or weight gain" is the defining characteristic of individuals with anorexia nervosa. India In the past, the available evidence did not suggest that unhealthy weight loss methods and eating disordered behaviors are common in India as proven by stagnant rates of clinically diagnosed eating disorders. However, it appears that rates of eating disorders in urban areas of India are increasing based on surveys from psychiatrists who were asked whether they perceived eating disorders to be a "serious clinical issue" in India. One notable Indian psychiatrist and eating disorder specialist Dr Udipi Gauthamadas is on record saying, "Disturbed eating attitudes and behaviours affect about 25 to 40 percent of adolescent girls and around 20 percent of adolescent boys. While on one hand there is increasing recognition of eating disorders in the country, there is also a persisting belief that this illness is alien to India. This prevents many sufferers from seeking professional help." 23.5% of respondents believed that rates of eating disorders were rising in Bangalore, 26.5% claimed that rates were stagnant, and 42%, the largest percentage, expressed uncertainty. It has been suggested that urbanization and socioeconomic status are associated with increased risk for body weight dissatisfaction. However, due to the physical size of and diversity within India, trends may vary throughout the country. American perspective Black and African American Historically, identifying as African American has been considered a protective factor for body dissatisfaction. Those identifying as African American have been found to have a greater acceptance of larger body image ideals and less internalization of the thin ideal, and African American women have reported the lowest levels of body dissatisfaction among the five major racial/ethnic groups in the US. However, recent research contradicts these findings, indicating that African American women may exhibit levels of body dissatisfaction comparable to other racial/ethnic minority groups. In this way, just because those who identify as African American may not internalize the thin ideal as strongly as other racial and ethnic groups, it does not mean that they do not hold other appearance ideals that may promote body shape concerns. Similarly, recent research shows that African Americans exhibit rates of disordered eating that are similar to or even higher than their white counterparts. American Indian and Alaska Native American Indian and Alaska Native women are more likely than white women to both experience a fear of losing control over their eating and to abuse laxatives and diuretics for weight control purposes. They have comparable rates of binge eating and other disordered weight control behaviors in comparison to other racial groups. Latinos Disproportionately high rates of disordered eating and body dissatisfaction have been found in Hispanics in comparison to other racial and ethnic groups. Studies have found significantly more laxative use in those identifying as Hispanic in comparison to non-Hispanic white counterparts. Specifically, those identifying as Hispanic may be at heightened risk of engaging in binge eating and bingeing/purging behaviors. 
Food insecurity Food insecurity is defined as inadequate access to sufficient food, both in terms of quantity and quality, in direct contrast to food security, which is conceptualized as having access to sufficient, safe, and nutritious food to meet dietary needs and preferences. Notably, levels of food security exist on a continuum from reliable access to food to disrupted access to food. Multiple studies have found food insecurity to be associated with eating pathology. A study conducted on individuals visiting a food bank in Texas found higher food insecurity to be correlated with higher levels of binge eating, overall eating disorder pathology, dietary restraint, compensatory behaviors and weight self-stigma. Findings of a replication study with a larger, more diverse sample mirrored these results, and a study looking at the relationship between food insecurity and bulimia nervosa similarly found greater food insecurity to be associated with elevated levels of eating pathology. Trauma One study has found that binge-eating disorder may stem from trauma, with some female patients engaging in binge eating to numb pain experienced through sexual trauma. There are various forms of trauma that individuals may have experienced, leading them to cope through an eating disorder. When in pain, individuals may attempt to exert control over eating, perceiving it as their only means of managing their life. Sexual Orientation and Gender Identity Sexual orientation, gender identity and gender norms influence people with eating disorders. Some patients have suggested that enforced heterosexuality and heterosexism led them to engage in disordered eating to align with norms associated with their gender identity. Families may restrict women's food intake to keep them thin, thus increasing their ability to attain a male romantic partner. Non-heterosexual male adolescents are consistently at higher risk of developing disordered eating than their heterosexual peers, linked to various body image concerns, including worries about weight, shape, muscle tone, and definition. Eating disorders in trans and non-binary adolescents are complicated by the fact that some eating disorder symptoms may affirm gender identity in transitioning patients, which complicates treatment. For example, loss of menstruation in birth-assigned females or a slender frame in birth-assigned males may align with their gender identity during transition. Mechanisms Biochemical: Eating behavior is a complex process controlled by the neuroendocrine system, of which the hypothalamic–pituitary–adrenal axis (HPA axis) is a major component. Dysregulation of the HPA axis has been associated with eating disorders, such as irregularities in the manufacture, amount or transmission of certain neurotransmitters, hormones or neuropeptides, and of amino acids such as homocysteine, elevated levels of which are found in AN and BN as well as depression. Serotonin: a neurotransmitter involved in depression that also has an inhibitory effect on eating behavior. Norepinephrine is both a neurotransmitter and a hormone; abnormalities in either capacity may affect eating behavior. Dopamine: in addition to being a precursor of norepinephrine and epinephrine, dopamine is a neurotransmitter that regulates the rewarding property of food. Neuropeptide Y, also known as NPY, is a hormone that encourages eating and decreases metabolic rate. 
Blood levels of NPY are elevated in patients with anorexia nervosa, and studies have shown that injection of this hormone into the brain of rats with restricted food intake increases their time spent running on a wheel. Normally the hormone stimulates eating in healthy patients, but under conditions of starvation it increases their activity rate, probably to increase the chance of finding food. The increased levels of NPY in the blood of patients with eating disorders may in part explain the instances of extreme over-exercising found in most anorexia nervosa patients. Leptin and ghrelin: leptin is a hormone produced primarily by the fat cells in the body; it has an inhibitory effect on appetite by inducing a feeling of satiety. Ghrelin is an appetite-inducing hormone produced in the stomach and the upper portion of the small intestine. Circulating levels of both hormones are an important factor in weight control. While often associated with obesity, both hormones and their respective effects have been implicated in the pathophysiology of anorexia nervosa and bulimia nervosa. Leptin can also be used to distinguish between constitutional thinness found in a healthy person with a low BMI and an individual with anorexia nervosa. Gut bacteria and immune system: studies have shown that a majority of patients with anorexia and bulimia nervosa have elevated levels of autoantibodies that affect hormones and neuropeptides that regulate appetite control and the stress response. There may be a direct correlation between autoantibody levels and associated psychological traits. A later study revealed that autoantibodies reactive with alpha-MSH are, in fact, generated against ClpB, a protein produced by certain gut bacteria, e.g. Escherichia coli. The ClpB protein was identified as a conformational antigen-mimetic of alpha-MSH. In patients with eating disorders, plasma levels of anti-ClpB IgG and IgM correlated with the patients' psychological traits. Infection: PANDAS is an abbreviation for the controversial Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal Infections hypothesis. Children with PANDAS are postulated to "have obsessive-compulsive disorder (OCD) and/or tic disorders such as Tourette syndrome, and in whom symptoms worsen following infections such as strep throat". (NIMH) PANDAS and the broader PANS are hypothesized to be a precipitating factor in the development of anorexia nervosa in some cases (PANDAS AN). Lesions: studies have shown that lesions to the right frontal lobe or temporal lobe can cause the pathological symptoms of an eating disorder. Tumors: tumors in various regions of the brain have been implicated in the development of abnormal eating patterns. Brain calcification: a study highlights a case in which prior calcification of the right thalamus may have contributed to the development of anorexia nervosa. Somatosensory homunculus: the representation of the body located in the somatosensory cortex, first described by the renowned neurosurgeon Wilder Penfield. The illustration was originally termed "Penfield's Homunculus", homunculus meaning little man. "In normal development this representation should adapt as the body goes through its pubertal growth spurt. However, in AN it is hypothesized that there is a lack of plasticity in this area, which may result in impairments of sensory processing and distortion of body image". 
(Bryan Lask, also proposed by VS Ramachandran) Obstetric complications: Studies have shown that maternal smoking and obstetric and perinatal complications such as maternal anemia, very pre-term birth (less than 32 weeks), being born small for gestational age, neonatal cardiac problems, preeclampsia, placental infarction and sustaining a cephalhematoma at birth increase the risk of developing either anorexia nervosa or bulimia nervosa. Some of these complications, such as placental infarction, maternal anemia and cardiac problems, may cause intrauterine hypoxia, while umbilical cord occlusion or cord prolapse may cause ischemia, resulting in cerebral injury. The prefrontal cortex in the fetus and neonate is highly susceptible to damage as a result of oxygen deprivation, which has been shown to contribute to executive dysfunction and ADHD, and may affect personality traits associated with both eating disorders and comorbid disorders, such as impulsivity, mental rigidity and obsessionality. The problem of perinatal brain injury, in terms of the costs to society and to the affected individuals and their families, is extraordinary. (Yafeng Dong, PhD) Symptom of starvation: Evidence suggests that the symptoms of eating disorders are actually symptoms of the starvation itself, not of a mental disorder. In a study involving thirty-six healthy young men who were subjected to semi-starvation, the men soon began displaying symptoms commonly found in patients with eating disorders. In this study, the healthy men ate approximately half of what they had become accustomed to eating and soon began developing symptoms and thought patterns (preoccupation with food and eating, ritualistic eating, impaired cognitive ability, other physiological changes such as decreased body temperature) that are characteristic symptoms of anorexia nervosa. The men used in the study also developed hoarding and obsessive collecting behaviors, even though they had no use for the items, which revealed a possible connection between eating disorders and obsessive–compulsive disorder. Diagnosis According to Pritts and Susman, "The medical history is the most powerful tool for diagnosing eating disorders". There are many medical disorders that mimic eating disorders and comorbid psychiatric disorders. Early detection and intervention can support a better recovery and can greatly improve the quality of life of these patients. In the past 30 years eating disorders have become increasingly conspicuous, and it is uncertain whether the changes in presentation reflect a true increase. Anorexia nervosa and bulimia nervosa are the most clearly defined subgroups of a wider range of eating disorders. Many patients present with subthreshold expressions of the two main diagnoses; others present with different patterns and symptoms. As eating disorders, especially anorexia nervosa, are thought of as being associated with young, white females, diagnosis of eating disorders in other races happens more rarely. In one study, when clinicians were presented with identical case studies demonstrating disordered eating symptoms in Black, Hispanic, and white women, 44% noted the white woman's behavior as problematic; 41% identified the Hispanic woman's behavior as problematic, and only 17% of the clinicians noted the Black woman's behavior as problematic (Gordon, Brattole, Wingate, & Joiner, 2006). 
Medical The diagnostic workup typically includes a complete medical and psychosocial history and follows a rational and formulaic approach to the diagnosis. Neuroimaging using fMRI, MRI, PET and SPECT scans has been used to detect cases in which a lesion, tumor or other organic condition has been either the sole causative or contributory factor in an eating disorder. "Right frontal intracerebral lesions with their close relationship to the limbic system could be causative for eating disorders, we therefore recommend performing a cranial MRI in all patients with suspected eating disorders" (Trummer M et al. 2002); "intracranial pathology should also be considered however certain is the diagnosis of early-onset anorexia nervosa. Second, neuroimaging plays an important part in diagnosing early-onset anorexia nervosa, both from a clinical and a research perspective" (O'Brien et al. 2001). Psychological After ruling out organic causes and the initial diagnosis of an eating disorder being made by a medical professional, a trained mental health professional aids in the assessment and treatment of the underlying psychological components of the eating disorder and any comorbid psychological conditions. The clinician conducts a clinical interview and may employ various psychometric tests. Some are general in nature while others were devised specifically for use in the assessment of eating disorders. Some of the general tests that may be used are the Hamilton Depression Rating Scale and the Beck Depression Inventory. Longitudinal research has shown that a young adult female's chance of developing bulimia increases with her current psychological pressure, and that as the person ages and matures, her emotional problems change or are resolved and the symptoms then decline. Several types of scales are currently used: (a) self-report questionnaires – EDI-3, BSQ, TFEQ, MAC, BULIT-R, QEWP-R, EDE-Q, EAT, NEQ and others; (b) semi-structured interviews – SCID-I, EDE and others; and (c) unstructured clinical interviews or observer-based rating scales, such as the Morgan–Russell scale. The majority of these scales were described and used in adult populations. Of all the scales evaluated and analyzed, only three have been described for child populations: the EAT-26 (children above 16 years), EDI-3 (children above 13 years), and ANSOCQ (children above 13 years). It is essential to develop specific scales for people under 18 years of age, given the increasing incidence of ED among children and the need for early detection and appropriate intervention. Moreover, accurate scales and telemedicine testing and diagnosis tools became especially important during the COVID-19 pandemic (Leti, Garner et al., 2020). Differential diagnoses There are multiple medical conditions which may be misdiagnosed as a primary psychiatric disorder, complicating or delaying treatment. These may have a synergistic effect on conditions which mimic an eating disorder or on a properly diagnosed eating disorder. Lyme disease is known as the "great imitator", as it may present as a variety of psychiatric or neurological disorders including anorexia nervosa. Gastrointestinal diseases, such as celiac disease, Crohn's disease, peptic ulcer, eosinophilic esophagitis or non-celiac gluten sensitivity, among others. 
Celiac disease is also known as the "great imitator", because it may involve several organs and cause an extensive variety of non-gastrointestinal symptoms, such as psychiatric and neurological disorders, including anorexia nervosa. Addison's disease is a disorder of the adrenal cortex which results in decreased hormonal production. Addison's disease, even in subclinical form, may mimic many of the symptoms of anorexia nervosa. Gastric adenocarcinoma is one of the most common forms of cancer in the world. Complications due to this condition have been misdiagnosed as an eating disorder. Hypothyroidism, hyperthyroidism, hypoparathyroidism and hyperparathyroidism may mimic some of the symptoms of, can occur concurrently with, be masked by or exacerbate an eating disorder. Toxoplasma seropositivity: even in the absence of symptomatic toxoplasmosis, Toxoplasma gondii exposure has been linked to changes in human behavior and psychiatric disorders including those comorbid with eating disorders, such as depression. In reported case studies, the response to antidepressant treatment improved only after adequate treatment for toxoplasma. Neurosyphilis: It is estimated that there may be up to one million cases of untreated syphilis in the US alone. "The disease can present with psychiatric symptoms alone, psychiatric symptoms that can mimic any other psychiatric illness". Many of the manifestations may appear atypical. Up to 1.3% of short-term psychiatric admissions may be attributable to neurosyphilis, with a much higher rate in the general psychiatric population. (Ritchie M, Perdigao J) Dysautonomia: a wide variety of autonomic nervous system (ANS) disorders may cause a wide variety of psychiatric symptoms including anxiety, panic attacks and depression. Dysautonomia usually involves failure of sympathetic or parasympathetic components of the ANS but may also include excessive ANS activity. Dysautonomia can occur in conditions such as diabetes and alcoholism. Psychological disorders which may be confused with an eating disorder, or be co-morbid with one: Emetophobia is an anxiety disorder characterized by an intense fear of vomiting. A person so impacted may develop rigorous standards of food hygiene, such as not touching food with their hands. They may become socially withdrawn to avoid situations which in their perception may make them vomit. Many who have emetophobia are diagnosed with anorexia or self-starvation. In severe cases of emetophobia, they may drastically reduce their food intake. Phagophobia is an anxiety disorder characterized by a fear of eating; it is usually initiated by an adverse experience while eating, such as choking or vomiting. Persons with this disorder may present with complaints of pain while swallowing. Body dysmorphic disorder (BDD) is listed as an obsessive-compulsive disorder that affects up to 2% of the population. BDD is characterized by excessive rumination over an actual or perceived physical flaw. BDD has been diagnosed equally among men and women. While BDD has been misdiagnosed as anorexia nervosa, it also occurs comorbidly in 39% of eating disorder cases. BDD is a chronic and debilitating condition which may lead to social isolation, major depression and suicidal ideation and attempts. Neuroimaging studies to measure response to facial recognition have shown activity predominantly in the left hemisphere, in the left lateral prefrontal cortex, lateral temporal lobe and left parietal lobe, showing a hemispheric imbalance in information processing. 
There is a reported case of the development of BDD in a 21-year-old male following an inflammatory brain process. Neuroimaging showed the presence of new atrophy in the frontotemporal region. Prevention Prevention aims to promote a healthy development before the occurrence of eating disorders. It also aims at the early identification of an eating disorder before it is too late to treat. Children as young as ages 5–7 are aware of the cultural messages regarding body image and dieting. Prevention involves bringing these issues to light. The following topics can be discussed with young children (as well as teens and young adults). Emotional Bites: a simple way to discuss emotional eating is to ask children why they might eat for reasons other than hunger. Talk about more effective ways to cope with emotions, emphasizing the value of sharing feelings with a trusted adult. Say No to Teasing: another concept is to emphasize that it is wrong to say hurtful things about other people's body sizes. Intuitive Eating: emphasize the importance of listening to one's body. That is, eat when you are hungry, pay attention to fullness, and choose foods that make you feel good. Children intuitively grasp these concepts. Additionally, parents can reinforce intuitive eating by removing value judgments of food as “good” or “bad” from conversations about food. Positive Body Talk: family members can help prevent eating disorders by not making negative comments about their own bodies. When children hear family members complain that they are fat or about the proportions of their bodies, this influences their own body image and is a contributing factor to the development of eating disorders. Fitness Comes in All Sizes: educate children about the genetics of body size and the normal changes occurring in the body. Discuss their fears and hopes about growing bigger. Focus on fitness and a balanced diet. The Internet and modern technologies provide new opportunities for prevention. Online programs have the potential to increase the use of prevention programs. The development and practice of prevention programs via online sources make it possible to reach a wide range of people at minimal cost. Such an approach can also make prevention programs sustainable. Parents can do a lot for their children at a young age to help keep them from ever viewing themselves through the lens of an eating disorder. Parents who are actively engaged in their children's lives often contribute to fostering a stronger sense of self-love in them. Treatment Treatment varies according to type and severity of eating disorder, and often more than one treatment option is utilized. Various forms of cognitive behavioral therapy have been developed for eating disorders and found to be useful. If a person is experiencing comorbidity between an eating disorder and OCD, exposure and response prevention, coupled with weight restoration and serotonin reuptake inhibitors, has proven most effective. Other forms of psychotherapy can also be useful. Family doctors play an important role in the early treatment of people with eating disorders by encouraging those who are reluctant to see a psychiatrist. Treatment can take place in a variety of different settings such as community programs, hospitals, day programs, and groups. The American Psychiatric Association (APA) recommends a team approach to treatment of eating disorders. The members of the team are usually a psychiatrist, therapist, and registered dietitian, but other clinicians may be included. 
That said, some treatment methods are: Cognitive behavioral therapy (CBT), which postulates that an individual's feelings and behaviors are caused by their own thoughts instead of external stimuli such as other people, situations or events; the idea is to change how a person thinks and reacts to a situation even if the situation itself does not change. See Cognitive behavioral treatment of eating disorders. Acceptance and commitment therapy: a type of CBT Enhanced cognitive behavioral therapy (CBT-E): the most widespread cognitive behavioral psychotherapy specific to eating disorders Cognitive remediation therapy (CRT), a set of cognitive drills or compensatory interventions designed to enhance cognitive functioning. Exposure and Response Prevention: a type of CBT; the gradual exposure to anxiety-provoking situations in a safe environment, to learn how to deal with the resulting discomfort The Maudsley anorexia nervosa treatment for adults (MANTRA), which focuses on addressing rigid information processing styles, emotional avoidance, pro-anorectic beliefs, and difficulties with interpersonal relationships. These four targets of treatment are proposed to be core maintenance factors within the Cognitive-Interpersonal Maintenance Model of anorexia nervosa. Dialectical behavior therapy Family therapy including "conjoint family therapy" (CFT), "separated family therapy" (SFT) and Maudsley Family Therapy. Behavioral therapy: focuses on gaining control and changing unwanted behaviors. Interpersonal psychotherapy (IPT) Cognitive Emotional Behaviour Therapy (CEBT) Art therapy Nutrition counseling and Medical nutrition therapy Self-help and guided self-help have been shown to be helpful in AN, BN and BED; this includes support groups and self-help groups such as Eating Disorders Anonymous and Overeaters Anonymous. Having meaningful relationships is often a path to recovery. Having a partner, friend or someone else close in one's life may lead a person away from problematic eating, according to Professor Cynthia M. Bulik. Psychoanalytic psychotherapy Inpatient care There are few studies on the cost-effectiveness of the various treatments. Treatment can be expensive; due to limitations in health care coverage, people hospitalized with anorexia nervosa may be discharged while still underweight, resulting in relapse and rehospitalization. Research has found that comorbidity between an eating disorder (e.g., anorexia nervosa, bulimia nervosa, and binge eating) and OCD does not impact the length of time patients spend in treatment, but can negatively impact treatment outcomes. For children with anorexia, the only well-established treatment is family treatment-behavior (a behaviorally based family therapy). For other eating disorders in children, however, there are no well-established treatments, though family treatment-behavior has been used in treating bulimia. A 2019 Cochrane review examined studies comparing the effectiveness of inpatient versus outpatient models of care for eating disorders. Four trials including 511 participants were studied, but the review was unable to draw any definitive conclusions as to the superiority of one model over another. Barriers to treatment A variety of barriers to eating disorder treatment have been identified, typically grouped into individual and systemic barriers. Individual barriers include shame, fear of stigma, cultural perceptions, minimizing the seriousness of the problem, unfamiliarity with mental health services, and a lack of trust in mental health professionals. 
Systemic barriers include language differences, financial limitations, lack of insurance coverage, inaccessible health care facilities, time conflicts, long waits, lack of transportation, and lack of child care. These barriers may be particularly exacerbated for those who identify outside of the skinny, white, affluent girl stereotype that dominates in the field of eating disorders, such that those who do not identify with this stereotype are much less likely to seek treatment. Conditions during the COVID-19 pandemic may increase the difficulties experienced by those with eating disorders, and the risk that otherwise healthy individuals may develop eating disorders. The pandemic has been a stressful life event for everyone, increasing anxiety and isolation, disrupting normal routines, creating economic strain and food insecurity, and making it more difficult and stressful to obtain needed resources including food and medical treatment. The COVID-19 pandemic in England exposed a dramatic rise in demand for eating disorder services which the English NHS struggled to meet. The National Institute for Health and Care Excellence and NHS England both advised that services should not impose thresholds using body mass index or duration of illness to determine whether treatment for eating disorders should be offered, but there were continuing reports that these recommendations were not followed. In terms of access to treatment, therapy sessions have generally switched from in-person to video calls. This may actually help people who previously had difficulty finding a therapist with experience in treating eating disorders, for example, those who live in rural areas. Studies suggest that virtual (telehealth) CBT can be as effective as face-to-face CBT for bulimia and other mental illnesses. To help patients cope with conditions during the pandemic, therapists may have to particularly emphasize strategies to create structure where little is present, build interpersonal connections, and identify and avoid triggers. Medication Orlistat is used in obesity treatment. Olanzapine seems to promote weight gain as well as the ability to ameliorate obsessional behaviors concerning weight gain. Zinc supplements have been shown to be helpful, and cortisol is also being investigated. Two pharmaceuticals, Prozac and Vyvanse, have been approved by the FDA to treat bulimia nervosa and binge-eating disorder, respectively. Olanzapine has also been used off-label to treat anorexia nervosa. Studies are also underway to explore psychedelic and psychedelic-adjacent medicines such as MDMA, psilocybin and ketamine for anorexia nervosa and binge-eating disorder. Outcomes For anorexia nervosa, bulimia nervosa, and binge eating disorder, there is a general agreement that full recovery rates range between 50% and 85%, with larger proportions of people experiencing at least partial remission. It can be a lifelong struggle or it can be overcome within months. Miscarriages: Pregnant women with a binge eating disorder have been shown to have a greater chance of having a miscarriage compared to pregnant women with any other eating disorders. In one study, out of a group of pregnant women being evaluated, 46.7% of the pregnancies of women diagnosed with BED ended in a miscarriage, compared with 23.0% in the control group. In the same study, 21.4% of women diagnosed with bulimia nervosa had their pregnancies end in miscarriage, compared with only 17.7% of the controls. 
Relapse: An individual who is in remission from BN and EDNOS (Eating Disorder Not Otherwise Specified) is at a high risk of falling back into the habit of self-harm. Factors such as high stress regarding their job, pressures from society, as well as other occurrences that inflict stress on a person, can push a person back to what they feel will ease the pain. A study tracked a group of selected people who had been diagnosed with either BN or EDNOS for 60 months. After the 60 months were complete, the researchers recorded whether or not the person had experienced a relapse. The results showed that a person previously diagnosed with EDNOS had a 41% chance of relapsing, while a person with BN had a 47% chance. Attachment insecurity: People who are showing signs of attachment anxiety will most likely have trouble communicating their emotional status and trouble seeking effective social support. Signs that a person has adopted this symptom include not showing recognition of their caregiver or of their own pain. In a clinical sample, it is clear that at the pretreatment step of a patient's recovery, more severe eating disorder symptoms directly correspond to higher attachment anxiety. The more this symptom increases, the more difficult it is to achieve eating disorder reduction prior to treatment. Impaired Decision Making: Studies have found mixed results on the relationship between eating disorders and decision making. Researchers have consistently found that patients with anorexia were less capable of thinking about the long-term consequences of their decisions when completing the Iowa Gambling Task, a test designed to measure a person's decision-making capabilities. Consequently, they were at a higher risk of making hastier, harmful choices. Anorexia symptoms include an increased chance of developing osteoporosis. Thinning of the hair as well as dry hair and skin are also very common. The muscles of the heart will also start to change if the patient receives no treatment. This causes the heart to have an abnormally slow heart rate along with low blood pressure. Heart failure becomes a major consideration when this begins to occur. Muscles throughout the body begin to lose their strength. This will cause the individual to begin feeling faint, drowsy, and weak. Along with these symptoms, the body will begin to grow a layer of hair called lanugo. The human body does this in response to the lack of heat and insulation due to the low percentage of body fat. Bulimia symptoms include heart problems such as an irregular heartbeat that can lead to heart failure and death. This occurs because of the electrolyte imbalance that is a result of the constant binge and purge process. The probability of a gastric rupture, a sudden rupture of the stomach lining that can be fatal, also increases. The acids that are contained in the vomit can cause a rupture in the esophagus as well as tooth decay. As a result of laxative abuse, irregular bowel movements may occur along with constipation. Sores along the lining of the stomach, called peptic ulcers, begin to appear, and the chance of developing pancreatitis increases. Binge eating symptoms include high blood pressure, which can cause heart disease if it is not treated. Many patients experience an increase in cholesterol levels. The chance of being diagnosed with gallbladder disease increases, which affects an individual's digestive tract. 
Risk of death Eating disorders result in about 7,000 deaths a year as of 2010, making them the mental illnesses with the highest mortality rate. Anorexia has a risk of death that is increased about fivefold, with 20% of these deaths the result of suicide. Rates of death in bulimia and other disorders are similar, at about a twofold increase. The mortality rate for those with anorexia is 5.4 per 1000 individuals per year, of which roughly 1.3 deaths per 1000 were due to suicide. A person who is or had been in an inpatient setting had a rate of 4.6 deaths per 1000. Of individuals with bulimia, about 2 persons per 1000 die per year, and among those with EDNOS about 3.3 per 1000 people die per year. Epidemiology It is a common misconception that eating disorders are restricted only to women, and this may have skewed research disproportionately to study female populations. In the developed world, binge eating disorder affects about 1.6% of women and 0.8% of men in a given year. Anorexia affects about 0.4% and bulimia affects about 1.3% of young women in a given year. Up to 4% of women have anorexia, 2% have bulimia, and 2% have binge eating disorder at some point in time. Anorexia and bulimia occur nearly ten times more often in females than males. Typically, they begin in late childhood or early adulthood. Rates of other eating disorders are not clear. Rates of eating disorders appear to be lower in less developed countries. In the United States, twenty million women and ten million men have an eating disorder at least once in their lifetime. Anorexia Rates of anorexia in the general population among women aged 11 to 65 range from 0 to 2.2%, and are around 0.3% among men. The incidence of female cases is low in general medicine or community specialist consultations, ranging from 4.2 to 8.3 per 100,000 individuals per year. The incidence of AN ranges from 109 to 270 per 100,000 individuals per year. Mortality varies according to the population considered. AN has one of the highest mortality rates among mental illnesses. The rates observed are 6.2 to 10.6 times greater than those observed in the general population, for follow-up periods ranging from 10 to 13 years. Standardized mortality ratios for anorexia vary from 1.36% to 20%. Bulimia Bulimia affects females 9 times more often than males. Approximately one to three percent of women develop bulimia in their lifetime. About 2% to 3% of women are currently affected in the United States. New cases occur in about 12 per 100,000 population per year. The standardized mortality ratio for bulimia is 1% to 3%. Binge eating disorder Reported rates vary from 1.3 to 30% among subjects seeking weight-loss treatment. Based on surveys, BED appears to affect about 1–2% of people at some point in their lives, with 0.1–1% of people affected in a given year. BED is more common among females than males. There have been no published studies investigating the effects of BED on mortality, although it is comorbid with disorders that are known to increase mortality risks. Economics As of 2017, the number of cost-effectiveness studies regarding eating disorders appeared to have been increasing over the previous six years. In 2011 United States dollars, annual healthcare costs were $1,869 greater among individuals with eating disorders compared to the general population. The added presence of mental health comorbidities was also associated with a higher, but not statistically significant, cost difference of $1,993. 
In 2013 Canadian dollars, the total hospital cost per admission for treatment of anorexia nervosa was $51,349 and the total societal cost was $54,932, based on an average length of stay of 37.9 days. For every unit increase in body mass index, there was also a 15.7% decrease in hospital cost. For patients in Ontario, Canada, who received specialized inpatient care for an eating disorder both out of country and in province, annual total healthcare costs were about $11 million before 2007 and $6.5 million in the years afterwards. For those treated out of country alone, costs were about $5 million before 2007 and $2 million in the years afterwards. Evolutionary perspective Evolutionary psychiatry, an emerging scientific discipline, has been studying mental disorders from an evolutionary perspective. Whether eating disorders have evolutionary functions or whether they are new, modern "lifestyle" problems is still debated.
Biology and health sciences
Mental disorder
56873
https://en.wikipedia.org/wiki/Lactose%20intolerance
Lactose intolerance
Lactose intolerance is caused by a lessened ability or a complete inability to digest lactose, a sugar found in dairy products. Humans vary in the amount of lactose they can tolerate before symptoms develop. Symptoms may include abdominal pain, bloating, diarrhea, flatulence, and nausea. These symptoms typically start thirty minutes to two hours after eating or drinking something containing lactose, with the severity typically depending on the amount consumed. Lactose intolerance does not cause damage to the gastrointestinal tract. Lactose intolerance is due to the lack of the enzyme lactase in the small intestines to break lactose down into glucose and galactose. There are four types: primary, secondary, developmental, and congenital. Primary lactose intolerance occurs as the amount of lactase declines as people grow up. Secondary lactose intolerance is due to injury to the small intestine. Such injury could be the result of infection, celiac disease, inflammatory bowel disease, or other diseases. Developmental lactose intolerance may occur in premature babies and usually improves over a short period of time. Congenital lactose intolerance is an extremely rare genetic disorder in which little or no lactase is made from birth. The reduction of lactase production starts typically in late childhood or early adulthood, but prevalence increases with age. Diagnosis may be confirmed if symptoms resolve following eliminating lactose from the diet. Other supporting tests include a hydrogen breath test and a stool acidity test. Other conditions that may produce similar symptoms include irritable bowel syndrome, celiac disease, and inflammatory bowel disease. Lactose intolerance is different from a milk allergy. Management is typically by decreasing the amount of lactose in the diet, taking lactase supplements, or treating the underlying disease. People are typically able to drink at least one cup of milk without developing symptoms, with greater amounts tolerated if drunk with a meal or throughout the day. Worldwide, around 65% of adults are affected by lactose malabsorption. Other mammals usually lose the ability to digest lactose after weaning. Lactose intolerance is the ancestral state of all humans before the recent evolution of lactase persistence in some cultures, which extends lactose tolerance into adulthood. Lactase persistence evolved in several populations independently, probably as an adaptation to the domestication of dairy animals around 10,000 years ago. Today the prevalence of lactose tolerance varies widely between regions and ethnic groups. The ability to digest lactose is most common in people of Northern European descent, and to a lesser extent in some parts of the Middle East and Africa. Lactose intolerance is most common among people of East Asian descent, with 90% lactose intolerance, people of Jewish descent, in many African countries and Arab countries, and among people of Southern European descent (notably amongst Greeks and Italians). Traditional food cultures reflect local variations in tolerance and historically many societies have adapted to low levels of tolerance by making dairy products that contain less lactose than fresh milk. The medicalization of lactose intolerance as a disorder has been attributed to biases in research history, since most early studies were conducted amongst populations which are normally tolerant, as well as the cultural and economic importance and impact of milk in countries such as the United States. 
Terminology Lactose intolerance primarily refers to a syndrome with one or more symptoms upon the consumption of food substances containing lactose sugar. Individuals may be lactose intolerant to varying degrees, depending on the severity of these symptoms. Hypolactasia is the term specifically for the small intestine producing little or no lactase enzyme. If a person with hypolactasia consumes lactose sugar, it results in lactose malabsorption. The digestive system is unable to process the lactose sugar, and the unprocessed sugars in the gut produce the symptoms of lactose intolerance. Lactose intolerance is not an allergy, because it is not an immune response, but rather a sensitivity to dairy caused by a deficiency of lactase enzyme. Milk allergy, occurring in about 2% of the population, is a separate condition, with distinct symptoms that occur when the presence of milk proteins trigger an immune reaction. Signs and symptoms The principal manifestation of lactose intolerance is an adverse reaction to products containing lactose (primarily milk), including abdominal bloating and cramps, flatulence, diarrhea, nausea, borborygmi, and vomiting (particularly in adolescents). These appear one-half to two hours after consumption. The severity of these signs and symptoms typically increases with the amount of lactose consumed; most lactose-intolerant people can tolerate a certain level of lactose in their diets without ill effects. Because lactose intolerance is not an allergy, it does not produce allergy symptoms (such as itching, hives, or anaphylaxis). Causes Lactose intolerance is a consequence of lactase deficiency, which may be genetic (primary hypolactasia and primary congenital alactasia) or environmentally induced (secondary or acquired hypolactasia). In either case, symptoms are caused by insufficient levels of lactase in the lining of the duodenum. Lactose, a disaccharide molecule found in milk and dairy products, cannot be directly absorbed through the wall of the small intestine into the bloodstream, so, in the absence of lactase, passes intact into the colon. Bacteria in the colon can metabolise lactose, and the resulting fermentation produces copious amounts of gas (a mixture of hydrogen, carbon dioxide, and methane) that causes the various abdominal symptoms. The unabsorbed sugars and fermentation products also raise the osmotic pressure of the colon, causing an increased flow of water into the bowels (diarrhea). Lactose intolerance in infants (congenital lactase deficiency) is caused by mutations in the LCT gene. The LCT gene provides the instructions for making lactase. Mutations are believed to interfere with the function of lactase, causing affected infants to have a severely impaired ability to digest lactose in breast milk or formula. Lactose intolerance in adulthood is a result of gradually decreasing activity (expression) of the LCT gene after infancy, which occurs in most humans. The specific DNA sequence in the MCM6 gene helps control whether the LCT gene is turned on or off. At least several thousand years ago, some humans developed a mutation in the MCM6 gene that keeps the LCT gene turned on even after breast feeding is stopped. Populations that are lactose intolerant lack this mutation. The LCT and MCM6 genes are both located on the long arm (q) of chromosome 2 in region 21. The locus can be expressed as 2q21. The lactase deficiency also could be linked to certain heritages and varies widely. 
A 2016 study of over 60,000 participants from 89 countries found regional prevalence of lactose malabsorption was "64% (54–74) in Asia (except Middle East), 47% (33–61) in eastern Europe, Russia, and former Soviet Republics, 38% (CI 18–57) in Latin America, 70% (57–83) in the Middle East, 66% (45–88) in northern Africa, 42% (13–71) in northern America, 45% (19–71) in Oceania, 63% (54–72) in sub-Saharan Africa, and 28% (19–37) in northern, southern and western Europe." According to Johns Hopkins Medicine, lactose intolerance is more common in Asian Americans, African Americans, Mexican Americans, and Native Americans. Analysis of the DNA of 94 ancient skeletons in Europe and Russia concluded that the mutation for lactose tolerance appeared about 4,300 years ago and spread throughout the European population. Some human populations have developed lactase persistence, in which lactase production continues into adulthood probably as a response to the benefits of being able to digest milk from farm animals. Some have argued that this links intolerance to natural selection favoring lactase-persistent individuals, but it is also consistent with a physiological response to decrease lactase production when it is not needed in cultures in which dairy products are not an available food source. Although populations in Europe, India, Arabia, and Africa were first thought to have high rates of lactase persistence because of a single mutation, lactase persistence has been traced to a number of mutations that occurred independently. Different alleles for lactase persistence have developed at least three times in East African populations, with persistence extending from 26% in Tanzania to 88% in the Beja pastoralist population in Sudan. The accumulation of epigenetic factors, primarily DNA methylation, in the extended LCT region, including the gene enhancer located in the MCM6 gene near C/T-13910 SNP, may also contribute to the onset of lactose intolerance in adults. Age-dependent expression of LCT in mice intestinal epithelium has been linked to DNA methylation in the gene enhancer. Lactose intolerance is classified according to its causes as: Primary hypolactasia Primary hypolactasia, or primary lactase deficiency, is genetic, develops in childhood at various ages, and is caused by the absence of a lactase persistence allele. In individuals without the lactase persistence allele, less lactase is produced by the body over time, leading to hypolactasia in adulthood. The frequency of lactase persistence, which allows lactose tolerance, varies enormously worldwide, with the highest prevalence in Northwestern Europe, declines across southern Europe and the Middle East and is low in Asia and most of Africa, although it is common in pastoralist populations from Africa. Secondary hypolactasia Secondary hypolactasia or secondary lactase deficiency, also called acquired hypolactasia or acquired lactase deficiency, is caused by an injury to the small intestine. This form of lactose intolerance can occur in both infants and lactase persistent adults and is generally reversible. It may be caused by acute gastroenteritis, coeliac disease, Crohn's disease, ulcerative colitis, chemotherapy, intestinal parasites (such as giardia), or other environmental causes. Primary congenital alactasia Primary congenital alactasia, also called congenital lactase deficiency, is an extremely rare, autosomal recessive enzyme defect that prevents lactase expression from birth. 
People with congenital lactase deficiency cannot digest lactose from birth, so cannot digest breast milk. This genetic defect is characterized by a complete lack of lactase (alactasia). About 40 cases have been reported worldwide, mainly limited to Finland. Before the 20th century, babies born with congenital lactase deficiency often did not survive, but death rates decreased with soybean-derived infant formulas and manufactured lactose-free dairy products. Diagnosis In order to assess lactose intolerance, intestinal function is challenged by ingesting more dairy products than can be readily digested. Clinical symptoms typically appear within 30 minutes, but may take up to two hours, depending on other foods and activities. Substantial variability in response (symptoms of nausea, cramping, bloating, diarrhea, and flatulence) is to be expected, as the extent and severity of lactose intolerance varies among individuals. The next step is to determine whether it is due to primary lactase deficiency or an underlying disease that causes secondary lactase deficiency. Physicians should investigate the presence of undiagnosed coeliac disease, Crohn's disease, or other enteropathies when secondary lactase deficiency is suspected and infectious gastroenteritis has been ruled out. Lactose intolerance is distinct from milk allergy, an immune response to cow's milk proteins. They may be distinguished in diagnosis by giving lactose-free milk, producing no symptoms in the case of lactose intolerance, but the same reaction as to normal milk in the presence of a milk allergy. A person can have both conditions. If positive confirmation is necessary, four tests are available. Hydrogen breath test In a hydrogen breath test, the most accurate lactose intolerance test, after an overnight fast, 25 grams of lactose (in a solution with water) are swallowed. If the lactose cannot be digested, enteric bacteria metabolize it and produce hydrogen, which, along with methane, if produced, can be detected on the patient's breath by a clinical gas chromatograph or compact solid-state detector. The test takes about 2.5 hours to complete. If the hydrogen levels in the patient's breath are high, they may have lactose intolerance. This test is not usually done on babies and very young children, because it can cause severe diarrhea. Lactose tolerance test In conjunction, measuring blood glucose level every 10 to 15 minutes after ingestion will show a "flat curve" in individuals with lactose malabsorption, while the lactase persistent will have a significant "top", with a typical elevation of 50% to 100%, within one to two hours. However, due to the need for frequent blood sampling, this approach has been largely replaced by breath testing. After an overnight fast, blood is drawn and then 50 grams of lactose (in aqueous solution) are swallowed. Blood is then drawn again at the 30-minute, 1-hour, 2-hour, and 3-hour marks. If the lactose cannot be digested, blood glucose levels will rise by less than 20 mg/dl. Stool acidity test This test can be used to diagnose lactose intolerance in infants, for whom other forms of testing are risky or impractical. The infant is given lactose to drink. If the individual is tolerant, the lactose is digested and absorbed in the small intestine; otherwise, it is not digested and absorbed, and it reaches the colon. The bacteria in the colon, mixed with the lactose, cause acidity in stools. Stools passed after the ingestion of the lactose are tested for level of acidity. 
If the stools are acidic, the infant is intolerant to lactose. Stool pH in lactose intolerance is less than 5.5. Intestinal biopsy An intestinal biopsy must confirm lactase deficiency following discovery of elevated hydrogen in the hydrogen breath test. Modern techniques have enabled a bedside test, identifying the presence of lactase enzyme on upper gastrointestinal endoscopy instruments. However, for research applications such as mRNA measurements, a specialist laboratory is required. Stool sugar chromatography Chromatography can be used to separate and identify undigested sugars present in faeces. Although lactose may be detected in the faeces of people with lactose intolerance, this test is not considered reliable enough to conclusively diagnose or exclude lactose intolerance. Genetic diagnostic Genetic tests may be useful in assessing whether a person has primary lactose intolerance. Lactase activity persistence in adults is associated with two polymorphisms, C/T 13910 and G/A 22018, located in the MCM6 gene. These polymorphisms may be detected by molecular biology techniques applied to DNA extracted from blood or saliva samples; genetic kits specific for this diagnosis are available. The procedure consists of extracting and amplifying DNA from the sample, followed by a hybridization protocol on a strip. Colored bands are obtained as a result, and depending on the combination of bands, it is possible to determine whether the patient is lactose intolerant. This test allows a noninvasive, definitive diagnosis. Management When lactose intolerance is due to secondary lactase deficiency, treatment of the underlying disease may allow lactase activity to return to normal levels. In people with celiac disease, lactose intolerance normally reverts or improves several months after starting a gluten-free diet, but temporary dietary restriction of lactose may be needed. People with primary lactase deficiency cannot modify their body's ability to produce lactase. In societies where lactose intolerance is the norm, it is not considered a condition that requires treatment. However, where dairy is a larger component of the normal diet, a number of efforts may be useful. There are four general principles in dealing with lactose intolerance: avoidance of dietary lactose, substitution to maintain nutrient intake, regulation of calcium intake, and use of an enzyme substitute. Regular consumption of dairy food by lactase-deficient individuals may also reduce symptoms of intolerance by promoting colonic bacteria adaptation. Dietary avoidance The primary way of managing the symptoms of lactose intolerance is to limit the intake of lactose to a level that can be tolerated. Lactase-deficient individuals vary in the amount of lactose they can tolerate, and some report that their tolerance varies over time, depending on health status and pregnancy. However, as a rule of thumb, people with primary lactase deficiency and no small intestine injury are usually able to consume at least 12 grams of lactose per sitting without symptoms, or with only mild symptoms, with greater amounts tolerated if consumed with a meal or throughout the day (a rough worked example appears after the list of dairy products and milk substitutes below). Lactose is found primarily in dairy products, which vary in the amount of lactose they contain: Milk – unprocessed cow's milk is about 4.7% lactose; goat's milk 4.7%; sheep's milk 4.7%; buffalo milk 4.86%; and yak milk 4.93%. Sour cream and buttermilk – if made in the traditional way, this may be tolerable, but most modern brands add milk solids. 
Yogurt – lactobacilli used in the production of yogurt metabolize lactose to varying degrees, depending on the type of yogurt. Some bacteria found in yogurt also produce their own lactase, which facilitates digestion in the intestines of lactose intolerant individuals. Cheese – The curdling of cheese concentrates most of the lactose from milk into the whey: fresh cottage cheese contains 7% of the lactose found in an equivalent mass of milk. Further fermentation and aging convert the remaining lactose into lactic acid; traditionally made hard cheeses, which have a long ripening period, contain virtually no lactose: cheddar contains less than 1.5% of the lactose found in an equivalent mass of milk. However, manufactured cheeses may be produced using processes that do not have the same lactose-reducing properties. There used to be a lack of standardization on how lactose is measured and reported in food. The different molecular weights of anhydrous lactose and lactose monohydrate result in up to a 5% difference. One source recommends using the "carbohydrates" or "sugars" part of the nutritional label as a surrogate for lactose content, but such "lactose by difference" values are not assured to correspond to real lactose content. The stated dairy content of a product also varies according to manufacturing processes and labelling practices, and commercial terminology varies between languages and regions. As a result, absolute figures for the amount of lactose consumed (by weight) may not be very reliable. Kosher products labeled pareve or fleishig are free of milk. However, if a "D" (for "dairy") is present next to the circled "K", "U", or other hechsher, the food product likely contains milk solids, although it may also simply indicate the product was produced on equipment shared with other products containing milk derivatives. Lactose is also a commercial food additive used for its texture, flavor, and adhesive qualities. It is found in additives labelled as casein, caseinate, whey, lactoserum, milk solids, modified milk ingredients, etc. As such, lactose is found in foods such as processed meats (sausages/hot dogs, sliced meats, pâtés), gravy stock powder, margarines, sliced breads, breakfast cereals, potato chips, processed foods, medications, prepared meals, meal replacements (powders and bars), protein supplements (powders and bars), and even beers in the milk stout style. Some barbecue sauces and liquid cheeses used in fast-food restaurants may also contain lactose. When dining out, carrying lactose intolerance cards that explain dietary restrictions in the local language can help communicate needs to restaurant staff. Lactose is often used as the primary filler (main ingredient) in most prescription and non-prescription solid pill-form medications, though product labeling seldom mentions the presence of 'lactose' or 'milk', and neither do the product monographs provided to pharmacists. Most pharmacists are unaware of the widespread use of lactose in such medications until they contact the supplier or manufacturer for verification. Milk substitutes Plant-based milks and derivatives such as soy milk, rice milk, almond milk, coconut milk, hazelnut milk, oat milk, hemp milk, macadamia nut milk, and peanut milk are inherently lactose-free. Low-lactose and lactose-free versions of foods are often available to replace dairy-based foods for those with lactose intolerance. 
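As a rough worked example, and only as a back-of-the-envelope sketch using the approximate figures given above (the roughly 12-gram tolerance threshold, cow's milk at about 4.7% lactose, and cheddar retaining less than 1.5% of milk's lactose), the threshold corresponds to about a large glass of milk but to an impractically large amount of aged hard cheese; actual tolerance and product composition vary widely.
\[
\frac{12\ \text{g lactose}}{0.047\ \text{g lactose per g milk}} \approx 255\ \text{g of milk}
\]
\[
\text{lactose per 100 g of cheddar} < 0.015 \times 4.7\ \text{g} \approx 0.07\ \text{g}
\]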
Lactase supplements When lactose avoidance is not possible, or on occasions when a person chooses to consume such items, then enzymatic lactase supplements may be used. Lactase enzymes similar to those produced in the small intestines of humans are produced industrially by fungi of the genus Aspergillus. The enzyme, β-galactosidase, is available in tablet form in a variety of doses, in many countries without a prescription. It functions well only in high-acid environments, such as that found in the human gut due to the addition of gastric juices from the stomach. Unfortunately, too much acid can denature it, so it should not be taken on an empty stomach. Also, the enzyme is ineffective if it does not reach the small intestine by the time the problematic food does. Lactose-sensitive individuals can experiment with both timing and dosage to fit their particular needs. While essentially the same process as normal intestinal lactose digestion, direct treatment of milk employs a different variety of industrially produced lactase. This enzyme, produced by yeast from the genus Kluyveromyces, takes much longer to act, must be thoroughly mixed throughout the product, and is destroyed by even mildly acidic environments. Its main use is in producing the lactose-free or lactose-reduced dairy products sold in supermarkets. Rehabituation to dairy products Regular consumption of dairy foods containing lactose can promote a colonic bacteria adaptation, enhancing a favorable microbiome, which allows people with primary lactase deficiency to diminish their intolerance and to consume more dairy foods. The way to induce tolerance is based on progressive exposure, consuming smaller amounts frequently, distributed throughout the day. Lactose intolerance can also be managed by ingesting live yogurt cultures containing lactobacilli that are able to digest the lactose in other dairy products. Epidemiology Worldwide, about 65% of people experience some form of lactose intolerance as they age past infancy, but there are significant differences between populations and regions. As few as 5% of northern Europeans are lactose intolerant, while as many as 90% of adults in parts of Asia are lactose intolerant. In northern European countries, early adoption of dairy farming conferred a selective evolutionary advantage to individuals that could tolerate lactose. This led to higher frequencies of lactose tolerance in these countries. For example, almost 100% of Irish people are predicted to be lactose tolerant. Conversely, regions of the south, such as Africa, did not adopt dairy farming as early and tolerance from milk consumption did not occur the same way as in northern Europe. Lactose intolerance is common among people of Jewish descent, as well as from West Africa, the Arab countries, Greece, and Italy. Different populations will present certain gene constructs depending on the evolutionary and cultural pre-settings of the geographical region. History Greater lactose tolerance has come about in two ways. Some populations have developed genetic changes to allow the digestion of lactose: lactase persistence. Other populations developed cooking methods like milk fermentation. Lactase persistence in humans evolved relatively recently (in the last 10,000 years) among some populations. Around 8,000 years ago in modern-day Turkey, humans became reliant on newly-domesticated animals that could be milked; such as cows, sheep, and goats. This resulted in higher frequency of lactase persistence. 
Lactase persistence became high in regions such as Europe, Scandinavia, the Middle East and Northwestern India. However, most people worldwide remain lactase non-persistent. Populations that raised animals not used for milk tend to have lactose intolerance rates of 90–100 percent. For this reason, lactase persistence is of some interest to the fields of anthropology, human genetics, and archaeology, which typically use the genetically derived persistence/non-persistence terminology. Aside from genetic predisposition, the rise of dairying and of dairy products made from cow milk varies across different regions of the world. The process of turning milk into cheese dates back earlier than 5200 BC. DNA analysis in February 2012 revealed that Ötzi was lactose intolerant, supporting the theory that lactose intolerance was still common at that time, despite the increasing spread of agriculture and dairying. Genetic analysis shows lactase persistence has developed several times in different places independently in an example of convergent evolution. History of research It was not until relatively recently that medicine recognised the worldwide prevalence of lactose intolerance and its genetic causes. Its symptoms were described as early as the time of Hippocrates (460–370 BC), but until the 1960s, the prevailing assumption was that tolerance was the norm. Intolerance was explained as the result of a milk allergy, intestinal pathogens, or as being psychosomatic – it being recognised that some cultures did not practice dairying, and people from those cultures often reacted badly to consuming milk. Two reasons have been given for this misconception. One was that early research was conducted solely on European-descended populations, which have an unusually low incidence of lactose intolerance and an extensive cultural history of dairying. As a result, researchers wrongly concluded that tolerance was the global norm. Another reason is that lactose intolerance tends to be under-reported: lactose intolerant individuals can tolerate at least some lactose before they show symptoms, and their symptoms differ in severity. The large majority of people are able to digest some quantity of milk, for example in tea or coffee, without developing any adverse effects. Fermented dairy products, such as cheese, also contain significantly less lactose than plain milk. Therefore, in societies where tolerance is the norm, many lactose intolerant people who consume only small amounts of dairy, or have only mild symptoms, may be unaware that they cannot digest lactose. Eventually, in the 1960s, it was recognised that lactose intolerance was correlated with race in the United States. Subsequent research revealed that lactose intolerance was more common globally than tolerance, and that the variation was due to genetic differences, not an adaptation to cultural practices. Other animals Most mammals normally cease to produce lactase and become lactose intolerant after weaning. The downregulation of lactase expression in mice could be attributed to the accumulation of DNA methylation in the Lct gene and the adjacent Mcm6 gene.
Biology and health sciences
Health and fitness: General
Health
56884
https://en.wikipedia.org/wiki/Plum
Plum
A plum is a fruit of some species in Prunus subg. Prunus. Dried plums are often called prunes, though in the United States they may be labeled as 'dried plums', especially during the 21st century. Plums are likely to have been one of the first fruits domesticated by humans, with origins in East European and Caucasian mountains and China. They were brought to Britain from Asia, and their cultivation has been documented in Andalusia, southern Spain. Plums are a diverse group of species, with trees reaching a height of when pruned. The fruit is a drupe, with a firm and juicy flesh. China is the largest producer of plums, followed by Romania and Serbia. Japanese or Chinese plums dominate the fresh fruit market, while European plums are also common in some regions. Plums can be eaten fresh, dried to make prunes, used in jams, or fermented into wine and distilled into brandy. Plum kernels contain cyanogenic glycosides, but the oil made from them is not commercially available. In terms of nutrition, raw plums are 87% water, 11% carbohydrates, 1% protein, and less than 1% fat. They are a moderate source of vitamin C but do not contain significant amounts of other micronutrients. History Plums are likely to have been one of the first fruits domesticated by humans. Three of the most abundantly cultivated species are not found in the wild, only around human settlements: Prunus domestica has been traced to East European and Caucasian mountains, while Prunus salicina and Prunus simonii originated in China. Plum remains have been found in Neolithic age archaeological sites along with olives, grapes and figs. According to Ken Albala, plums originated in Iran. They were brought to Britain from Asia. An article on plum tree cultivation in Andalusia (southern Spain) appears in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture. Plum cultivation is recorded in medieval monasteries in England. A garden with 'ploumes' and 'bulaces' is referred to by Chaucer. The cultivation of plums increased during the 17th and 18th centuries. During this period greengages were given their English name and the Mirabelle plum became firmly established. Advances in the development of new varieties in England were made by Thomas Rivers. Two examples of River's work are the varieties Early Rivers and Czar. Both are still esteemed. The fame of the Victoria plum, first sold in 1844, has been put down to good marketing rather than any inherent quality. Etymology and names The name plum derived from Old English "plum, plum tree", borrowed from Germanic or Middle Dutch, derived from Latin and ultimately from Ancient Greek , itself believed to be a loanword from an unknown language of Asia Minor. In the late 18th century, the word plum was used to indicate "something sweet or agreeable", probably in reference to tasty fruit pieces in desserts, as in the word sugar-plum. Description Plums are a diverse group of species. The commercially important plum trees are medium-sized, usually pruned to height. The tree is of medium hardiness. Without pruning, the trees can reach in height and spread across . They blossom in different months in different parts of the world; for example, in about January in Taiwan and early April in the United Kingdom. Fruits are usually of medium size, between in diameter, globose to oval. The flesh is firm and juicy. The fruit's peel is smooth, with a natural waxy surface that adheres to the flesh. 
The plum is a drupe, meaning its fleshy fruit surrounds a single hard fruitstone which encloses the fruit's seed. Cultivation and uses Japanese or Chinese plums are large and juicy with a long shelf life and therefore dominate the fresh fruit market. They are usually clingstone and not suitable for making prunes. They are cultivars of Prunus salicina or its hybrids. The cultivars developed in the US are mostly hybrids of P. salicina with P. simonii and P. cerasifera. Although these cultivars are often called Japanese plums, two of the three parents (P. salicina and P. simonii) originated from China and one (P. cerasifera) from Eurasia. In some parts of Europe, European plum (Prunus domestica) is also common in fresh fruit market. It has both dessert (eating) or culinary (cooking) cultivars, which include: Damson (purple or black skin, green flesh, clingstone, astringent) Prune plum (usually oval, freestone, sweet, fresh eaten or used to make prunes) Greengage (firm, green flesh and skin even when ripe) Mirabelle (dark yellow, predominantly grown in northeast France) Victoria (yellow flesh with a red or mottled skin) Yellowgage or golden plum (similar to greengage, but yellow) In West Asia, myrobalan plum or cherry plum (Prunus cerasifera) is also widely cultivated. In Russia, apart from these three commonly cultivated species, there are also many cultivars resulting from hybridization between Japanese plum and myrobalan plum, known as Russian plum (Prunus × rossica). When it flowers in the early spring, a plum tree will be covered in blossoms, and in a good year approximately 50% of the flowers will be pollinated and become plums. Flowering starts after 80 growing degree days. If the weather is too dry, the plums will not develop past a certain stage, but will fall from the tree while still tiny, green buds, and if it is unseasonably wet or if the plums are not harvested as soon as they are ripe, the fruit may develop a fungal condition called brown rot. Brown rot is not toxic, and some affected areas can be cut out of the fruit, but unless the rot is caught immediately, the fruit will no longer be edible. Plum is used as a food plant by the larvae of some Lepidoptera, including November moth, willow beauty and short-cloaked moth. The taste of the plum fruit ranges from sweet to tart; the skin itself may be particularly tart. It is juicy and can be eaten fresh or used in jam-making or other recipes. Plum juice can be fermented into plum wine. In central England, a cider-like alcoholic beverage known as plum jerkum is made from plums. Dried, salted plums are used as a snack, sometimes known as saladito or salao. Various flavors of dried plum are available at Chinese grocers and specialty stores worldwide. They tend to be much drier than the standard prune. Cream, ginseng, spicy, and salty are among the common varieties. Licorice is generally used to intensify the flavor of these plums and is used to make salty plum drinks and toppings for shaved ice or baobing. Pickled plums are another type of preserve available in Asia and international specialty stores. The Japanese variety, called umeboshi, is often used for rice balls, called onigiri or omusubi. The ume, from which umeboshi are made, is more closely related, however, to the apricot than to the plum. In the Balkans, plum is converted into an alcoholic drink named slivovitz (plum brandy, called in Bosnian, Croatian, Montenegrin or Serbian šljivovica). 
A large number of plums, of the Damson variety, are also grown in Hungary, where they are called szilva and are used to make lekvar (a plum paste jam), palinka (traditional fruit brandy), plum dumplings, and other foods. In Romania, 80% of the plum production is used to create a similar brandy, called țuică. As with many other members of the rose family, plum kernels contain cyanogenic glycosides, including amygdalin. Prune kernel oil is made from the fleshy inner part of the pit of the plum. Though not available commercially, the wood of plum trees is used by hobbyists and other private woodworkers for musical instruments, knife handles, inlays, and similar small projects. Production In 2019, global production of plums (data combined with sloes) was 12.6 million tonnes, led by China with 56% of the world total (table). Romania and Serbia were secondary producers. Nutrition Raw plums are 87% water, 11% carbohydrates, 1% protein, and less than 1% fat (table). In a reference serving, raw plums supply of food energy and are a moderate source only of vitamin C (12% Daily Value), with no other micronutrients in significant content (table). Species The numerous species of Prunus subg. Prunus are classified into many sections, but not all of them are called plums. Plums include species of sect. Prunus and sect. Prunocerasus, as well as P. mume of sect. Armeniaca. Only two plum species, the hexaploid European plum (Prunus domestica) and the diploid Japanese plum (Prunus salicina and hybrids), are of worldwide commercial significance. The origin of P. domestica is uncertain but may have involved P. cerasifera and possibly P. spinosa as ancestors. Other species of plum variously originated in Europe, Asia and America. Sect. Prunus (Old World plums) – leaves in bud rolled inwards; flowers 1–3 together; fruit smooth, often wax-bloomed Sect. Prunocerasus (New World plums) – leaves in bud folded inwards; flowers 3–5 together; fruit smooth, often wax-bloomed Sect. Armeniaca (apricots) – leaves in bud rolled inwards; flowers very short-stalked; fruit velvety; treated as a distinct subgenus by some authors In certain parts of the world, some fruits are called plums and are quite different from fruits known as plums in Europe or the Americas. For example, marian plums are popular in Thailand, Malaysia and Indonesia, otherwise also known as gandaria, plum mango, ma-praang, ma-yong, ramania, kundang, rembunia or setar. Another example is the loquat, also known as Japanese plum and Japanese medlar, as well as nispero, bibassier and wollmispel elsewhere. In South Asia and Southeast Asia, jambul, the fruit of a tropical tree in the family Myrtaceae, is similarly sometimes referred to as 'damson plum', and it is different from the damson plums found in Europe and the Americas. Jambul is also called Java plum, Malabar plum, Jaman, Jamun, Jamblang, Jiwat, Salam, Duhat, Koeli, Jambuláo or Koriang.
Biology and health sciences
Rosales
null
56887
https://en.wikipedia.org/wiki/Pineapple
Pineapple
The pineapple (Ananas comosus) is a tropical plant with an edible fruit; it is the most economically significant plant in the family Bromeliaceae. The pineapple is indigenous to South America, where it has been cultivated for many centuries. The introduction of the pineapple plant to Europe in the 17th century made it a significant cultural icon of luxury. Since the 1820s, pineapple has been commercially grown in greenhouses and many tropical plantations. Pineapples grow as a small shrub; the individual flowers of the unpollinated plant fuse to form a multiple fruit. The plant normally propagates from the offset produced at the top of the fruit or from a side shoot, and typically matures within a year. Description The pineapple is a herbaceous perennial, which grows to tall on average, although sometimes it can be taller. The plant has a short, stocky stem with tough, waxy leaves. When creating its fruit, it usually produces up to 200 flowers, although some large-fruited cultivars can exceed this. Once it flowers, the individual fruits of the flowers join together to create a multiple fruit. After the first fruit is produced, side shoots (called 'suckers' by commercial growers) are produced in the leaf axils of the main stem. These suckers may be removed for propagation, or left to produce additional fruits on the original plant. Commercially, suckers that appear around the base are cultivated. It has 30 or more narrow, fleshy, trough-shaped leaves that are long, surrounding a thick stem; the leaves have sharp spines along the margins. In the first year of growth, the axis lengthens and thickens, bearing numerous leaves in close spirals. After 12 to 20 months, the stem grows into a spike-like inflorescence up to long with over 100 spirally arranged, trimerous flowers, each subtended by a bract. In the wild, pineapples are pollinated primarily by hummingbirds. Certain wild pineapples are foraged and pollinated at night by bats. Under cultivation, because seed development diminishes fruit quality, pollination is performed by hand, and seeds are retained only for breeding. In Hawaii, where pineapples were cultivated and canned industrially throughout the 20th century, importation of hummingbirds was prohibited. The ovaries develop into berries, which coalesce into a large, compact, multiple fruit. The fruit of a pineapple is usually arranged in two interlocking helices, often with 8 in one direction and 13 in the other, each being a Fibonacci number. The pineapple carries out CAM photosynthesis, fixing carbon dioxide at night and storing it as the acid malate, then releasing it during the day aiding photosynthesis. Taxonomy The pineapple comprises five botanical varieties, formerly regarded as separate species. The genomes of three varieties, including the wild progenitor variety bracteatus, have been sequenced. History Etymology The first reference in English to the pineapple fruit was the 1568 translation from the French of André Thevet's The New Found World, or Antarctike where he refers to a , a fruit cultivated and eaten by the Tupinambá people, living near modern Rio de Janeiro, and now believed to be a pineapple. Later in the same English translation, he describes the same fruit as a "Nana made in the manner of a Pine apple", where he used another Tupi word , meaning 'excellent fruit'. This usage was adopted by many European languages and led to the plant's scientific binomial , where 'tufted' refers to the stem of the plant. 
Purchas, writing in English in 1613, referred to the fruit as Ananas, but the Oxford English Dictionary first record of the word pineapple itself by an English writer is by Mandeville in 1714. Precolonial cultivation The wild plant originates from the Paraná–Paraguay River drainages between southern Brazil and Paraguay. Little is known about its domestication, but it spread as a crop throughout South America. Archaeological evidence of use is found as far back as 1200–800 BC (3200–2800 BP) in Peru and 200 BC – 700 AD (2200–1300 BP) in Mexico, where it was cultivated by the Mayas and the Aztecs. By the late 1400s, cropped pineapple was widely distributed and a staple food of Native Americans. The first European to encounter the pineapple was Christopher Columbus, in Guadeloupe on 4 November 1493. The Portuguese took the fruit from Brazil and introduced it into India by 1550. The '' cultivar was also introduced by the Spanish from Latin America to the Philippines, and it was grown to produce piña fibers that would then be used to produce textiles from at least the 17th century. Columbus brought the plant back to Spain and called it , meaning "pine of the Indians". The pineapple was documented in Peter Martyr's Decades of the New World (1516) and Antonio Pigafetta's (1524–1525), and the first known illustration was in Oviedo's (1535). Old World introduction While the pineapple fascinated Europeans as a fruit of colonialism, it was not successfully cultivated in Europe until Pieter de la Court (1664–1739) developed greenhouse horticulture near Leiden. Pineapple plants were distributed from the Netherlands to English gardeners in 1719 and French ones in 1730. In England, the first pineapple was grown at Dorney Court, Dorney in Buckinghamshire, and a huge "pineapple stove" to heat the plants was built at the Chelsea Physic Garden in 1723. In France, King Louis XV was presented with a pineapple that had been grown at Versailles in 1733. In Russia, Peter the Great imported de la Court's method into St. Petersburg in the 1720s; in 1730, twenty pineapple saplings were transported from there to a greenhouse at Empress Anna's new Moscow palace. Because of the expense of direct import and the enormous cost in equipment and labour required to grow them in a temperate climate, in greenhouses called "pineries", pineapple became a symbol of wealth. They were initially used mainly for display at dinner parties, rather than being eaten, and were used again and again until they began to rot. In the second half of the 18th century, the production of the fruit on British estates became the subject of great rivalry between wealthy aristocrats. John Murray, 4th Earl of Dunmore, built a hothouse on his estate surmounted by a huge stone cupola 14 metres tall in the shape of the fruit; it is known as the Dunmore Pineapple. In architecture, pineapple figures became decorative elements symbolizing hospitality. Since the 19th century: mass commercialization Many different varieties, mostly from the Antilles, were tried for European glasshouse cultivation. The most significant cultivar was "Smooth Cayenne", first imported to France in 1820, then subsequently re-exported to the United Kingdom in 1835, and then from UK, the cultivation spread via Hawaii to Australia and Africa. The "Smooth Cayenne" cultivar (and sub-selections or clones of the "Smooth Cayenne") make up for the majority of world pineapple production today. 
Jams and sweets based on pineapple were imported to Europe from the West Indies, Brazil, and Mexico from an early date. By the early 19th century, fresh pineapples were transported direct from the West Indies in large enough quantities to reduce European prices. Later pineapple production was dominated by the Azores for Europe, and Florida and the Caribbean for North America, because of the short trade routes. The Spanish had introduced the pineapple into Hawaii in the 18th century where it is known as the ("foreign hala"), but the first commercial plantation was established in 1886. The most famous investor was James Dole, who moved to Hawaii in 1899 and started a pineapple plantation in 1900 which would grow into the Dole Food Company. Dole and Del Monte began growing pineapples on the island of Oahu in 1901 and 1917, respectively, and the Maui Pineapple Company began cultivation on Maui in 1909. James Dole began the commercial processing of pineapple, and Dole employee Henry Ginaca invented an automatic peeling and coring machine in 1911. Hawaiian production started to decline from the 1970s because of competition and the shift to refrigerated sea transport. Dole ceased its cannery operations in Honolulu in 1991, and in 2008, Del Monte terminated its pineapple-growing operations in Hawaii. In 2009, the Maui Pineapple Company reduced its operations to supply pineapples only locally on Maui, and by 2013, only the Dole Plantation on Oahu grew pineapples in a volume of about 0.1 percent of the world's production. Despite this decline, the pineapple is sometimes used as a symbol of Hawaii. Further, foods with pineapple in them are sometimes known as "Hawaiian" for this reason alone. In the Philippines, "Smooth Cayenne" was introduced in the early 1900s by the US Bureau of Agriculture during the American colonial period. Dole and Del Monte established plantations in the island of Mindanao in the 1920s; in the provinces of Cotabato and Bukidnon, respectively. Large scale canning had started in Southeast Asia, including in the Philippines, from 1920. This trade was severely damaged by World War II, and Hawaii dominated the international trade until the 1960s. The Philippines remain one of the top exporters of pineapples in the world. The Del Monte plantations are now locally managed, after Del Monte Pacific Ltd., a Filipino company, completed the purchase of Del Monte Foods in 2014. Composition Nutrition Raw pineapple pulp is 86% water, 13% carbohydrates, 0.5% protein, and contains negligible fat (table). In a 100-gram reference amount, raw pineapple supplies of food energy, and is a rich source of manganese (40% Daily Value, DV) and vitamin C (53% DV), but otherwise contains no micronutrients in significant amounts (table). Phytochemistry Pineapple fruits and peels contain diverse phytochemicals, among which are polyphenols, including gallic acid, syringic acid, vanillin, ferulic acid, sinapic acid, coumaric acid, chlorogenic acid, epicatechin, and arbutin. Present in all parts of the pineapple plant, bromelain is a mixture of proteolytic enzymes. It is present in stem, fruit, crown, core, leaves of pineapple itself. Bromelain is under preliminary research for treatment of a variety of clinical disorders, but has not been adequately defined for its effects in the human body. Bromelain may be unsafe for some users, such as in pregnancy, allergies, or anticoagulation therapy. Having sufficient bromelain content, raw pineapple juice may be useful as a meat marinade and tenderizer. 
Although pineapple enzymes can interfere with the preparation of some foods or manufactured products, such as gelatin-based desserts or gel capsules, their proteolytic activity responsible for such properties may be degraded during cooking and canning. The quantity of bromelain in a typical serving of pineapple fruit is probably not significant, but specific extraction can yield sufficient quantities for domestic and industrial processing. Varieties Cultivars Many cultivars are known. The leaves of the commonly grown "Smooth Cayenne" cultivar and its various clones are smooth, and it is the most commonly grown worldwide. Many cultivars have become distributed from its origins in Paraguay and the southern part of Brazil, and later improved stocks were introduced into the Americas, the Azores, Africa, India, Malaysia and Australia. Varieties include: "Hilo" is a compact, 1.0- to 1.5-kg (2– to 3-lb) Hawaiian variant of smooth cayenne; the fruit is more cylindrical and produces many suckers, but no slips. "Kona sugarloaf", at 2.5 to 3.0 kg (5–6 lb), has white flesh with no woodiness in the center, is cylindrical in shape, and has a high sugar content but no acid; it has an unusually sweet fruit. "Natal queen", at 1.0 to 1.5 kg (2 to 3 lb), has golden yellow flesh, crisp texture, and delicate mild flavor; well-adapted to fresh consumption, it keeps well after ripening. It has spiny leaves and is grown in Australia, Malaysia, and South Africa. "Pernambuco" ("eleuthera") weighs 1–2 kg (2–4 lb), and has pale yellow to white flesh. It is sweet, melting in texture, and excellent for eating fresh; it is poorly adapted for shipping, has spiny leaves, and is grown in Latin America. "Red Spanish", at 1–2 kg (2–4 lb), has pale yellow flesh with a pleasant aroma, is squarish in shape, and well-adapted for shipping as fresh fruit to distant markets; it has spiny leaves and is grown in Latin America and the Philippines. It was the original pineapple cultivar in the Philippines grown for their leaf fibers (piña) in the traditional Philippine textile industry. "Smooth cayenne", a 2.5- to 3.0-kg (5- to 6-lb), pale yellow– to yellow-fleshed, cylindrical fruit with high sugar and acid content, is well-adapted to canning and processing; its leaves are without spines. It is an ancient cultivar developed by Amerind peoples. In some parts of Asia, this cultivar is known as Sarawak, after an area of Malaysia in which it is grown. It is one of the ancestors of cultivars "73-50" (also called "MD-1" and "CO-2") and "73–114" (also called "MD-2"). Smooth cayenne was previously the variety produced in Hawaii, and the most easily obtainable in U.S. grocery stores, but was replaced over the course of the mid-1990s and 2000s by MD-2. The success of Del Monte's MD-2 caused Dole to obtain & grow its own MD-2 pineapples, leading to Del Monte Fresh Produce Co. v. Dole Food Co.. Some Ananas species are grown as ornamentals for color, novel fruit size, and other aesthetic qualities. In the US, in 1986, the Pineapple Research Institute was dissolved and its assets divided between Del Monte and Maui Land and Pineapple. Del Monte took cultivar '73–114', dubbed 'MD-2', to its plantations in Costa Rica, found it to be well-suited to growing there, and launched it publicly in 1996 as 'Gold Extra Sweet', while Del Monte also began marketing '73–50', dubbed 'CO-2', as 'Del Monte Gold'. The Maui Pineapple Company began growing variety 73-50 in 1988 and named it Maui Gold. 
The successor company to MPC, the Hali'imaile Pineapple Company continues to grow Maui Gold on the slopes of Haleakala. Production In 2022, world production of pineapples was 29 million tonnes, led by Indonesia, the Philippines, and Costa Rica, each producing about 3 million tonnes. Uses Culinary The flesh and juice of the pineapple are used in cuisines around the world. In many tropical countries, pineapple is prepared and sold on roadsides as a snack. It is sold whole or in halves with a stick inserted. Whole, cored slices with a cherry in the middle are a common garnish on hams in the West. Chunks of pineapple are used in desserts such as fruit salad, as well as in some savory dishes, including the Hawaiian pizza, or as a grilled ring on a hamburger. Traditional dishes that use pineapple include , , , and Hawaiian haystack. Crushed pineapple is used in yogurt, jam, sweets, and ice cream. The juice of the pineapple is served as a beverage, and it is also the main ingredient in cocktails such as the and in the drink . In the Philippines, a traditional jelly-like dessert called has also been produced since the 18th century. It is made by fermenting pineapple juice with the bacteria Komagataeibacter xylinus. Pineapple vinegar is an ingredient found in both Honduran and Filipino cuisine, where it is produced locally. In Mexico, it is usually made with peels from the whole fruit, rather than the juice; however, in Taiwanese cuisine, it is often produced by blending pineapple juice with grain vinegar. The European Union consumed 50% of the global total for pineapple juice in 2012–2016. The Netherlands was the largest importer of pineapple juice in Europe. Thailand, Costa Rica and the Netherlands are the major suppliers to the European Union market in 2012–2016. Countries consuming the most pineapple juice in 2017 were Thailand, Indonesia and the Philippines, having combined consumption of 47% of the world total. The consumption of pineapple juice in China and India is low compared to their populations. Textiles The 'Red Spanish' cultivar of pineapples were once extensively cultivated in the Philippines. The long leaves of the cultivar were the source of traditional fibers, an adaptation of the native weaving traditions with fibers extracted from . These were woven into lustrous lace-like fabrics usually decorated with intricate floral embroidery known as and . The fabric was a luxury export from the Philippines during the Spanish colonial period and gained favor among European aristocracy in the 18th and 19th centuries. Domestically, they were used to make the traditional , , and clothing of the Filipino upper class, as well as women's kerchiefs (). They were favored for their light and breezy quality, which was ideal in the hot tropical climate of the islands. The industry was destroyed in the Second World War and is only starting to be revived. Houseplant The variety A. comosus 'Variegatus' is occasionally grown as a houseplant. It needs direct sunlight and thrives at temperatures of , with a minimum winter temperature of . It should be kept humid, but the soil should be allowed to dry out between waterings. It has almost no resting period but should be repotted each spring until the container reaches . Cultivation In commercial farming, flowering can be induced artificially, and the early harvesting of the main fruit can encourage the development of a second crop of smaller fruits. Once removed during cleaning, the top of the pineapple can be planted in soil and a new plant will grow. 
Slips and suckers are planted commercially. Storage and transport Some buyers prefer green fruit, others ripened or off-green. A plant growth regulator, Ethephon, is typically sprayed onto the fruit one week before harvest, developing ethylene, which turns the fruit golden yellow. After cleaning and slicing, a pineapple is typically canned in sugar syrup with added preservative. A pineapple never becomes any riper than it was when harvested since it is a non-climacteric fruit. Ethical and environmental concerns Like most modern fruit production, pineapple plantations are highly industrialized operations. In Costa Rica particularly, the pineapple industry uses large amounts of insecticides to protect the crop, which have caused health problems in many workers. These workers often receive little compensation, and are mostly poor migrants, often Nicaraguan. Workers' wages also decrease every time prices are lowered overseas. In 2016, the government declared that it would be trying to improve the situation, with the help of various other groups. Historically, tropical fruit agriculture, such as for pineapples, has been concentrated in so-called "banana republics". Illegal drug trade Export pineapples from Costa Rica to Europe are often used as a cover for narcotrafficking, and containers are impounded routinely in both locations. Expansion into protected areas In Costa Rica, pineapple cultivation has expanded into the Maquenque, , Barra del Colorado and Caño Negro wildlife refuges, all located in the north of the country. As those are protected areas and not national parks, limited and restricted sustainable activities are allowed, however pineapple plantations are industrial operations and many of these do not have the proper license to operate in the protected areas, or were started before either the designation of the area, recent regulations or the creation of the environmental regulatory agency (Setena) in 1996. The agency has registers for around of pineapple plantations operating within protected areas, but satellite imagery from 2018 reports around . Pests and diseases Pineapples are subject to a variety of diseases, the most serious of which is wilt disease vectored by mealybugs typically found on the surface of pineapples, but possibly in the closed blossom cups. Other diseases include citrus pink disease, bacterial heart rot, anthracnose, fungal heart rot, root rot, black rot, butt rot, fruitlet core rot, and yellow spot virus. Pineapple pink disease (not citrus pink disease) is characterized by the fruit developing a brownish to black discoloration when heated during the canning process. The causal agents of pink disease are the bacteria Acetobacter aceti, Gluconobacter oxydans, Pantoea citrea and Tatumella ptyseos. Some pests that commonly affect pineapple plants are scales, thrips, mites, mealybugs, ants, and symphylids. Heart-rot is the most serious disease affecting pineapple plants. The disease is caused by Phytophthora cinnamomi and P. parasitica, fungi that often affect pineapples grown in wet conditions. Since it is difficult to treat, it is advisable to guard against infection by planting resistant cultivars where these are available; all suckers that are required for propagation should be dipped in a fungicide, since the fungus enters through the wounds.
Biology and health sciences
Poales
null
56891
https://en.wikipedia.org/wiki/Carbon%20%28API%29
Carbon (API)
Carbon was one of two primary C-based application programming interfaces (APIs) developed by Apple for the macOS (formerly Mac OS X and OS X) operating system. Carbon provided a good degree of backward compatibility for programs that ran on Mac OS 8 and 9. Developers could use the Carbon APIs to port (“carbonize”) their “classic” Mac applications and software to the Mac OS X platform with little effort, compared to porting the app to the entirely different Cocoa system, which originated in OPENSTEP. With the release of macOS 10.15 Catalina, the Carbon API was officially discontinued and removed, leaving Cocoa as the sole primary API for developing macOS applications. Carbon was an important part of Apple's strategy for bringing Mac OS X to market, offering a path for quick porting of existing software applications, as well as a means of shipping applications that would run on either Mac OS X or the classic Mac OS. As the market has increasingly moved to the Cocoa-based frameworks, especially after the release of iOS, the need for a porting library was reduced. Apple did not create a 64-bit version of Carbon while updating their other frameworks in the 2007 time-frame, and eventually deprecated the entire API in OS X 10.8 Mountain Lion, which was released on July 24, 2012. History Classic Mac OS programming The original Mac OS used Pascal as its primary development platform, and the APIs were heavily based on Pascal's call semantics. Much of the Macintosh Toolbox consisted of procedure calls, passing information back and forth between the API and program using a variety of data structures based on Pascal's variant record concept. Over time, a number of object libraries evolved on the Mac, notably the Object Pascal library MacApp and the THINK C Think Class Library, and later versions of MacApp and CodeWarrior's PowerPlant in C++. Rhapsody With the purchase of NeXT in late 1996, Apple developed a new operating system strategy based largely on the existing OPENSTEP for Mach platform. The new Rhapsody OS strategy was relatively simple; it retained most of OpenStep's existing object libraries under the name "Yellow Box", ported the existing GUI in OPENSTEP for Mach and made it look more Mac-like, ported several major APIs from the Mac OS to Rhapsody's underlying Unix-like system (notably QuickTime and AppleSearch), and added an emulator known as the "Blue Box" that ran existing Mac OS software. When this plan was revealed at the Worldwide Developers Conference (WWDC) in 1997 there was some push-back from existing Mac OS developers who were upset that their code bases would be effectively locked into an emulator that was unlikely to ever be updated. They took to calling the Blue Box the "penalty box". Larger developers like Microsoft and Adobe balked outright, and refused to consider porting to OpenStep, which was so different from the existing Mac OS that there was little or no compatibility. Apple took these concerns to heart. When Steve Jobs announced Apple's change in direction at the next WWDC in 1998, he stated that "what developers really wanted was a modern version of the Mac OS, and Apple [was] going to deliver it". The original Rhapsody concept, with only the Blue Box for running existing Mac OS software, was eventually released in 1999 as Mac OS X Server 1.0. This was the only release based on the original Rhapsody concept. Cocoa and Carbon In order to offer a real and well supported upgrade path for existing Mac OS code bases, Apple introduced the Carbon system. 
Carbon consists of many libraries and functions that offer a Mac-like API, but running on top of the underlying Unix-like OS, rather than a copy of the Mac OS running in emulation. The Carbon libraries are extensively cleaned up, modernized and better "protected". While the Mac OS was filled with APIs that shared memory to pass data, under Carbon all such access was re-implemented using accessor subroutines on opaque data types. This allowed Carbon to support true multitasking and memory protection, features Mac developers had been requesting for a decade. Other changes from the pre-existing API removed features which were conceptually incompatible with Mac OS X, or simply obsolete. For example, applications could no longer install interrupt handlers or device drivers. In order to support Carbon, the entire Rhapsody model changed. Whereas Rhapsody would effectively be OpenStep with an emulator, under the new system both the OpenStep and Carbon API would, where possible, share common code. To do this, many of the useful bits of code from the lower-levels of the OpenStep system, written in Objective-C and known as Foundation, were re-implemented in pure C. This code became known as Core Foundation, or CF for short. A version of the Yellow Box ported to call CF became the new Cocoa API, and the Mac-like calls of Carbon also called the same functions. Under the new system, Carbon and Cocoa were peers. This conversion would normally have slowed the performance of Cocoa as the object methods called into the underlying C libraries, but Apple used a technique they called toll-free bridging to reduce this impact. As part of this conversion, Apple also ported the graphics engine from the licence-encumbered Display PostScript to the licence-free Quartz (which has been called "Display PDF"). Quartz provided native calls that could be used from either Carbon or Cocoa, as well as offering Java 2D-like interfaces as well. The underlying operating system itself was further isolated and released as Darwin. Release and evolution Carbon was introduced in incomplete form in 2000, as a shared library backward-compatible with 1997's Mac OS 8.1. This version allowed developers to port their code to Carbon without losing the ability for those programs to run on existing Mac OS machines. Porting to Carbon became known as "Carbonization". Official Mac OS X support arrived in 2001 with the release of Mac OS X v10.0, the first public version of the new OS. Carbon was very widely used in early versions of Mac OS X by almost all major software houses, even by Apple. The Finder, for instance, remained a Carbon application for many years, only being ported to Cocoa with the release of Mac OS X 10.6 in 2009. The transition to 64-bit Macintosh applications beginning with Mac OS X v10.5, released October 26, 2007, brought the first major limitations to Carbon. Apple does not provide compatibility between the Macintosh graphical user interface and the C programming language in the 64-bit environment, instead requiring the use of the Objective-C dialect with the Cocoa API. Many commentaries took this to be the first sign of Carbon's eventual disappearance, a position that was re-enforced when Apple stated no new major additions would be added to the Carbon system, and further reinforced with its deprecation in 2012. 
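The shift from shared in-memory structures to accessor functions on opaque types, described above, can be illustrated with a small sketch. The example below is not taken from Apple documentation; the function name and the choice of window properties are illustrative, and it assumes a WindowRef obtained elsewhere in the program. It contrasts the classic habit of reaching directly into a WindowRecord with the Carbon accessors that replaced it.

    #include <Carbon/Carbon.h>
    #include <stdio.h>

    /* Classic code could read fields of the WindowRecord directly, e.g.
     *     Rect r = ((WindowPeek) window)->port.portRect;
     * Under Carbon the WindowRef is opaque, so the same information is
     * obtained through accessor functions instead. */
    static void ReportWindowState(WindowRef window)
    {
        Rect content;

        /* Accessor call replaces direct access to the opaque structure. */
        GetWindowBounds(window, kWindowContentRgn, &content);

        if (IsWindowVisible(window))
            printf("content area: %d x %d\n",
                   content.right - content.left,
                   content.bottom - content.top);
    }

Because code written against the accessors never depends on the layout of the underlying structures, the same source could be built against CarbonLib on Mac OS 8 and 9 or against Carbon on Mac OS X, which is what made single-source applications for both systems possible.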
Transition to Cocoa Despite the purported advantages of Cocoa, the need to rewrite large amounts of legacy code slowed the transition of Carbon-based applications, famously with Adobe Photoshop, which was eventually updated to Cocoa in April 2010. This also extended to Apple's own flagship software packages, as iTunes and Final Cut Pro (as well as the features in the QuickTime engine that powers it) remained written in Carbon for many years. Both iTunes and Final Cut Pro X have since been released in Cocoa versions. Deprecation and discontinuation In 2012, with the release of OS X 10.8 Mountain Lion, most Carbon APIs were considered deprecated. The APIs were still accessible to developers and all Carbon applications still ran, but the APIs would no longer be updated. On June 28, 2017, Apple announced that 32-bit software for macOS, such as all Carbon applications, would no longer be supported “without compromise” on versions of macOS after macOS 10.13 High Sierra. macOS 10.15 Catalina officially removed support for 32-bit applications, including all Carbon applications. Architecture Carbon descends from the Toolbox, and as such, is composed of "Managers". Each Manager is a functionally related API, defining sets of data structures and functions to manipulate them. Managers are often interdependent or layered. Carbon consists of a broad set of functions for managing files, memory, data, the user interface, and other system services. It is implemented as any other API: in macOS, it is spread over several frameworks (each a structure built around a shared library), principally Carbon.framework, ApplicationServices.framework, and CoreServices.framework, and in classic Mac OS, it resides in a single shared library named CarbonLib. As an umbrella term encompassing all C-language API procedures accessing Mac-specific functionality, Carbon is not designed as a discrete system. Rather, it opens nearly all the functionality of macOS to developers who do not know the Objective-C language required for the broadly equivalent Cocoa API. Carbon is compatible with all of the several executable formats available for PowerPC Mac OS. Binary compatibility between Mac OS X and previous versions requires use of a Preferred Executable Format file, which Apple never supported in their Xcode IDE. Newer parts of Carbon tend to be much more object-oriented in their conception, most of them based on Core Foundation. Some Managers, such as the HIView Manager (a superset of the Control Manager), are implemented in C++, but Carbon remains a C API. Some examples of Carbon Managers: File Manager — manages access to the file system, opening, closing, reading and writing files. Resource Manager — manages access to resources, which are predefined chunks of data a program may require. Calls File Manager to read and write resources from disk files. Examples of resources include icons, sounds, images, templates for widgets, etc. Font Manager — manages fonts. Deprecated (as part of QuickDraw) since Mac OS X v10.4, in favor of Apple Type Services (ATS). QuickDraw — 2D graphics primitives. Deprecated since Mac OS X v10.4, in favor of Quartz 2D. Carbon Event Manager — converts user and system activity into events that code can recognise and respond to. HIObject — a completely new object-oriented API which brings to Carbon an OO model for building GUIs. HIToolbox in Mac OS Classic and Copland relied on abandoned IBM System Object Model, so Carbon had to provide quick-and-dirty replacement to enable porting of legacy code. 
This is available in Mac OS X v10.2 or later, and gives Carbon programmers some of the tools that Cocoa developers have long been familiar with. Starting with Mac OS X v10.2, HIObject is the base class for all GUI elements in Carbon. HIView is supported by Interface Builder, part of Apple's developer tools. Traditionally GUI architectures of this sort have been left to third-party application frameworks to provide. Starting with Mac OS X v10.4, HIObjects are NSObjects and inherit the ability to be serialized into data streams for transport or saving to disk. HITheme — uses QuickDraw and Quartz to render graphical user interface (GUI) elements to the screen. HITheme was introduced in Mac OS X v10.3, and Appearance Manager is a compatibility layer on top of HITheme since that version. HIView Manager — manages creation, drawing, hit-testing, and manipulation of controls. Since Mac OS X v10.2, all controls are HIViews. In Mac OS X v10.4, the Control Manager was renamed HIView Manager. Window Manager — manages creation, positioning, updating, and manipulation of windows. Since Mac OS X v10.2, windows have a root HIView. Menu Manager — manages creation, selection, and manipulation of menus. Since Mac OS X v10.2, menus are HIObjects. Since Mac OS X v10.3, menu content may be drawn using HIViews, and all standard menus use HIViews to draw. Event handling The Mac Toolbox's Event Manager originally used a polling model for application design. The application's main event loop asks the Event Manager for an event using GetNextEvent. If there is an event in the queue, the Event Manager passes it back to the application, where it is handled, otherwise it returns immediately. This behavior is called "busy-waiting", running the event loop unnecessarily. Busy-waiting reduces the amount of CPU time available for other applications and decreases battery power on laptops. The classic Event Manager dates from the original Mac OS in 1984, when whatever application was running was guaranteed to be the only application running, and where power management was not a concern. With the advent of MultiFinder and the ability to run more than one application simultaneously came a new Event Manager call, WaitNextEvent, which allows an application to specify a sleep interval. One easy trick for legacy code to adopt a more efficient model without major changes to its source code is simply to set the sleep parameter passed to WaitNextEvent to a very large value—on macOS, this puts the thread to sleep whenever there is nothing to do, and only returns an event when there is one to process. In this way, the polling model is quickly inverted to become equivalent to the callback model, with the application performing its own event dispatching in the original manner. There are loopholes, though. For one, the legacy toolbox call ModalDialog, for example, calls the older GetNextEvent function internally, resulting in polling in a tight loop without blocking. Carbon introduces a replacement system, called the Carbon Event Manager. (The original Event Manager still exists for compatibility with legacy applications). Carbon Event Manager provides the event loop for the developer (based on Core Foundation's CFRunLoop in the current implementation); the developer sets up event handlers and enters the event loop in the main function, and waits for Carbon Event Manager to dispatch events to the application. 
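A minimal sketch of the callback model just described is shown below. It is illustrative rather than taken from Apple's documentation: the handler name and the choice of event (a window being closed) are arbitrary, but the calls used (InstallEventHandler, GetApplicationEventTarget, RunApplicationEventLoop) are standard Carbon Event Manager entry points.

    #include <Carbon/Carbon.h>

    /* Called by the Carbon Event Manager whenever a window is closed.
     * A real handler would usually pull data out of the EventRef with
     * GetEventParameter() before acting on it. */
    static OSStatus MyWindowClosedHandler(EventHandlerCallRef nextHandler,
                                          EventRef event, void *userData)
    {
        return noErr;                      /* report the event as handled */
    }

    int main(void)
    {
        /* Describe which events should be routed to the handler. */
        EventTypeSpec spec = { kEventClassWindow, kEventWindowClosed };

        /* Register the callback on the application-wide event target. */
        InstallEventHandler(GetApplicationEventTarget(),
                            NewEventHandlerUPP(MyWindowClosedHandler),
                            1, &spec, NULL, NULL);

        /* Hand control to the Carbon Event Manager, which dispatches
         * events to the installed handlers until the application quits. */
        RunApplicationEventLoop();
        return 0;
    }

In contrast to a WaitNextEvent loop, the application never polls; the thread sleeps inside RunApplicationEventLoop until the system has an event to deliver to one of the installed handlers.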
Timers In the classic Mac OS, there was no operating system support for application-level timers (the lower-level Time Manager was available, but it executed timer callbacks at interrupt time, during which calls could not be safely made to most Toolbox routines). Timers were usually left to application developers to implement, and this was usually done by counting elapsed time during the idle event – that is, an event returned by WaitNextEvent when no other event was available. In order for such timers to have reasonable resolution, developers could not afford to let WaitNextEvent delay for too long, and so low "sleep" parameters were usually set. This results in highly inefficient scheduling behavior, since the thread will not sleep for very long, instead repeatedly waking to return these idle events. Apple added timer support to Carbon to address this problem—the system can schedule timers with great efficiency. Open source implementations GNUstep contains an implementation of the Carbon API called Boron. It aims to be compatible with non-deprecated parts of ApplicationServices and CoreServices. The name derives from the fact that boron comes before carbon on the periodic table of elements. Darling also contains a Carbon implementation. Both implementations are highly incomplete and consist mostly of stub functions.
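The Carbon event loop timers mentioned in the Timers section above can be sketched in a few lines. The following example is illustrative only (the callback name and the intervals are arbitrary); it uses the Carbon calls InstallEventLoopTimer and RemoveEventLoopTimer, and the callback runs as part of the normal event loop rather than at interrupt time, which is what distinguishes it from the old Time Manager.

    #include <Carbon/Carbon.h>
    #include <stdio.h>

    /* Invoked by the Carbon Event Manager each time the timer fires. */
    static void MyTimerFired(EventLoopTimerRef timer, void *userData)
    {
        printf("timer tick\n");
    }

    int main(void)
    {
        EventLoopTimerRef timer;

        /* Fire one second from now, then every half second thereafter. */
        InstallEventLoopTimer(GetMainEventLoop(),
                              kEventDurationSecond,        /* first delay */
                              kEventDurationSecond / 2,    /* interval    */
                              NewEventLoopTimerUPP(MyTimerFired),
                              NULL,                         /* user data  */
                              &timer);

        /* The timer fires while the event loop runs; no polling or low
         * "sleep" values are needed. */
        RunApplicationEventLoop();

        RemoveEventLoopTimer(timer);
        return 0;
    }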
Technology
Software development: General
null
56912
https://en.wikipedia.org/wiki/Trans-Siberian%20Railway
Trans-Siberian Railway
The Trans-Siberian Railway, historically known as the Great Siberian Route and often shortened to Transsib, is a large railway system that connects European Russia to the Russian Far East. Spanning a length of over , it is the longest railway line in the world. It runs from the city of Moscow in the west to the city of Vladivostok in the east. During the period of the Russian Empire, government ministers—personally appointed by Alexander III and his son Nicholas II—supervised the building of the railway network between 1891 and 1916. Even before its completion, the line attracted travelers who documented their experiences. Since 1916, the Trans-Siberian Railway has directly connected Moscow with Vladivostok. , expansion projects remain underway, with connections being built to Russia's neighbors Mongolia, China, and North Korea. Additionally, there have been proposals and talks to expand the network to Tokyo, Japan, with new bridges or tunnels that would connect the mainland railway via the Russian island of Sakhalin and the Japanese island of Hokkaido. Route The railway is often associated with the main transcontinental Russian line that connects many large and small cities of the European and Asian parts of Russia. At a Moscow–Vladivostok track length of , it spans a record eight time zones. Taking eight days to complete the journey, it was the third-longest single continuous service in the world, after the Moscow–Pyongyang service and the former Kyiv (Kiev)–Vladivostok service , both of which also follow the Trans-Siberian for much of their routes. The main route begins in Moscow at Yaroslavsky Vokzal, runs through Yaroslavl or Chelyabinsk, Omsk, Novosibirsk, Krasnoyarsk, Irkutsk, Ulan-Ude, Chita, and Khabarovsk to Vladivostok via southern Siberia. A second primary route is the Trans-Manchurian, which coincides with the Trans-Siberian east of Chita as far as Tarskaya (a stop east of Karymskoye, in Chita Oblast), about east of Lake Baikal. From Tarskaya the Trans-Manchurian heads southeast, via Harbin Harbin–Manzhouli railway and Mudanjiang Harbin–Suifenhe railway in China's Northeastern provinces (from where a connection to Beijing is used by one of the Moscow–Beijing trains), joining the main route in Ussuriysk just north of Vladivostok. The third primary route is the Trans-Mongolian Railway, which coincides with the Trans-Siberian as far as Ulan-Ude on Lake Baikal's eastern shore. From Ulan-Ude the Trans-Mongolian heads south to Ulaanbaatar before making its way southeast to Beijing. In 1991, a fourth route running further to the north was finally completed, after more than five decades of sporadic work. Known as the Baikal–Amur Mainline (BAM), this recent extension departs from the Trans-Siberian line at Taishet several hundred miles west of Lake Baikal and passes the lake at its northernmost extremity. It crosses the Amur River at Komsomolsk-na-Amure (north of Khabarovsk), and reaches the Tatar Strait at Sovetskaya Gavan. History Demand and design In the late 19th century, the development of Siberia was hampered by poor transport links within the region and with the rest of the country. Aside from the Great Siberian Route, roads suitable for wheeled transport were rare. For about five months of the year, rivers were the main means of transport. During winter, cargo and passengers traveled by horse-drawn sledges over the winter roads, many of which were the same rivers but frozen. The first steamboat on the River Ob, Nikita Myasnikov's Osnova, was launched in 1844. 
However, early innovation proved difficult, and it was not until 1857 that steamboat shipping began to develop on the Ob system in a major way. Steamboats began operation on the Yenisei in 1863, and on the Lena and Amur in the 1870s. While comparatively flat Western Siberia was well served by the Ob–Irtysh–Tobol–Chulym river system, the major rivers of Eastern Siberia posed difficulties. The Yenisei, the upper course of the Angara River below Bratsk (not easily navigable because of rapids), and the Lena were mostly navigable only in the north–south direction, making west–east transportation difficult. An attempt to partially remedy the situation by building the Ob–Yenisei Canal did not yield great success. These issues in the region created the need for a railway to be constructed. The first railway projects in Siberia emerged after the completion of the Saint Petersburg–Moscow Railway in 1851. One of the first was the Irkutsk–Chita project, proposed by the American entrepreneur Perry Collins and supported by Transport Minister Constantine Possiet with a view toward connecting Moscow to the Amur River, and consequently the Pacific Ocean. Siberia's governor, Nikolay Muravyov-Amursky, was anxious to advance Russian colonization of the now Russian Far East, but his plans were unfeasible as long as colonists had to import grain and food from China and Korea. It was on Muravyov's initiative that surveys for a railway in the Khabarovsk region were conducted. Before 1880, the central government had virtually ignored these projects, due to weaknesses in Siberian enterprises, an inefficient bureaucracy, and financial risk. By 1880, there were a large number of rejected and pending applications for permission to construct railways connecting Siberia with the Pacific, but none that would connect Siberia with Central Russia. This worried the government and made connecting Siberia with Central Russia a pressing concern. The design process lasted 10 years. Along with the actual route constructed, alternative projects were proposed: Southern route: via Kazakhstan, Barnaul, Abakan and Mongolia. Northern route: via Tyumen, Tobolsk, Tomsk, Yeniseysk and the modern Baikal Amur Mainline or even through Yakutsk. The line was divided into seven sections, most or all of which were worked on simultaneously by 62,000 workers. With financial support from the leading European financier Baron Henri Hottinguer of the Parisian bankers Hottinger & Cie, the estimated total cost of £35 million was raised; the first section (Chelyabinsk to the River Ob) was finished at a cost £900,000 lower than anticipated. Railwaymen argued against suggestions to save funds, such as installing ferryboats instead of bridges over the rivers until traffic increased. Unlike the rejected private projects that intended to connect the existing cities that required transport, the Trans-Siberian did not have such a priority. Thus, to save money and avoid clashes with land owners, it was decided to lay the railway outside the existing cities. However, due to the swampy banks of the Ob River near Tomsk (the largest settlement at the time), the idea of constructing a bridge there was rejected. The railway was laid further south, crossing the Ob instead at Novonikolaevsk (later renamed Novosibirsk); a dead-end branch line connected with Tomsk, depriving the city of the prospective transit railway traffic and trade.
Construction On 9 March 1891, the Russian government issued an imperial rescript in which it announced its intention to construct a railway across Siberia. Tsarevich Nicholas (later Tsar Nicholas II) inaugurated the construction of the railway in Vladivostok on 19 May that year. Lake Baikal is more than long and more than deep. Until the Circum-Baikal Railway was built, the line ended on either side of the lake. The ice-breaking train ferry SS Baikal, built in 1897, and the smaller ferry SS Angara, built in about 1900, made the four-hour crossing to link the two railheads. The Russian admiral and explorer Stepan Makarov (1849–1904) designed Baikal and Angara, but they were built in Newcastle upon Tyne by Armstrong Whitworth. They were "knock down" vessels; that is, each ship was bolted together in the United Kingdom, every part of the ship was marked with a number, and the ship was disassembled into many hundreds of parts and transported in kit form to Listvyanka, where a shipyard was built especially to reassemble them. Their boilers, engines and some other components were built in Saint Petersburg and transported to Listvyanka to be installed. Baikal had 15 boilers, four funnels, and was long. It could carry 24 railway coaches and one locomotive on the middle deck. Angara was smaller, with two funnels. Completion of the Circum-Baikal Railway in 1904 bypassed the ferries, but from time to time the Circum-Baikal Railway suffered from derailments or rockfalls, so both ships were held in reserve until 1916. Baikal was burnt out and destroyed in the Russian Civil War, but Angara survives. It has been restored and is permanently moored at Irkutsk, where it serves as an office and a museum. In winter, sleighs were used to move passengers and cargo from one side of the lake to the other until the completion of the Lake Baikal spur along the southern edge of the lake. With the Amur River Line north of the Chinese border being completed in 1916, there was a continuous railway from Petrograd to Vladivostok that, to this day, is the world's longest railway line. Electrification of the line, begun in 1929 and completed in 2002, allowed a doubling of train weights to . Electrification was expected to increase rail traffic on the line by 40 percent. The entire length of the Trans-Siberian Railway was double-tracked by 1939. Effects Siberian agriculture began sending cheap grain westwards around 1869. Agriculture in Central Russia was still under economic pressure after the end of serfdom, which was formally abolished in 1861. To defend the central territory and prevent possible social destabilization, the Tsarist government introduced the Chelyabinsk tariff-break in 1896, a tariff barrier for grain passing through Chelyabinsk, and a similar barrier in Manchuria. This measure changed the nature of export: mills emerged to produce bread from grain in Altai Krai, Novosibirsk and Tomsk, and many farms switched to corn (maize) production. The railway immediately filled to capacity with local traffic, mostly wheat. From 1896 until 1913 Siberia exported on average (30,643,000 pood) of grain and flour annually. During the Russo-Japanese War of 1904–1905, military traffic to the east disrupted the flow of civil freight. The Trans-Siberian Railway brought with it millions of peasant-migrants from the Western regions of Russia and Ukraine. Between 1906 and 1914, the peak migration years, about 4 million peasants arrived in Siberia.
Historian Christian Wolmar argues that the railroad was a failure, because it was built for narrow political reasons, with poor supervision and planning. The costs were vastly inflated to enrich greedy bureaucrats. The planners hoped it would stimulate settlement, but the Siberian lands were too infertile, cold and distant. There was little settlement beyond 30 miles from the line. The fragile system could not handle the heavy traffic demanded in wartime, so the Japanese in 1904 knew they were safe in their war with Russia. Wolmar concludes: War and revolution In the Russo-Japanese War (1904–1905), the strategic importance of the Trans-Siberian Railway became apparent, but its limitations contributed to Russia's defeat in the war. As the line was single track, trains had to wait in crossing sidings for opposing trains to pass, which limited the capacity of the line and increased transit times. A troop train or a train carrying injured personnel traveling from east to west would delay the arrival of troops or supplies and ammunition in a train traveling from west to east. The supply difficulties meant the Russian forces had limited troops and supplies, while Japanese forces, with shorter lines of communication, were able to attack and advance. After the Russian Revolution of 1917, the railway served as the vital line of communication for the Czechoslovak Legion and the allied armies that landed troops at Vladivostok during the Siberian Intervention of the Russian Civil War. These forces supported the White Russian government of Admiral Alexander Kolchak, based in Omsk, and White Russian soldiers fighting the Bolsheviks on the Ural front. The intervention was weakened, and ultimately defeated, by partisan fighters who blew up bridges and sections of track, particularly in the volatile region between Krasnoyarsk and Chita. The leader of the legions, the politician Milan Rastislav Stefanik, traveled from Moscow to Vladivostok between March and August 1918 on his journey to Japan and the United States of America. The Trans-Siberian Railway also played a very direct role during parts of Russia's history, with the Czechoslovak Legion using heavily armed and armored trains to control large stretches of the railway (and of Russia itself) during the Russian Civil War at the end of World War I. As one of the few fighting forces left in the aftermath of the imperial collapse, and before the Red Army took control, the Czechs and Slovaks were able to use their organization and the resources of the railway to establish a temporary zone of control before eventually continuing onwards towards Vladivostok, from where they emigrated back to Czechoslovakia. World War II During World War II, the Trans-Siberian Railway played an important role in the supply of the powers fighting in Europe. In 1939–1941, thanks to the USSR–Germany pact, it was a source of rubber for Germany: while Germany's merchant shipping was shut down, the Trans-Siberian Railway (along with its Trans-Manchurian branch) served as the essential link between Germany and Japan. By March 1941, of this material would, on average, traverse the Trans-Siberian Railway every day on its way to Germany. At the same time, a number of Jews and anti-Nazis used the Trans-Siberian Railway to escape Europe, including the mathematician Kurt Gödel and Betty Ehrlich Löwenstein, mother of British actor, director and producer Heinz Bernard. 
Several thousand Jewish refugees were able to make this trip thanks to the Curaçao visas issued by the Dutch consul Jan Zwartendijk and the Japanese visas issued by the Japanese consul, Chiune Sugihara, in Kaunas, Lithuania. Typically, they took the TSR to Vladivostok, then by ship to US. Until June 1941, pro-Nazi ethnic Germans from the Americas used the TSR to go to Germany. The situation reversed after 22 June 1941. By invading the Soviet Union, Germany cut off its only reliable trade route to Japan. Instead, it had to use fast merchant ships and later large oceanic submarines to evade the Allied blockade. On the other hand, the USSR received Lend-Lease supplies from the US. Even after Japan went to war with the US, despite German complaints, Japan usually allowed Soviet ships to sail between the US and Vladivostok unmolested. As a result, the Pacific Route – via northern Pacific Ocean and the TSR – became the safest connection between the US and the USSR. Accordingly, it accounted for as much freight as the North Atlantic–Arctic and Iranian routes combined, though cargoes were limited to raw materials and non-military goods. From 1941 to 1942 the TSR also played an important role in relocating Soviet industries from European Russia to Siberia in the face of the German invasion. The TSR also transported Soviet troops west from the Far East to take part in the Soviet counter-offensive in December 1941. In 1944–45 the TSR was used to prepare for the Soviet–Japanese War of August 1945; see Pacific Route. When an Anglo-American delegation visited Moscow in October 1944 to discuss the Soviet Union joining the war against Japan, Alanbrooke was told by General Antonov and Stalin himself that the line capacity was 36 pairs of trains per day, but only 26 could be counted on for military traffic; see Pacific Route. The capacity of each train was from 600 to 700 tons. Although the Japanese estimated that an attack was not likely before Spring 1946, Stavka had planned for a mid-August 1945 offensive, and had concealed the buildup of a force of 90 divisions; many had crossed Siberia in their vehicles to avoid straining the rail link. Post World War II A trainload of containers can be taken from Beijing to Hamburg, via the Trans-Mongolian and Trans-Siberian lines in as little as 15 days, but typical cargo transit times are usually significantly longer and typical cargo transit time from Japan to major destinations in European Russia was reported as around 25 days. According to a 2009 report, the best travel times for cargo block trains from Russia's Pacific ports to the western border (of Russia, or perhaps of Belarus) were around 12 days, with trains making around per day, at a maximum operating speed of . In early 2009; however, Russian Railways announced an ambitious "Trans-Siberian in Seven Days" plan. According to this plan, $11 billion will be invested over the next five years to make it possible for goods traffic to cover the same distance in just seven days. The plan will involve increasing the cargo trains' speed to in 2010–2012, and, at least on some sections, to by 2015. At these speeds, goods trains will be able to cover per day. Developments in shipping On January 11, 2008, China, Mongolia, Russia, Belarus, Poland, and Germany agreed to collaborate on a cargo train service between Beijing and Hamburg. The railway can typically deliver containers in to of the time of a sea voyage, and in late 2009 announced a 20% reduction in its container shipping rates. 
With its 2009 rate schedule, the Trans-Siberian Railway will transport a forty-foot container to Poland from Yokohama for $2,820, or from Busan for $2,154. Gallery Routes Trans-Siberian line A commonly used main line route is as follows. Distances and travel times are from the schedule of train No. 002M, Moscow–Vladivostok. There are many alternative routings between Moscow and Siberia. For example: Some trains would leave Moscow from Kazansky Rail Terminal instead of Yaroslavsky Rail Terminal; this would save some off the distances, because it provides a shorter exit from Moscow onto the Nizhny Novgorod main line. One can take a night train from Moscow's Kursky Rail Terminal to Nizhny Novgorod, make a stopover in the Nizhny and then transfer to a Siberia-bound train From 1956 to 2001 many trains went between Moscow and Kirov via Yaroslavl instead of Nizhny Novgorod. This would add some to the distances from Moscow, making the total distance to Vladivostok at . Other trains get from Moscow (Kazansky Terminal) to Yekaterinburg via Kazan. Between Yekaterinburg and Omsk it is possible to travel via Kurgan Petropavlovsk (in Kazakhstan) instead of Tyumen. One can bypass Yekaterinburg altogether by traveling via Samara, Ufa, Chelyabinsk and Petropavlovsk; this was historically the earliest configuration. Depending on the route taken, the distances from Moscow to the same station in Siberia may differ by several tens of km (a few dozen miles). Trans-Manchurian line The Trans–Manchurian line, as e.g. used by train No.020, Moscow–Beijing follows the same route as the Trans-Siberian between Moscow and Chita and then follows this route to China: Branch off from the Trans-Siberian-line at Tarskaya ( from Moscow) Zabaikalsk (), Russian border town; there is a break-of-gauge Manzhouli ( from Moscow, from Beijing), Chinese border city Harbin (, 1,388 km) Chinese city Changchun ( from Moscow) Chinese city Beijing ( from Moscow) the Chinese capital The express train (No. 020) travel time from Moscow to Beijing is just over six days. There is no direct passenger service along the entire original Trans-Manchurian route (i.e., from Moscow or anywhere in Russia, west of Manchuria, to Vladivostok via Harbin), due to the obvious administrative and technical (gauge break) inconveniences of crossing the border twice. Assuming sufficient patience and possession of appropriate visas, however, it is still possible to travel all the way along the original route, with a few stopovers (e.g. in Harbin, Grodekovo and Ussuriysk). Such an itinerary would pass through the following points from Harbin east: Harbin ( from Moscow) Mudanjiang () Suifenhe (), the Chinese border station Grodekovo (), Russia Ussuriysk () Vladivostok () Trans-Mongolian line The Trans–Mongolian line follows the same route as the Trans-Siberian between Moscow and Ulan Ude, and then follows this route to Mongolia and China: Branch off from the Trans-Siberian line ( from Moscow) Naushki (, MT+5), Russian border town Russian–Mongolian border (, MT+5) Sükhbaatar (, MT+5), Mongolian border town Ulaanbaatar (, MT+5), the Mongolian capital Zamyn-Üüd (, MT+5), Mongolian border city Erenhot ( from Beijing, MT+5), Chinese border city Datong (, MT+5) Chinese city Beijing (MT+5) the Chinese capital Highest point The highest point of Trans–Siberian Railroad is at Yablonovy pass at an altitude of 1070m situated in the Yablonoi Mountains, in Transbaikal (mainly in Zabaykalsky Krai), Siberia, Russia. 
The Trans–Siberian Railroad passes the mountains at Chita and runs parallel to the range before going through a tunnel to bypass the heights.
Technology
Trains
null
57008
https://en.wikipedia.org/wiki/European%20robin
European robin
The European robin (Erithacus rubecula), known simply as the robin or robin redbreast in the British Isles, is a small insectivorous passerine bird that belongs to the Old World flycatcher family Muscicapidae. It is found across Europe, east to Western Siberia and south to North Africa; it is sedentary in the west and south of its range, and migratory in the north and east of its range where winters are harsher. It is in length; the male and female are identical in plumage, with an orange-toned red breast and face lined with grey, brown upper-parts and a whitish belly. Juveniles are distinct, freckled brown all over and without the red breast; first-winter immatures are like the adults, except for more obvious yellow-brown tips to the wing covert feathers (inconspicuous or absent in adults). Etymology The distinctive orange breast of both sexes contributed to the European robin's original name of "redbreast", orange as a colour name being unknown in English until the 16th century, by which time the fruit of the same name had been introduced. The Dutch , French , Swedish rödhake, German , Italian , Spanish and Portuguese all refer to the distinctively coloured front. In the 15th century, when it became popular to give human names to familiar species, the bird came to be known as robin redbreast, which was eventually shortened to robin. As a given name, Robin is originally a smaller form of the name Robert. The term robin is also applied to some birds in other families with red or orange breasts. These include the American robin (Turdus migratorius, a thrush) and the Australasian robins of the family Petroicidae, the relationships of which are unclear. Other older English names for the bird include ruddock and robinet. In American literature of the late 19th century, this robin was frequently called the English robin. Taxonomy and systematics The European robin was described by Carl Linnaeus in 1758 in the 10th edition of his Systema Naturae under the binomial name Motacilla rubecula. Its specific epithet rubecula is a diminutive derived from the Latin , meaning 'red'. The genus Erithacus was described by French naturalist Georges Cuvier in 1800, giving the bird its current binomial name E. rubecula. The genus name Erithacus is from Ancient Greek and refers to an unknown bird, now usually identified as robin. The genus Erithacus was formerly classified as a member of the thrush family (Turdidae) but is now known to belong to the Old World flycatcher family Muscicapidae. The genus formerly included the Japanese robin and the Ryukyu robin, but these east Asian species were shown in molecular phylogenetic studies to be more similar to a group of other Asian species than to the European robin; in a reorganisation of the genera, the Japanese and the Ryukyu robins were moved to the resurrected genus Larvivora leaving the European robin as the sole member of Erithacus. A 2010 phylogenetic analysis placed Erithacus in the subfamily Erithacinae, which otherwise contained only African species, but its exact position with respect to the other genera was not resolved. More detailed analysis has shown it to be the sole European member of an otherwise entirely tropical African subfamily Cossyphinae, in which it is in a basal position. Subspecies In their large continental Eurasian range, robins vary somewhat, but do not form discrete populations that might be considered subspecies. Robin subspecies are mainly distinguished by forming resident populations on islands and in mountainous areas. 
The robin found in the British Isles and much of western Europe, Erithacus rubecula melophilus, occurs as a vagrant in adjacent regions. E. r. witherbyi from northwest Africa, Corsica, and Sardinia closely resembles melophilus but has shorter wings. The northeasternmost birds, large and fairly washed-out in colour, are E. r. tataricus. In the southeast of its range, E. r. valens of the Crimean Peninsula, E. r. caucasicus of the Caucasus and northern Transcaucasia, and E. r. hyrcanus southeastwards into Iran are generally accepted as significantly distinct. On Madeira and the Azores, the local population has been described as E. r. microrhynchos, and although not distinct in morphology, its isolation seems to suggest the subspecies is valid (but see below). Canary Islands robin The most distinct birds are those of Gran Canaria (E. r. marionae) and Tenerife (E. r. superbus), which may be considered two distinct species or at least two different subspecies. They are readily distinguished by a white eye-ring, an intensely coloured breast, a grey line that separates the orange-red from the brown colouration, and the belly is entirely white. Cytochrome b sequence data and vocalisations indicate that the Gran Canaria/Tenerife robins are indeed very distinct and probably derived from colonisation by mainland birds some 2 million years ago. Christian Dietzen, Hans-Hinrich Witt and Michael Wink published in 2003 in Avian Science a study called "The phylogeographic differentiation of the European robin Erithacus rubecula on the Canary Islands revealed by mitochondrial DNA sequence data and morphometrics: evidence for a new robin taxon on Gran Canaria?". In it they concluded that Gran Canaria's robin diverged genetically from their European relatives as far back as 2.3 million years, while the Tenerife ones took another half a million years to make this leap, 1.8 million years ago. The most likely reason would be a different colonisation of the Canaries by this bird, which arrived at the oldest island first (Gran Canaria) and subsequently passed to the neighbouring island (Tenerife). A thorough comparison between marionae and superbus is pending to confirm that the first one is effectively a different subspecies. Initial results suggest that birds from Gran Canaria have wings about 10% shorter than those on Tenerife. The west Canary Islands' populations are younger (Middle Pleistocene) and only beginning to diverge genetically. Robins from the western Canary Islands: El Hierro, La Palma and La Gomera (E. r. microrhynchus) are similar to the European type subspecies (E. r. rubecula). Finally, the robins which can be found in Fuerteventura are the European ones, which is not surprising as the species does not breed either in this island or in the nearby Lanzarote; they are wintering birds or just passing through during their long migration between Africa and Europe. Other robins The larger American robin (Turdus migratorius) is a much larger bird named from its similar colouration to the European robin, but the two birds are not closely related, with the American robin instead belonging to the same genus as the common blackbird (T. merula), a species which occupies much of the same range as the European robin. The similarity between the European and American robins lies largely in the orange chest patch found in both species. This American species was incorrectly shown "feathering its nest" in London in the film Mary Poppins, but it only occurs in the UK as a very rare vagrant. 
Some South and Central American Turdus thrushes are also called robins, such as the rufous-collared thrush. The Australian "robin redbreast", more correctly the scarlet robin (Petroica multicolor), is more closely related to crows and jays than it is to the European robin. It belongs to the family Petroicidae, whose members are commonly called "Australasian robins". The red-billed leiothrix (Leiothrix lutea) is sometimes named the "Pekin robin" by aviculturalists. Another group of Old World flycatchers, this time from Africa and Asia, is the genus Copsychus; its members are known as magpie-robins, one of which, the Oriental magpie robin (C. saularis), is the national bird of Bangladesh. Description The adult European robin is long and weighs , with a wingspan of . The male and female bear similar plumage: an orange breast and face (more strongly coloured in the otherwise similar British subspecies E. r. melophilus), lined by a bluish grey on the sides of the neck and chest. The upperparts are brownish, or olive-tinged in British birds, and the belly whitish, while the legs and feet are brown. The bill and eyes are black. Juveniles are spotted brown and white in colouration, with patches of orange gradually appearing. Distribution and habitat The robin occurs in Eurasia east to Western Siberia, south to Algeria and on the Atlantic islands as far west as the Central Group of the Azores and Madeira. It is a vagrant in Iceland. In the southeast, it reaches Iran and the Caucasus range. Irish and British robins are largely resident but a small minority, usually female, migrate to southern Europe during winter, a few as far as Spain. Scandinavian and Russian robins migrate to Britain and western Europe to escape the harsher winters. These migrants can be recognised by the greyer tone of the upper parts of their bodies and duller orange breast. The continental European robins that migrate during winter prefer spruce woods in northern Europe, contrasting with the species' preference for parks and gardens in Great Britain. In southern Iberia, habitat segregation of resident and migrant robins occurs, with resident robins remaining in the same woodlands where they bred. Attempts to introduce the European robin into Australia and New Zealand in the latter part of the 19th century were unsuccessful. Birds were released around Melbourne, Auckland, Christchurch, Wellington and Dunedin by various local acclimatisation societies, with none becoming established. There was a similar outcome in North America, as birds failed to become established after being released in Long Island, New York in 1852, Oregon in 1889–1892, and the Saanich Peninsula in British Columbia in 1908–1910. Behaviour and ecology The robin is diurnal, although it has been reported to be active hunting insects on moonlit nights or near artificial light at night. Well known to British and Irish gardeners, it is relatively unafraid of people and drawn to human activities involving the digging of soil, in order to look out for earthworms and other food freshly turned up. The British and Irish considered robins to be a gardener's friend and would never harm them, partly because of the traditional association of the red colouring of their breasts with the blood of Christ. In continental Europe, on the other hand, robins were hunted and killed as were most other small birds, and are therefore more wary. Robins also approach large wild animals, such as wild boar, which disturb the ground, to look for any food that might be brought to the surface. 
In autumn and winter, robins will supplement their usual diet of terrestrial invertebrates, such as spiders, worms and insects, with berries, fruit and seeds. They will also eat seed mixtures and suet placed on bird-tables, as well as left-overs. The robin is even known to feed on small vertebrates (including fish and lizards) and carrion. Male robins are very territorial and will fiercely attack other males and competitors that stray into their territories. They have been observed attacking other small birds without apparent provocation. There are recorded instances of robins attacking their own reflection. Territorial disputes sometimes lead to fatalities, accounting for up to 10% of adult robin deaths in some areas. Because of high mortality in the first year of life, a robin has an average life expectancy of 1.1 years; however, once past its first year, life expectancy increases. One robin has been recorded as reaching 19 years of age. A spell of very low temperatures in winter can, however, result in higher mortality rates. The species is parasitised by the moorhen flea (Dasypsyllus gallinulae) and the acanthocephalan Apororhynchus silesiacus. Breeding Robins may choose a wide variety of sites for building a nest. In fact, anything which can offer some shelter, like a depression or hole, may be considered. As well as the usual crevices, or sheltered banks, other objects include pieces of machinery, barbecues, bicycle handlebars, bristles on upturned brooms, discarded kettles, watering cans, flower pots and hats. Robins will also nest in manmade nest boxes, favouring a design with an open front placed in a sheltered position up to from the ground. Nests are generally composed of moss, leaves and grass, with fine grass, hair and feathers for lining. Two or three clutches of five or six eggs are laid throughout the breeding season, which commences in March in Britain and Ireland. The eggs are a cream, buff or white speckled or blotched with reddish-brown colour, often more heavily so at the larger end. When juvenile birds fly from the nests, their colouration is entirely mottled brown. After two to three months out of the nest, the juvenile bird grows some orange feathers under its chin, and over a similar period this patch gradually extends to complete the adult appearance of an entirely red-orange breast. Vocalisation The robin produces a fluting, warbling during the breeding season. Both the male and female sing throughout the year, including during the winter, when they hold separate territories. During the winter, the robin's song is more plaintive than the summer version. The female robin moves a short distance from the summer nesting territory to a nearby area that is more suitable for winter feeding. The male robin keeps the same territory throughout the year. During the breeding season, male robins usually initiate their morning song an hour before civil sunrise, and usually terminate their daily singing around thirty minutes after sunset. Nocturnal singing can also occur, especially in urban areas that are artificially lit during the night. Some urban robins opt to sing at night to avoid daytime anthropogenic noise. Magnetoreception The avian magnetic compass of the robin has been extensively researched and uses vision-based magnetoreception, in which the robin's ability to sense the magnetic field of the Earth for navigation is affected by the light entering the bird's eye. 
The physical mechanism of the robin's magnetic sense involves quantum entanglement of electron spins in cryptochrome in the bird's eyes. Conservation status The European robin has an extensive range and a population numbering in the hundreds of millions. The species does not approach the vulnerable thresholds under the population trend criterion (>30 per cent decline over ten years or three generations); the population appears to be increasing. The International Union for Conservation of Nature evaluates it as least concern. Cultural depictions The robin features prominently in British folklore and that of northwestern France, but much less so in other parts of Europe, though in the nineteenth century Jacob Grimm reported a tradition from German-speaking Europe that if someone disturbed a robin's nest their house would be struck by lightning. Robins feature in the traditional children's tale Babes in the Wood; the birds cover the dead bodies of the children. The robin has become strongly associated with Christmas, taking a starring role on many Christmas cards since the mid-19th century. The robin has appeared on many Christmas postage stamps. An old British folk tale seeks to explain the robin's distinctive breast. Legend has it that when Jesus was dying on the cross, the robin, then simply brown in colour, flew to his side and sang into his ear in order to comfort him in his pain. The blood from his wounds stained the robin's breast, and thereafter all robins carry the mark of Christ's blood upon them. An alternative legend has it that its breast was scorched fetching water for souls in Purgatory. The association with Christmas more probably arises from the fact that postmen in Victorian Britain wore red jackets and were nicknamed "Robins"; the robin featured on the Christmas card is an emblem of the postman delivering the card. In the 1960s, in a vote publicised by The Times, the robin was adopted as the unofficial national bird of the United Kingdom. In 2015, the robin was again voted Britain's national bird in a poll organised by birdwatcher David Lindo, taking 34% of the final vote. Several English and Welsh sports organisations are nicknamed "the Robins". The nickname is typically used for teams whose home colours predominantly use red. These include the professional football clubs Bristol City, Crewe Alexandra, Swindon Town, Cheltenham Town and, traditionally, Wrexham A.F.C., as well as the English rugby league team the Hull Kingston Rovers (whose home colours are white with a red band). As of 2019, Bristol City, Swindon Town and Cheltenham Town also incorporate a robin image in their current badge designs. A small bird is an unusual choice, although it is thought to symbolise agility in darting around the field.
Biology and health sciences
Passerida
null
57079
https://en.wikipedia.org/wiki/Lettuce
Lettuce
Lettuce (Lactuca sativa) is an annual plant of the family Asteraceae mostly grown as a leaf vegetable. The leaves are most often used raw in green salads, although lettuce is also seen in other kinds of food, such as sandwiches, wraps and soups; it can also be grilled. Its stem and seeds are sometimes used; celtuce (asparagus lettuce) is one variety grown for its stems, which are eaten either raw or cooked. In addition to its main use as a leafy green, it has also gathered religious and medicinal significance over centuries of human consumption. Europe and North America originally dominated the market for lettuce, but by the late 20th century the consumption of lettuce had spread throughout the world. , world production of lettuce (and chicory) was 27 million tonnes, 53percent of which came from China. Lettuce was originally farmed by the ancient Egyptians, who transformed it from a plant whose seeds were used to obtain oil into an important food crop raised for its succulent leaves and oil-rich seeds. Lettuce spread to the Greeks and Romans; the latter gave it the name , from which the English lettuce is derived. By 50 AD, many types were described, and lettuce appeared often in medieval writings, including several herbals. The 16th through 18th centuries saw the development of many varieties in Europe, and by the mid-18th century, cultivars were described that can still be found in modern gardens. Generally grown as a hardy annual, lettuce is easily cultivated, although it requires relatively low temperatures to prevent it from flowering quickly. It can be plagued by numerous nutrient deficiencies, as well as insect and mammal pests, and fungal and bacterial diseases. L. sativa crosses easily within the species and with some other species within the genus Lactuca. Although this trait can be a problem to home gardeners who attempt to save seeds, biologists have used it to broaden the gene pool of cultivated lettuce varieties. Lettuce is a rich source of vitamin K and vitamin A, and a moderate source of folate and iron. Contaminated lettuce is often a source of bacterial, viral, and parasitic outbreaks in humans, including E. coli and Salmonella. Taxonomy and etymology Lactuca sativa is a member of the Lactuca (lettuce) genus and the Asteraceae (sunflower or aster) family. The species was first described in 1753 by Carl Linnaeus in the second volume of his Species Plantarum. Synonyms for L. sativa include Lactuca scariola sativa, L. scariola integrata and L. scariola integrifolia. L. scariola is itself a synonym for L. serriola, the common wild or prickly lettuce. L. sativa also has many identified taxonomic groups, subspecies and varieties, which delineate the various cultivar groups of domesticated lettuce. Lettuce is closely related to several Lactuca species from southwest Asia; the closest relationship is to L. serriola, an aggressive weed common in temperate and subtropical zones in much of the world. The Romans referred to lettuce as ( meaning "dairy" in Latin), an allusion to the white substance, latex, exuded by cut stems. The name Lactuca has become the genus name, while (meaning "sown" or "cultivated") was added to create the species name. The current word lettuce, originally from Middle English, came from the Old French or , which derived from the Roman name. 
The name romaine came from the variety of lettuce grown in the Roman papal gardens, while , another term for romaine lettuce, came from the earliest European seeds of the type from the Greek island of Kos, a center of lettuce farming in the Byzantine period. Description Lettuce's native range spreads from the Mediterranean to Siberia, although it has been transported to almost all areas of the world. Plants generally have a height and spread of . The leaves are colorful, mainly in the green and red color spectrums, with some variegated varieties. There are also a few varieties with yellow, gold or blue-teal leaves. Lettuces have a wide range of shapes and textures, from the dense heads of the iceberg type to the notched, scalloped, frilly or ruffly leaves of leaf varieties. Lettuce plants have a root system that includes a main taproot and smaller secondary roots. Some varieties, especially those found in the United States and Western Europe, have long, narrow taproots and a small set of secondary roots. Longer taproots and more extensive secondary systems are found in varieties from Asia. Depending on the variety and time of year, lettuce generally lives 65–130 days from planting to harvesting. Because lettuce that flowers (through the process known as "bolting") becomes bitter and unsaleable, plants grown for consumption are rarely allowed to grow to maturity. Lettuce flowers more quickly in hot temperatures, while freezing temperatures cause slower growth and sometimes damage to outer leaves. Once plants move past the edible stage, they develop flower stalks up to high with small yellow blossoms. Like other members of the tribe Cichorieae, lettuce inflorescences (also known as flower heads or capitula) are composed of multiple florets, each with a modified calyx called a pappus (which becomes the feathery "parachute" of the fruit), a corolla of five petals fused into a ligule or strap, and the reproductive parts. These include fused anthers that form a tube which surrounds a style and bipartite stigma. As the anthers shed pollen, the style elongates to allow the stigmas, now coated with pollen, to emerge from the tube. The ovaries form compressed, obovate (teardrop-shaped) dry fruits that do not open at maturity, measuring 3 to 4 mm long. The fruits have 5–7 ribs on each side and are tipped by two rows of small white hairs. The pappus remains at the top of each fruit as a dispersal structure. Each fruit contains one seed, which can be white, yellow, gray or brown depending on the variety of lettuce. The domestication of lettuce over the centuries has resulted in several changes through selective breeding: delayed bolting, larger seeds, larger leaves and heads, better taste and texture, a lower latex content, and different leaf shapes and colors. Work in these areas continues through the present day. Scientific research into the genetic modification of lettuce is ongoing, with over 85 field trials taking place between 1992 and 2005 in the European Union and the United States to test modifications allowing greater herbicide tolerance, greater resistance to insects and fungi and slower bolting patterns. However, genetically modified lettuce is not currently used in commercial agriculture. History DNA analysis of 445 types of lettuce indicates that lettuce was first domesticated from its wild ancestor near the Caucasus, where seed shattering was first selected out of the cultivar. 
At this time, the lettuce plant was only suitable for harvesting its seeds, which could be pressed to extract oil, likely used for cooking, among other purposes. From there, lettuce was likely transported to the Near East and then to ancient Egypt, where the first depictions of lettuce cultivation can be found as early as 2680 BC. Like the early lettuce from the Caucasus, this lettuce was grown to produce cooking oil from its seeds. Lettuce was considered a sacred plant of the reproduction god Min, and was carried during his festivals and placed near his images. The plant was thought to help the god "perform the sexual act untiringly". Its use in religious ceremonies resulted in the creation of many images in tombs and wall paintings. The cultivated variety appears to have been about tall and resembled a large version of the modern romaine lettuce. These upright lettuces were developed by the Egyptians and passed to the Greeks, who in turn shared them with the Romans. Around 50 AD, Roman agriculturalist Columella described several lettuce varieties – some of which may have been ancestors of today's lettuces. The plant was eventually selectively bred into a plant grown for its edible leaves. The long leaves in Egyptian depictions suggest that it may have been grown for its leaves, which would make it the first lettuce cultivar grown for this purpose. However, genome wide analysis suggests the traits needed for cultivation as a leafy vegetable, like the loss of bitterness and thorns, evolved much later, from around 500 BC in Southern Europe. Lettuce cultivars radiated more rapidly from this point, with oilseed lettuce likely being brought by the ancient Greeks from Egypt to Italy, where it was modified into cos lettuce and cultivated for its leaves. From there, it was brought north to Central Europe, where it was modified into butterhead lettuce and other varieties. Lettuce appears in many medieval writings, especially as a medicinal herb. Hildegard of Bingen mentioned it in her writings on medicinal herbs between 1098 and 1179, and many early herbals also describe its uses. In 1586, Joachim Camerarius provided descriptions of the three basic modern lettuces – head lettuce, loose-leaf lettuce, and romaine (or cos) lettuce. Lettuce was first brought to the Americas from Europe by Christopher Columbus in the late 15th century. Between the late 16th century and the early 18th century, many varieties were developed in Europe, particularly Holland. Books published in the mid-18th and early 19th centuries describe several varieties found in gardens today. Due to its short lifespan after harvest, lettuce was originally sold relatively close to where it was grown. The early 20th century saw the development of new packing, storage and shipping technologies that improved the lifespan and transportability of lettuce and resulted in a significant increase in availability. During the 1950s, lettuce production was revolutionized with the development of vacuum cooling, which allowed field cooling and packing of lettuce, replacing the previously used method of ice-cooling in packing houses outside the fields. Lettuce is easy to grow, and as such has been a significant source of sales for many seed companies. Tracing the history of many varieties is complicated by the practice of many companies, particularly in the US, of changing a variety's name from year to year. 
This practice is conducted for several reasons, the most prominent being to boost sales by promoting a "new" variety, or to prevent customers from knowing that the variety had been developed by a competing seed company. Documentation from the late 19th century shows between 65 and 140 distinct varieties of lettuce, depending on the amount of variation allowed between types – a distinct difference from the 1,100 named lettuce varieties on the market at the time. Names also often changed significantly from country to country. Although most lettuce grown today is used as a vegetable, a minor amount is used in the production of tobacco-free cigarettes; however, domestic lettuce's wild relatives produce a leaf that visually more closely resembles tobacco. Cultivation A hardy annual, some varieties of lettuce can be overwintered even in relatively cold climates under a layer of straw, and older, heirloom varieties are often grown in cold frames. Lettuces meant for the cutting of individual leaves are generally planted straight into the garden in thick rows. Heading varieties of lettuces are commonly started in flats, then transplanted to individual spots, usually apart, in the garden after developing several leaves. Lettuce spaced farther apart receives more sunlight, which improves color and nutrient quantities in the leaves. Pale to white lettuce, such as the centers in some iceberg lettuce, contain few nutrients. Lettuce grows best in full sun in loose, nitrogen-rich soils with a pH of between 6.0 and 6.8. Heat generally prompts lettuce to bolt, with most varieties growing poorly above ; cool temperatures prompt better performance, with being preferred and as low as being tolerated. Plants in hot areas that are provided partial shade during the hottest part of the day will bolt more slowly. Temperatures above will generally result in poor or non-existent germination of lettuce seeds. After harvest, lettuce lasts the longest when kept at and 96 percent humidity. The high water content of lettuce (94.9 percent) creates problems when attempting to preserve the plant – it cannot be successfully frozen, canned or dried and must be eaten fresh. In spite of its high water content, traditionally grown lettuce has a low water footprint, with of water required for each kilogram of lettuce produced. Hydroponic growing methods can reduce this water consumption by nearly two orders of magnitude. Lettuce varieties will cross with each other, making spacing of between varieties necessary to prevent contamination when saving seeds. Lettuce will also cross with Lactuca serriola (wild lettuce), with the resulting seeds often producing a plant with tough, bitter leaves. Celtuce, a lettuce variety grown primarily in Asia for its stems, crosses easily with lettuces grown for their leaves. This propensity for crossing, however, has led to breeding programs using closely related species in Lactuca, such as L. serriola, L. saligna, and L. virosa, to broaden the available gene pool. Starting in the 1990s, such programs began to include more distantly related species such as L. tatarica. Seeds keep best when stored in cool conditions, and, unless stored cryogenically, remain viable the longest when stored at ; they are relatively short lived in storage. At room temperature, lettuce seeds remain viable for only a few months. 
However, when newly harvested lettuce seed is stored cryogenically, this life increases to a half-life of 500 years for vaporized nitrogen and 3,400 years for liquid nitrogen; this advantage is lost if seeds are not frozen promptly after harvesting. Cultivars (varieties) There are several types and cultivars of lettuce. Categorization may sometimes refer to "leaf" versus "head", but there are seven main cultivar groups of lettuce, each including many varieties: Leaf—Also known as looseleaf, cutting or bunching lettuce, this type has loosely bunched leaves and is the most widely planted. It is used mainly for salads. Red leaf lettuce—A group of lettuce types with red leaves. Romaine/Cos—Used mainly for salads and sandwiches, this type forms long, upright heads. This is the most often used lettuce in Caesar salads. Little Gem—a dwarf, compact romaine lettuce, popular in the UK. Iceberg/Crisphead—The most popular type in the United States. Iceberg lettuce is very heat-sensitive and was originally developed in 1894 for growth in the northern United States by Burpee Seeds and Plants. It gets its name from the way it was transported in crushed ice, where the heads of lettuce looked like icebergs. Today, it ships well, but is low in flavor and nutritional content, being composed of even more water than other lettuce types. Butterhead—Also known as Boston or Bibb lettuce, and traditionally in the UK as "round lettuce", this type is a head lettuce with a loose arrangement of leaves, known for its sweet flavor and tender texture. Summercrisp—Also called Batavian or French crisp, this lettuce is midway between the crisphead and leaf types. These lettuces tend to be larger, bolt-resistant and well-flavored. Celtuce/Stem—This type is grown for its seedstalk, rather than its leaves, and is used in Asian cooking, primarily Chinese, as well as stewed and creamed dishes. Oilseed—This type is grown for its seeds, which are pressed to extract an oil mainly used for cooking. It has few leaves, bolts quickly and produces seeds around 50 percent larger than other types of lettuce. The four main types in the Western world have been looseleaf, romaine, crisphead, and butterhead, with the others being intermediary or more exotic. The butterhead and crisphead types are sometimes known together as "cabbage" lettuce, because their heads are shorter, flatter, and more cabbage-like than romaine lettuces. Cultivation problems Soil nutrient deficiencies can cause a variety of plant problems that range from malformed plants to a lack of head growth. Many insects are attracted to lettuce, including cutworms, which cut seedlings off at the soil line; wireworms and nematodes, which cause yellow, stunted plants; tarnished plant bugs and aphids, which cause yellow, distorted leaves; leafhoppers, which cause stunted growth and pale leaves; thrips, which turn leaves gray-green or silver; leafminers, which create tunnels within the leaves; flea beetles, which cut small holes in leaves and caterpillars, slugs and snails, which cut large holes in leaves. For example, the larvae of the ghost moth is a common pest of lettuce plants. Mammals, including rabbits and groundhogs, also eat the plants. Lettuce contains several defensive compounds, including sesquiterpene lactones, and other natural phenolics such as flavonol and glycosides, which help to protect it against pests. 
Certain varieties contain more than others, and some selective breeding and genetic modification studies have focused on using this trait to identify and produce commercial varieties with increased pest resistance. Lettuce also suffers from several viral diseases, including big vein, which causes yellow, distorted leaves, and mosaic virus, which is spread by aphids and causes stunted plant growth and deformed leaves. Aster yellows is a disease, caused by bacteria carried by leafhoppers, which results in deformed leaves. Fungal diseases include powdery mildew and downy mildew, which cause leaves to mold and die, and bottom rot, lettuce drop and gray mold, which cause entire plants to rot and collapse. Gray mold is caused by the fungus Botrytis cinerea, for which UV-C treatments may be used: Vàsquez et al. 2017 find that phenylalanine ammonia-lyase activity, phenolic production, and B. cinerea resistance are increased by UV-C. Crowding lettuce tends to attract pests and diseases. Weeds can also be an issue, as cultivated lettuce is generally not competitive with them, especially when directly seeded into the ground. Transplanted lettuce (started in flats and later moved to growing beds) is generally more competitive initially, but can still be crowded later in the season, causing misshapen lettuce and lower yields. Weeds also act as homes for insects and disease and can make harvesting more difficult. Herbicides are often used to control weeds in commercial production. However, this has led to the development of herbicide-resistant weeds in lettuce cultivation. Production In 2022, world production of lettuce (reported combined with chicory) was 27 million tonnes, with China alone producing 55% of the total (table). Lettuce is the only member of the genus Lactuca to be grown commercially. Although China is the top world producer of lettuce, the majority of the crop is consumed domestically. Markets Western Europe and North America were the original major markets for large-scale lettuce production. By the late 1900s, Asia, South America, Australia and Africa became more substantial markets. Different locations tended to prefer different types of lettuce, with butterhead prevailing in northern Europe and Great Britain, romaine in the Mediterranean and stem lettuce in China and Egypt. By the late 20th century, the preferred types began to change, with crisphead, especially iceberg, lettuce becoming the dominant type in northern Europe and Great Britain and more popular in western Europe. In the US, no one type predominated until the early 20th century, when crisphead lettuces began gaining popularity. After the 1940s, with the development of iceberg lettuce, 95 percent of the lettuce grown and consumed in the US was crisphead lettuce. By the end of the century, other types began to regain popularity and eventually made up over 30 percent of production. Stem lettuce was first developed in China, where it remains primarily cultivated. In the early 21st century, bagged salad products increased in the lettuce market, especially in the US where innovative packaging and shipping methods prolonged freshness. In the United States in 2022, lettuce was the main vegetable ingredient in salads, and was the most consumed among leaf vegetables; its market was about 20% of all vegetables, with Romaine and iceberg having about equal sales. Some 85% of the lettuce consumed in the United States in 2022 was produced domestically. 
Uses Culinary As described around 50 AD, lettuce leaves were often cooked and served by the Romans with an oil-and-vinegar dressing; however, smaller leaves were sometimes eaten raw. During the 81–96 AD reign of Domitian, the tradition of serving a lettuce salad before a meal began. Post-Roman Europe continued the tradition of poaching lettuce, mainly with large romaine types, as well as the method of pouring a hot oil and vinegar mixture over the leaves. Today, the majority of lettuce is grown for its leaves, although one type is grown for its stem and one for its seeds, which are made into an oil. Most lettuce is used in salads, either alone or with other greens, vegetables, meats and cheeses. Romaine lettuce is often used for Caesar salads. Lettuce leaves can also be found in soups, sandwiches and wraps, while the stems are eaten both raw and cooked. The consumption of lettuce in China developed differently from in Western countries, due to health risks and cultural aversion to eating raw leaves; Chinese "salads" are composed of cooked vegetables and are served hot or cold. Lettuce is also used in a larger variety of dishes than in Western countries, contributing to a range of dishes including bean curd and meat dishes, soups and stir-frys plain or with other vegetables. Stem lettuce, widely consumed in China, is eaten either raw or cooked, the latter primarily in soups and stir-frys. Lettuce is also used as a primary ingredient in the preparation of lettuce soup. Nutrition Raw iceberg lettuce is 96% water, 3% carbohydrates, and contains negligible protein and fat (table). In a reference amount of , iceberg lettuce supplies 14 calories and is a rich source (20% or more of the Daily Value, DV) of vitamin K (20% DV), with no other micronutrients in significant content (table). In lettuce varieties with dark green leaves, such as romaine (also called cos), vitamin A contents are appreciable due to the presence of the provitamin A compound, beta-carotene. Dark green varieties of lettuce also contain moderate amounts of calcium and iron. The edible spine and ribs of the lettuce plant supply dietary fiber, while micronutrients are contained in the leaf portion. Food-borne illness Food-borne pathogens that can survive on lettuce include Listeria monocytogenes, the causative agent of listeriosis, which multiplies in storage. However, despite high levels of bacteria being found on ready-to-eat lettuce products, a 2008 study found no incidents of food-borne illness related to listeriosis, possibly due to the product's short shelf life, indigenous microflora competing with the Listeria bacteria or inhibition of bacteria to cause listeriosis. Other bacteria found on lettuce include Aeromonas species, which have not been linked to any outbreaks; Campylobacter species, which cause campylobacteriosis; and Yersinia intermedia and Yersinia kristensenii (species of Yersinia), which have been found mainly in lettuce. Salmonella bacteria, including the uncommon Salmonella braenderup type, have also caused outbreaks traced to contaminated lettuce. Viruses, including hepatitis A, calicivirus and a Norwalk-like strain, have been found in lettuce. The vegetable has also been linked to outbreaks of parasitic infestations, including Giardia lamblia. Lettuce has been linked to numerous outbreaks of the bacteria E.coli O157:H7 and Shigella; the plants were most likely contaminated through contact with animal or human feces. 
A 2007 study determined that the vacuum cooling method, especially prevalent in the California lettuce industry, increased the uptake and survival rates of E. coli O157:H7. Scientific experiments using treated municipal wastewater as irrigation for romaine lettuce have shown that the contamination levels of foliage, leachate, and soil with E. coli and AP205 bacteriophage (used by researchers as a surrogate for enteric viruses), respectively, were directly correlated with the presence of these organisms in the irrigation water. Due to the increase in food demand, the use of treated wastewater effluent for irrigation and animal or human excreta (i.e., manure or biosolids) as soil amendments is increasing. As such, so are the outbreaks of food-borne illnesses. Due to the overuse of antibiotics in farming, the number of pathogens resistant to antibiotics is increasing, one of these being AR E.coli, which has been found on lettuce irrigated with wastewater. Pathogens found on lettuce are not specific to lettuce (though some E. coli strains have affinity for Romaine). But, unlike other vegetables which tend to be cooked, lettuce is eaten raw, thus food-borne outbreaks associated with it are more frequent and affect a larger number of people.
Biology and health sciences
Asterales
null
57122
https://en.wikipedia.org/wiki/Multiplication%20table
Multiplication table
In mathematics, a multiplication table (sometimes, less formally, a times table) is a mathematical table used to define a multiplication operation for an algebraic system. The decimal multiplication table was traditionally taught as an essential part of elementary arithmetic around the world, as it lays the foundation for arithmetic operations with base-ten numbers. Many educators believe it is necessary to memorize the table up to 9 × 9. History Pre-modern times The oldest known multiplication tables were used by the Babylonians about 4000 years ago. However, they used a base of 60. The oldest known tables using a base of 10 are the Chinese decimal multiplication table on bamboo strips dating to about 305 BC, during China's Warring States period. The multiplication table is sometimes attributed to the ancient Greek mathematician Pythagoras (570–495 BC). It is also called the Table of Pythagoras in many languages (for example French, Italian and Russian), sometimes in English. The Greco-Roman mathematician Nichomachus (60–120 AD), a follower of Neopythagoreanism, included a multiplication table in his Introduction to Arithmetic, whereas the oldest surviving Greek multiplication table is on a wax tablet dated to the 1st century AD and currently housed in the British Museum. In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times and the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144." Modern times In his 1820 book The Philosophy of Arithmetic, mathematician John Leslie published a multiplication table up to 1000 × 1000, which allows numbers to be multiplied in triplets of digits at a time. Leslie also recommended that young pupils memorize the multiplication table up to 50 × 50. The illustration below shows a table up to 12 × 12, which is a size commonly used nowadays in English-world schools. Because multiplication of integers is commutative, many schools use a smaller table as below. Some schools even remove the first column since 1 is the multiplicative identity. The traditional rote learning of multiplication was based on memorization of columns in the table, arranged as follows. This form of writing the multiplication table in columns with complete number sentences is still used in some countries, such as Bosnia and Herzegovina, instead of the modern grids above. Patterns in the tables There is a pattern in the multiplication table that can help people to memorize the table more easily. It uses the figures below: Figure 1 is used for multiples of 1, 3, 7, and 9. Figure 2 is used for the multiples of 2, 4, 6, and 8. These patterns can be used to memorize the multiples of any number from 0 to 10, except 5. As you would start on the number you are multiplying, when you multiply by 0, you stay on 0 (0 is external and so the arrows have no effect on 0, otherwise 0 is used as a link to create a perpetual cycle). The pattern also works with multiples of 10, by starting at 1 and simply adding 0, giving you 10, then just apply every number in the pattern to the "tens" unit as you would normally do as usual to the "ones" unit. For example, to recall all the multiples of 7: Look at the 7 in the first picture and follow the arrow. The next number in the direction of the arrow is 4. So think of the next number after 7 that ends with 4, which is 14. 
The next number in the direction of the arrow is 1. So think of the next number after 14 that ends with 1, which is 21. After coming to the top of this column, start with the bottom of the next column, and travel in the same direction. The number is 8. So think of the next number after 21 that ends with 8, which is 28. Proceed in the same way until the last number, 3, corresponding to 63. Next, use the 0 at the bottom. It corresponds to 70. Then, start again with the 7. This time it will correspond to 77. Continue like this. In abstract algebra Tables can also define binary operations on groups, fields, rings, and other algebraic systems. In such contexts they are called Cayley tables. For every natural number n, addition and multiplication in Zn, the ring of integers modulo n, are each described by an n by n table. (See Modular arithmetic.) For example, in Z5 the addition table has rows (0 1 2 3 4), (1 2 3 4 0), (2 3 4 0 1), (3 4 0 1 2) and (4 0 1 2 3), while the multiplication table has rows (0 0 0 0 0), (0 1 2 3 4), (0 2 4 1 3), (0 3 1 4 2) and (0 4 3 2 1); a short computational sketch of such tables is given below. For other examples, see group. Hypercomplex numbers Hypercomplex number multiplication tables show the non-commutative results of multiplying two hypercomplex imaginary units. The simplest example is that of the quaternion multiplication table. {|class="wikitable" |+Quaternion multiplication table |- !width=15 nowrap|↓ × → !width=15|1 !width=15|i !width=15|j !width=15|k |- !1 |1 |i |j |k |- !i |i |−1 |k |−j |- !j |j |−k |−1 |i |- !k |k |j |−i |−1 |} For further examples, see , , and . Chinese and Japanese multiplication tables Mokkan discovered at Heijō Palace suggest that the multiplication table may have been introduced to Japan through Chinese mathematical treatises such as the Sunzi Suanjing, because their expression of the multiplication table shares the character in products less than ten. Chinese and Japanese share a similar system of eighty-one short, easily memorable sentences taught to students to help them learn the multiplication table up to 9 × 9. In current usage, the sentences that express products less than ten include an additional particle in both languages. In the case of modern Chinese, this is (); and in Japanese, this is (). This is useful for those who practice calculation with a suanpan or a soroban, because the sentences remind them to shift one column to the right when inputting a product that does not begin with a tens digit. In particular, the Japanese multiplication table uses non-standard pronunciations for numbers in some specific instances (such as the replacement of san roku with saburoku). Warring States decimal multiplication bamboo slips A bundle of 21 bamboo slips dated 305 BC in the Warring States period in the Tsinghua Bamboo Slips (清華簡) collection is the world's earliest known example of a decimal multiplication table. Standards-based mathematics reform in the US In 1989, the National Council of Teachers of Mathematics (NCTM) developed new standards based on the belief that all students should learn higher-order thinking skills; these recommended reduced emphasis on the teaching of traditional methods that relied on rote memorization, such as multiplication tables. Widely adopted texts such as Investigations in Numbers, Data, and Space (widely known as TERC after its producer, Technical Education Research Centers) omitted aids such as multiplication tables in early editions. NCTM made it clear in their 2006 Focal Points that basic mathematics facts must be learned, though there is no consensus on whether rote memorization is the best method. 
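A minimal computational sketch of such Cayley tables follows; it is an illustration only, and the helper names (cayley_table, print_table) are chosen for this example rather than taken from any source. The first two calls reproduce the addition and multiplication tables of Z5 described above, and the final line shows that the last-digit memorisation patterns discussed earlier are simply rows of the multiplication table of Z10.

```python
# Illustrative sketch: Cayley tables for Z_n, the integers modulo n.
# Helper names are assumptions for this example, not from any particular library.

def cayley_table(n, op):
    """Return the n-by-n table whose (a, b) entry is op(a, b) reduced modulo n."""
    return [[op(a, b) % n for b in range(n)] for a in range(n)]

def print_table(table, symbol):
    """Print a small Cayley table with row and column headers."""
    n = len(table)
    print(symbol.rjust(2), *[str(b).rjust(2) for b in range(n)])
    for a, row in enumerate(table):
        print(str(a).rjust(2), *[str(v).rjust(2) for v in row])

if __name__ == "__main__":
    # Addition and multiplication tables of Z_5, as given in the text.
    print_table(cayley_table(5, lambda a, b: a + b), "+")
    print()
    print_table(cayley_table(5, lambda a, b: a * b), "x")

    # Multiplication modulo 10 gives the last-digit cycles used in the
    # memorisation patterns: multiples of 7 end in 7, 4, 1, 8, 5, 2, 9, 6, 3, 0.
    print([7 * k % 10 for k in range(1, 11)])
```

Passing a different modulus to cayley_table prints the table of any Zn, and any other binary operation could be supplied in place of the lambdas.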
In recent years, a number of nontraditional methods have been devised to help children learn multiplication facts, including video-game style apps and books that aim to teach times tables through character-based stories.
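The Cayley tables for Zn mentioned in the abstract-algebra discussion above are fully determined by modular arithmetic and are easy to generate programmatically. The following minimal Python sketch (illustrative code, not drawn from any cited source) prints the addition and multiplication tables for Z5; changing n reproduces the analogous tables for any Zn.

```python
def cayley_table(op, elements):
    """Return the Cayley table of a binary operation as a list of rows."""
    return [[op(a, b) for b in elements] for a in elements]

def print_table(title, table, elements):
    """Print a table with row and column headers."""
    print(title)
    print("   " + " ".join(f"{e:2d}" for e in elements))
    for e, row in zip(elements, table):
        print(f"{e:2d} " + " ".join(f"{v:2d}" for v in row))
    print()

n = 5
zn = list(range(n))
print_table("Addition in Z5", cayley_table(lambda a, b: (a + b) % n, zn), zn)
print_table("Multiplication in Z5", cayley_table(lambda a, b: (a * b) % n, zn), zn)
```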
Mathematics
Basics
null
57146
https://en.wikipedia.org/wiki/Chickpea
Chickpea
The chickpea or chick pea (Cicer arietinum) is an annual legume of the family Fabaceae, subfamily Faboideae, cultivated for its edible seeds. Its different types are variously known as gram or Bengal gram; chhola, chhana, chana, or channa; garbanzo or garbanzo bean; or Egyptian pea. It is one of the earliest cultivated legumes, the oldest archaeological evidence of which was found in Syria. Chickpeas are high in protein. The chickpea is a key ingredient in Mediterranean and Middle Eastern cuisines, used in hummus, and, when soaked and coarsely ground with herbs and spices then made into patties and fried, falafel. As an important part of Indian cuisine, it is used in salads, soups and stews, and curry, in chana masala, and in other food products that contain channa (chickpeas). In 2022, India accounted for 75% of global chickpea production. Etymology Chickpeas have been cultivated for at least ten thousand years. Cultivation spread from the Fertile Crescent eastward toward South Asia and into Europe through the Balkans. Historical linguistics have found ancestral words relating to chickpeas in the prehistoric Proto-Indo-European language family that evolved into the Indo-European languages. The Proto-Indo-European roots and that denoted both and appeared in the Pontic–Caspian steppe of Eastern Europe between 4,500 and 2,500 BCE. As speakers of the language became isolated from each other through the Indo-European migrations, the regional dialects diverged due to contact with other languages and dialects, and transformed into the known ancient Indo-European languages. The Old Prussian word , appearing between 1 and 100 CE, retained the meaning of the word, but in most cases, the word came to be used to denote chickpeas. In Old Macedonian, the word appeared between 1000 and 400 BCE, and may have evolved from the Proto-Hellenic word . In Ancient Rome, the Latin word for chickpeas appeared around 700 BCE, and is probably derived from the word used by the Pelasgians that inhabited north Greece before Greek-speaking tribes took over. The Old Armenian word for chickpeas appeared before 400 CE. Over time, linkages between languages led to other descendant words, including the Albanian word , the Swedish word , the Slovak word , the Estonian word , the Basque word , and the Maltese word . The Latin word evolved into words for chickpeas in nearly all extinct and living Romance languages, including the Mozarabic word ; the Catalan words , , , and ; the Walloon words ; the Old French words and ; and the Modern French terms , , and . These words were borrowed by many geographically neighboring languages, such as the French term becoming in Old English. The word pease, like the modern words for wheat and corn, was both singular and plural, but since it had an "s" sound at the end of it which became associated with the plural form of nouns, English speakers by the end of the 17th century were starting to refer to a single grain of pease as a pea. Other important Proto-Indo-European roots relating to chickpeas are , , and , which were used to denote both the kernel of a legume and a pea. This root evolved into the Greek word , mentioned in The Iliad in around 800 BCE and in Historia Plantarum by Theophrastus, written between 350 and 287 BCE. The Portuguese words and ; the Asturian word ; the Galician word ; the French words , , and ; and the Spanish word are all related to the Greek term. 
In American English, the term garbanzo to refer to the chickpea appeared in writing as early as 1759, and the seed is also referred to as a garbanzo bean. Taxonomy Chickpea (Cicer arietinum) is a member of the genus Cicer and the legume family, Fabaceae. Carl Linnaeus described it in the first edition of Species Plantarum in 1753, marking the first use of binomial nomenclature for the plant. Linnaeus classified the plant in the genus Cicer, which was the Latin term for chickpeas, crediting Joseph Pitton de Tournefort's 1694 publication which called it "Cicer arietinum". Tournefort himself repeated the names the plant that had been used since antiquity. The specific epithet arietinum is based on the shape of the seed resembling the head of a ram. In Ancient Greece, Theophrastus described one of the varieties of chickpea called "rams" in Historia Plantarum. The Roman writer on agriculture Lucius Junius Moderatus Columella wrote about chickpeas in the second book of De re rustica, published in about 64 CE, and said that the chickpea was called arietillum. Pliny the Elder expanded further in Naturalis Historia that this name was due to the seed's resemblance to the head of a ram. Cicer arietinum is the type species of the genus. The wild species C. reticulatum is interfertile with C. arietinum and is considered to be the progenitor of the cultivated species. C. echinospermum is also closely related and can be hybridized with both C. reticulatum and C. arietinum, but generally produce infertile seeds. History The earliest well-preserved archaeobotanical evidence of chickpea outside its wild progenitor's natural distribution area comes from the site of Tell el-Kerkh, in modern Syria, dating back to the early Pre-Pottery Neolithic period around c.8400 BC. Cicer reticulatum is the wild progenitor of chickpeas. This species currently grows only in southeast Turkey, where it is believed to have been domesticated. The domestication event can be dated to around 7000 BC. Domesticated chickpeas have been found at Pre-Pottery Neolithic B sites in Turkey and the Levant, namely at Çayönü, Hacilar, and Tell es-Sultan (Jericho). Chickpeas, by the Bronze Age, had spread to modern day Israel, Iraq, Pakistan, and India, having arrived in Ethiopia by the Iron Age. In southern France, mesolithic layers in a cave at L'Abeurador, Hérault, have yielded chickpeas, carbon-dated to 6790±90 BC. They were found in the late Neolithic (about 3500 BC) sites at Thessaly, Kastanas, Lerna and Dimini, Greece. Chickpeas are mentioned in Charlemagne's (about 800 AD) as , as grown in each imperial demesne. Albertus Magnus mentions red, white, and black varieties. The 17th-century botanist Nicholas Culpeper noted "chick-pease or cicers" are less "windy" than peas and more nourishing. Ancient people also associated chickpeas with Venus because they were said to offer medical uses such as increasing semen and milk production, inducing menstruation and urination, and helping to treat kidney stones. "White cicers" were thought to be especially strong and helpful. In 1793, ground, roasted chickpeas were noted by a German writer as a substitute for coffee in Europe. In the First World War, they were grown for this use in some areas of Germany. They are still sometimes brewed instead of coffee. Genome sequencing Sequencing of the genome of the chickpea has been completed for 90 chickpea genotypes, including several wild species. 
A collaboration of 20 research organizations, led by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), sequenced CDC Frontier, a kabuli chickpea variety, and identified more than 28,000 genes and several million genetic markers. Description The plant grows to 20–50 cm (8–20 in) high and has small, feathery leaves on either side of the stem. Chickpeas are a type of pulse, with one seedpod containing two or three peas. It has white flowers with blue, violet, or pink veins. Varieties The most common variety of chickpea in South Asia, Ethiopia, Mexico, and Iran is the desi type, also called Bengal gram. It has small, dark seeds and a rough coat. It can be black, green or speckled. In Hindi, it is called desi chana 'native chickpea' or kala chana 'black chickpea', and in Assamese and Bengali, it is called boot or chholaa boot. It can be hulled and split to make chana dal, Kurukshetra Prasadam (channa laddu), and bootor daali. Around the Mediterranean and in the Middle East, the most common variety of chickpea is the kabuli type. It is large and tan-colored, with a smooth coat. It was introduced to India in the 18th century from Afghanistan and is called kabuli chana in Hindi. An uncommon black chickpea, ceci neri, is grown only in Apulia and Basilicata, in southern Italy. It is around the same size as garbanzo beans, larger and darker than the 'desi' variety. Uses Culinary Chickpeas are usually rapidly boiled for 10 minutes and then simmered for longer. Dried chickpeas need a long cooking time (1–2 hours) but will easily fall apart when cooked longer. If soaked for 12–24 hours before use, cooking time can be shortened by around 30 minutes. Chickpeas can also be pressure cooked or sous vide cooked at . Mature chickpeas can be cooked and eaten cold in salads, cooked in stews, ground into flour, ground and shaped in balls and fried as falafel, made into a batter and baked to make farinata or socca, or fried to make panelle. Chickpea flour is known as gram flour or besan in South Asia and is used frequently in South Asian cuisine. In Portugal, chickpeas are one of the main ingredients in rancho, eaten with pasta, meat, or rice. They are used in other hot dishes with bacalhau and in soups, meat stews, salads mixed with tuna and vegetables, olive oil, vinegar, hot pepper and salt. In Spain, they are used cold in tapas and salads, as well as in cocido madrileño. Hummus is the Arabic word for chickpeas, which are often cooked and ground into a paste and mixed with tahini (sesame seed paste) to make ḥummuṣ bi ṭaḥīna, usually called simply hummus in English. By the end of the 20th century, hummus had become common in American cuisine: by 2010, 5% of Americans consumed it regularly, and it was present at some point in 17% of American households. In the Middle East, chickpeas are also roasted, spiced, and eaten as a snack, such as leblebi. Chickpeas and Bengal grams are used to make curries. They are one of the most popular vegetarian foods in the Indian subcontinent and in diaspora communities of many other countries, served with a variety of bread or steamed rice. Popular dishes in Indian cuisine are made with chickpea flour, such as mirchi bajji and mirapakaya bajji. In India, as well as in the Levant, unripe chickpeas are often picked out of the pod and eaten as a raw snack, and the leaves are eaten as a leaf vegetable in salads. In India, desserts such as besan halwa and sweets such as mysore pak, and laddu are made. 
Chickpea flour is used to make "Burmese tofu", which was first known among the Shan people of Burma. In South Asian cuisine, chickpea flour (besan) is used as a batter to coat vegetables before deep frying to make pakoras. The flour is also used as a batter to coat vegetables and meats before frying or fried alone, such as panelle (little bread), a chickpea fritter from Sicily. Chickpea flour is used to make the Mediterranean flatbread socca and is called panisse in Provence, southern France. It is made of cooked chickpea flour, poured into saucers, allowed to set, cut into strips, and fried in olive oil, often eaten during Lent. In Tuscany, chickpea flour (farina di ceci) is used to make an oven-baked pancake: the flour is mixed with water, oil and salt. Chickpea flour, known as kadlehittu in Kannada, is used for making sweet dish Mysore pak. In the Philippines, chickpeas preserved in syrup are eaten as sweets and in desserts such as halo-halo. Ashkenazi Jews traditionally serve whole chickpeas, referred to as arbes (אַרבעס) in Yiddish, at the Shalom Zachar celebration for baby boys. The chickpeas are boiled until soft and served hot with salt and lots of ground black pepper. Guasanas or garbanza is a Mexican chickpea street snack. The beans, while still green, are cooked in water and salt, kept in a steamer to maintain their humidity, and served in a plastic bag. A chickpea-derived liquid (aquafaba) can be used as an egg white replacement to make meringue or ice cream, with the residual pomace used as flour. Animal feed Chickpeas are an energy and protein source as animal feed. Raw chickpeas have a lower trypsin and chymotrypsin inhibitor content than peas, common beans, and soybeans. This leads to higher nutrition values and fewer digestive problems in nonruminants. Nonruminant diets can be completed with 200 g/kg of raw chickpeas to promote egg production and growth of birds and pigs. Higher amounts can be used when chickpeas are treated with heat. Experiments have shown that ruminants grow equally well and produce an equal amount and quality of milk when soybean or cereal meals are replaced with chickpeas. Pigs show the same performance, but growing pigs experience a negative effect of raw chickpea feed; extruded chickpeas can increase performance even in growing pigs. Only young broilers (starting period) showed worse performance in poultry diet experiments with untreated chickpeas. Fish performed equally well when extruded chickpeas replaced their soybean or cereal diet. Chickpea seeds have also been used in rabbit diets. Secondary components of legumes—such as lecithin, polyphenols, oligosaccharides; and amylase, protease, trypsin and chymotrypsin inhibitors—can lead to lower nutrient availability, and thus to impaired growth and health of animals (especially in nonruminants). Ruminants generally have less trouble digesting legumes with secondary components since they can inactivate them in the rumen liquor. Their diets can be supplemented by 300 g/kg or more raw chickpea seeds. However, protein digestibility and energy availability can be improved through treatments such as germination, dehulling, and heat. Extrusion is a very good heat technique to destroy secondary legume components since the proteins are irreversibly denatured. Overprocessing may decrease the nutritional value; extrusion leads to losses in minerals and vitamins, while dry heating does not change the chemical composition. 
Production In 2022, world production of chickpeas was 18 million tonnes, led by India with 75% of the global total (table). Nutrition Chickpeas are a nutrient-dense food, providing rich content (20% or higher of the Daily Value, DV) of protein, dietary fiber, folate, and certain dietary minerals, such as iron and phosphorus in a 100-gram reference amount (see adjacent nutrition table). Thiamin, vitamin B6, magnesium, and zinc contents are moderate, providing 10–16% of the DV. Compared to reference levels established by the United Nations Food and Agriculture Organization and World Health Organization, proteins in cooked and germinated chickpeas are rich in essential amino acids such as lysine, isoleucine, tryptophan, and total aromatic amino acids. A reference serving of cooked chickpeas provides of food energy. Cooked chickpeas are 60% water, 27% carbohydrates, 9% protein and 3% fat (table). Seventy-five percent of the fat content is unsaturated fatty acids for which linoleic acid comprises 43% of the total fat. Effects of cooking Cooking treatments do not lead to variance in total protein and carbohydrate content. Soaking and cooking of dry seeds possibly induces chemical modification of protein-fibre complexes, which leads to an increase in crude fibre content. Thus, cooking can increase protein quality by inactivating or destroying heat-labile antinutritional factors. Cooking also increases protein digestibility, essential amino acid index, and protein efficiency ratio. Although cooking lowers concentrations of amino acids such as tryptophan, lysine, total aromatic, and sulphur-containing amino acids, their contents are still higher than proposed by the FAO/WHO reference. Raffinose and sucrose and other reducing sugars diffuse from the chickpea into the cooking water and this reduces or completely removes these components from the chickpea. Cooking also significantly reduces fat and mineral content. The B vitamins riboflavin, thiamin, niacin, and pyridoxine dissolve into cooking water at differing rates. Germination Germination of chickpeas improves protein digestibility, although at a lower level than cooking. Germination degrades proteins to simple peptides, improving crude protein, nonprotein nitrogen, and crude fibre content. Germination decreases lysine, tryptophan, sulphur and total aromatic amino acids, but most contents are still higher than proposed by the FAO/WHO reference pattern. Oligosaccharides, such as stachyose and raffinose, are reduced in higher amounts during germination than during cooking. Minerals and B vitamins are retained more effectively during germination than with cooking. Phytic acids are reduced significantly, but trypsin inhibitor, tannin, and saponin reduction is less effective than cooking. Autoclaving, microwave cooking, boiling In a 2002 study comparing germination and cooking effects on chickpea nutritional values, all treatments of cooking (autoclaving, microwave cooking, boiling) were found to improve protein digestibility. Essential amino acids were slightly increased by boiling and microwave cooking compared to autoclaving and germination. losses in B-vitamins and minerals in chickpeas cooked by microwaving were smaller than in those cooked by boiling and autoclaving. Skinning Chickpeas contain oligosaccharides (raffinose, stachyose, and verbascose) which are indigestible to humans but are fermented in the gut by bacteria, leading to flatulence in susceptible individuals. 
This can be prevented by skinning the husks from the chickpeas before serving. Leaves In some parts of the world, young chickpea leaves are consumed as cooked green vegetables. Especially in malnourished populations, it can supplement important dietary nutrients because regions where chickpeas are consumed have sometimes been found to have populations lacking micronutrients. Chickpea leaves have a significantly higher mineral content than either cabbage leaves or spinach leaves. Environmental factors and nutrient availability could influence mineral concentrations in natural settings. Consumption of chickpea leaves may contribute nutrients to the diet. Research The consumption of chickpeas is under preliminary research for the potential to improve nutrition and affect chronic diseases. Heat and nutrient cultivation Agricultural yield for chickpeas is often based on genetic and phenotypic variability, which has recently been influenced by artificial selection. The uptake of macronutrients such as inorganic phosphorus or nitrogen is vital to the plant development of Cicer arietinum, commonly known as the perennial chickpea. Heat cultivation and macronutrient coupling are two relatively unknown methods used to increase the yield and size of the chickpea. Recent research has indicated that a combination of heat treatment along with the two vital macronutrients, phosphorus and nitrogen, are the most critical components to increasing the overall yield of Cicer arietinum. Perennial chickpeas are a fundamental source of nutrition in animal feed as they are high-energy and protein sources for livestock. Unlike other food crops, the perennial chickpea can change its nutritional content in response to heat cultivation. Treating the chickpea with a constant heat source increases its protein content almost threefold. Consequently, the impact of heat cultivation affects the protein content of the chickpea itself and the ecosystem it supports. Increasing the height and size of chickpea plants involves using macronutrient fertilization with varying doses of inorganic phosphorus and nitrogen. The level of phosphorus that a chickpea seed is exposed to during its lifecycle has a positive correlation relative to the height of the plant at full maturity. Increasing the levels of inorganic phosphorus at all doses incrementally increases the height of the chickpea plant. Thus, the seasonal changes in phosphorus soil content, as well as periods of drought that are known to be a native characteristic of the dry Middle-Eastern region where the chickpea is most commonly cultivated, have a strong effect on the growth of the plant itself. Plant yield is also affected by a combination of phosphorus nutrition and water supply, resulting in a 12% increase in crop yield. Nitrogen nutrition is another factor that affects the yield of Cicer arietinum, although the application differs from other perennial crops regarding the levels administered on the plant. High doses of nitrogen inhibit the yield of the chickpea plant. Drought stress is a likely factor that inhibits nitrogen uptake and subsequent fixation in the roots of Cicer arietinum. The perennial chickpea's growth depends on the balance between nitrogen fixation and assimilation, which is also characteristic of many other agricultural plant types. 
The influence of drought stress, sowing date, and mineral nitrogen supply affect the plant's yield and size, with trials showing that Cicer arietinum differed from other plant species in its capacity to assimilate mineral nitrogen supply from the soil during drought stress. Additional minerals and micronutrients make the absorption process of nitrogen and phosphorus more available. Inorganic phosphate ions are generally attracted towards charged minerals such as iron and aluminium oxides. Additionally, growth and yield are also limited by the micronutrients zinc and boron deficiencies in the soil. Boron-rich soil increased chickpea yield and size, while soil fertilization with zinc seemed to have no apparent effect on the chickpea yield. Pathogens Pathogens in chickpeas are the main cause of yield loss (up to 90%). One example is the fungus Fusarium oxysporum f.sp. ciceris, present in most of the major pulse crop-growing areas and causing regular yield damages between 10 and 15%. Many plant hosts produce heat shock protein 70s including C. arietinum. In response to F. o. ciceris Gupta et al., 2017 finds C. arietinum produces an orthologue of AtHSP70-1, an Arabidopsis HSP70. From 1978 until 1995, the worldwide number of pathogens increased from 49 to 172, of which 35 were recorded in India. These pathogens originate from groups of bacteria, fungi, viruses, mycoplasma and nematodes and show a high genotypic variation. The most widely distributed pathogens are Ascochyta rabiei (35 countries), Fusarium oxysporum f.sp. ciceris (32 countries) Uromyces ciceris-arietini (25 countries), bean leafroll virus (23 countries), and Macrophomina phaseolina (21 countries). Ascochyta disease emergence is favoured by wet weather; spores are carried to new plants by wind and water splash. The stagnation of yield improvement over the last decades is linked to the susceptibility to pathogens. Research for yield improvement, such as an attempt to increase yield from by breeding cold-resistant varieties, is always linked with pathogen-resistance breeding as pathogens such as Ascochyta rabiei and F. o. f.sp. ciceris flourish in conditions such as cold temperature. Research started selecting favourable genes for pathogen resistance and other traits through marker-assisted selection. This method is a promising sign for the future to achieve significant yield improvements. Gallery
Biology and health sciences
Fabales
null
57169
https://en.wikipedia.org/wiki/Heating%2C%20ventilation%2C%20and%20air%20conditioning
Heating, ventilation, and air conditioning
Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to control the temperature, humidity, and purity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers). HVAC is an important part of residential structures such as single family homes, apartment buildings, hotels, and senior living facilities; medium to large industrial and office buildings such as skyscrapers and hospitals; vehicles such as cars, trains, airplanes, ships and submarines; and in marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors. Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality which involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, keeps interior building air circulating, and prevents stagnation of the interior air. Methods for ventilating a building are divided into mechanical/forced and natural types. Overview The three major functions of heating, ventilation, and air conditioning are interrelated, especially with the need to provide thermal comfort and acceptable indoor air quality within reasonable installation, operation, and maintenance costs. HVAC systems can be used in both domestic and commercial environments. HVAC systems can provide ventilation, and maintain pressure relationships between spaces. The means of air delivery and removal from spaces is known as room air distribution. Individual systems In modern buildings, the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally estimate the capacity and type of system needed and then design the system, selecting the appropriate refrigerant and various components needed. For larger buildings, building service designers, mechanical engineers, or building services engineers analyze, design, and specify the HVAC systems. Specialty mechanical contractors and suppliers then fabricate, install and commission the systems. Building permits and code-compliance inspections of the installations are normally required for all sizes of buildings District networks Although HVAC is executed in individual buildings or other enclosed spaces (like NORAD's underground headquarters), the equipment involved is in some cases an extension of a larger district heating (DH) or district cooling (DC) network, or a combined DHC network. In such cases, the operating and maintenance aspects are simplified and metering becomes necessary to bill for the energy that is consumed, and in some cases energy that is returned to the larger system. For example, at a given time one building may be utilizing chilled water for air conditioning and the warm water it returns may be used in another building for heating, or for the overall heating-portion of the DHC network (likely with energy added to boost the temperature). 
Basing HVAC on a larger network helps provide an economy of scale that is often not possible for individual buildings, for utilizing renewable energy sources such as solar heat, winter's cold, the cooling potential in some places of lakes or seawater for free cooling, and the enabling function of seasonal thermal energy storage. Using such natural sources for HVAC can reduce a building's environmental impact and broaden the range of methods available for heating and cooling. History HVAC is based on inventions and discoveries made by Nikolay Lvov, Michael Faraday, Rolla C. Carpenter, Willis Carrier, Edwin Ruud, Reuben Trane, James Joule, William Rankine, Sadi Carnot, Alice Parker and many others. Multiple inventions within this time frame preceded the beginnings of the first comfort air conditioning system, which was designed in 1902 by Alfred Wolff (Cooper, 2003) for the New York Stock Exchange, while Willis Carrier equipped the Sackett-Wilhelms Printing Company with the process AC unit the same year. Coyne College was the first school to offer HVAC training in 1899. The first residential AC was installed by 1914, and by the 1950s there was "widespread adoption of residential AC". The invention of the components of HVAC systems went hand-in-hand with the Industrial Revolution, and new methods of modernization, higher efficiency, and system control are constantly being introduced by companies and inventors worldwide. Heating Heaters are appliances whose purpose is to generate heat (i.e. warmth) for the building. This can be done via central heating. Such a system contains a boiler, furnace, or heat pump to heat water, steam, or air in a central location such as a furnace room in a home, or a mechanical room in a large building. The heat can be transferred by convection, conduction, or radiation. Space heaters are used to heat single rooms and consist of a single unit. Generation Heaters exist for various types of fuel, including solid fuels, liquids, and gases. Another type of heat source is electricity, normally heating ribbons composed of high-resistance wire (see Nichrome). This principle is also used for baseboard heaters and portable heaters. Electrical heaters are often used as backup or supplemental heat for heat pump systems. The heat pump gained popularity in the 1950s in Japan and the United States. Heat pumps can extract heat from various sources, such as environmental air, exhaust air from a building, or the ground. Heat pumps transfer heat from outside the structure into the air inside. Initially, heat pump HVAC systems were only used in moderate climates, but with improvements in low-temperature operation and reduced loads due to more efficient homes, they are increasing in popularity in cooler climates. They can also operate in reverse to cool an interior. Distribution Water/steam In the case of heated water or steam, piping is used to transport the heat to the rooms. Most modern hot water boiler heating systems have a circulator, which is a pump, to move hot water through the distribution system (as opposed to older gravity-fed systems). The heat can be transferred to the surrounding air using radiators, hot water coils (hydro-air), or other heat exchangers. The radiators may be mounted on walls or installed within the floor to produce floor heat. The use of water as the heat transfer medium is known as hydronics. The heated water can also supply an auxiliary heat exchanger to provide hot water for bathing and washing.
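As a rough back-of-envelope illustration of how a hydronic loop like the one described above transports heat (the flow rate, temperatures, and helper function below are hypothetical assumptions, not figures from this article), the heat delivered is the product of the mass flow rate, the specific heat of water, and the temperature drop across the emitters.

```python
# Approximate heat delivered by a hot-water (hydronic) heating loop: Q = m_dot * c_p * delta_T.
WATER_DENSITY = 1000.0   # kg/m^3, approximate
WATER_CP = 4186.0        # J/(kg*K), specific heat of water, approximate

def hydronic_heat_output(flow_lpm: float, supply_c: float, return_c: float) -> float:
    """Heat delivered to the space in kW, given flow in litres/minute and
    supply/return water temperatures in degrees Celsius."""
    mass_flow = flow_lpm / 1000.0 / 60.0 * WATER_DENSITY  # kg/s
    delta_t = supply_c - return_c                         # K
    return mass_flow * WATER_CP * delta_t / 1000.0        # kW

# Example with hypothetical numbers: 10 L/min circulated at 70 C supply / 50 C return.
print(f"{hydronic_heat_output(10, 70, 50):.1f} kW")  # about 14.0 kW
```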
Air Warm air systems distribute the heated air through ductwork systems of supply and return air through metal or fiberglass ducts. Many systems use the same ducts to distribute air cooled by an evaporator coil for air conditioning. The air supply is normally filtered through air filters to remove dust and pollen particles. Dangers The use of furnaces, space heaters, and boilers as a method of indoor heating could result in incomplete combustion and the emission of carbon monoxide, nitrogen oxides, formaldehyde, volatile organic compounds, and other combustion byproducts. Incomplete combustion occurs when there is insufficient oxygen; the inputs are fuels containing various contaminants and the outputs are harmful byproducts, most dangerously carbon monoxide, which is a tasteless and odorless gas with serious adverse health effects. Without proper ventilation, carbon monoxide can be lethal at concentrations of 1000 ppm (0.1%). However, at several hundred ppm, carbon monoxide exposure induces headaches, fatigue, nausea, and vomiting. Carbon monoxide binds with hemoglobin in the blood, forming carboxyhemoglobin, reducing the blood's ability to transport oxygen. The primary health concerns associated with carbon monoxide exposure are its cardiovascular and neurobehavioral effects. Carbon monoxide can cause atherosclerosis (the hardening of arteries) and can also trigger heart attacks. Neurologically, carbon monoxide exposure reduces hand to eye coordination, vigilance, and continuous performance. It can also affect time discrimination. Ventilation Ventilation is the process of changing or replacing air in any space to control the temperature or remove any combination of moisture, odors, smoke, heat, dust, airborne bacteria, or carbon dioxide, and to replenish oxygen. It plays a critical role in maintaining a healthy indoor environment by preventing the buildup of harmful pollutants and ensuring the circulation of fresh air. Different methods, such as natural ventilation through windows and mechanical ventilation systems, can be used depending on the building design and air quality needs. Ventilation often refers to the intentional delivery of the outside air to the building indoor space. It is one of the most important factors for maintaining acceptable indoor air quality in buildings. Although ventilation is an integral component of maintaining good indoor air quality, it may not be satisfactory alone. A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of ... In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary. Methods for ventilating a building may be divided into mechanical/forced and natural types. Mechanical or forced Mechanical, or forced, ventilation is provided by an air handler (AHU) and used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates more energy is required to remove excess moisture from ventilation air. Kitchens and bathrooms typically have mechanical exhausts to control odors and sometimes humidity. Factors in the design of such systems include the flow rate (which is a function of the fan speed and exhaust vent size) and noise level. Direct drive fans are available for many applications and can reduce maintenance needs. 
In summer, ceiling fans and table/floor fans circulate air within a room for the purpose of reducing the perceived temperature by increasing evaporation of perspiration on the skin of the occupants. Because hot air rises, ceiling fans may be used to keep a room warmer in the winter by circulating the warm stratified air from the ceiling to the floor. Passive Natural ventilation is the ventilation of a building with outside air without using fans or other mechanical systems. It can be via operable windows, louvers, or trickle vents when spaces are small and the architecture permits. ASHRAE defined Natural ventilation as the flow of air through open windows, doors, grilles, and other planned building envelope penetrations, and as being driven by natural and/or artificially produced pressure differentials. Natural ventilation strategies also include cross ventilation, which relies on wind pressure differences on opposite sides of a building. By strategically placing openings, such as windows or vents, on opposing walls, air is channeled through the space to enhance cooling and ventilation. Cross ventilation is most effective when there are clear, unobstructed paths for airflow within the building. In more complex schemes, warm air is allowed to rise and flow out high building openings to the outside (stack effect), causing cool outside air to be drawn into low building openings. Natural ventilation schemes can use very little energy, but care must be taken to ensure comfort. In warm or humid climates, maintaining thermal comfort solely via natural ventilation might not be possible. Air conditioning systems are used, either as backups or supplements. Air-side economizers also use outside air to condition spaces, but do so using fans, ducts, dampers, and control systems to introduce and distribute cool outdoor air when appropriate. An important component of natural ventilation is air change rate or air changes per hour: the hourly rate of ventilation divided by the volume of the space. For example, six air changes per hour means an amount of new air, equal to the volume of the space, is added every ten minutes. For human comfort, a minimum of four air changes per hour is typical, though warehouses might have only two. Too high of an air change rate may be uncomfortable, akin to a wind tunnel which has thousands of changes per hour. The highest air change rates are for crowded spaces, bars, night clubs, commercial kitchens at around 30 to 50 air changes per hour. Room pressure can be either positive or negative with respect to outside the room. Positive pressure occurs when there is more air being supplied than exhausted, and is common to reduce the infiltration of outside contaminants. Airborne diseases Natural ventilation is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis or COVID-19. Opening doors and windows are good ways to maximize natural ventilation, which would make the risk of airborne contagion much lower than with costly and maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection. Natural ventilation costs little and is maintenance free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion. 
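The air-change-rate arithmetic defined earlier in this section (hourly ventilation airflow divided by the space volume) can be written out directly. This is a minimal sketch with illustrative values; the room size and airflow are assumptions, not figures from the article.

```python
def air_changes_per_hour(airflow_m3_per_h: float, room_volume_m3: float) -> float:
    """Air changes per hour = hourly ventilation airflow divided by room volume."""
    return airflow_m3_per_h / room_volume_m3

room_volume = 50.0   # m^3, e.g. a 4 m x 5 m room with a 2.5 m ceiling (assumed)
airflow = 300.0      # m^3/h of outdoor air supplied (assumed)

ach = air_changes_per_hour(airflow, room_volume)
print(f"{ach:.1f} air changes per hour")                     # 6.0
print(f"one full air change every {60 / ach:.0f} minutes")   # 10 minutes
```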
Natural ventilation requires little maintenance and is inexpensive. However, it is not practical for much existing infrastructure because of climate, which means such facilities need effective mechanical ventilation systems and/or ceiling-level UV or far-UV systems. Ventilation is measured in terms of air changes per hour (ACH). As of 2023, the CDC recommends that all spaces have a minimum of 5 ACH. For hospital rooms with airborne contagions the CDC recommends a minimum of 12 ACH. The challenges in facility ventilation are public unawareness, ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency. UVC, or ultraviolet germicidal irradiation, is a feature of some modern air conditioners that reduces airborne viruses, bacteria, and fungi by means of a built-in UV LED that irradiates the evaporator. As the cross-flow fan circulates the room air, airborne microorganisms passing through the sterilization module's irradiation range are inactivated. Air conditioning An air conditioning system, or a standalone air conditioner, provides cooling and/or humidity control for all or part of a building. Air conditioned buildings often have sealed windows, because open windows would work against the system intended to maintain constant indoor air conditions. Outside fresh air is generally drawn into the system through a vent into a mixed-air chamber, where it mixes with the space return air. The mixed air then enters an indoor or outdoor heat exchanger section, where it is cooled before being guided to the space, creating positive air pressure. The percentage of return air made up of fresh air can usually be manipulated by adjusting the opening of this vent. Typical fresh air intake is about 10% of the total supply air. Air conditioning and refrigeration are provided through the removal of heat. Heat can be removed through radiation, convection, or conduction. The heat-transfer media used in refrigeration systems, such as water, air, ice, and various chemicals, are referred to as refrigerants. A refrigerant is employed either in a heat pump system, in which a compressor is used to drive a thermodynamic refrigeration cycle, or in a free cooling system that uses pumps to circulate a cool refrigerant (typically water or a glycol mix). The air conditioner's capacity must be sufficient for the area being cooled; undersized systems waste power and run inefficiently. Refrigeration cycle The refrigeration cycle uses four essential elements to cool: the compressor, condenser, metering device, and evaporator. At the inlet of the compressor, the refrigerant inside the system is in a low-pressure, low-temperature, gaseous state. The compressor pumps the refrigerant gas up to high pressure and temperature. From there it enters a heat exchanger (sometimes called a condensing coil or condenser) where it loses heat to the outside, cools, and condenses into its liquid phase. An expansion valve (also called a metering device) regulates the refrigerant liquid to flow at the proper rate. The liquid refrigerant is returned to another heat exchanger where it is allowed to evaporate, hence the heat exchanger is often called an evaporating coil or evaporator.
As the liquid refrigerant evaporates it absorbs heat from the inside air, returns to the compressor, and repeats the cycle. In the process, heat is absorbed from indoors and transferred outdoors, resulting in cooling of the building. In variable climates, the system may include a reversing valve that switches from heating in winter to cooling in summer. By reversing the flow of refrigerant, the heat pump refrigeration cycle is changed from cooling to heating or vice versa. This allows a facility to be heated and cooled by a single piece of equipment by the same means, and with the same hardware. Free cooling Free cooling systems can have very high efficiencies, and are sometimes combined with seasonal thermal energy storage so that the cold of winter can be used for summer air conditioning. Common storage mediums are deep aquifers or a natural underground rock mass accessed via a cluster of small-diameter, heat-exchanger-equipped boreholes. Some systems with small storages are hybrids, using free cooling early in the cooling season, and later employing a heat pump to chill the circulation coming from the storage. The heat pump is added-in because the storage acts as a heat sink when the system is in cooling (as opposed to charging) mode, causing the temperature to gradually increase during the cooling season. Some systems include an "economizer mode", which is sometimes called a "free-cooling mode". When economizing, the control system will open (fully or partially) the outside air damper and close (fully or partially) the return air damper. This will cause fresh, outside air to be supplied to the system. When the outside air is cooler than the demanded cool air, this will allow the demand to be met without using the mechanical supply of cooling (typically chilled water or a direct expansion "DX" unit), thus saving energy. The control system can compare the temperature of the outside air vs. return air, or it can compare the enthalpy of the air, as is frequently done in climates where humidity is more of an issue. In both cases, the outside air must be less energetic than the return air for the system to enter the economizer mode. Packaged split system Central, "all-air" air-conditioning systems (or package systems) with a combined outdoor condenser/evaporator unit are often installed in North American residences, offices, and public buildings, but are difficult to retrofit (install in a building that was not designed to receive it) because of the bulky air ducts required. (Minisplit ductless systems are used in these situations.) Outside of North America, packaged systems are only used in limited applications involving large indoor space such as stadiums, theatres or exhibition halls. An alternative to packaged systems is the use of separate indoor and outdoor coils in split systems. Split systems are preferred and widely used worldwide except in North America. In North America, split systems are most often seen in residential applications, but they are gaining popularity in small commercial buildings. Split systems are used where ductwork is not feasible or where the space conditioning efficiency is of prime concern. The benefits of ductless air conditioning systems include easy installation, no ductwork, greater zonal control, flexibility of control, and quiet operation. In space conditioning, the duct losses can account for 30% of energy consumption. The use of minisplits can result in energy savings in space conditioning as there are no losses associated with ducting. 
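Returning to the economizer ("free-cooling") decision described a few paragraphs above, the control logic can be sketched as a simple comparison of outside and return air followed by damper adjustment. This is an illustrative sketch only: the threshold test, function names, mixing fractions, and temperatures are assumptions, not part of the article or any particular controller, and a real system would often compare enthalpy rather than temperature in humid climates.

```python
def economizer_enabled(outside_temp_c: float, return_temp_c: float,
                       cooling_demanded: bool) -> bool:
    """Enable free cooling when cooling is demanded and the outside air is
    cooler (less energetic, ignoring humidity) than the return air."""
    return cooling_demanded and outside_temp_c < return_temp_c

def mixed_air_temp(outside_temp_c: float, return_temp_c: float,
                   outside_fraction: float) -> float:
    """Sensible-only mixing of the outside and return air streams."""
    return outside_fraction * outside_temp_c + (1 - outside_fraction) * return_temp_c

outside, ret = 16.0, 24.0   # degrees Celsius, assumed example values
if economizer_enabled(outside, ret, cooling_demanded=True):
    # Open the outside-air damper further, e.g. 80% outside air instead of the usual ~10%.
    print(f"economizing: mixed air {mixed_air_temp(outside, ret, 0.8):.1f} C")
else:
    print(f"mechanical cooling: mixed air {mixed_air_temp(outside, ret, 0.1):.1f} C")
```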
With the split system, the evaporator coil is connected to a remote condenser unit using refrigerant piping between an indoor and outdoor unit instead of ducting air directly from the outdoor unit. Indoor units with directional vents mount onto walls, suspended from ceilings, or fit into the ceiling. Other indoor units mount inside the ceiling cavity so that short lengths of duct handle air from the indoor unit to vents or diffusers around the rooms. Split systems are more efficient and the footprint is typically smaller than the package systems. On the other hand, package systems tend to have a slightly lower indoor noise level compared to split systems since the fan motor is located outside. Dehumidification Dehumidification (air drying) in an air conditioning system is provided by the evaporator. Since the evaporator operates at a temperature below the dew point, moisture in the air condenses on the evaporator coil tubes. This moisture is collected at the bottom of the evaporator in a pan and removed by piping to a central drain or onto the ground outside. A dehumidifier is an air-conditioner-like device that controls the humidity of a room or building. It is often employed in basements that have a higher relative humidity because of their lower temperature (and propensity for damp floors and walls). In food retailing establishments, large open chiller cabinets are highly effective at dehumidifying the internal air. Conversely, a humidifier increases the humidity of a building. The HVAC components that dehumidify the ventilation air deserve careful attention because outdoor air constitutes most of the annual humidity load for nearly all buildings. Humidification Maintenance All modern air conditioning systems, even small window package units, are equipped with internal air filters. These are generally of a lightweight gauze-like material, and must be replaced or washed as conditions warrant. For example, a building in a high dust environment, or a home with furry pets, will need to have the filters changed more often than buildings without these dirt loads. Failure to replace these filters as needed will contribute to a lower heat exchange rate, resulting in wasted energy, shortened equipment life, and higher energy bills; low air flow can result in iced-over evaporator coils, which can completely stop airflow. Additionally, very dirty or plugged filters can cause overheating during a heating cycle, which can result in damage to the system or even fire. Because an air conditioner moves heat between the indoor coil and the outdoor coil, both must be kept clean. This means that, in addition to replacing the air filter at the evaporator coil, it is also necessary to regularly clean the condenser coil. Failure to keep the condenser clean will eventually result in harm to the compressor because the condenser coil is responsible for discharging both the indoor heat (as picked up by the evaporator) and the heat generated by the electric motor driving the compressor. Energy efficiency HVAC is significantly responsible for promoting energy efficiency of buildings as the building sector consumes the largest percentage of global energy. Since the 1980s, manufacturers of HVAC equipment have been making an effort to make the systems they manufacture more efficient. This was originally driven by rising energy costs, and has more recently been driven by increased awareness of environmental issues. 
Additionally, improvements to the HVAC system efficiency can also help increase occupant health and productivity. In the US, the EPA has imposed tighter restrictions over the years. There are several methods for making HVAC systems more efficient. Heating energy In the past, water heating was more efficient for heating buildings and was the standard in the United States. Today, forced air systems can double for air conditioning and are more popular. Some benefits of forced air systems, which are now widely used in churches, schools, and high-end residences, are Better air conditioning effects Energy savings of up to 15–20% Even conditioning A drawback is the installation cost, which can be slightly higher than traditional HVAC systems. Energy efficiency can be improved even more in central heating systems by introducing zoned heating. This allows a more granular application of heat, similar to non-central heating systems. Zones are controlled by multiple thermostats. In water heating systems the thermostats control zone valves, and in forced air systems they control zone dampers inside the vents which selectively block the flow of air. In this case, the control system is very critical to maintaining a proper temperature. Forecasting is another method of controlling building heating by calculating the demand for heating energy that should be supplied to the building in each time unit. Ground source heat pump Ground source, or geothermal, heat pumps are similar to ordinary heat pumps, but instead of transferring heat to or from outside air, they rely on the stable, even temperature of the earth to provide heating and air conditioning. Many regions experience seasonal temperature extremes, which would require large-capacity heating and cooling equipment to heat or cool buildings. For example, a conventional heat pump system used to heat a building in Montana's low temperature or cool a building in the highest temperature ever recorded in the US— in Death Valley, California, in 1913 would require a large amount of energy due to the extreme difference between inside and outside air temperatures. A metre below the earth's surface, however, the ground remains at a relatively constant temperature. Utilizing this large source of relatively moderate temperature earth, a heating or cooling system's capacity can often be significantly reduced. Although ground temperatures vary according to latitude, at underground, temperatures generally only range from . Solar air conditioning Photovoltaic solar panels offer a new way to potentially decrease the operating cost of air conditioning. Traditional air conditioners run using alternating current, and hence, any direct-current solar power needs to be inverted to be compatible with these units. New variable-speed DC-motor units allow solar power to more easily run them since this conversion is unnecessary, and since the motors are tolerant of voltage fluctuations associated with variance in supplied solar power (e.g., due to cloud cover). Ventilation energy recovery Energy recovery systems sometimes utilize heat recovery ventilation or energy recovery ventilation systems that employ heat exchangers or enthalpy wheels to recover sensible or latent heat from exhausted air. This is done by transfer of energy from the stale air inside the home to the incoming fresh air from outside. Air conditioning energy The performance of vapor compression refrigeration cycles is limited by thermodynamics. 
These air conditioning and heat pump devices move heat rather than convert it from one form to another, so thermal efficiencies do not appropriately describe the performance of these devices. The Coefficient of performance (COP) measures performance, but this dimensionless measure has not been adopted. Instead, the Energy Efficiency Ratio (EER) has traditionally been used to characterize the performance of many HVAC systems. EER is the Energy Efficiency Ratio based on a outdoor temperature. To more accurately describe the performance of air conditioning equipment over a typical cooling season a modified version of the EER, the Seasonal Energy Efficiency Ratio (SEER), or in Europe the ESEER, is used. SEER ratings are based on seasonal temperature averages instead of a constant outdoor temperature. The current industry minimum SEER rating is 14 SEER. Engineers have pointed out some areas where efficiency of the existing hardware could be improved. For example, the fan blades used to move the air are usually stamped from sheet metal, an economical method of manufacture, but as a result they are not aerodynamically efficient. A well-designed blade could reduce the electrical power required to move the air by a third. Demand-controlled kitchen ventilation Demand-controlled kitchen ventilation (DCKV) is a building controls approach to controlling the volume of kitchen exhaust and supply air in response to the actual cooking loads in a commercial kitchen. Traditional commercial kitchen ventilation systems operate at 100% fan speed independent of the volume of cooking activity and DCKV technology changes that to provide significant fan energy and conditioned air savings. By deploying smart sensing technology, both the exhaust and supply fans can be controlled to capitalize on the affinity laws for motor energy savings, reduce makeup air heating and cooling energy, increasing safety, and reducing ambient kitchen noise levels. Air filtration and cleaning Air cleaning and filtration removes particles, contaminants, vapors and gases from the air. The filtered and cleaned air then is used in heating, ventilation, and air conditioning. Air cleaning and filtration should be taken in account when protecting our building environments. If present, contaminants can come out from the HVAC systems if not removed or filtered properly. Clean air delivery rate (CADR) is the amount of clean air an air cleaner provides to a room or space. When determining CADR, the amount of airflow in a space is taken into account. For example, an air cleaner with a flow rate of per minute and an efficiency of 50% has a CADR of per minute. Along with CADR, filtration performance is very important when it comes to the air in our indoor environment. This depends on the size of the particle or fiber, the filter packing density and depth, and the airflow rate. Circulation of harmful substances Poorly maintained air conditioners/ventilation systems can harbor mold, bacteria, and other contaminants, which are then circulated throughout indoor spaces, contributing to ... Industry and standards The HVAC industry is a worldwide enterprise, with roles including operation and maintenance, system design and construction, equipment manufacturing and sales, and in education and research. 
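Two of the figures of merit discussed above lend themselves to quick arithmetic: EER can be converted to the dimensionless COP (since one watt equals about 3.412 BTU/h), and clean air delivery rate is simply the airflow through the cleaner times its single-pass removal efficiency. The sketch below is illustrative; the numeric inputs are assumptions, not ratings quoted in the article.

```python
BTU_PER_HOUR_PER_WATT = 3.412   # 1 W of power equals about 3.412 BTU/h

def eer_to_cop(eer: float) -> float:
    """Convert EER (BTU/h of cooling per watt of input power) to the dimensionless COP."""
    return eer / BTU_PER_HOUR_PER_WATT

def clean_air_delivery_rate(airflow: float, efficiency: float) -> float:
    """CADR: airflow through the air cleaner times its single-pass removal efficiency."""
    return airflow * efficiency

print(f"COP  = {eer_to_cop(12.0):.2f}")                      # an EER of 12 corresponds to a COP of about 3.5
print(f"CADR = {clean_air_delivery_rate(10.0, 0.5):.1f}")    # 10 airflow units at 50% efficiency -> 5.0
```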
The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as HARDI (Heating, Air-conditioning and Refrigeration Distributors International), ASHRAE, SMACNA, ACCA (Air Conditioning Contractors of America), Uniform Mechanical Code, International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement. (UL as an omnibus agency is not specific to the HVAC industry.) The starting point in carrying out an estimate both for cooling and heating depends on the exterior climate and interior specified conditions. However, before taking up the heat load calculation, it is necessary to find fresh air requirements for each area in detail, as pressurization is an important consideration. International ISO 16813:2006 is one of the ISO building environment standards. It establishes the general principles of building environment design. It takes into account the need to provide a healthy indoor environment for the occupants as well as the need to protect the environment for future generations and promote collaboration among the various parties involved in building environmental design for sustainability. ISO16813 is applicable to new construction and the retrofit of existing buildings. The building environmental design standard aims to: provide the constraints concerning sustainability issues from the initial stage of the design process, with building and plant life cycle to be considered together with owning and operating costs from the beginning of the design process; assess the proposed design with rational criteria for indoor air quality, thermal comfort, acoustical comfort, visual comfort, energy efficiency, and HVAC system controls at every stage of the design process; iterate decisions and evaluations of the design throughout the design process. United States Licensing In the United States, federal licensure is generally handled by EPA certified (for installation and service of HVAC devices). Many U.S. states have licensing for boiler operation. Some of these are listed as follows: Arkansas Georgia Michigan Minnesota Montana New Jersey North Dakota Ohio Oklahoma Oregon Finally, some U.S. cities may have additional labor laws that apply to HVAC professionals. Societies Many HVAC engineers are members of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE regularly organizes two annual technical committees and publishes recognized standards for HVAC design, which are updated every four years. Another popular society is AHRI, which provides regular information on new refrigeration technology, and publishes relevant standards and codes. Codes Codes such as the UMC and IMC do include much detail on installation requirements, however. Other useful reference materials include items from SMACNA, ACGIH, and technical trade journals. American design standards are legislated in the Uniform Mechanical Code or International Mechanical Code. In certain states, counties, or cities, either of these codes may be adopted and amended via various legislative processes. These codes are updated and published by the International Association of Plumbing and Mechanical Officials (IAPMO) or the International Code Council (ICC) respectively, on a 3-year code development cycle. Typically, local building permit departments are charged with enforcement of these standards on private and certain public properties. 
Technicians An HVAC technician is a tradesman who specializes in heating, ventilation, air conditioning, and refrigeration. HVAC technicians in the US can receive training through formal training institutions, where most earn associate degrees. Training for HVAC technicians includes classroom lectures and hands-on tasks, and can be followed by an apprenticeship wherein the recent graduate works alongside a professional HVAC technician for a temporary period. HVAC techs who have been trained can also be certified in areas such as air conditioning, heat pumps, gas heating, and commercial refrigeration. United Kingdom The Chartered Institution of Building Services Engineers is a body that covers the essential building services that allow buildings to operate. It includes the electrotechnical, heating, ventilating, air conditioning, refrigeration and plumbing industries. To train as a building services engineer, the academic requirements are GCSEs (A-C) / Standard Grades (1-3) in Maths and Science, which are important in measurements, planning and theory. Employers will often want a degree in a branch of engineering, such as building environment engineering, electrical engineering or mechanical engineering. To become a full member of CIBSE, and so also to be registered by the Engineering Council UK as a chartered engineer, engineers must also attain an Honours Degree and a master's degree in a relevant engineering subject. CIBSE publishes several guides to HVAC design relevant to the UK market, and also the Republic of Ireland, Australia, New Zealand and Hong Kong. These guides include various recommended design criteria and standards, some of which are cited within the UK building regulations, and therefore form a legislative requirement for major building services works. The main guides are: Guide A: Environmental Design Guide B: Heating, Ventilating, Air Conditioning and Refrigeration Guide C: Reference Data Guide D: Transportation systems in Buildings Guide E: Fire Safety Engineering Guide F: Energy Efficiency in Buildings Guide G: Public Health Engineering Guide H: Building Control Systems Guide J: Weather, Solar and Illuminance Data Guide K: Electricity in Buildings Guide L: Sustainability Guide M: Maintenance Engineering and Management Within the construction sector, it is the job of the building services engineer to design and oversee the installation and maintenance of the essential services such as gas, electricity, water, heating and lighting, as well as many others. These all help to make buildings comfortable and healthy places to live and work in. Building services is part of a sector that comprises over 51,000 businesses and represents 2–3% of GDP. Australia The Air Conditioning and Mechanical Contractors Association of Australia (AMCA), the Australian Institute of Refrigeration, Air Conditioning and Heating (AIRAH), the Australian Refrigeration Mechanical Association and CIBSE are the responsible industry bodies. Asia Asian architectural temperature-control methods have different priorities than European methods. For example, Asian heating traditionally focuses on maintaining temperatures of objects such as the floor or furnishings such as kotatsu tables and directly warming people, as opposed to the Western focus, in modern periods, on designing air systems.
Philippines The Philippine Society of Ventilating, Air Conditioning and Refrigerating Engineers (PSVARE), along with the Philippine Society of Mechanical Engineers (PSME), governs the codes and standards for HVAC / MVAC (MVAC means "mechanical ventilation and air conditioning") in the Philippines. India The Indian Society of Heating, Refrigerating and Air Conditioning Engineers (ISHRAE) was established to promote the HVAC industry in India. ISHRAE is an associate of ASHRAE. ISHRAE was founded in New Delhi in 1981 and a chapter was started in Bangalore in 1989. Between 1989 and 1993, ISHRAE chapters were formed in all major cities in India.
Technology
Other components
null
57174
https://en.wikipedia.org/wiki/Rib%20cage
Rib cage
The rib cage or thoracic cage is an endoskeletal enclosure in the thorax of most vertebrates that comprises the ribs, vertebral column and sternum, which protect the vital organs of the thoracic cavity, such as the heart, lungs and great vessels, and support the shoulder girdle to form the core part of the axial skeleton. A typical human thoracic cage consists of 12 pairs of ribs and the adjoining costal cartilages, the sternum (along with the manubrium and xiphoid process), and the 12 thoracic vertebrae articulating with the ribs. The thoracic cage also provides attachments for extrinsic skeletal muscles of the neck, upper limbs, upper abdomen and back, and together with the overlying skin and associated fascia and muscles, makes up the thoracic wall. In tetrapods, the rib cage intrinsically holds the muscles of respiration (diaphragm, intercostal muscles, etc.) that are crucial for active inhalation and forced exhalation, and therefore has a major ventilatory function in the respiratory system. Structure There are thirty-three vertebrae in the human vertebral column. The rib cage is associated with the thoracic vertebrae T1–T12. Ribs are described based on their location and connection with the sternum. All ribs are attached posteriorly to the thoracic vertebrae and are numbered accordingly one to twelve. Ribs that articulate directly with the sternum are called true ribs, whereas those that do not articulate directly are termed false ribs. The false ribs include the floating ribs (eleven and twelve) that are not attached to the sternum at all. Attachment The terms true ribs and false ribs describe rib pairs that are directly or indirectly attached to the sternum respectively. The first seven rib pairs, known as the fixed or vertebrosternal ribs, are the true ribs, as they connect directly to the sternum via their own individual costal cartilages. The next five pairs (eighth to twelfth) are the false ribs, which do not connect directly to the sternum. The first three of these pairs (eighth to tenth), the vertebrochondral ribs, connect indirectly to the sternum via the costal cartilages of the ribs above them, and the overall elasticity of their articulations allows the bucket handle movements of the rib cage essential for respiratory activity. The phrase floating rib or vertebral rib refers to the two lowermost (the eleventh and twelfth) rib pairs, so called because they are attached only to the vertebrae and not to the sternum or any of the costal cartilages. These ribs are relatively small and delicate, and include a cartilaginous tip. The spaces between the ribs are known as intercostal spaces; they contain the intrinsic intercostal muscles and the neurovascular bundles containing intercostal nerves, arteries and veins. The superficial surface of the rib cage is covered by the thoracolumbar fascia, which provides external attachments for the neck, back, pectoral and abdominal muscles. Parts of rib Each rib consists of a head, neck, and a shaft. All ribs are attached posteriorly to the thoracic vertebrae. They are numbered to match the vertebrae they attach to – one to twelve, from top (T1) to bottom. The head of the rib is the end part closest to the vertebra with which it articulates. It is marked by a kidney-shaped articular surface which is divided by a horizontal crest into two articulating regions. The upper region articulates with the inferior costal facet on the vertebra above, and the larger region articulates with the superior costal facet on the vertebra with the same number.
The transverse process of a thoracic vertebra also articulates at the transverse costal facet with the tubercle of the rib of the same number. The crest gives attachment to the intra-articular ligament. The neck of the rib is the flattened part that extends laterally from the head. The neck is about 3 cm long. Its anterior surface is flat and smooth, whilst its posterior is perforated by numerous foramina and its surface rough, to give attachment to the ligament of the neck. Its upper border presents a rough crest (crista colli costae) for the attachment of the anterior costotransverse ligament; its lower border is rounded. On the posterior surface at the neck, is an eminence—the tubercle that consists of an articular and a non-articular portion. The articular portion is the lower and more medial of the two and presents a small, oval surface for articulation with the transverse costal facet on the end of the transverse process of the lower of the two vertebrae to which the head is connected. The non-articular portion is a rough elevation and affords attachment to the ligament of the tubercle. The tubercle is much more prominent in the upper ribs than in the lower ribs. The angle of a rib (costal angle) may both refer to the bending part of it, and a prominent line in this area, a little in front of the tubercle. This line is directed downward and laterally; this gives attachment to a tendon of the iliocostalis muscle. At this point, the rib is bent in two directions, and at the same time twisted on its long axis. The distance between the angle and the tubercle is progressively greater from the second to the tenth ribs. The area between the angle and the tubercle is rounded, rough, and irregular, and serves for the attachment of the longissimus dorsi muscle. Bones Ribs and vertebrae The first rib (the topmost one) is the most curved and usually the shortest of all the ribs; it is broad and flat, its surfaces looking upward and downward, and its borders inward and outward. The head is small and rounded, and possesses only a single articular facet, for articulation with the body of the first thoracic vertebra. The neck is narrow and rounded. The tubercle, thick and prominent, is placed on the outer border. It bears a small facet for articulation with the transverse costal facet on the transverse process of T1. There is no angle, but at the tubercle, the rib is slightly bent, with the convexity upward, so that the head of the bone is directed downward. The upper surface of the body is marked by two shallow grooves, separated from each other by a slight ridge prolonged internally into a tubercle, the scalene tubercle, for the attachment of the anterior scalene; the anterior groove transmits the subclavian vein, the posterior the subclavian artery and the lowest trunk of the brachial plexus. Behind the posterior groove is a rough area for the attachment of the medial scalene. The under surface is smooth and without a costal groove. The outer border is convex, thick, and rounded, and at its posterior part gives attachment to the first digitation of the serratus anterior. The inner border is concave, thin, and sharp, and marked about its center by the scalene tubercle. The anterior extremity is larger and thicker than that of any of the other ribs. The second rib is the second uppermost rib in humans or second most frontal in animals that walk on four limbs. 
In humans, the second rib is defined as a true rib since it connects with the sternum through the intervention of the costal cartilage anteriorly (at the front). Posteriorly, the second rib is connected with the vertebral column by the second thoracic vertebra. The second rib is much longer than the first rib, but has a very similar curvature. The non-articular portion of the tubercle is occasionally only feebly marked. The angle is slight and situated close to the tubercle. The body is not twisted so that both ends touch any plane surface upon which it may be laid; but there is a bend, with its convexity upward, similar to, though smaller than that found in the first rib. The body is not flattened horizontally like that of the first rib. Its external surface is convex, and looks upward and a little outward; near the middle of it is a rough eminence for the origin of the lower part of the first and the whole of the second digitation of the serratus anterior; behind and above this is attached the posterior scalene. The internal surface, smooth, and concave, is directed downward and a little inward: on its posterior part there is a short costal groove between the ridge of the internal surface of the rib and the inferior border. It protects the intercostal space containing the intercostal veins, intercostal arteries, and intercostal nerves. The ninth rib has a frontal part at the same level as the first lumbar vertebra. This level is called the transpyloric plane, since the pylorus is also at this level. The tenth rib attaches directly to the body of vertebra T10 instead of between vertebrae like the second through ninth ribs. Due to this direct attachment, vertebra T10 has a complete costal facet on its body. The eleventh and twelfth ribs, the floating ribs, have a single articular facet on the head, which is of rather large size. They have no necks or tubercles, and are pointed at their anterior ends. The eleventh has a slight angle and a shallow costal groove, whereas the twelfth does not. The twelfth rib is much shorter than the eleventh rib, and only has a one articular facet. Sternum The sternum is a long, flat bone that forms the front of the rib cage. The cartilages of the top seven ribs (the true ribs) join with the sternum at the sternocostal joints. The costal cartilage of the second rib articulates with the sternum at the sternal angle making it easy to locate. The manubrium is the wider, superior portion of the sternum. The top of the manubrium has a shallow, U-shaped border called the jugular (suprasternal) notch. The clavicular notch is the shallow depression located on either side at the superior-lateral margins of the manubrium. This is the site of the sternoclavicular joint, between the sternum and clavicle. The first ribs also attach to the manubrium. The transversus thoracis muscle is innervated by one of the intercostal nerves and superiorly attaches at the posterior surface of the lower sternum. Its inferior attachment is the internal surface of costal cartilages two through six and works to depress the ribs. Development Expansion of the rib cage in males is caused by the effects of testosterone during puberty. Thus, males generally have broad shoulders and expanded chests, allowing them to inhale more air to supply their muscles with oxygen. The development of the rib cage is influenced by a combination of genetic and environmental factors, as well as specific stages of embryonic growth. 
Genetic factors play a critical role, with specific genes regulating the formation of bones and cartilage to ensure the proper development and alignment of the ribs and sternum. During the embryonic stage, the rib cage begins to form from the mesoderm, one of the three primary germ layers. Ribs develop from structures called somites, which later segment into vertebrae and ribs. Initially, the ribs are composed of cartilage, which gradually ossifies into bone through a process known as endochondral ossification. As the embryo grows, the ribs elongate and differentiate into three types: true ribs, which attach directly to the sternum; false ribs, which connect to the sternum via cartilage; and floating ribs, which do not attach to the sternum. Additionally, environmental factors such as maternal health, nutrition, and exposure to certain substances can impact rib cage development. For instance, deficiencies in essential nutrients like calcium and vitamin D may hinder proper bone growth and development. Together, these genetic, developmental, and environmental influences ensure the formation of a functional rib cage. Variation Variations in the number of ribs occur. About 1 in 200–500 people have an additional cervical rib, and there is a female predominance. Intrathoracic supernumerary ribs are extremely rare. The rib remnant of the 7th cervical vertebra on one or both sides is occasionally replaced by a free extra rib called a cervical rib, which can mechanically interfere with the nerves (brachial plexus) going to the arm. In several ethnic groups, most significantly the Japanese, the tenth rib is sometimes a floating rib, as it lacks a cartilaginous connection to the seventh rib. Function The human rib cage is a component of the human respiratory system. It encloses the thoracic cavity, which contains the lungs. An inhalation is accomplished when the muscular diaphragm, at the floor of the thoracic cavity, contracts and flattens, while the contraction of intercostal muscles lift the rib cage up and out. Expansion of the thoracic cavity is driven in three planes; the vertical, the anteroposterior and the transverse. The vertical plane is extended by the help of the diaphragm contracting and the abdominal muscles relaxing to accommodate the downward pressure that is supplied to the abdominal viscera by the diaphragm contracting. A greater extension can be achieved by the diaphragm itself moving down, rather than simply the domes flattening. The second plane is the anteroposterior and this is expanded by a movement known as the 'pump handle'. The downward sloping nature of the upper ribs are as such because they enable this to occur. When the external intercostal muscles contract and lift the ribs, the upper ribs are able also to push the sternum up and out. This movement increases the anteroposterior diameter of the thoracic cavity, and hence aids breathing further. The third, transverse, plane is primarily expanded by the lower ribs (some say it is the 7th to 10th ribs in particular), with the diaphragm's central tendon acting as a fixed point. When the diaphragm contracts, the ribs are able to evert (meaning turn outwards or inside out) and produce what is known as the bucket handle movement, facilitated by gliding at the costovertebral joints. In this way, the transverse diameter is expanded and the lungs can fill. The circumference of the normal adult human rib cage expands by 3 to 5 cm during inhalation. Clinical significance Rib fractures are the most common injury to the rib cage. 
These most frequently affect the middle ribs. When several adjacent ribs incur two or more fractures each, this can result in a flail chest, which is a life-threatening condition. A dislocated rib can be painful and can be caused simply by coughing, or for example by trauma or lifting heavy weights. One or more costal cartilages can become inflamed – a condition known as costochondritis; the resulting pain is similar to that of a heart attack. Abnormalities of the rib cage include pectus excavatum ("sunken chest") and pectus carinatum ("pigeon chest"). A bifid rib is a bifurcated rib, split towards the sternal end, and usually just affecting one of the ribs of a pair. It is a congenital defect affecting about 1.2% of the population. It is often without symptoms, though respiratory difficulties and other problems can arise. Rib removal is the surgical removal of one or more ribs for therapeutic or cosmetic reasons. Rib resection is the removal of part of a rib. Regeneration The ability of the human rib to regenerate itself has been appreciated for some time. However, the repair has only been described in a few case reports. The phenomenon has been appreciated particularly by craniofacial surgeons, who use both cartilage and bone material from the rib for ear, jaw, face, and skull reconstruction. The perichondrium and periosteum are fibrous sheaths of vascular connective tissue surrounding the rib cartilage and bone respectively. These tissues contain a source of progenitor stem cells that drive regeneration. Society and culture The position of ribs can be permanently altered by a form of body modification called tightlacing, which uses a corset to compress and move the ribs. The ribs, particularly their sternal ends, are used as a way of estimating age in forensic pathology due to their progressive ossification. Biblical story The number of ribs as 24 (12 pairs) was noted by the Flemish anatomist Vesalius in his key work of anatomy De humani corporis fabrica in 1543, setting off a wave of controversy, as it was traditionally assumed from the Biblical story of Adam and Eve that men's ribs would number one fewer than women's. However, thirteenth ribs, or "cervical ribs", occur in 1% of humans, and this is more common in females than in males. Other animals In herpetology, costal grooves refer to lateral indents along the integument of salamanders. The grooves run from the axilla to the groin. Each groove overlies the myotomal septa to mark the position of the internal rib. Birds and reptiles have bony uncinate processes on their ribs that project caudally from the vertical section of each rib. These serve to attach sacral muscles and also aid in allowing greater inspiration. Crocodiles have cartilaginous uncinate processes.
Biology and health sciences
Skeletal system
Biology
57176
https://en.wikipedia.org/wiki/Nunchaku
Nunchaku
The nunchaku is a traditional East Asian martial arts weapon consisting of two sticks (traditionally made of wood), connected to each other at their ends by a short metal chain or a rope. It is approximately (sticks) and (rope). A person who has practiced using this weapon is referred to in Japanese as . The nunchaku is most widely used in Southern Chinese kung fu, Okinawan kobudo and karate. It is intended to be used as a training weapon, since practicing with it enables the development of quick hand movements and improves posture. Modern nunchaku may be made of metal, plastic, or fiberglass instead of the traditional wood. Toy versions and replicas not intended to be used as weapons may be made of polystyrene foam or plastic. Possession of this weapon is illegal in some countries, except for use in professional martial arts schools. The origin of the nunchaku is unclear. One traditional explanation holds that it was originally invented by Emperor Taizu of Song, as a weapon utilised in war, initially named Grand Ancestor Coiling Dragon Staff (大小盤龍棍/太祖盤龍棍, taai3 zo2 pun4 lung4 gwan3/taai3 zo2 pun4 lung4 gwan3). Another weapon, called the tabak-toyok, native to the northern Philippines, is constructed very similarly, suggesting that it and the nunchaku descended from the same instrument. In modern times, the nunchaku and the tabak-toyok were popularized by the actor and martial artist Bruce Lee and by Dan Inosanto. Lee famously used nunchaku in several scenes in the 1972 film Fist of Fury. When Tadashi Yamashita worked with Bruce Lee on the 1973 film Enter the Dragon, he enabled Lee to further explore the use of the nunchaku and other kobudo disciplines. The nunchaku is also the signature weapon of the character Michelangelo in the Teenage Mutant Ninja Turtles franchise. In addition, the nunchaku is used in certain contact sports. Etymology The Ryukyuan word likely originated from the Min Chinese word "nng chat kun" (兩節棍). Another name for this weapon is "nūchiku". In the English language, nunchaku are often referred to as "nunchuks". It is a variant of a word from the Okinawan language, which itself may come from a Min Chinese word for a farming tool, neng-cak. Origins The first written record of a nunchaku-like weapon appears in a Chinese military compendium compiled during the Northern Song dynasty: "鐵鏈夾棒,本出西戎,馬上用之,以敵漢之步兵。其狀如農家打麥之枷,以鐵飾之,利於自上擊下,故漢兵善用者巧於戎人。" Translation: "Two sticks connected by a metal chain, originally from the Xirong, used on horseback in combat against Han infantry, shaped similarly to the flails used by farmers to thresh wheat, iron-decorated, well suited to striking downward from above; Han soldiers who mastered it could wield it with greater skill than the Xirong." One popular belief is that the nunchaku in its contemporary form was originally a short South-East Asian flail. A near variant to the nunchaku called the tabak-toyok exists in the northern Philippines, which was used to thresh rice or soybeans. Alternative theories are that it was originally developed from an Okinawan horse bit (muge) or from a wooden clapper called hyoshiki carried by the village night watch, made of two blocks of wood joined by a cord. The night watch would hit the blocks of wood together to attract people's attention, then warn them about fires and other dangers.
An oft-repeated claim is that the nunchaku and other Okinawan weapons were tools adapted for use as weapons by peasants who were forbidden from possessing conventional weapons, but available academic sources suggest this is likely a romantic exaggeration created by 20th century martial arts schools. Martial arts in Okinawa were practiced exclusively by the aristocracy (kazoku) and "serving nobles" (shizoku), but were prohibited among commoners (heimin). Parts Ana: the hole on the kontoh of each handle for the himo to pass through—only nunchaku that are connected by himo have an ana. Himo: the rope which connects the two handles of some nunchaku. Kusari: the chain which connects the two handles of some nunchaku. Kontoh: the top of each handle. Jukon-bu: the upper area of the handle. Chukon-bu: the center part of the handle. Kikon-bu: the lower part of the handle. Kontei: the bottom of the handle. Construction Nunchaku consist of two sections of wood connected by a cord or chain, though variants may include additional sections of wood and chain. In China, the striking stick is called "dragon stick" ("龍棍"), while the handle is called "yang stick" ("陽棍"). The rounded nunchaku is comparatively heavy and used for training, whereas the octagonal nunchaku is used for combat. Ideally, each piece should be long enough to protect the forearm when held in a high grip near the top of the shaft. Both ends are usually of equal length, although asymmetrical nunchaku exist that are closer to a traditional flail. The ideal length of the connecting rope or chain is just long enough to allow the user to lay it over his or her palm, with the sticks hanging comfortably and perpendicular to the ground. The weapon should be properly balanced in terms of weight. Cheaper or gimmicky nunchaku (such as glow-in-the-dark versions) are often not properly balanced, which prevents the performer from performing the more advanced and flashier "low-grip" moves, such as overhand twirls. The weight should be balanced towards the outer edges of the sticks for maximum ease and control of the swing arcs. Traditional nunchaku are made from a strong, flexible hardwood such as oak, loquat or pasania. Formal styles The nunchaku is most commonly used in Okinawan kobudō and karate, but it is also used in Korean hapkido and eskrima. (More accurately, the Tabak-Toyok, a similar though distinct Philippine weapon, is used, not the Okinawan nunchaku). Its application is different in each style. The traditional Okinawan forms use the sticks primarily to grip and lock. Filipino martial artists use it much the same way they would wield a stick: striking is given precedence. Korean systems combine offensive and defensive moves, so both locks and strikes are taught. Other proprietary systems of Nunchaku are also used in Sembkalah (Iranian Monolingual Combat Style), which makes lethal blows in defense and assault. Nunchaku is often the first weapon wielded by a student, to teach self-restraint and posture, as the weapon is liable to hit the wielder more than the opponent if not used properly. The Nunchaku is usually wielded in one hand, but it can also be dual wielded. It can be whirled around, using its hardened handles for blunt force, as well as wrapping its chain around an attacking weapon to immobilize or disarm an opponent. Nunchaku training has been noted to increase hand speed, improve posture, and condition the hands of the practitioner. Therefore, it makes a useful training weapon. 
Freestyle Freestyle nunchaku is a modern style of performance art using nunchaku as a visual tool, rather than as a weapon. With the growing prevalence of the Internet, the availability of nunchaku has greatly increased. In combination with the popularity of video-sharing sites, many people have become interested in learning how to use the weapons for freestyle displays. Freestyle is one discipline of competition held by the World Nunchaku Association. Some modern martial arts teach the use of nunchaku, as it may help students improve their reflexes, hand control, and other skills. Legality In a number of countries, possession of nunchaku is illegal, or the nunchaku is defined as a regulated weapon. These bans largely came after the wave of popularity of Bruce Lee films. Norway, Canada, Russia, Poland, Chile, and Spain are all known to have significant restrictions. In Germany, nunchaku have been illegal since April 2006, when they were declared a strangling weapon. In England and Wales, public possession of nunchaku is heavily restricted by the Prevention of Crime Act 1953 and the Criminal Justice Act 1988. However, nunchaku are not included in the list of weapons whose sale and manufacture is prohibited by Schedule 1 of the Criminal Justice Act 1988 (Offensive Weapons) Order 1988 and are traded openly (subject to age restrictions). In Scotland, laws restricting offensive weapons are similar to those of England and Wales. However, in a case in 2010, Glasgow Sheriff Court refused to accept a defence submission that nunchaku were not explicitly prohibited weapons under Scottish law, although the defendants were acquitted on other grounds. The use of nunchaku was, in the 1980s and 1990s, censored from UK rebroadcasts of American children's TV shows such as ThunderCats and Teenage Mutant Ninja Turtles cartoons and films. The UK version of ThunderCats edited out the nunchaku used by the character Panthro. In Teenage Mutant Ninja Turtles, scenes showing Michelangelo's nunchaku were cut until the weapon was eventually replaced with a grappling hook. The UK version of the video game Soul Blade was also edited, replacing the character Li Long's nunchaku with a three-sectioned staff. In Hong Kong, it is illegal to possess metal or wooden nunchaku connected by a chain, though one can obtain a license from the police as a martial arts instructor, and rubber nunchaku are still allowed. Possession of nunchaku in mainland China is legal. In Australia, legality varies by state. In New South Wales, the weapon is on the restricted weapons list and, thus, can only be owned with a permit. In the United States, the popularity of Bruce Lee movies in the 1970s led to a wave of state-level nunchaku bans in New York, Arizona, California, and Massachusetts. Only the Massachusetts ban remains, but other state laws and local ordinances continue to prohibit carrying nunchaku in specific situations, such as on school grounds or in government facilities, or if carrying in public as a concealed weapon. New York's nunchaku ban was ruled unconstitutional in the 2018 case Maloney v. Singas. The state of Arizona previously considered nunchaku to be a "prohibited weapon", making mere possession illegal, with the sole exception of nunchaku-like objects that are manufactured for use as illumination devices. A constitutional challenge failed, but Arizona legalized nunchaku in 2019. California prohibited nunchaku with exceptions for professional martial arts schools and practitioners, but the ban was repealed in 2021.
This leaves Massachusetts as the only US state with a nunchaku ban. Massachusetts law classifies nunchucks as "dangerous weapons", with an exemption for use in martial arts, and anyone found carrying them without proper authorization may face criminal charges. Law enforcement use Nunchaku have been employed by a few American police departments for decades, especially after the popular Bruce Lee movies of the 1970s. For instance, in 2015, police in the small town of Anderson, California, were trained and equipped to use nunchaku as a form of non-lethal force. They were selected because of their utility as both a striking weapon and a control tool. The Orcutt Police Nunchaku (OPN) was adopted by more than 200 law enforcement agencies in the US. Although it could be used as a striking weapon, it was mainly applied as a grappling implement on the wrists and ankles for pain compliance. It was effective in that role, but improper use was associated with injuries such as wrist and limb fractures, which led to it being phased out; tasers have since become the preferred non-lethal weapon for most departments. Notable organizations World Nunchaku Association Hong Kong Nunchaku Association Nunchaku Association of India Nunchaku Sport India Association I.R. Iran Nunchaku Association Ken-Fu Nunchaku American Style Nunchaku North American Nunchaku Association
Technology
Melee weapons
null
57212
https://en.wikipedia.org/wiki/TGV
TGV
The TGV (; , , 'high-speed train') is France's intercity high-speed rail service. With commercial operating speeds of up to on the newer lines, the TGV was conceived at the same period as other technological projects such as the Ariane 1 rocket and Concorde supersonic airliner; sponsored by the Government of France, those funding programmes were known as ('national champion') policies. In 2023 the TGV network in France carried 122 million passengers. The state-owned SNCF started working on a high-speed rail network in 1966. It presented the project to President Georges Pompidou in 1974 who approved it. Originally designed as turbotrains to be powered by gas turbines, TGV prototypes evolved into electric trains with the 1973 oil crisis. In 1976 the SNCF ordered 87 high-speed trains from Alstom. Following the inaugural service between Paris and Lyon in 1981 on the LGV Sud-Est, the network, centred on Paris, has expanded to connect major cities across France, including Marseille, Lille, Bordeaux, Strasbourg, Rennes and Montpellier, as well as in neighbouring countries on a combination of high-speed and conventional lines. The success of the first high-speed service led to a rapid development of lignes à grande vitesse (LGVs, 'high-speed lines') to the south (Rhône-Alpes, Méditerranée, Nîmes–Montpellier), west (Atlantique, Bretagne-Pays de la Loire, Sud Europe Atlantique), north (Nord, Interconnexion Est) and east (Rhin-Rhône, Est). Since it was launched, the TGV has not recorded a single passenger fatality in an accident on normal, high-speed service. A specially modified TGV high-speed train known as Project V150, weighing only 265 tonnes, set the world record for the fastest wheeled train, reaching during a test run on 3 April 2007. In 2007, the world's fastest scheduled rail journey was a start-to-stop average speed of between the Gare de Champagne-Ardenne and Gare de Lorraine on the LGV Est, not surpassed until the 2013 reported average of express service on the Shijiazhuang to Zhengzhou segment of China's Shijiazhuang–Wuhan high-speed railway. During the engineering phase, the transmission voie-machine (TVM) cab-signalling technology was developed, as drivers would not be able to see signals along the track-side when trains reach full speed. It allows for a train engaging in an emergency braking to request within seconds all following trains to reduce their speed; if a driver does not react within , the system overrides the controls and reduces the train's speed automatically. The TVM safety mechanism enables TGVs using the same line to depart every three minutes. The TGV system itself extends to neighbouring countries, either directly (Italy, Spain, Belgium, Luxembourg and Germany) or through TGV-derivative networks linking France to Switzerland (Lyria), to Belgium, Germany and the Netherlands (former Thalys), as well as to the United Kingdom (Eurostar). Several future lines are under construction or planned, including extensions within France and to surrounding countries. The Mont d'Ambin Base Tunnel, part of the LGV Lyon–Turin that is currently under construction, is set to become the longest rail tunnel in the world. Cities such as Tours and Le Mans have become part of a "TGV commuter belt" around Paris; the TGV also serves Charles de Gaulle Airport and Lyon–Saint-Exupéry Airport. A visitor attraction in itself, it stops at Disneyland Paris and in southern tourist cities such as Avignon and Aix-en-Provence as well. 
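To make the TVM cab-signalling behaviour described above more concrete (a following train receives a reduced speed target and, if the driver does not react in time, the system brakes automatically), here is a minimal, purely illustrative Python sketch; the values and names are assumptions for demonstration and are not SNCF specifications.

```python
from dataclasses import dataclass


@dataclass
class CabSignal:
    target_speed_kmh: float    # speed target transmitted to the cab by the track
    driver_acknowledged: bool  # whether the driver reacted within the allowed time


def enforced_speed(current_speed_kmh: float, signal: CabSignal) -> float:
    """Return the speed the train is held to under the cab signal."""
    if current_speed_kmh <= signal.target_speed_kmh:
        return current_speed_kmh        # already within the restriction
    if not signal.driver_acknowledged:
        return signal.target_speed_kmh  # automatic override: brake to the target
    return current_speed_kmh            # driver is handling the braking manually


# Hypothetical case: a following train at 300 km/h receives a 170 km/h target
# and the driver fails to react, so the system enforces 170 km/h.
print(enforced_speed(300.0, CabSignal(target_speed_kmh=170.0, driver_acknowledged=False)))
```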
Brest, Chambéry, Nice, Toulouse and Biarritz are reachable by TGVs running on a mix of LGVs and modernised lines. In 2007, the SNCF generated profits of €1.1 billion (approximately US$1.75 billion, £875 million) driven largely by higher margins on the TGV network. History The idea of the TGV was first proposed in the 1960s, after Japan had begun construction of the Shinkansen in 1959. At the time the Government of France favoured new technology, exploring the production of hovercraft and the Aérotrain air-cushion vehicle. Simultaneously, the SNCF began researching high-speed trains on conventional tracks. In 1976, the administration agreed to fund the first line. By the mid-1990s, the trains were so popular that SNCF president Louis Gallois declared that the TGV was "the train that saved French railways". Development It was originally planned that the TGV, then standing for ('very high speed') or ('high-speed turbine'), would be propelled by gas turbines, selected for their small size, good power-to-weight ratio and ability to deliver high power over an extended period. The first prototype, TGV 001, was the only gas-turbine TGV: following the increase in the price of oil during the 1973 energy crisis, gas turbines were deemed uneconomic and the project turned to electricity from overhead lines, generated by new nuclear power stations. TGV 001 was not a wasted prototype: its gas turbine was only one of its many new technologies for high-speed rail travel. It also tested high-speed brakes, needed to dissipate the large amount of kinetic energy of a train at high speed, high-speed aerodynamics, and signalling. It was articulated, comprising two adjacent carriages sharing a bogie, allowing free yet controlled motion with respect to one another. It reached , which remains the world speed record for a non-electric train. Its interior and exterior were styled by French designer Jacques Cooper, whose work formed the basis of early TGV designs, including the distinctive nose shape of the first power cars. Changing the TGV to electric traction required a significant design overhaul. The first electric prototype, nicknamed Zébulon, was completed in 1974, testing features such as innovative body mounting of motors, pantographs, suspension and braking. Body mounting of motors allowed over 3 tonnes to be eliminated from the power cars and greatly reduced the unsprung weight. The prototype travelled almost during testing. In 1976, the French administration funded the TGV project, and construction of the LGV Sud-Est, the first high-speed line (), began shortly afterwards. The line was given the designation LN1, ('New Line 1'). After two pre-production trainsets (nicknamed Patrick and Sophie) had been tested and substantially modified, the first production version was delivered on 25 April 1980. Service The TGV opened to the public between Paris and Lyon on 27 September 1981. Contrary to its earlier fast services, SNCF intended TGV service for all types of passengers, with the same initial ticket price as trains on the parallel conventional line. To counteract the popular misconception that the TGV would be a premium service for business travellers, SNCF started a major publicity campaign focusing on the speed, frequency, reservation policy, normal price, and broad accessibility of the service. This commitment to a democratised TGV service was enhanced in the Mitterrand era with the promotional slogan "Progress means nothing unless it is shared by all". 
The TGV was considerably faster (in terms of door-to-door travel time) than normal trains, cars, or aeroplanes. The trains became widely popular, the public welcoming fast and practical travel. The Eurostar service began operation in 1994, connecting continental Europe to London via the Channel Tunnel and the LGV Nord-Europe with a version of the TGV designed for use in the tunnel and the United Kingdom. The first phase of the British High Speed 1 line was completed in 2003, the second phase in November 2007. The fastest trains take 2 hours 15 minutes London–Paris and 1 hour 51 minutes London–Brussels. The first twice-daily London–Amsterdam service ran on 3 April 2018 and took 3 hours 47 minutes. Milestones The TGV (1981) was the world's second commercial standard-gauge high-speed train service, after Japan's Shinkansen, which had connected Tokyo and Osaka since 1 October 1964, and was the fastest. It was a commercial success. A TGV test train holds the world speed record for conventional trains. On 3 April 2007 a modified TGV POS train reached under test conditions on the LGV Est between Paris and Strasbourg. The line voltage was boosted to 31 kV, and extra ballast was tamped onto the permanent way. The train beat the 1990 world speed record of , set by a similarly modified TGV, along with unofficial records set during the weeks preceding the official record run. The test was part of an extensive research programme by Alstom. In 2007, the TGV was the world's fastest conventional scheduled train: one journey's average start-to-stop speed from Champagne-Ardenne Station to Lorraine Station was . This record was surpassed on 26 December 2009 by the new Wuhan–Guangzhou high-speed railway in China, where the fastest scheduled train covered at an average speed of . A Eurostar (TGV) train broke the record for the longest non-stop high-speed international journey on 17 May 2006, carrying the cast and filmmakers of The Da Vinci Code from London to Cannes for the Cannes Film Festival. The journey took 7 hours 25 minutes at an average speed of . The fastest single long-distance run on the TGV was made by a TGV Réseau train from Calais-Frethun to Marseille in 3 hours 29 minutes at a speed of for the inauguration of the LGV Méditerranée on 26 May 2001. Passenger usage On 28 November 2003, the TGV network carried its one billionth passenger, a distant second to the Shinkansen, which had carried its five billionth passenger in 2000. Excluding international traffic, the TGV system carried 98 million passengers during 2008, an increase of 8 million (9.1%) on the previous year. Rolling stock All TGV trains have two power cars, one on each end. Between those power cars is a set of semi-permanently coupled, articulated, unpowered coaches. Cars are connected with Jacobs bogies, a single bogie shared between the ends of two coaches. The only exceptions are the end cars, which have a standalone bogie on the side closest to the power car; this bogie is often motorized. Power cars also have two bogies. Trains can be lengthened by coupling two TGVs, using couplers hidden in the noses of the power cars. The articulated design is advantageous during a derailment, as the passenger carriages are more likely to stay upright and in line with the track. Normal trains could split at couplings and jackknife, as seen in the Eschede train disaster. A disadvantage is that it is difficult to split sets of carriages.
While power cars can be removed from trains by standard uncoupling procedures, specialized equipment is needed to split carriages, by lifting up cars off a bogie. Once uncoupled, one of the carriage ends is left without support, so a specialized frame is required. SNCF prefers to use power cars instead of electric multiple units because it allows for less electrical equipment. There are six types of TGV equipment in use, all built by Alstom: TGV Atlantique (10 carriages) TGV Réseau (an upgrade of the Atlantique, 8 carriages) TGV Duplex (two floors for greater passenger capacity) TGV POS (originally for routes to Germany, now used to Switzerland) TGV 2N2 (also known as the Avelia Euroduplex, an upgrade of the TGV Duplex) TGV M (also known as the Avelia Horizon, expected to enter service in 2025) Retired sets: TGV Sud-Est (retired in December 2019) TGV La Poste (retired in June 2015) Several TGV types have broken records, including the V150 and TGV 001. V150 was a specially modified five-car double-deck trainset that reached under controlled conditions on a test run. It narrowly missed beating the world train speed record of . The record-breaking speed is impractical for commercial trains due to motor overcharging, empty train weight, rail and engine wear issues, elimination of all but three coaches, excessive vibration, noise and lack of emergency stopping methods. TGVs travel at up to in commercial use. All TGVs are at least bi-current, which means that they can operate at (used on LGVs) and (used on traditional lines). Trains travelling internationally must accommodate other voltages ( or ), requiring tri-current and quad-current TGVs. Each TGV power car has two pantographs: one for AC use and one for DC. When passing between areas with different electric systems (identified by marker boards), trains enter a phase break zone. Just before this section, train operators must power down the motors (allowing the train to coast), lower the pantograph, adjust a switch to select the appropriate system, and raise the pantograph. Once the train exits the phase break zone and detects the correct electric supply, a dashboard indicator illuminates, and the operator can once again engage the motors. TGV Sud-Est The Sud-Est fleet was built between 1978 and 1988 and operated the first TGV service, from Paris to Lyon in 1981. There were 107 passenger sets, of which nine are tri-current (including for use in Switzerland) and the rest bi-current. There were seven bi-current half-sets without seats that carried mail for La Poste between Paris, Lyon and Provence, in a distinctive yellow livery until they were phased out in 2015. Each set were made up of two power cars and eight carriages (capacity 345 seats), including a powered bogie in the carriages adjacent to the power cars. They are long and wide. They weighed with a power output of 6,450 kW under 25 kV. The sets were originally built to run at but most were upgraded to during mid-life refurbishment in preparation for the opening of the LGV Méditerranée. The few sets that kept a maximum speed of operated on routes that include a comparatively short distance on LGV, such as to Switzerland via Dijon; SNCF did not consider it financially worthwhile to upgrade their speed for a marginal reduction in journey time. In December 2019, the trains were phased out from service. In late 2019 and early 2020, TGV 01 (Nicknamed Patrick), the very first TGV train, did a farewell service that included all three liveries that were worn during their service. 
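The phase-break procedure described above under Rolling stock amounts to a fixed sequence of steps (coast, lower the pantograph, select the new supply, raise the pantograph, confirm the supply, re-engage the motors). The following Python sketch is only an illustrative ordering check; the step names are invented for the example and are not taken from SNCF documentation.

```python
# Illustrative model of the phase-break sequence described in the text.
PHASE_BREAK_SEQUENCE = [
    "power down motors (coast)",
    "lower pantograph",
    "select voltage system",
    "raise pantograph",
    "confirm correct supply (dashboard indicator)",
    "re-engage motors",
]


def is_valid_next_step(completed_steps: list[str], next_step: str) -> bool:
    """A step is valid only if every earlier step in the sequence is already done."""
    index = PHASE_BREAK_SEQUENCE.index(next_step)
    return completed_steps == PHASE_BREAK_SEQUENCE[:index]


# Example: raising the pantograph before selecting the voltage system is rejected.
print(is_valid_next_step(["power down motors (coast)", "lower pantograph"], "raise pantograph"))  # False
```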
TGV Atlantique The 105-train Atlantique fleet was built between 1988 and 1992 for the opening of the LGV Atlantique, and entry into service began in 1989. They are all bi-current, long and wide. They weigh and are made up of two power cars and ten carriages with a capacity of 485 seats. They were built with a maximum speed of and 8,800 kW of power under 25 kV. The efficiency of the Atlantique with all seats filled has been calculated at 767 PMPG, though with a typical occupancy of 60% it is about 460 PMPG (a Toyota Prius with three passengers is 144 PMPG). Modified unit 325 set the world speed record in 1990 on the LGV Atlantique before its opening. Modifications such as improved aerodynamics, larger wheels and improved braking were made to enable speeds of over . The set was reduced to two power cars and three carriages to improve the power-to-weight ratio, weighing 250 tonnes. Three carriages, including the bar carriage in the centre, are the minimum possible configuration because of the Jacobs bogies. TGV Réseau The first Réseau (Network) sets entered service in 1993. Fifty bi-current sets were ordered in 1990, supplemented by 40 tri-current sets in 1992/1993 (adding the system used on traditional lines in Belgium). Ten tri-current sets carry the Eurostar Red (ex-Thalys) livery and are known as the PBA (Paris-Brussels-Amsterdam) sets. They are formed of two power cars (8,800 kW under 25 kV – as TGV Atlantique) and eight carriages, giving a capacity of 377 seats. They have a top speed of . They are long and are wide. The bi-current sets weigh 383 tonnes: owing to axle-load restrictions in Belgium, the tri-current sets have a series of modifications, such as the replacement of steel with aluminum and the use of hollow axles, to reduce the weight to under 17 t per axle. Owing to early complaints of uncomfortable pressure changes when entering tunnels at high speed on the LGV Atlantique, the Réseau sets are now pressure-sealed. They can be coupled to a Duplex set. TGV Duplex The Duplex was built to increase TGV capacity without increasing train length or the number of trains. Each carriage has two levels, with access doors at the lower level taking advantage of low French platforms. A staircase gives access to the upper level, where the gangway between carriages is located. There are 512 seats per set. On busy routes such as Paris-Marseille they are operated in pairs, providing 1,024 seats in two Duplex sets or 800 in a Duplex set plus a Réseau set. Each set has a wheelchair-accessible compartment. After a lengthy development process starting in 1988 (during which they were known as the TGV-2N), the original batch of 30 was built between 1995 and 1998. Further deliveries started in 2000, with the Duplex fleet now totaling 160 units, making it the backbone of the SNCF TGV fleet. They weigh 380 tonnes and are long, made up of two power cars and eight carriages. Extensive use of aluminum means that they weigh not much more than the TGV Réseau sets they supplement. The bi-current power cars provide 8,800 kW, and they have a slightly increased speed of . Duplex TGVs run on all French high-speed lines. TGV POS TGV POS (Paris-Ostfrankreich-Süddeutschland or Paris-Eastern France-Southern Germany) are used on the LGV Est. They consist of two Duplex power cars with eight TGV Réseau-type carriages, with a power output of 9,600 kW and a top speed of . Unlike TGV-A, TGV-R and TGV-D, they have asynchronous motors, and isolation of an individual motor is possible in case of failure.
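The occupancy-adjusted efficiency figures quoted above for the Atlantique follow from a simple proportional relationship: passenger-miles per gallon scale linearly with the fraction of seats occupied. The short Python sketch below reproduces that arithmetic; the function name is invented for illustration.

```python
def pmpg_at_occupancy(full_load_pmpg: float, occupancy: float) -> float:
    """Passenger-miles per gallon scale linearly with the fraction of seats occupied."""
    return full_load_pmpg * occupancy


# Figures quoted in the text: 767 PMPG with every seat filled and roughly 60% typical occupancy.
print(round(pmpg_at_occupancy(767, 0.60)))  # about 460 PMPG, matching the quoted value
```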
Avelia Euroduplex (TGV 2N2) The bi-current TGV 2N2 (Avelia Euroduplex) can be regarded as the 3rd generation of Duplex. The series was commissioned from December 2011 for links to Germany and Switzerland (tri-current trains) and to cope with the increased traffic due to the opening of the LGV Rhine-Rhone. They are numbered from 800 and are limited to . ERTMS makes them compatible to allow access to Spain similar to Dasye. TGV M Avelia Horizon The design that emerged from the process was named TGV M, and in July 2018 SNCF ordered 100 trainsets with deliveries expected to begin in 2024. They are expected to cost €25 million per 8-car set. TGV technology outside France TGV technology has been adopted in a number of other countries: AVE (Alta Velocidad Española) in Spain with the Renfe Class 100 based on the TGV Atlantique. Eurostar operates international high-speed services connecting France with Belgium, Germany and the Netherlands. Several trainsets use TGV technology (e300, PBA, PBKA). Korea Train Express (KTX) in South Korea with KTX-I (based on the TGV Réseau) and KTX-Sancheon. Acela Express, a high-speed tilting train built by Alstom and Bombardier for the Northeast Corridor in the United States. The Acela power cars use several TGV technologies including the motors, electrical/drivetrain system (rectifiers, inverters, regenerative braking technology), and disc brakes. However, they are strengthened to meet U.S. Federal Railroad Administration crash standards. The Acela's tilting, non-articulated carriages are derived from the Bombardier's LRC train and also meet crash standards. Avelia Liberty, the replacement for the Acela Express in the United States. Expected to enter service in 2023. The Moroccan government agreed to a €2 billion contract for Alstom to build Al-Boraq, an LGV between Tangier and Casablanca which opened in 2018 using TGV Euroduplex. Italian open-access high-speed operator Nuovo Trasporto Viaggiatori signed up with Alstom to purchase 25 AGV 11-car sets. Future TGVs SNCF and Alstom are investigating new technology that could be used for high-speed transport. The development of TGV trains is being pursued in the form of the Automotrice à grande vitesse (AGV) high-speed multiple unit with motors under each carriage. Investigations are being carried out with the aim of producing trains at the same cost as TGVs with the same safety standards. AGVs of the same length as TGVs could have up to 450 seats. The target speed is . The prototype AGV was unveiled by Alstom on 5 February 2008. Italian operator NTV is the first customer for the AGV, and became the first open-access high-speed rail operator in Europe, starting operation in 2011. The design process of the next generation of TGVs began in 2016 when SNCF and Alstom signed an agreement to jointly develop the trainsets, with goals of reducing purchase and operating costs, as well as improved interior design. Lines in operation In June 2021, there were approximately of (LGV), with four additional line sections under construction. The current lines and those under construction can be grouped into four routes radiating from Paris. Accidents In over four decades of operation, the TGV has not recorded a single passenger fatality in an accident on normal, high-speed service. There have been several accidents, including four derailments at or above , but in only one of these—a test run on a new line—did carriages overturn. This safety record is credited in part to the stiffness that the articulated design lends to the train. 
There have been fatal accidents involving TGVs on lignes classiques, where the trains are exposed to the same dangers as normal trains, such as level crossings. These include one terrorist bombing unrelated to the speed at which the train was traveling. On LGVs 14 December 1992: TGV 920 from Annecy to Paris, operated by set 56, derailed at at Mâcon-Loché TGV station (Saône-et-Loire). A previous emergency stop had caused a wheel flat; the bogie concerned derailed while crossing the points at the entrance to the station. No one on the train was injured, but 25 passengers waiting on the platform for another TGV were slightly injured by ballast that was thrown up from the trackbed. 21 December 1993: TGV 7150 from Valenciennes to Paris, operated by set 511, derailed at at the site of Haute Picardie TGV station, before it was built. Rain had caused a hole to open up under the track; the hole dated from the First World War but had not been detected during construction. The front power car and four carriages derailed but remained aligned with the track. Of the 200 passengers, one was slightly injured. 5 June 2000: Eurostar 9073 from Paris to London, operated by sets 3101/2 owned by the National Railway Company of Belgium, derailed at in the Nord-Pas de Calais region near Croisilles. The transmission assembly on the rear bogie of the front power car failed, with parts falling onto the track. Four bogies out of 24 derailed. Out of 501 passengers, seven were bruised and others treated for shock. 14 November 2015: TGV 2369 was involved in the Eckwersheim derailment, near Strasbourg, while being tested on the then-unopened second phase of the LGV Est. The derailment resulted in 11 deaths among those aboard, while 11 others aboard the train were seriously injured. Excessive speed has been cited as the cause. On classic lines 31 December 1983: A bomb allegedly planted by the terrorist organisation of Carlos the Jackal exploded on board a TGV from Marseille to Paris; two people were killed. 28 September 1988: TGV 736, operated by set 70 "Melun", collided with a lorry carrying an electric transformer weighing 100 tonnes that had become stuck on a level crossing in Voiron, Isère. The vehicle had not obtained the required crossing permit from the French Direction départementale de l'équipement. The weight of the lorry caused a very violent collision; the train driver and a passenger died, and 25 passengers were slightly injured. 4 January 1991: after a brake failure, TGV 360 ran away from Châtillon depot. The train was directed onto an unoccupied track and collided with the car loading ramp at Paris-Vaugirard station at . No one was injured. The leading power car and the first two carriages were severely damaged, and were rebuilt. 25 September 1997: TGV 7119 from Paris to Dunkerque, operated by set 502, collided at with a 70 tonne asphalt paving machine on a level crossing at Bierne, near Dunkerque. The power car spun round and fell down an embankment. The front two carriages left the track and came to a stop in woods beside the track. Seven people were injured. 31 October 2001: TGV 8515 from Paris to Irun derailed at near Dax in southwest France. All ten carriages derailed and the rear power unit fell over. The cause was a broken rail. 30 January 2003: a TGV from Dunkerque to Paris collided at with a heavy goods vehicle stuck on the level crossing at Esquelbecq in northern France. The front power car was severely damaged, but only one bogie derailed. Only the driver was slightly injured. 
19 December 2007: a TGV from Paris to Geneva collided at about with a truck on a level crossing near Tossiat in eastern France, near the Swiss border. The driver of the truck died; on the train, one person was seriously injured and 24 were slightly injured. 17 July 2014: a TER train ran into the rear of a TGV at Denguin, Pyrénées-Atlantiques. Forty people were injured. Following the number of accidents at level crossings, an effort has been made to remove all level crossings on lignes classiques used by TGVs. The ligne classique from Tours to Bordeaux at the end of the LGV Atlantique has no level crossings as a result. Protests against the TGV The first environmental protests against the building of an LGV occurred in May 1990 during the planning stages of the LGV Méditerranée. Protesters blocked a railway viaduct to protest against the planned route, arguing that it was unnecessary, and that trains could keep using existing lines to reach Marseille from Lyon. The Turin–Lyon high-speed railway (Lyon-Chambéry-Turin), which would connect the TGV network to the Italian TAV network, has been the subject of demonstrations in Italy. While most Italian political parties agree on the construction of this line, some inhabitants of the towns where construction would take place oppose it vehemently. The concerns put forward by the protesters centre on the open-air storage of dangerous materials mined during tunnel boring, such as asbestos and perhaps uranium. This health danger could be avoided by using more expensive techniques for handling radioactive materials. A six-month delay to the start of construction was agreed in order to study solutions. In addition to the concerns of the residents, RFB – a ten-year-old national movement – opposes the development of Italy's TAV high-speed rail network as a whole. General complaints about the noise of TGVs passing near towns and villages have led the SNCF to build acoustic fencing along large sections of LGV to reduce the disturbance to residents, but protests still take place where SNCF has not addressed the issue. On 26 July 2024, the opening day of the 2024 Olympics, the TGV network was hit by an arson attack that affected at least 800,000 passengers; Eurostar was particularly disrupted, with 25% of its trains canceled. Mail services In addition to its standard passenger services, TGVs have also been used for mail delivery. For many years, a service termed SNCF TGV La Poste transported mail for the French mail service, La Poste. It used windowless but otherwise standard TGV rolling stock, painted in the yellow and blue livery of La Poste. However, the service ceased in June 2015. Mobile hospital service During the COVID-19 pandemic, several TGV trains were transformed into mobile hospitals, in order to transport critically ill patients from overwhelmed hospitals in the East of France to hospitals in the West. Each coach could accommodate up to 6 patients, allowing several dozen patients to be transported, attended by a staff of 50 medical workers. Although the train moves at high speed, it accelerates and decelerates smoothly, allowing medical procedures to be performed during transport. Rebranding Since July 2017, TGV services have gradually been rebranded as TGV inOui and Ouigo in preparation for the opening of the French high-speed rail market to competition. TGV inOui TGV inOui is SNCF's premium high-speed rail service. The name inOui was chosen because it sounds like the French word inouï meaning "extraordinary" (or more literally, "unheard of"). 
Ouigo Ouigo is SNCF's low-cost high-speed rail service. Trains have a high-density one-class configuration and reduced on-board services. The services traditionally operate from less busy secondary stations, sometimes outside of the city centre. The literal translation of the brand name is "yes go", but the name is also a play on the English homonym, "we go".
Technology
High-speed rail
null
57260
https://en.wikipedia.org/wiki/Envelope
Envelope
An envelope is a common packaging item, usually made of thin, flat material. It is designed to contain a flat object, such as a letter or card. Traditional envelopes are made from sheets of paper cut to one of three shapes: a rhombus, a short-arm cross or a kite. These shapes allow the envelope structure to be made by folding the sheet sides around a central rectangular area. In this manner, a rectangle-faced enclosure is formed with an arrangement of four flaps on the reverse side. Overview A folding sequence such that the last flap closed is on a short side is referred to in commercial envelope manufacture as a pocket – a format frequently employed in the packaging of small quantities of seeds. Although in principle the flaps can be held in place by securing the topmost flap at a single point (for example with a wax seal), generally they are pasted or gummed together at the overlaps. They are most commonly used for enclosing and sending mail (letters) through a prepaid-postage postal system. Window envelopes have a hole cut in the front side that allows the paper within to be seen. They are generally arranged so that the receiving address printed on the letter is visible, saving duplication of the address on the envelope itself. The window is normally covered with a transparent or translucent film to protect the letter inside, as was first designed by Americus F. Callahan in 1901 and patented the following year. In some cases, shortages of materials or the need to economize resulted in envelopes that had no film covering the window. One innovative process, invented in Europe about 1905, involved using hot oil to saturate the area of the envelope where the address would appear. The treated area became sufficiently translucent for the address to be readable. There is no international standard for window envelopes, but some countries, including Germany and the United Kingdom, have national standards. An aerogram is related to a letter sheet, both being designed to have writing on the inside to minimize the weight. Any handmade envelope is effectively a letter sheet because prior to the folding stage it offers the opportunity for writing a message on that area of the sheet that after folding becomes the inside of the face of the envelope. For document security, the letter sheet can be sealed with wax. Another secure form of letter sheet is a locked letter, which is formed by cutting and folding the sheet in an elaborate way that prevents the letter from being opened without creating obvious damage to the letter/envelope. The "envelope" used to launch the Penny Post component of the British postal reforms of 1840 by Sir Rowland Hill, alongside the invention of the postage stamp, was a lozenge-shaped lettersheet known as a Mulready. If desired, a separate letter could be enclosed with postage remaining at one penny provided the combined weight did not exceed half an ounce (14 grams). This was a legacy of the previous system of calculating postage, which partly depended on the number of sheets of paper used. During the U.S. Civil War those in the Confederate States Army occasionally used envelopes made from wallpaper, due to financial hardship. A "return envelope" is a pre-addressed, smaller envelope included as the contents of a larger envelope and can be used for courtesy reply mail, metered reply mail, or freepost (business reply mail). Some envelopes are designed to be reused as the return envelope, saving the expense of including a return envelope in the contents of the original envelope. 
The direct mail industry makes extensive use of return envelopes as a response mechanism. Up until 1840, all envelopes were handmade, each being individually cut to the appropriate shape out of an individual rectangular sheet. In that year George Wilson in the United Kingdom patented the method of tessellating (tiling) a number of envelope patterns across and down a large sheet, thereby reducing the overall amount of waste produced per envelope when they were cut out. In 1845 Edwin Hill and Warren de la Rue obtained a patent for a steam-driven machine that not only cut out the envelope shapes but creased and folded them as well. (Mechanised gumming had yet to be devised.) The convenience of the sheets ready cut to shape popularized the use of machine-made envelopes, and the economic significance of the factories that had produced handmade envelopes gradually diminished. As envelopes are made of paper, they are intrinsically amenable to embellishment with additional graphics and text over and above the necessary postal markings. This is a feature that the direct mail industry has long taken advantage of—and more recently the Mail Art movement. Custom-printed envelopes have also become an increasingly popular marketing method for small businesses. Most of the over 400 billion envelopes of all sizes made worldwide are machine-made. Sizes International standard sizes International standard ISO 269 (withdrawn in 2009 without replacement) defined several standard envelope sizes, which are designed for use with ISO 216 standard paper sizes: The German standard DIN 678 defines a similar list of envelope formats. DL comes from the DIN Lang (German: "Long") size envelope which originated in the 1920s. North American sizes There are dozens of sizes of envelopes available in the United States. The designations such as "A2" do not correspond to ISO paper sizes. Sometimes, North American paper jobbers and printers will insert a hyphen to distinguish from ISO sizes, thus: A-2. The No. 10 envelope is the standard business envelope size in the United States. PWG 5101.1 also lists the following even inch sizes for envelopes: , , , , , and . Envelopes accepted by the U.S. Postal Service for mailing at the price of a letter must be: Rectangular At least  inches high × 5 inches long × 0.007 inch thick. No more than  inches high ×  inches long ×  inch thick. Letters that have a length-to-height aspect ratio of less than 1.3 or more than 2.5 are classified as "non-machinable" by the USPS and may cost more to mail. Chinese sizes Japanese sizes Japanese traditional rectangular (角形, kakugata, K) and long (長形, nagagata, N) envelopes open on the short side, while Western-style (洋形, yōgata, Y) envelopes open on the long side. The Japanese standard JIS S 5502 was first published in 1964. Some traditional sizes were not retained, and others have been removed in subsequent revisions up to the latest edition in 2014, leaving gaps in the numeric sequence of designations. Manufacture History of envelopes The first known envelope was nothing like the paper envelope of today. It can be dated back to around 3500 to 3200 BC in the ancient Middle East. Hollow clay spheres were molded around financial tokens and used in private transactions. The two people who discovered these first envelopes were Jacques de Morgan, in 1901, and Roland de Mecquenem, in 1907. Paper envelopes were developed in China, where paper was invented by the 2nd century BC. Paper envelopes, known as chih poh, were used to store gifts of money. 
In the Southern Song dynasty, the Chinese imperial court used paper envelopes to distribute monetary gifts to government officials. In Western history, from the time flexible writing material became more readily available in the 13th century until the mid-19th century, correspondence was typically secured by a process of folding and sealing the letter itself, sometimes including elaborate letterlocking techniques to indicate tampering or prove authenticity. Some of these letter techniques, which could involve stitching or wax seals, were also employed to secure hand-made envelopes. Prior to 1840, all envelopes were handmade, including those for commercial use. In 1840 George Wilson of London was granted a patent for an envelope-cutting machine (patent: "an improved paper-cutting machine"); these machine-cut envelopes still needed to be folded by hand. A surviving envelope stamped in 1841, pictured front and back, appears to have been machine-cut. In 1845, Edwin Hill and Warren De La Rue were granted a British patent for the first envelope-folding machine. The "envelopes" produced by the Hill/De La Rue machine were not like those used today. They were flat diamond, lozenge (or rhombus)-shaped sheets or "blanks" that had been precut to shape before being fed to the machine for creasing and made ready for folding to form a rectangular enclosure. The edges of the overlapping flaps were treated with a paste or adhesive, and the method of securing the envelope or wrapper was a user choice. The symmetrical flap arrangement meant that it could be held together with a single wax seal at the apex of the topmost flap. (That the flaps of an envelope can be held together by applying a seal at a single point is a classic design feature of an envelope.) Nearly 50 years passed before a commercially successful machine for producing pre-gummed envelopes, like those in use today, appeared. The origin of the use of the diamond shape for envelopes is debated. However, as an alternative to simply wrapping a sheet of paper around a folded letter or an invitation and sealing the edges, it is a tidy and ostensibly paper-efficient way of producing a rectangular-faced envelope. The claim to paper efficiency fails because paper manufacturers normally supply paper in rectangular sheets: the largest envelope that can be realised by cutting out a diamond, or any other shape which yields an envelope with symmetrical flaps, is smaller than the largest that can be made from that sheet simply by folding. The folded diamond-shaped sheet (or "blank") was in use at the beginning of the 19th century as a novelty wrapper for invitations and letters among the proportion of the population that had the time to sit and cut them out and was affluent enough not to bother about the waste offcuts. Their use first became widespread in the UK when the British government took monopoly control of postal services and tasked Rowland Hill with its introduction. The new service was launched in May 1840 with a postage-paid machine-printed illustrated (or pictorial) version of the wrapper and the much-celebrated first adhesive postage stamp, the Penny Black, for the production of which the Jacob Perkins printing process was used to deter counterfeiting and forgery. The wrappers were printed and sold as a sheet of 12, with cutting the purchaser's task. 
Known as Mulready stationery, because the illustration was created by the respected artist William Mulready, the envelopes were withdrawn when the illustration was ridiculed and lampooned. Nevertheless, the public apparently saw the convenience of the wrappers being available ready-shaped, and it must have been obvious that with the stamp available totally plain versions of the wrapper could be produced and postage prepaid by purchasing a stamp and affixing it to the wrapper once folded and secured. In this way although the postage-prepaid printed pictorial version died ignominiously, the diamond-shaped wrapper acquired de facto official status and became readily available to the public notwithstanding the time taken to cut them out and the waste generated. With the issuing of the stamps and the operation and control of the service (which is a communications medium) in government hands the British model spread around the world and the diamond-shaped wrapper went with it. Hill also installed his brother Edwin as the Controller of Stamps, and it was he with his partner Warren De La Rue who patented the machine for mass-producing the diamond-shaped sheets for conversion to envelopes in 1845. Today, envelope-making machine manufacture is a long- and well-established international industry, and blanks are produced with a short-arm-cross shape and a kite shape as well as diamond shape. (The short-arm-cross style is mostly encountered in "pocket" envelopes i.e. envelopes with the closing flap on a short side. The more common style, with the closing flap on a long side, is sometimes referred to as "standard" or "wallet" style for purposes of differentiation.) The most famous paper-making machine was the Fourdrinier machine. The process involves taking processed pulp stock and converting it to a continuous web which is gathered as a reel. Subsequently, the reel is guillotined edge to edge to create a large number of properly rectangular sheets, because ever since the invention of Gutenberg's press, paper has been closely associated with printing. To this day, all other mechanical printing and duplicating equipment devised in the meantime, including the typewriter (which was used up to the 1990s for addressing envelopes), has been primarily designed to process rectangular sheets. Hence the large sheets are in turn guillotined down to the sizes of rectangular sheet commonly used in the commercial printing industry, and nowadays to the sizes commonly used as feed-stock in office-grade computer printers, copiers and duplicators (mainly ISO A4 and US Letter). Using any mechanical printing equipment to print on envelopes, which, although rectangular, are in fact folded sheets with differing thicknesses across their surfaces, calls for skill and attention on the part of the operator. In commercial printing the task of printing on machine-made envelopes is referred to as "overprinting" and is usually confined to the front of the envelope. If printing is required on all four flaps as well as the front, the process is referred to as "printing on the flat". Eye-catching illustrated envelopes or pictorial envelopes, the origins of which as an artistic genre can be attributed to the Mulready stationery – and which were printed in this way – are used extensively for direct mail. In this respect, direct mail envelopes have a shared history with propaganda envelopes (or "covers") as they are called by philatelists. Present and future state of envelopes In 1998, the U.S. 
Postal Service became the first postal authority to approve a system of printing digital stamps. With this innovative alternative to an adhesive-backed postage stamp, businesses could more easily produce envelopes in-house, address them, and customize them with advertising information on the face. The fortunes of the commercial envelope manufacturing industry and the postal service go hand in hand, and both link to the printing industry and the mechanized envelope processing industry producing equipment such as franking and addressing machines. Technological developments affecting one ricochet through the others: addressing machines print addresses, postage stamps are a print product, franking machines imprint a frank on an envelope. If fewer envelopes are required, fewer stamps, fewer franking machines, and fewer addressing machines are required. For example, the advent of information-based indicia (IBI) (commonly referred to as digitally-encoded electronic stamps or digital indicia) by the US Postal Service in 1998 caused widespread consternation in the franking machine industry, as their machines were rendered obsolete, and resulted in a flurry of lawsuits involving Pitney Bowes among others. The advent of e-mail in the late 1990s appeared to pose a substantial threat to the postal service. By 2008 letter-post service operators were reporting significantly smaller volumes of letter-post, specifically stamped envelopes, which they attributed mainly to e-mail. Although a corresponding reduction in the volume of envelopes required would have been expected, no such decrease was reported as widely as the reduction in letter-post volumes. Types of envelopes Windowed envelopes A windowed envelope is an envelope with a plastic or glassine window in it. The plastic in these envelopes creates problems in paper recycling. Security envelopes Security envelopes have special tamper-resistant and tamper-evident features. They are used for high-value products and documents as well as for evidence in legal proceedings. Some security envelopes have a patterned tint printed on the inside, which makes it difficult to read the contents. Various patterns exist. Mailers Some envelopes are designed to hold full-size documents or other items. Some carriers have large mailing envelopes for their express services. Other similar envelopes are available at stationery supply locations. These mailers usually have an opening on an end with a flap that can be attached by gummed adhesive, integral pressure-sensitive adhesive, adhesive tape, or security tape. Construction is usually: Paperboard Corrugated fiberboard Polyethylene, often a coextrusion Nonwoven fabric Padded mailers Shipping envelopes can have padding to provide stiffness and some degree of cushioning. The padding can be ground newsprint, plastic foam sheets, or bubble packing. Inter-office envelopes Various U.S. Federal Government offices use Standard Form (SF) 65 Government Messenger Envelopes for inter-office mail delivery. These envelopes are typically light brown in color and unsealed, with a string-tied closure and an array of holes throughout both sides so that the contents are partly visible. Other colloquial names for this envelope include "Holey Joe" and "Shotgun" envelope, owing to the holes. 
The addressing method is unique in that these envelopes are reusable: the previous address is crossed out thoroughly and the new addressee (name, building, room, and mailstop) is written in the next available box. Although still in use, SF-65 is no longer listed on the United States Office of Personnel Management's website list of standard forms.
Technology
Containers
null
57326
https://en.wikipedia.org/wiki/De%20Moivre%27s%20formula
De Moivre's formula
In mathematics, de Moivre's formula (also known as de Moivre's theorem and de Moivre's identity) states that for any real number x and integer n it is the case that (cos x + i sin x)^n = cos nx + i sin nx, where i is the imaginary unit (i² = −1). The formula is named after Abraham de Moivre, although he never stated it in his works. The expression cos x + i sin x is sometimes abbreviated to cis x. The formula is important because it connects complex numbers and trigonometry. By expanding the left-hand side and then comparing the real and imaginary parts under the assumption that x is real, it is possible to derive useful expressions for cos nx and sin nx in terms of cos x and sin x. As written, the formula is not valid for non-integer powers n. However, there are generalizations of this formula valid for other exponents. These can be used to give explicit expressions for the nth roots of unity, that is, complex numbers z such that z^n = 1. Using the standard extensions of the sine and cosine functions to complex numbers, the formula is valid even when x is an arbitrary complex number. Example For and , de Moivre's formula asserts that or equivalently that In this example, it is easy to check the validity of the equation by multiplying out the left side. Relation to Euler's formula De Moivre's formula is a precursor to Euler's formula e^(ix) = cos x + i sin x (with x expressed in radians rather than degrees), which establishes the fundamental relationship between the trigonometric functions and the complex exponential function. One can derive de Moivre's formula using Euler's formula and the exponential law for integer powers, (e^(ix))^n = e^(inx), since Euler's formula implies that the left side is equal to (cos x + i sin x)^n while the right side is equal to cos nx + i sin nx. Proof by induction The truth of de Moivre's theorem can be established by using mathematical induction for natural numbers, and extended to all integers from there. For an integer n, call the following statement S(n): (cos x + i sin x)^n = cos nx + i sin nx. For n > 0, we proceed by mathematical induction. S(1) is clearly true. For our hypothesis, we assume S(k) is true for some natural k. That is, we assume (cos x + i sin x)^k = cos kx + i sin kx. Now, considering S(k + 1): (cos x + i sin x)^(k+1) = (cos x + i sin x)^k (cos x + i sin x) = (cos kx + i sin kx)(cos x + i sin x) = cos(kx + x) + i sin(kx + x) = cos((k + 1)x) + i sin((k + 1)x). See angle sum and difference identities. We deduce that S(k) implies S(k + 1). By the principle of mathematical induction it follows that the result is true for all natural numbers. Now, S(0) is clearly true since (cos x + i sin x)^0 = 1 = cos 0 + i sin 0. Finally, for the negative integer cases, we consider an exponent of −n for natural n: (cos x + i sin x)^(−n) = 1/(cos nx + i sin nx) = cos nx − i sin nx = cos(−nx) + i sin(−nx). (*) The equation (*) is a result of the identity 1/z = z̄/|z|² for z = cos nx + i sin nx. Hence, S(n) holds for all integers n. Formulae for cosine and sine individually For an equality of complex numbers, one necessarily has equality both of the real parts and of the imaginary parts of both members of the equation. If x, and therefore also cos x and sin x, are real numbers, then the identity of these parts can be written using binomial coefficients. This formula was given by the 16th-century French mathematician François Viète: cos nx = Σ_{k=0}^{n} C(n, k) (cos x)^k (sin x)^(n−k) cos((n − k)π/2) and sin nx = Σ_{k=0}^{n} C(n, k) (cos x)^k (sin x)^(n−k) sin((n − k)π/2). In each of these two equations, the final trigonometric function equals one or minus one or zero, thus removing half the entries in each of the sums. These equations are in fact valid even for complex values of x, because both sides are entire (that is, holomorphic on the whole complex plane) functions of x, and two such functions that coincide on the real axis necessarily coincide everywhere. Here are the concrete instances of these equations for small values of n: The right-hand side of the formula for cos nx is in fact the value of the Chebyshev polynomial T_n at cos x. Failure for non-integer powers, and generalization De Moivre's formula does not hold for non-integer powers. The derivation of de Moivre's formula above involves a complex number raised to the integer power n. 
If a complex number is raised to a non-integer power, the result is multiple-valued (see failure of power and logarithm identities). Roots of complex numbers A modest extension of the version of de Moivre's formula given in this article can be used to find the nth roots of a complex number for a non-zero integer n. (This is equivalent to raising to a power of 1/n). If z is a complex number, written in polar form as z = r(cos x + i sin x), then the nth roots of z are given by r^(1/n) (cos((x + 2kπ)/n) + i sin((x + 2kπ)/n)), where k varies over the integer values from 0 to n − 1. This formula is also sometimes known as de Moivre's formula. Complex numbers raised to an arbitrary power Generally, if z = r(cos x + i sin x) (in polar form) and w are arbitrary complex numbers, then the set of possible values is (Note that if w is a rational number that equals p/q in lowest terms then this set will have exactly q distinct values rather than infinitely many. In particular, if w is an integer then the set will have exactly one value, as previously discussed.) In contrast, de Moivre's formula gives which is just the single value from this set corresponding to k = 0. Analogues in other settings Hyperbolic trigonometry Since cosh x + sinh x = e^x, an analog to de Moivre's formula also applies to hyperbolic trigonometry. For all integers n, (cosh x + sinh x)^n = cosh nx + sinh nx. If n is a rational number (but not necessarily an integer), then cosh nx + sinh nx will be one of the values of (cosh x + sinh x)^n. Extension to complex numbers For any integer n, the formula cos nx + i sin nx = (cos x + i sin x)^n holds for any complex number x, where cos x = (e^(ix) + e^(−ix))/2 and sin x = (e^(ix) − e^(−ix))/(2i). Quaternions To find the roots of a quaternion there is an analogous form of de Moivre's formula. A quaternion in the form can be represented in the form In this representation, and the trigonometric functions are defined as In the case that , that is, the unit vector. This leads to the variation of De Moivre's formula: Example To find the cube roots of write the quaternion in the form Then the cube roots are given by: 2 × 2 matrices With matrices, the rotation matrix with rows (cos φ, sin φ) and (−sin φ, cos φ) raised to the power n equals the matrix with rows (cos nφ, sin nφ) and (−sin nφ, cos nφ) when n is an integer. This is a direct consequence of the isomorphism between the matrices of the form with rows (a, b) and (−b, a) and the complex plane.
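The integer-power identity and the nth-roots expression above can be checked numerically. Below is a minimal sketch in Python (not part of the original article); it uses only the standard cmath and math modules, and the function names de_moivre, direct_power and nth_roots are illustrative choices rather than established APIs.

import cmath
import math

def de_moivre(x, n):
    # Right-hand side of the formula: cos(nx) + i*sin(nx).
    return complex(math.cos(n * x), math.sin(n * x))

def direct_power(x, n):
    # Left-hand side: (cos x + i*sin x) raised to the integer power n.
    return complex(math.cos(x), math.sin(x)) ** n

def nth_roots(z, n):
    # The n distinct nth roots of a non-zero complex number z, from the polar
    # form z = r*(cos t + i*sin t):
    #   r**(1/n) * (cos((t + 2*pi*k)/n) + i*sin((t + 2*pi*k)/n)), k = 0..n-1.
    r, t = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (t + 2 * math.pi * k) / n) for k in range(n)]

if __name__ == "__main__":
    x, n = 0.7, 5
    assert abs(direct_power(x, n) - de_moivre(x, n)) < 1e-12
    for w in nth_roots(8, 3):  # the three cube roots of 8
        assert abs(w ** 3 - 8) < 1e-9
    print("de Moivre's formula and the nth-roots expression agree numerically.")

Running the script raises no assertion errors, which is consistent with the identities stated above, up to floating-point rounding.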
Mathematics
Complex analysis
null
57330
https://en.wikipedia.org/wiki/Circulatory%20system
Circulatory system
In vertebrates, the circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the body. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek meaning heart, and Latin meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Many invertebrates such as arthropods have an open circulatory system with a heart that pumps a hemolymph which returns via the body cavity rather than via blood vessels. Diploblasts such as sponges and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the blood circulatory system; without it the blood would become depleted of fluid. The lymphatic system also works with the immune system. The circulation of lymph takes much longer than that of blood and, unlike the closed (blood) circulatory system, the lymphatic system is an open system. Some sources describe it as a secondary circulatory system. The circulatory system can be affected by many cardiovascular diseases. Cardiologists are medical professionals which specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on disorders of the blood vessels, and lymphatic vessels. Structure The circulatory system includes the heart, blood vessels, and blood. The cardiovascular system in all vertebrates, consists of the heart and blood vessels. The circulatory system is further divided into two major circuits – a pulmonary circulation, and a systemic circulation. The pulmonary circulation is a circuit loop from the right heart taking deoxygenated blood to the lungs where it is oxygenated and returned to the left heart. The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body, and returns deoxygenated blood back to the right heart via large veins known as the venae cavae. The systemic circulation can also be defined as two parts – a macrocirculation and a microcirculation. 
An average adult contains five to six quarts (roughly 4.7 to 5.7 liters) of blood, accounting for approximately 7% of their total body weight. Blood consists of plasma, red blood cells, white blood cells, and platelets. The digestive system also works with the circulatory system to provide the nutrients the system needs to keep the heart pumping. Further circulatory routes are associated, such as the coronary circulation to the heart itself, the cerebral circulation to the brain, renal circulation to the kidneys, and bronchial circulation to the bronchi in the lungs. The human circulatory system is closed, meaning that the blood is contained within the vascular network. Nutrients travel through tiny blood vessels of the microcirculation to reach organs. The lymphatic system is an essential subsystem of the circulatory system consisting of a network of lymphatic vessels, lymph nodes, organs, tissues and circulating lymph. This subsystem is an open system. A major function is to carry the lymph, draining and returning interstitial fluid via the lymphatic ducts back to the heart for return to the circulatory system. Another major function is working together with the immune system to provide defense against pathogens. Heart The heart pumps blood to all parts of the body providing nutrients and oxygen to every cell, and removing waste products. The left heart pumps oxygenated blood returned from the lungs to the rest of the body in the systemic circulation. The right heart pumps deoxygenated blood to the lungs in the pulmonary circulation. In the human heart there is one atrium and one ventricle for each circulation, and with both a systemic and a pulmonary circulation there are four chambers in total: left atrium, left ventricle, right atrium and right ventricle. The right atrium is the upper chamber of the right side of the heart. The blood that is returned to the right atrium is deoxygenated (poor in oxygen) and passed into the right ventricle to be pumped through the pulmonary artery to the lungs for re-oxygenation and removal of carbon dioxide. The left atrium receives newly oxygenated blood from the lungs via the pulmonary veins; this blood is passed into the strong left ventricle to be pumped through the aorta to the different organs of the body. Pulmonary circulation The pulmonary circulation is the part of the circulatory system in which oxygen-depleted blood is pumped away from the heart, via the pulmonary artery, to the lungs and returned, oxygenated, to the heart via the pulmonary vein. Oxygen-deprived blood from the superior and inferior vena cava enters the right atrium of the heart and flows through the tricuspid valve (right atrioventricular valve) into the right ventricle, from which it is then pumped through the pulmonary semilunar valve into the pulmonary artery to the lungs. Gas exchange occurs in the lungs, whereby carbon dioxide is released from the blood, and oxygen is absorbed. The pulmonary vein returns the now oxygen-rich blood to the left atrium. A separate circuit from the systemic circulation, the bronchial circulation supplies blood to the tissue of the larger airways of the lung. Systemic circulation The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body through the aorta. Deoxygenated blood is returned in the systemic circulation to the right heart via two large veins, the inferior vena cava and superior vena cava, where it is pumped from the right atrium into the pulmonary circulation for oxygenation. 
The systemic circulation can also be defined as having two parts – a macrocirculation and a microcirculation. Blood vessels The blood vessels of the circulatory system are the arteries, veins, and capillaries. The large arteries and veins that take blood to, and away from, the heart are known as the great vessels. Arteries Oxygenated blood enters the systemic circulation when leaving the left ventricle, via the aortic semilunar valve. The first part of the systemic circulation is the aorta, a massive and thick-walled artery. The aorta arches and gives branches supplying the upper part of the body; after passing through the aortic opening of the diaphragm at the level of the tenth thoracic vertebra, it enters the abdomen. It later descends and supplies branches to the abdomen, pelvis, perineum and the lower limbs. The walls of the aorta are elastic. This elasticity helps to maintain the blood pressure throughout the body. When the aorta receives almost five litres of blood from the heart, it recoils and is responsible for pulsating blood pressure. As the aorta branches into smaller arteries, their elasticity goes on decreasing and their compliance goes on increasing. Capillaries Arteries branch into small passages called arterioles and then into the capillaries. The capillaries merge to bring blood into the venous system. The total length of muscle capillaries in a 70 kg human is estimated to be between 9,000 and 19,000 km. Veins Capillaries merge into venules, which merge into veins. The venous system feeds into the two major veins: the superior vena cava – which mainly drains tissues above the heart – and the inferior vena cava – which mainly drains tissues below the heart. These two large veins empty into the right atrium of the heart. Portal veins The general rule is that arteries from the heart branch out into capillaries, which collect into veins leading back to the heart. Portal veins are a slight exception to this. In humans, the only significant example is the hepatic portal vein, which is formed from the capillaries around the gastrointestinal tract where the blood absorbs the various products of digestion; rather than leading directly back to the heart, the hepatic portal vein branches into a second capillary system in the liver. Coronary circulation The heart itself is supplied with oxygen and nutrients through a small "loop" of the systemic circulation and derives very little from the blood contained within the four chambers. The coronary circulation system provides a blood supply to the heart muscle itself. The coronary circulation begins near the origin of the aorta by two coronary arteries: the right coronary artery and the left coronary artery. After nourishing the heart muscle, blood returns through the coronary veins into the coronary sinus and from there into the right atrium. Backflow of blood through its opening during atrial systole is prevented by the Thebesian valve. The smallest cardiac veins drain directly into the heart chambers. Cerebral circulation The brain has a dual blood supply, an anterior and a posterior circulation from arteries at its front and back. The anterior circulation arises from the internal carotid arteries to supply the front of the brain. The posterior circulation arises from the vertebral arteries, to supply the back of the brain and brainstem. The circulation from the front and the back join (anastomise) at the circle of Willis. 
The neurovascular unit, composed of various cells and vasculature channels within the brain, regulates the flow of blood to activated neurons in order to satisfy their high energy demands. Renal circulation The renal circulation is the blood supply to the kidneys, contains many specialized blood vessels and receives around 20% of the cardiac output. It branches from the abdominal aorta and returns blood to the ascending inferior vena cava. Development The development of the circulatory system starts with vasculogenesis in the embryo. The human arterial and venous systems develop from different areas in the embryo. The arterial system develops mainly from the aortic arches, six pairs of arches that develop on the upper part of the embryo. The venous system arises from three bilateral veins during weeks 4 – 8 of embryogenesis. Fetal circulation begins within the 8th week of development. Fetal circulation does not include the lungs, which are bypassed via the truncus arteriosus. Before birth the fetus obtains oxygen (and nutrients) from the mother through the placenta and the umbilical cord. Arteries The human arterial system originates from the aortic arches and from the dorsal aortae starting from week 4 of embryonic life. The first and second aortic arches regress and form only the maxillary arteries and stapedial arteries respectively. The arterial system itself arises from aortic arches 3, 4 and 6 (aortic arch 5 completely regresses). The dorsal aortae, present on the dorsal side of the embryo, are initially present on both sides of the embryo. They later fuse to form the basis for the aorta itself. Approximately thirty smaller arteries branch from this at the back and sides. These branches form the intercostal arteries, arteries of the arms and legs, lumbar arteries and the lateral sacral arteries. Branches to the sides of the aorta will form the definitive renal, suprarenal and gonadal arteries. Finally, branches at the front of the aorta consist of the vitelline arteries and umbilical arteries. The vitelline arteries form the celiac, superior and inferior mesenteric arteries of the gastrointestinal tract. After birth, the umbilical arteries will form the internal iliac arteries. Veins The human venous system develops mainly from the vitelline veins, the umbilical veins and the cardinal veins, all of which empty into the sinus venosus. Function About 98.5% of the oxygen in a sample of arterial blood in a healthy human, breathing air at sea-level pressure, is chemically combined with hemoglobin molecules. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in vertebrates. Clinical significance Many diseases affect the circulatory system. These include a number of cardiovascular diseases, affecting the heart and blood vessels; hematologic diseases that affect the blood, such as anemia, and lymphatic diseases affecting the lymphatic system. Cardiologists are medical professionals which specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on the blood vessels. Cardiovascular disease Diseases affecting the cardiovascular system are called cardiovascular disease. Many of these diseases are called "lifestyle diseases" because they develop over time and are related to a person's exercise habits, diet, whether they smoke, and other lifestyle choices a person makes. 
Atherosclerosis is the precursor to many of these diseases. It is where small atheromatous plaques build up in the walls of medium and large arteries. This may eventually grow or rupture to occlude the arteries. It is also a risk factor for acute coronary syndromes, which are diseases that are characterised by a sudden deficit of oxygenated blood to the heart tissue. Atherosclerosis is also associated with problems such as aneurysm formation or splitting ("dissection") of arteries. Another major cardiovascular disease involves the creation of a clot, called a "thrombus". These can originate in veins or arteries. Deep venous thrombosis, which mostly occurs in the legs, is one cause of clots in the veins of the legs, particularly when a person has been stationary for a long time. These clots may embolise, meaning travel to another location in the body. The results of this may include pulmonary embolus, transient ischaemic attacks, or stroke. Cardiovascular diseases may also be congenital in nature, such as heart defects or persistent fetal circulation, where the circulatory changes that are supposed to happen after birth do not. Not all congenital changes to the circulatory system are associated with diseases, a large number are anatomical variations. Investigations The function and health of the circulatory system and its parts are measured in a variety of manual and automated ways. These include simple methods such as those that are part of the cardiovascular examination, including the taking of a person's pulse as an indicator of a person's heart rate, the taking of blood pressure through a sphygmomanometer or the use of a stethoscope to listen to the heart for murmurs which may indicate problems with the heart's valves. An electrocardiogram can also be used to evaluate the way in which electricity is conducted through the heart. Other more invasive means can also be used. A cannula or catheter inserted into an artery may be used to measure pulse pressure or pulmonary wedge pressures. Angiography, which involves injecting a dye into an artery to visualise an arterial tree, can be used in the heart (coronary angiography) or brain. At the same time as the arteries are visualised, blockages or narrowings may be fixed through the insertion of stents, and active bleeds may be managed by the insertion of coils. An MRI may be used to image arteries, called an MRI angiogram. For evaluation of the blood supply to the lungs a CT pulmonary angiogram may be used. Vascular ultrasonography may be used to investigate vascular diseases affecting the venous system and the arterial system including the diagnosis of stenosis, thrombosis or venous insufficiency. An intravascular ultrasound using a catheter is also an option. Surgery There are a number of surgical procedures performed on the circulatory system: Coronary artery bypass surgery Coronary stent used in angioplasty Vascular surgery Vein stripping Cosmetic procedures Cardiovascular procedures are more likely to be performed in an inpatient setting than in an ambulatory care setting; in the United States, only 28% of cardiovascular surgeries were performed in the ambulatory care setting. Other animals While humans, as well as other vertebrates, have a closed blood circulatory system (meaning that the blood never leaves the network of arteries, veins and capillaries), some invertebrate groups have an open circulatory system containing a heart but limited blood vessels. The most primitive, diploblastic animal phyla lack circulatory systems. 
An additional transport system, the lymphatic system, which is only found in animals with a closed blood circulation, is an open system providing an accessory route for excess interstitial fluid to be returned to the blood. The blood vascular system first appeared probably in an ancestor of the triploblasts over 600 million years ago, overcoming the time-distance constraints of diffusion, while endothelium evolved in an ancestral vertebrate some 540–510 million years ago. Open circulatory system In arthropods, the open circulatory system is a system in which a fluid in a cavity called the hemocoel or haemocoel bathes the organs directly with oxygen and nutrients, with there being no distinction between blood and interstitial fluid; this combined fluid is called hemolymph or haemolymph. Muscular movements by the animal during locomotion can facilitate hemolymph movement, but diverting flow from one area to another is limited. When the heart relaxes, blood is drawn back toward the heart through open-ended pores (ostia). Hemolymph fills all of the interior hemocoel of the body and surrounds all cells. Hemolymph is composed of water, inorganic salts (mostly sodium, chloride, potassium, magnesium, and calcium), and organic compounds (mostly carbohydrates, proteins, and lipids). The primary oxygen transporter molecule is hemocyanin. There are free-floating cells, the hemocytes, within the hemolymph. They play a role in the arthropod immune system. Closed circulatory system The circulatory systems of all vertebrates, as well as of annelids (for example, earthworms) and cephalopods (squids, octopuses and relatives) always keep their circulating blood enclosed within heart chambers or blood vessels and are classified as closed, just as in humans. Still, the systems of fish, amphibians, reptiles, and birds show various stages of the evolution of the circulatory system. Closed systems permit blood to be directed to the organs that require it. In fish, the system has only one circuit, with the blood being pumped through the capillaries of the gills and on to the capillaries of the body tissues. This is known as single cycle circulation. The heart of fish is, therefore, only a single pump (consisting of two chambers). In amphibians and most reptiles, a double circulatory system is used, but the heart is not always completely separated into two pumps. Amphibians have a three-chambered heart. In reptiles, the ventricular septum of the heart is incomplete and the pulmonary artery is equipped with a sphincter muscle. This allows a second possible route of blood flow. Instead of blood flowing through the pulmonary artery to the lungs, the sphincter may be contracted to divert this blood flow through the incomplete ventricular septum into the left ventricle and out through the aorta. This means the blood flows from the capillaries to the heart and back to the capillaries instead of to the lungs. This process is useful to ectothermic (cold-blooded) animals in the regulation of their body temperature. Mammals, birds and crocodilians show complete separation of the heart into two pumps, for a total of four heart chambers; it is thought that the four-chambered heart of birds and crocodilians evolved independently from that of mammals. Double circulatory systems permit blood to be repressurized after returning from the lungs, speeding up delivery of oxygen to tissues. No circulatory system Circulatory systems are absent in some animals, including flatworms. Their body cavity has no lining or enclosed fluid. 
Instead, a muscular pharynx leads to an extensively branched digestive system that facilitates direct diffusion of nutrients to all cells. The flatworm's dorso-ventrally flattened body shape also restricts the distance of any cell from the digestive system or the exterior of the organism. Oxygen can diffuse from the surrounding water into the cells, and carbon dioxide can diffuse out. Consequently, every cell is able to obtain nutrients, water and oxygen without the need of a transport system. Some animals, such as jellyfish, have more extensive branching from their gastrovascular cavity (which functions as both a place of digestion and a form of circulation), this branching allows for bodily fluids to reach the outer layers, since the digestion begins in the inner layers. History The earliest known writings on the circulatory system are found in the Ebers Papyrus (16th century BCE), an ancient Egyptian medical papyrus containing over 700 prescriptions and remedies, both physical and spiritual. In the papyrus, it acknowledges the connection of the heart to the arteries. The Egyptians thought air came in through the mouth and into the lungs and heart. From the heart, the air travelled to every member through the arteries. Although this concept of the circulatory system is only partially correct, it represents one of the earliest accounts of scientific thought. In the 6th century BCE, the knowledge of circulation of vital fluids through the body was known to the Ayurvedic physician Sushruta in ancient India. He also seems to have possessed knowledge of the arteries, described as 'channels' by Dwivedi & Dwivedi (2007). The first major ancient Greek research into the circulatory system was completed by Plato in the Timaeus, who argues that blood circulates around the body in accordance with the general rules that govern the motions of the elements in the body; accordingly, he does not place much importance in the heart itself. The valves of the heart were discovered by a physician of the Hippocratic school around the early 3rd century BC. However, their function was not properly understood then. Because blood pools in the veins after death, arteries look empty. Ancient anatomists assumed they were filled with air and that they were for the transport of air. The Greek physician, Herophilus, distinguished veins from arteries but thought that the pulse was a property of arteries themselves. Greek anatomist Erasistratus observed that arteries that were cut during life bleed. He ascribed the fact to the phenomenon that air escaping from an artery is replaced with blood that enters between veins and arteries by very small vessels. Thus he apparently postulated capillaries but with reversed flow of blood. In 2nd-century AD Rome, the Greek physician Galen knew that blood vessels carried blood and identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate functions. Growth and energy were derived from venous blood created in the liver from chyle, while arterial blood gave vitality by containing pneuma (air) and originated in the heart. Blood flowed from both creating organs to all parts of the body where it was consumed and there was no return of blood to the heart or liver. The heart did not pump blood around, the heart's motion sucked blood in during diastole and the blood moved by the pulsation of the arteries themselves. 
Galen believed that the arterial blood was created by venous blood passing from the left ventricle to the right by passing through 'pores' in the interventricular septum, air passed from the lungs via the pulmonary artery to the left side of the heart. As the arterial blood was created 'sooty' vapors were created and passed to the lungs also via the pulmonary artery to be exhaled. In 1025, The Canon of Medicine by the Persian physician, Avicenna, "erroneously accepted the Greek notion regarding the existence of a hole in the ventricular septum by which the blood traveled between the ventricles." Despite this, Avicenna "correctly wrote on the cardiac cycles and valvular function", and "had a vision of blood circulation" in his Treatise on Pulse. While also refining Galen's erroneous theory of the pulse, Avicenna provided the first correct explanation of pulsation: "Every beat of the pulse comprises two movements and two pauses. Thus, expansion : pause : contraction : pause. [...] The pulse is a movement in the heart and arteries ... which takes the form of alternate expansion and contraction." In 1242, the Arabian physician, Ibn al-Nafis described the process of pulmonary circulation in greater, more accurate detail than his predecessors, though he believed, as they did, in the notion of vital spirit (pneuma), which he believed was formed in the left ventricle. Ibn al-Nafis stated in his Commentary on Anatomy in Avicenna's Canon: ...the blood from the right chamber of the heart must arrive at the left chamber but there is no direct pathway between them. The thick septum of the heart is not perforated and does not have visible pores as some people thought or invisible pores as Galen thought. The blood from the right chamber must flow through the vena arteriosa (pulmonary artery) to the lungs, spread through its substances, be mingled there with air, pass through the arteria venosa (pulmonary vein) to reach the left chamber of the heart and there form the vital spirit... In addition, Ibn al-Nafis had an insight into what later became a larger theory of the capillary circulation. He stated that "there must be small communications or pores (manafidh in Arabic) between the pulmonary artery and vein," a prediction that preceded the discovery of the capillary system by more than 400 years. Ibn al-Nafis' theory was confined to blood transit in the lungs and did not extend to the entire body. Michael Servetus was the first European to describe the function of pulmonary circulation, although his achievement was not widely recognized at the time, for a few reasons. He firstly described it in the "Manuscript of Paris" (near 1546), but this work was never published. And later he published this description, but in a theological treatise, Christianismi Restitutio, not in a book on medicine. Only three copies of the book survived but these remained hidden for decades, the rest were burned shortly after its publication in 1553 because of persecution of Servetus by religious authorities. A better known discovery of pulmonary circulation was by Vesalius's successor at Padua, Realdo Colombo, in 1559. 
Finally, the English physician William Harvey, a pupil of Hieronymus Fabricius (who had earlier described the valves of the veins without recognizing their function), performed a sequence of experiments and published his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus in 1628, which "demonstrated that there had to be a direct connection between the venous and arterial systems throughout the body, and not just the lungs. Most importantly, he argued that the beat of the heart produced a continuous circulation of blood through minute connections at the extremities of the body. This is a conceptual leap that was quite different from Ibn al-Nafis' refinement of the anatomy and bloodflow in the heart and lungs." This work, with its essentially correct exposition, slowly convinced the medical world. However, Harvey did not identify the capillary system connecting arteries and veins; this was discovered by Marcello Malpighi in 1661.
Biology and health sciences
Biology
null
57413
https://en.wikipedia.org/wiki/Rambutan
Rambutan
Rambutan ( ; Nephelium lappaceum) is a medium-sized tropical tree in the family Sapindaceae. The name also refers to the edible fruit produced by this tree. The rambutan is native to Southeast Asia. It is closely related to several other edible tropical fruits, including the lychee, longan, pulasan, and quenepa. Description It is an evergreen tree growing to a height of . The leaves are alternate, long, pinnate, with three to eleven leaflets, each leaflet wide and broad with an entire margin. The flowers are small, , apetalous, discoidal, and borne in erect terminal panicles wide. Rambutan trees can be male (producing only staminate flowers and, hence, produce no fruit), female (producing flowers that are only functionally female), or hermaphroditic (producing flowers that are female with a small percentage of male flowers). Fruit The fruit is a round to oval single-seeded drupe, long, rarely to long and broad, borne in a loose pendant cluster of ten to twenty fruits together. The leathery skin is reddish (rarely orange or yellow) and covered with fleshy pliable spines, hence the name, which means 'hairs'. The spines (also known as "spinterns") contribute to the transpiration of the fruit, which can affect the fruit's quality. The fruit flesh, the aril, is translucent, whitish, or very pale pink, with a sweet, mildly acidic flavor reminiscent of grapes. The single seed is glossy brown, , with a white basal scar. Soft and containing equal portions of saturated and unsaturated fats, the seed may be cooked and eaten, but is bitter and has narcotic properties. History Around the 13th to 15th centuries, Arab traders, who played a major role in Indian Ocean trade, introduced rambutans to Zanzibar and Pemba in East Africa. There are limited rambutan plantings in some parts of India. In the 19th century, the Dutch introduced rambutans from Indonesia in Southeast Asia, to Suriname in South America. Subsequently, the plants spread to the tropical Americas, planted in the coastal lowlands of Colombia, Ecuador, Honduras, Costa Rica, Trinidad, and Cuba. In 1912, rambutans were introduced to the Philippines from Indonesia. Further introductions were made in 1920 (from Indonesia) and 1930 (from Malaya), but until the 1950s its distribution was limited. There was an attempt to introduce rambutans to the Southeastern United States, with seeds imported from Java, Indonesia in 1906, but the species proved to be unsuccessful, except in Puerto Rico. Etymology The name rambutan is derived from the Malay word meaning 'hair' referring to the numerous hairy protuberances of the fruits, together with the noun-building suffix . Similarly, in Vietnam, they are called (meaning 'messy hair'). The Chinese name is (Mandarin hóngmáodān, Hokkien âng-mô͘-tan), literally 'red-haired pellet'. Composition Nutrients Rambutan fruit is 78% water, 21% carbohydrates, 1% protein, and has negligible fat (see table; data are for canned fruit in syrup; raw fruit data are unpublished). In a reference amount of , the canned fruit supplies 82 calories and only manganese at 15% of the Daily Value (DV), while other micronutrients are in low content (less than 10% DV, table). Phytochemicals As an un-pigmented fruit flesh, rambutan does not contain significant polyphenol content, but its colorful rind displays diverse phenolic acids, such as syringic, coumaric, gallic, caffeic, and ellagic acids. 
Rambutan seeds contain equal proportions of saturated and unsaturated fatty acids, where arachidic (34%) and oleic (42%) acids, respectively, are the highest in fat content. The pleasant fragrance of rambutan fruit derives from numerous volatile organic compounds, including beta-damascenone, vanillin, phenylacetic acid, and cinnamic acid. Ecology Pollination Aromatic rambutan flowers are highly attractive to many insects, especially bees. Flies (Diptera), bees (Hymenoptera), and ants (Solenopsis) are the main pollinators. Among the Diptera, Lucilia spp. are abundant, and among the Hymenoptera, honey bees (Apis dorsata and A. cerana) and the stingless bee genus Trigona are the major visitors. A. cerana colonies foraging on rambutan flowers produce large quantities of honey. Bees foraging for nectar routinely contact the stigma of female flowers and gather significant quantities of the sticky pollen from male blossoms. Little pollen has been seen on bees foraging female flowers. Although male flowers open at 06:00, foraging by A. cerana is most intense between 07:00 and 11:00, tapering off rather abruptly thereafter. In Thailand, A. cerana is the preferred species for small-scale pollination of rambutan. Its hair is also helpful in pollination where pollen can be hooked on and transported to female flowers. Varieties Well over 200 cultivars were developed from selected clones available throughout tropical Asia. Most of the cultivars are also selected for compact growth, reaching a height of only for easier harvesting. Compared to propagated rambutan clones, rambutans taken from the wild have a higher acidity and potential for various food purposes. In Indonesia, 22 rambutan cultivars were identified as good quality, with five as leading commercial cultivars: 'Binjai', 'Lebak Bulus', 'Rapiah', 'Cimacan' and 'Sinyonya', with other popular cultivars including 'Simacan', 'Silengkeng', 'Sikonto' and 'Aceh kuning'. In the Malay Peninsula, commercial varieties include 'Chooi Ang', 'Peng Thing Bee', 'Ya Tow', 'Azimat', and 'Ayer Mas'. In Nicaragua, a joint World Relief–European Union team distributed seedlings to organizations such as Ascociación Pueblos en Acción Comunitaria in 2001 to more than 100 farmers. Some of these farmers saw the first production of rambutans from their trees in 2005–2006 with development directed at the local market. In the Philippines, two cultivars of rambutans are distinguished by their seed. The common rambutan seed and fruit are difficult to separate, while the 'Maharlika Rambutan' fruit separates cleanly from its seed. The fruit taste and size of these two cultivars are identical, but the 'Maharlika Rambutan' is more popular with a higher price. Uses Culinary The fruit of the rambutan tree may be eaten raw by removing the peel, eating the pulp, and discarding the seed. Rambutan is most often used in desserts, such as sorbets and puddings, but also in curries and savory dishes. The flavor is similar to lychee and pairs well with other tropical fruits. Cultivation Rambutans are adapted to warm tropical climates, around , and are sensitive to temperatures below . It is grown commercially within 12–15° of the equator. The trees grow well at elevations up to above sea level and do best in deep soil, clay loam, or sandy loam rich in organic matter. They grow on hilly terrain where there is good drainage. Rambutans are propagated by grafting, air-layering, and budding. 
Budded trees may fruit after two to three years with optimum production occurring after eight to ten years. Trees grown from seed bear after five to six years. The aril is attached to the seed in some commercial cultivars, but "freestone" cultivars are available and in high demand. Usually, a single light brown seed is found, which is high in certain fats and oils (primarily oleic acid and arachidic acid) valuable to industry, and used in cooking and the manufacture of soap. Rambutan roots, bark, and leaves have various uses in traditional medicine and in the production of dyes. In some areas, rambutan trees can bear fruit twice annually, once in late fall and early winter, with a shorter season in late spring and early summer. Other areas, such as Costa Rica, have a single fruit season, with the start of the rainy season in April stimulating flowering, and the fruit is usually ripe in August and September. The fragile fruit must ripen on the tree, then they are harvested over a four- to seven-week period. The fresh fruit are easily bruised and have a limited shelf life. An average tree may produce 5,000–6,000 or more fruit ( per tree). Yields begin at in young orchards and may reach on mature trees. In Hawaii, were harvested producing of fruit in 1997. Yields could be increased by improved orchard management, including pollination, and by planting high-yielding compact cultivars. Most commercial cultivars are hermaphroditic; cultivars that produce only functionally female flowers require the presence of male trees. Male trees are seldom found, as vegetative selection has favored hermaphroditic clones that produce a high proportion of functionally female flowers and a much lower number of flowers that produce pollen. Over 3,000 greenish-white flowers occur in male panicles, each with five to seven anthers and a nonfunctional ovary. Male flowers have yellow nectaries and five to seven stamens. About 500 greenish-yellow flowers occur in each hermaphroditic panicle. Each flower has six anthers, usually a bilobed stigma, and one ovule in each of its two sections (locules). The flowers are receptive for about one day but may persist if pollinators are excluded. In Thailand, rambutan trees were first planted in Surat Thani in 1926 by the Chinese Malay K. Vong in Ban Na San. An annual rambutan fair is held during August harvest time. In Malaysia, rambutan flowers from March to July and again between June and November, usually in response to rain following a dry period. Flowering periods differ for other localities. Most, but not all, flowers open early in the day. Up to 100 flowers in each female panicle may be open each day during peak bloom. The initial fruit set may approach 25 percent, but a high abortion level contributes to a much lower level of production at harvest (1 to 3 percent). The fruit matures 15 to 18 weeks after flowering. Rambutan cultivation in Sri Lanka mainly consists of small home gardens. Malwana, a village in the Kelani River Valley, is popular for its rambutan orchards. Their production comes to market in May, June, and July when it is very common to observe seasonal traders along the streets of Colombo. Sri Lanka also has some off-season rambutan production in January and February in areas such as Bibile, Medagama, and Monaragala. Both male and female flowers are faintly sweet-scented and have functional nectaries at the ovary base. Female flowers produce two to three times more nectar than male flowers. 
Nectar sugar concentration ranges between 18–47 percent and is similar between the flower types. Rambutans are an important nectar source for bees in Malaysia. Cross-pollination is a necessity because the anther is absent in most functionally female flowers. Although apomixis may occur in some cultivars, rambutans, like lychee, are dependent upon insects for pollination. In Malaysia, where only about one percent of the female flowers set fruit, no fruit is set on bagged flowers while hand pollination resulted in a 13 percent fruit set. Pollinators may maintain fidelity to either male or hermaphroditic flowers (trees), thus limiting pollination and fruit set under natural conditions where crossing between male and female flowers is required. Production Rambutan is a fruit tree cultivated in humid tropical Southeast Asia. It is a common garden fruit tree and propagated commercially in small orchards. It is one of the best-known fruits of Southeast Asia and is also widely cultivated elsewhere in the tropics including Africa, southern Mexico, the Caribbean islands, Costa Rica, Honduras, Guatemala, Panama, India, Vietnam, Philippines, and Sri Lanka. It is also produced in Ecuador where it is known as achotillo, and on the island of Puerto Rico. , Thailand was the largest producer of rambutans (, ), growing 450,000 tonnes, followed by Indonesia at 100,000 tonnes, and Malaysia, 60,000 tonnes. In Thailand, the major cultivation centers are Chanthaburi Province, followed by Chumphon Province and Surat Thani Province. In Indonesia, the production center of rambutan is in the western parts of Indonesia, which includes Java, Sumatra, and Kalimantan. In Java, the orchards and pekarangan (habitation yards) in the villages of Greater Jakarta and West Java have been known as rambutan production centers since the colonial era, with a trading center in Pasar Minggu, South Jakarta. During 2017 and years before, imports of rambutan to the European Union were about 1,000 tonnes annually, enabling a year-round supply from numerous tropical suppliers. The fruits are usually sold fresh and have a short shelf-life, and are commonly used in making jams and jellies, or canned. Evergreen rambutan trees with their abundant colored fruit make attractive landscape specimens. In India, rambutan is imported from Thailand, as well as grown in the Pathanamthitta District of the southern state of Kerala. Rambutans are not climacteric fruit—that is, they ripen only on the tree and appear not to produce a ripening agent, such as the plant hormone ethylene, after being harvested. However, at post-harvest, the quality of the fruit is affected by storage factors. Low humidity levels, storage time, and incidences of mechanical damage can severely affect the quality of the fruit which would negatively affect the demand for such. In general, the fruit has a short shelf life in ambient conditions but implementing methods that can extend such is a productional advantage. Certain treatments like irradiation and the use of hot-forced air can help in fruit preservation although the former has seen more success. Distribution The center of genetic diversity for rambutans is the Indonesian region. They have been widely cultivated in Southeast Asian areas, such as Malaysia, Thailand, Myanmar, Sri Lanka, Indonesia, Singapore, and the Philippines. It has spread from there to parts of Asia, Africa, Oceania, and Central America. Gallery
Biology and health sciences
Stone fruits
Plants
57414
https://en.wikipedia.org/wiki/Evolutionary%20developmental%20biology
Evolutionary developmental biology
Evolutionary developmental biology (informally, evo-devo) is a field of biological research that compares the developmental processes of different organisms to infer how developmental processes evolved. The field grew from 19th-century beginnings, where embryology faced a mystery: zoologists did not know how embryonic development was controlled at the molecular level. Charles Darwin noted that having similar embryos implied common ancestry, but little progress was made until the 1970s. Then, recombinant DNA technology at last brought embryology together with molecular genetics. A key early discovery was of homeotic genes that regulate development in a wide range of eukaryotes. The field is composed of multiple core evolutionary concepts. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod molluscs, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose. New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility is the neo-Lamarckian theory that epigenetic changes are later consolidated at gene level, something that may have been important early in the history of multicellular life. History Early theories Philosophers began to think about how animals acquired form in the womb in classical antiquity. Aristotle asserts in his Physics treatise that according to Empedocles, order "spontaneously" appears in the developing embryo. In his The Parts of Animals treatise, he argues that Empedocles' theory was wrong. In Aristotle's account, Empedocles stated that the vertebral column is divided into vertebrae because, as it happens, the embryo twists about and snaps the column into pieces. Aristotle argues instead that the process has a predefined goal: that the "seed" that develops into the embryo began with an inbuilt "potential" to become specific body parts, such as vertebrae. Further, each sort of animal gives rise to animals of its own kind: humans only have human babies. Recapitulation A recapitulation theory of evolutionary development was proposed by Étienne Serres in 1824–26, echoing the 1808 ideas of Johann Friedrich Meckel. They argued that the embryos of 'higher' animals went through or recapitulated a series of stages, each of which resembled an animal lower down the great chain of being. For example, the brain of a human embryo looked first like that of a fish, then in turn like that of a reptile, bird, and mammal before becoming clearly human. 
The embryologist Karl Ernst von Baer opposed this, arguing in 1828 that there was no linear sequence as in the great chain of being, based on a single body plan, but a process of epigenesis in which structures differentiate. Von Baer instead recognized four distinct animal body plans: radiate, like starfish; molluscan, like clams; articulate, like lobsters; and vertebrate, like fish. Zoologists then largely abandoned recapitulation, though Ernst Haeckel revived it in 1866. Evolutionary morphology From the early 19th century through most of the 20th century, embryology faced a mystery. Animals were seen to develop into adults of widely differing body plan, often through similar stages, from the egg, but zoologists knew almost nothing about how embryonic development was controlled at the molecular level, and therefore equally little about how developmental processes had evolved. Charles Darwin argued that a shared embryonic structure implied a common ancestor. For example, Darwin cited in his 1859 book On the Origin of Species the shrimp-like larva of the barnacle, whose sessile adults looked nothing like other arthropods; Linnaeus and Cuvier had classified them as molluscs. Darwin also noted Alexander Kowalevsky's finding that the tunicate, too, was not a mollusc, but in its larval stage had a notochord and pharyngeal slits which developed from the same germ layers as the equivalent structures in vertebrates, and should therefore be grouped with them as chordates. 19th century zoology thus converted embryology into an evolutionary science, connecting phylogeny with homologies between the germ layers of embryos. Zoologists including Fritz Müller proposed the use of embryology to discover phylogenetic relationships between taxa. Müller demonstrated that crustaceans shared the Nauplius larva, identifying several parasitic species that had not been recognized as crustaceans. Müller also recognized that natural selection must act on larvae, just as it does on adults, giving the lie to recapitulation, which would require larval forms to be shielded from natural selection. Two of Haeckel's other ideas about the evolution of development have fared better than recapitulation: he argued in the 1870s that changes in the timing (heterochrony) and changes in the positioning within the body (heterotopy) of aspects of embryonic development would drive evolution by changing the shape of a descendant's body compared to an ancestor's. It took a century before these ideas were shown to be correct. In 1917, D'Arcy Thompson wrote a book on the shapes of animals, showing with simple mathematics how small changes to parameters, such as the angles of a gastropod's spiral shell, can radically alter an animal's form, though he preferred a mechanical to evolutionary explanation. But without molecular evidence, progress stalled. In 1952, Alan Turing published his paper "The Chemical Basis of Morphogenesis", on the development of patterns in animals' bodies. He suggested that morphogenesis could be explained by a reaction–diffusion system, a system of reacting chemicals able to diffuse through the body. He modelled catalysed chemical reactions using partial differential equations, showing that patterns emerged when the chemical reaction produced both a catalyst (A) and an inhibitor (B) that slowed down production of A. If A and B then diffused at different rates, A dominated in some places, and B in others. 
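Turing's mechanism can be illustrated with a short numerical sketch. The following Python snippet is purely illustrative: it uses the Gierer–Meinhardt activator–inhibitor equations, a later standard formalization of Turing's idea rather than his original 1952 equations, and the parameter values are arbitrary. An autocatalytic activator also drives production of an inhibitor that suppresses it; because the inhibitor diffuses much faster, small random fluctuations around the uniform state grow into regularly spaced peaks of activator, the kind of spontaneous spatial pattern Turing predicted.
import numpy as np

N, dx, dt, steps = 200, 1.0, 0.005, 100_000   # ring of 200 cells, explicit Euler time-stepping
D_a, D_h = 1.0, 30.0                          # inhibitor diffuses much faster than activator
rho, mu_a, mu_h, basal, sat = 1.0, 1.0, 2.0, 0.05, 0.005   # illustrative kinetic constants

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(N)       # activator, lightly perturbed from uniform
h = np.ones(N)                                # inhibitor, initially uniform

def laplacian(u):
    # discrete Laplacian on a ring (periodic boundaries)
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    # activator: autocatalytic production, suppressed by the inhibitor (mild saturation for robustness)
    da = D_a * laplacian(a) + rho * a**2 / (h * (1.0 + sat * a**2)) - mu_a * a + basal
    # inhibitor: produced wherever the activator is abundant, decays, spreads widely
    dh = D_h * laplacian(h) + rho * a**2 - mu_h * h
    a += dt * da
    h += dt * dh

# regularly spaced activator peaks emerge from the near-uniform start: a Turing pattern
peaks = np.where((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))[0]
print(f"{len(peaks)} activator peaks at cells {peaks.tolist()}")
On a two-dimensional domain the same equations produce spots or stripes depending on the parameters; the explicit Euler scheme and one-dimensional ring are used here only to keep the sketch minimal.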
The Russian biochemist Boris Belousov had run experiments with similar results, but was unable to publish them because scientists thought at that time that creating visible order violated the second law of thermodynamics. The modern synthesis of the early 20th century In the so-called modern synthesis of the early 20th century, between 1918 and 1930 Ronald Fisher brought together Darwin's theory of evolution, with its insistence on natural selection, heredity, and variation, and Gregor Mendel's laws of genetics into a coherent structure for evolutionary biology. Biologists assumed that an organism was a straightforward reflection of its component genes: the genes coded for proteins, which built the organism's body. Biochemical pathways (and, they supposed, new species) evolved through mutations in these genes. It was a simple, clear and nearly comprehensive picture: but it did not explain embryology. Sean B. Carroll has commented that had evo-devo's insights been available, embryology would certainly have played a central role in the synthesis. The evolutionary embryologist Gavin de Beer anticipated evolutionary developmental biology in his 1930 book Embryos and Ancestors, by showing that evolution could occur by heterochrony, such as in the retention of juvenile features in the adult. This, de Beer argued, could cause apparently sudden changes in the fossil record, since embryos fossilise poorly. As the gaps in the fossil record had been used as an argument against Darwin's gradualist evolution, de Beer's explanation supported the Darwinian position. However, despite de Beer, the modern synthesis largely ignored embryonic development to explain the form of organisms, since population genetics appeared to be an adequate explanation of how forms evolved. The lac operon In 1961, Jacques Monod, Jean-Pierre Changeux and François Jacob discovered the lac operon in the bacterium Escherichia coli. It was a cluster of genes, arranged in a feedback control loop so that its products would only be made when "switched on" by an environmental stimulus. One of these products was an enzyme that splits a sugar, lactose; and lactose itself was the stimulus that switched the genes on. This was a revelation, as it showed for the first time that genes, even in organisms as small as a bacterium, are subject to precise control. The implication was that many other genes were also elaborately regulated. The birth of evo-devo and a second synthesis In 1977, a revolution in thinking about evolution and developmental biology began, with the arrival of recombinant DNA technology in genetics, the book Ontogeny and Phylogeny by Stephen J. Gould and the paper "Evolution and Tinkering" by François Jacob. Gould laid to rest Haeckel's interpretation of evolutionary embryology, while Jacob set out an alternative theory. This led to a second synthesis, at last including embryology as well as molecular genetics, phylogeny, and evolutionary biology to form evo-devo. In 1978, Edward B. Lewis discovered homeotic genes that regulate embryonic development in Drosophila fruit flies, which like all insects are arthropods, one of the major phyla of invertebrate animals. Bill McGinnis quickly discovered homeotic gene sequences, homeoboxes, in animals in other phyla, in vertebrates such as frogs, birds, and mammals; they were later also found in fungi such as yeasts, and in plants. There were evidently strong similarities in the genes that controlled development across all the eukaryotes. 
In 1980, Christiane Nüsslein-Volhard and Eric Wieschaus described gap genes which help to create the segmentation pattern in fruit fly embryos; they and Lewis won a Nobel Prize for their work in 1995. Later, more specific similarities were discovered: for example, the distal-less gene was found in 1989 to be involved in the development of appendages or limbs in fruit flies, the fins of fish, the wings of chickens, the parapodia of marine annelid worms, the ampullae and siphons of tunicates, and the tube feet of sea urchins. It was evident that the gene must be ancient, dating back to the last common ancestor of bilateral animals (before the Ediacaran Period, which began some 635 million years ago). Evo-devo had started to uncover the ways that all animal bodies were built during development. The control of body structure Deep homology Roughly spherical eggs of different animals give rise to unique morphologies, from jellyfish to lobsters, butterflies to elephants. Many of these organisms share the same structural genes for bodybuilding proteins like collagen and enzymes, but biologists had expected that each group of animals would have its own rules of development. The surprise of evo-devo is that the shaping of bodies is controlled by a rather small percentage of genes, and that these regulatory genes are ancient, shared by all animals. The giraffe does not have a gene for a long neck, any more than the elephant has a gene for a big body. Their bodies are patterned by a system of switching which causes development of different features to begin earlier or later, to occur in this or that part of the embryo, and to continue for more or less time. The puzzle of how embryonic development was controlled began to be solved using the fruit fly Drosophila melanogaster as a model organism. The step-by-step control of its embryogenesis was visualized by attaching fluorescent dyes of different colours to specific types of protein made by genes expressed in the embryo. A dye such as green fluorescent protein, originally from a jellyfish, was typically attached to an antibody specific to a fruit fly protein, forming a precise indicator of where and when that protein appeared in the living embryo. Using such a technique, in 1994 Walter Gehring found that the pax-6 gene, vital for forming the eyes of fruit flies, exactly matches an eye-forming gene in mice and humans. The same gene was quickly found in many other groups of animals, such as squid, a cephalopod mollusc. Biologists including Ernst Mayr had believed that eyes had arisen in the animal kingdom at least 40 times, as the anatomy of different types of eye varies widely. For example, the fruit fly's compound eye is made of hundreds of small lensed structures (ommatidia); the human eye has a blind spot where the optic nerve enters the eye, and the nerve fibres run over the surface of the retina, so light has to pass through a layer of nerve fibres before reaching the detector cells in the retina, so the structure is effectively "upside-down"; in contrast, the cephalopod eye has the retina, then a layer of nerve fibres, then the wall of the eye "the right way around". The evidence of pax-6, however, was that the same genes controlled the development of the eyes of all these animals, suggesting that they all evolved from a common ancestor. Ancient genes had been conserved through millions of years of evolution to create dissimilar structures for similar functions, demonstrating deep homology between structures once thought to be purely analogous. 
This notion was later extended to the evolution of embryogenesis and has caused a radical revision of the meaning of homology in evolutionary biology. Gene toolkit A small fraction of the genes in an organism's genome control the organism's development. These genes are called the developmental-genetic toolkit. They are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Most toolkit genes are parts of signalling pathways: they encode transcription factors, cell adhesion proteins, cell surface receptor proteins and signalling ligands that bind to them, and secreted morphogens that diffuse through the embryo. All of these help to define the fate of undifferentiated cells in the embryo. Together, they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Among the most important toolkit genes are the Hox genes. These transcription factors contain the homeobox protein-binding DNA motif, also found in other toolkit genes, and create the basic pattern of the body along its front-to-back axis. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. Pax-6, already mentioned, is a classic toolkit gene. Although other toolkit genes are involved in establishing the plant bodyplan, homeobox genes are also found in plants, implying they are common to all eukaryotes. The embryo's regulatory networks The protein products of the regulatory toolkit are reused not by duplication and modification, but by a complex mosaic of pleiotropy, being applied unchanged in many independent developmental processes, giving pattern to many dissimilar body structures. The loci of these pleiotropic toolkit genes have large, complicated and modular cis-regulatory elements. For example, while a non-pleiotropic rhodopsin gene in the fruit fly has a cis-regulatory element just a few hundred base pairs long, the pleiotropic eyeless cis-regulatory region contains 6 cis-regulatory elements in over 7000 base pairs. The regulatory networks involved are often very large. Each regulatory protein controls "scores to hundreds" of cis-regulatory elements. For instance, 67 fruit fly transcription factors controlled on average 124 target genes each. All this complexity enables genes involved in the development of the embryo to be switched on and off at exactly the right times and in exactly the right places. Some of these genes are structural, directly forming enzymes, tissues and organs of the embryo. But many others are themselves regulatory genes, so what is switched on is often a precisely-timed cascade of switching, involving turning on one developmental process after another in the developing embryo. Such a cascading regulatory network has been studied in detail in the development of the fruit fly embryo. The young embryo is oval in shape, like a rugby ball. A small number of genes produce messenger RNAs that set up concentration gradients along the long axis of the embryo. In the early embryo, the bicoid and hunchback genes are at high concentration near the anterior end, and give pattern to the future head and thorax; the caudal and nanos genes are at high concentration near the posterior end, and give pattern to the hindmost abdominal segments. 
The effects of these genes interact; for instance, the Bicoid protein blocks the translation of caudal messenger RNA, so the Caudal protein concentration becomes low at the anterior end. Caudal later switches on genes which create the fly's hindmost segments, but only at the posterior end where it is most concentrated. The Bicoid, Hunchback and Caudal proteins in turn regulate the transcription of gap genes such as giant, knirps, Krüppel, and tailless in a striped pattern, creating the first level of structures that will become segments. The proteins from these in turn control the pair-rule genes, which in the next stage set up 7 bands across the embryo's long axis. Finally, the segment polarity genes such as engrailed split each of the 7 bands into two, creating 14 future segments. This process explains the accurate conservation of toolkit gene sequences, which has resulted in deep homology and functional equivalence of toolkit proteins in dissimilar animals (seen, for example, when a mouse protein controls fruit fly development). The interactions of transcription factors and cis-regulatory elements, or of signalling proteins and receptors, become locked in through multiple usages, making almost any mutation deleterious and hence eliminated by natural selection. The mechanism that sets up every animal's front-back axis is the same, implying a common ancestor. There is a similar mechanism for the back-belly axis for bilaterian animals, but it is reversed between arthropods and vertebrates. Another process, gastrulation of the embryo, is driven by Myosin II molecular motors, which are not conserved across species. The process may have been started by movements of sea water in the environment, later replaced by the evolution of tissue movements in the embryo. The origins of novelty Among the more surprising and, perhaps, counterintuitive (from a neo-Darwinian viewpoint) results of recent research in evolutionary developmental biology is that the diversity of body plans and morphology in organisms across many phyla are not necessarily reflected in diversity at the level of the sequences of genes, including those of the developmental genetic toolkit and other genes involved in development. Indeed, as John Gerhart and Marc Kirschner have noted, there is an apparent paradox: "where we most expect to find variation, we find conservation, a lack of change". So, if the observed morphological novelty between different clades does not come from changes in gene sequences (such as by mutation), where does it come from? Novelty may arise by mutation-driven changes in gene regulation. Variations in the toolkit Variations in the toolkit may have produced a large part of the morphological evolution of animals. The toolkit can drive evolution in two ways. A toolkit gene can be expressed in a different pattern, as when the beak of Darwin's large ground-finch was enlarged by the BMP gene, or when snakes lost their legs as distal-less became under-expressed or not expressed at all in the places where other reptiles continued to form their limbs. Or, a toolkit gene can acquire a new function, as seen in the many functions of that same gene, distal-less, which controls such diverse structures as the mandible in vertebrates, legs and antennae in the fruit fly, and eyespot pattern in butterfly wings. Given that small changes in toolbox genes can cause significant changes in body structures, they have often enabled the same function convergently or in parallel. 
distal-less generates wing patterns in the butterflies Heliconius erato and Heliconius melpomene, which are Müllerian mimics. In so-called facilitated variation, their wing patterns arose in different evolutionary events, but are controlled by the same genes. Developmental changes can contribute directly to speciation. Consolidation of epigenetic changes Evolutionary innovation may sometimes begin in Lamarckian style with epigenetic alterations of gene regulation or phenotype generation, subsequently consolidated by changes at the gene level. Epigenetic changes include modification of DNA by reversible methylation, as well as nonprogrammed remoulding of the organism by physical and other environmental effects due to the inherent plasticity of developmental mechanisms. The biologists Stuart A. Newman and Gerd B. Müller have suggested that organisms early in the history of multicellular life were more susceptible to this second category of epigenetic determination than are modern organisms, providing a basis for early macroevolutionary changes. Developmental bias Development in specific lineages can be biased either positively, towards a given trajectory or phenotype, or negatively, away from producing certain types of change; either may be absolute (the change is always or never produced) or relative. Evidence for any such direction in evolution is, however, hard to acquire and can also result from developmental constraints that limit diversification. For example, in the gastropods, the snail-type shell is always built as a tube that grows both in length and in diameter; selection has created a wide variety of shell shapes such as flat spirals, cowries and tall turret spirals within these constraints. Among the centipedes, the Lithobiomorpha always have 15 trunk segments as adults, probably the result of a developmental bias towards an odd number of trunk segments. In another centipede order, the Geophilomorpha, the number of segments varies between 27 and 191 in different species, but the number is always odd, making this an absolute constraint; almost all the odd numbers in that range are occupied by one or another species. Ecological evolutionary developmental biology Ecological evolutionary developmental biology integrates research from developmental biology and ecology to examine their relationship with evolutionary theory. Researchers study concepts and mechanisms such as developmental plasticity, epigenetic inheritance, genetic assimilation, niche construction and symbiosis.
Biology and health sciences
Basics_4
Biology
57416
https://en.wikipedia.org/wiki/Przewalski%27s%20horse
Przewalski's horse
Przewalski's horse ( ; (); ; Equus ferus przewalskii or Equus przewalskii), also called the takhi (), Mongolian wild horse or Dzungarian horse, is a rare and endangered horse originally native to the steppes of Central Asia. It is named after the Russian geographer and explorer Nikolay Przhevalsky. Once extinct in the wild, since the 1990s it has been reintroduced to its native habitat in Mongolia in the Khustain Nuruu National Park, Takhin Tal Nature Reserve, and Khomiin Tal, as well as several other locales in Central Asia and Eastern Europe. Several genetic characteristics of Przewalski's horse differ from what is seen in modern domestic horses, indicating neither is an ancestor of the other. For example, Przewalski's horse has 33 chromosome pairs, compared to 32 for the domestic horse. Their ancestral lineages split from a common ancestor between 160,000 and 38,000 years ago, long before the domestication of the horse. Przewalski's horse was long considered the only remaining truly wild horse, in contrast with the American mustang and the Australian brumby, which are instead feral horses descended from domesticated animals. That status was called into question when domestic horses of the 5,000-year-old Botai culture of Central Asia were found to be more closely related to Przewalski's horses than to E. f. caballus. The study raised the possibility that modern Przewalski's horses could be the feral descendants of the domestic Botai horses. However, it remains possible that both the Botai horses and the modern Przewalski's horses descend separately from the same ancient wild Przewalski's horse population. Its taxonomic position is still debated, with some taxonomists treating Przewalski's horse as a species, E. przewalskii, others as a subspecies of wild horse (E. ferus przewalskii) or a variety of the domesticated horse (E. caballus). Przewalski's horse is stockily built, smaller, and shorter than its domesticated relatives. Typical height is about , and the average weight is around . They have a dun coat with pangaré features and often have dark primitive markings. Taxonomy Przewalski's horse was formally described as a novel species in 1881 by Ivan Semyonovich Polyakov. The taxonomic position of Przewalski's horse remains controversial, and no consensus exists about whether it is a full species (as Equus przewalskii); a subspecies of Equus ferus the wild horse (as Equus ferus przewalskii in trinomial nomenclature, along with two other subspecies, the domestic horse E. f. caballus, and the extinct tarpan E. f. ferus); or even a subpopulation of the domestic horse. The American Society of Mammalogists considers Przewalski's horse and the tarpan both to be subspecies of Equus ferus, and classifies the domestic horse as a separate species, Equus caballus. Lineage Genetic analysis shows that the takhi and the domestic horse differ significantly, with neither ancestral to the other. The evolutionary divergence of the two populations was estimated to have occurred about 72,000–38,000 years ago, well before domestication, most likely due to climate, topography, or other environmental changes. According to a 2009 study, the earliest known domestic horses were found at settlements of the Botai culture, from about 5500 years ago. These horses were raised for meat and milk. In 2018, a new study indicated ancient horses of the Botai culture are related to takhis, not to domestic horses as was previously thought. 
Specifically, the Botai horses appeared to be ancestral to the modern takhi, because all seven takhis nested within the phylogenetic tree of the 20 Botai horses. No comparison was made to definitively wild early takhis. The authors posit that modern Przewalski's horses are feral descendants of the ancient Botai domesticated animals, rather than representing a surviving population of never-domesticated horses. Another geneticist pointed out that Przewalski's horses may have simply descended from the same wild population that the Botai horses came from, which would still be compatible with the findings of the study. In 2021, William Taylor and Christina Barron-Ortiz disputed the evidence for domestication of Przewalski's horse. Their case was rejected by Alan Outram and colleagues in a paper which was not dated or peer-reviewed. Taylor reiterated his arguments that Przewalski's horse had never been domesticated in an article in Scientific American in 2024. In any case, the Botai horses were found to have negligible genetic contribution to any of the ancient or modern domestic horses studied, indicating that the domestication of the latter was independent, involving a different wild population, from any possible domestication of Przewalski's horse by the Botai culture. Characteristics Przewalski's horse is stockily built in comparison to domesticated horses, with shorter legs, and is much smaller and shorter than its domesticated relatives. Typical height is about , and length is about . It weighs around . The coat is generally dun in color with pangaré features, varying from dark brown around the mane, to pale brown on the flanks, and yellowish-white on the belly, as well as around the muzzle. The legs of Przewalski's horse are often faintly striped, also typical of primitive markings. The mane stands erect and does not extend as far forward, while the tail is about long, with a longer dock and shorter hair than seen in domesticated horses. The hooves of Przewalski's horse are longer in the front and have significantly thicker sole horns than feral horses, an adaptation that improves hoof performance on terrain. Genomics The karyotype of Przewalski's horse differs from that of the domestic horse, having 33 chromosome pairs versus 32, apparently due to a fission of a large chromosome ancestral to domestic horse chromosome 5 to produce Przewalski's horse chromosomes 23 and 24, though conversely, a Robertsonian translocation that fused two chromosomes ancestral to those seen in Przewalski's horse to produce the single large domestic horse chromosome has also been proposed. Many smaller inversions, insertions and other rearrangements were observed between the chromosomes of domestic and Przewalski's horses, while there was much lower heterozygosity in Przewalski's horses, with extensive segments devoid of genetic diversity, a consequence of the recent severe bottleneck of the captive Przewalski's horse population. In comparison, the chromosomal differences between domestic horses and zebras include numerous large-scale translocations, fusions, inversions, and centromere repositioning. Przewalski's horse has the highest diploid chromosome number among all equine species. They can interbreed with the domestic horse and produce fertile offspring, with 65 chromosomes. The mitochondrial genome has 37 genes that are 99.63% identical to that of the domestic horse. 
Ecology and behavior Przewalski reported the horses forming troops of between five and fifteen members, consisting of a mature stallion, his mares and foals. Modern reintroduced populations similarly form family groups of one adult stallion, one to three mares, and their common offspring that stay in the family group until they are no longer dependent, usually at two or three years old. Young females join other harems, while bachelor stallions as well as old stallions who have lost their harems join bachelor groups. Family groups can join to form a herd that moves together. The patterns of their daily lives exhibit horse behavior similar to that of feral horse herds. Stallions herd, drive, and defend all members of their family, while the mares often display leadership in the family. Stallions and mares stay with their preferred partners for years. While behavioral synchronization is high among mares, stallions other than the main harem stallion are generally less stable in this respect. Home range in the wild is little studied, but estimated as in the Hustai National Park and in the Great Gobi B Strictly Protected Area. The ranges of harems are separated, but slightly overlapping. They have few modern predators, but one of the few is the Himalayan wolf. Horses maintain visual contact with their family and herd at all times, and have a host of ways to communicate with one another, including vocalizations, scent marking, and a wide range of visual and tactile signals. Each kick, groom, tilt of the ear, or other contact with another horse is a means of communicating. This constant communication leads to complex social behaviors among Przewalski's horses. The historical population was said to have lived in the "wildest parts of the desert" with a preference for "especially saline districts". They were observed mostly during spring and summer at natural wells, migrating to them by crossing valleys rather than by way of higher mountains. Diet Przewalski horse's diet consists of vegetation. Many plant species are in a typical Przewalski's horse environment, including: Elymus repens, Carex spp., Fabaceae, and Asteraceae. Looking at the species' diet overall, Przewalski's horses most often eat E. repens, Trifolium pratense, Vicia cracca, Poa trivialis, Dactylis glomerata, and Bromus inermis. While the horses eat a variety of different plant species, they tend to favor different species at different times of year. In the springtime, they favor Elymus repens, Corynephorus canescens, Festuca valesiaca, and Chenopodium album. In early summer, they favor Dactylis glomerata and Trifolium, and in late summer, they gravitate towards E. repens and Vicia cracca. In winter the horses eat Salix spp., Pyrus communis, Malus sylvatica, Pinus sylvestris, Rosa spp., and Alnus spp. Additionally, Przewalski's horses may dig for Festuca spp., Bromus inermis, and E. repens that grow beneath the ice and snow. Their winter diet is very similar to the winter diet of domestic horses, but differs from that revealed by isotope analysis of the historical (pre-captivity) population, which switched in winter to browsing shrubs, though the difference may be due to the extreme habitat pressure the historical population was under. In the wintertime, they eat their food more slowly than they do during other times of the year. Przewalski's horses seasonally display a set of changes collectively characteristic of physiologic adaptation to starvation, with their basal metabolic rate in winter being half what it is during springtime. 
This is not a direct consequence of decreased nutrient intake, but rather a programmed response to predictable seasonal dietary fluctuation. Reproduction Mating occurs in late spring or early summer. Mating stallions do not start looking for mating partners until the age of five. Stallions assemble groups of mares or challenge the leader of another group for dominance. Females are able to give birth at the age of three and have a gestation period of 11–12 months. Foals are able to stand about an hour after birth. The rate of infant mortality among foals is 25%, with 83.3% of these deaths resulting from leading stallion infanticide. Foals begin grazing within a few weeks but are not weaned for 8–13 months after birth. They reach sexual maturity at two years of age. Population History Przewalski's-type wild horses appear in European cave art dating as far back as 20,000 years ago, but genetic investigation of a 35,870-year-old specimen from one such cave instead showed an affinity with extinct Iberian horse lineage and the modern domestic horse, suggesting that it was not Przewalski's horse being depicted in this art. Horse skeletons dating to the fifth to the third millennia BCE, found in Central Asia, with a range extending to the southern Urals and the Altai, belong to the genetic lineage of Przewalski's horse. Of particular note are the horses of this lineage found in the archaeological sites of the Chalcolithic Botai culture. Sites dating from the mid-fourth-millennium BCE show evidence of horse domestication. Analysis of ancient DNA from Botai horse specimens from about 3000 BCE reveals them to have DNA markers consistent with the lineage of modern Przewalski's horses. There are sporadic reports of Przewalski's horse in the historical record before its formal characterization. The Buddhist monk Bodowa wrote a description of what is thought to have been Przewalski's horse about AD 900, and an account from 1226 reports an incident involving wild horses during Genghis Khan's campaign against the Tangut empire. In the fifteenth century, Johann Schiltberger recorded one of the first European sightings of the horses in the journal recounting his trip to Mongolia as a prisoner of the Mongol Khan. Another was recorded as a gift to the Manchurian emperor around 1630, its value as a gift suggesting a difficulty in obtaining them. John Bell, a Scottish doctor in service to Peter the Great from 1719 to 1722, observed a horse in Russia's Tomsk Oblast that was apparently this species, and a few decades later in 1750, a large hunt with thousands of beaters organized by the Manchurian emperor killed between two and three hundred of these horses. The species is named after a Russian colonel of Polish descent, Nikolai Przhevalsky (1839–1888) (Nikołaj Przewalski in Polish). An explorer and naturalist, he obtained the skull and hide of an animal shot in 1878 in the Gobi near today's China–Mongolia border. He would travel to the Dzungarian Basin to observe it in the wild. In 1881, the horse received a formal scientific description and was named Equus przevalskii by Ivan Semyonovich Polyakov, based on Przewalski's collection and description, while in 1884, the sole exemplar of the horse in Europe was a preserved specimen in the Museum of the Russian Academy of Sciences in St. Petersburg. This was supplemented in 1894 when the brothers Grum-Grzhimailo returned several hides and skulls to St. Petersburg and described the horse's behavior in the wild. 
A number of these horses were captured around 1900 by Carl Hagenbeck and placed in zoos, and these, along with one later captive, reproduced to give rise to today's population. After 1903, there were no reports of the wild population until 1947, when several isolated groups were observed and a lone filly captured. Although local herdsmen reported seeing as many as 50 to 100 takhis grazing in small groups then, there were only sporadic sightings of single groups of two or three animals after that, mostly near natural wells. Two scientific expeditions in 1955 and 1962 failed to find any. After herders and naturalists reported single harem groups in 1966 and 1967, the last observation of the wild horse in its native habitat was of a single stallion in 1969. Expeditions after this failed to locate any horses, and the species would be designated "extinct in the wild" for over 30 years. Competition with livestock, hunting, capture of foals for zoological collections, military activities, and harsh winters recorded in 1945, 1948, and 1956 are considered to be main causes of the decline in Przewalski's horse population. The wild population was already rare at its first scientific characterization. Przewalski reported seeing them only from a distance and may have instead sighted herds of local onager Mongolian wild asses. He was only able to obtain specimens of the type from Kirghiz hunters. The range of Przewalski's horse was limited to the arid Dzungarian Basin in the Gobi Desert. It has been suggested that this was not their natural habitat, but, like the onager, they were a steppe animal driven to this barren last refuge by the dual pressures of hunting and habitat loss to agricultural grazing. There were two distinct populations recognized by local Mongolians, a lighter steppe variety and a darker mountain one. This distinction is seen in early twentieth-century descriptions. Their mountainous habitat included the Takhiin Shar Nuruu (The Yellow Wild-Horse Mountain Range). In their last decades in the wild, the remnant population was limited to the small region between the Takhiin Shar Nuruu and Bajtag-Bogdo mountain ridges. Captivity Attempts to obtain specimens for exhibit and captive breeding were largely unsuccessful until 1902, when 28 captured foals were brought to Europe. These and a small number of additional captives would be distributed among zoos and breeding centers in Europe and the United States. Many facilities failed in their attempts at captive breeding, but a few programs were established. However, by the mid-1930s, inbreeding had caused reduced fertility, and the captive population experienced a genetic bottleneck, with the surviving captive breeding stock descended from only 11 of the founder captives. In addition, in at least one instance, the progeny of interbreeding with a domestic horse was bred back into the captive Przewalski's horse population. However, recent studies have shown only minimal genetic contribution of this domestic horse to the captive population. The situation was improved when the exchange of breeding animals among facilities increased genetic diversity and there was a consequent improvement in fertility, but the population experienced another genetic bottleneck when many of the horses failed to survive World War II. The most valuable group, in Askania Nova, Ukraine, was shot by German soldiers during World War II occupation, and the group in the United States had died out. 
Only two captive populations in zoos remained, in Munich and in Prague, and of the 31 remaining horses at war's end, only 9 became ancestors of the subsequent captive population. By the end of the 1950s, only 12 individual horses were left in the world's zoos. A wild-caught mare captured as a foal a decade earlier was introduced into the Ukrainian captive population in 1957. This would prove the last wild-caught horse, and with the presumed extinction of the wild population, last sighted in Mongolia in the late 1960s, the captive population became the sole representatives of Przewalski's horse. Genetic diversity received a much-needed boost from this new source, with the spread of her bloodline through the inbred captive groups leading to their increased reproductive success, and by 1965, there were more than 130 animals spread among thirty-two zoos and parks. Conservation efforts In 1977, the Foundation for the Preservation and Protection of the Przewalski Horse was founded in Rotterdam, the Netherlands, by Jan and Inge Bouman. The foundation started a program of exchange between captive populations in zoos worldwide to reduce inbreeding and later began its own breeding program. As a result of such efforts, the extant herd has retained a far greater genetic diversity than its genetic bottleneck made likely. By 1979, when this concerted program of population management to maximize genetic diversity was begun, there were almost four hundred horses in sixteen facilities, a number that had grown by the early 1990s to over 1,500. While dozens of zoos worldwide have Przewalski's horses in small numbers, specialized reserves are also dedicated primarily to the species. The world's largest captive-breeding program for Przewalski's horses is at the Askania Nova preserve in Ukraine. From 1998, thirty-one horses were also released in the unenclosed Chernobyl Exclusion Zone in Ukraine and Belarus. People evacuated the zone after the Chernobyl accident, so now it serves as a deserted de facto nature reserve. Though poaching has taken a toll on numbers, as of 2019 the estimated population in the Chernobyl zone was over 100 individuals. Le Villaret, located in the Cevennes National Park in southern France and run by the Association Takh, is a breeding site for Przewalski's horses that was created to allow the free expression of natural Przewalski's horse behaviors. In 1993, eleven zoo-born horses were brought to Le Villaret. Horses born there are adapted to life in the wild, free to choose their mates, and required to forage independently. This was intended to produce individuals capable of being reintroduced into Mongolia. In 2012, 39 individuals were at Le Villaret. An intensely researched population of free-ranging animals was also introduced to the Hortobágy National Park puszta in Hungary; data on social structure, behavior, and diseases gathered from these animals are used to improve the Mongolian conservation effort. An additional breeding population of Przewalski's horses roams the former Döberitzer Heide military proving ground, now a nature reserve in Dallgow-Döberitz, Germany. Established in 2008, this population comprised 24 horses in 2019. Another population is being established in the Iberian System in Spain, the first free-roaming Przewalski’s horses in Western Europe. In 2024, a Colorado rancher discovered what appears to be a critically endangered Przewalski's horse at a Kansas livestock auction, mistakenly identified as a mule. Another similar horse was found at a Utah sanctuary. 
Genetic tests suggest both are Przewalski's horses, raising concerns about how they ended up in U.S. auctions. The owners care for the animals but hope they can eventually join professional conservation programs. Reintroduction The Przewalski's Horse Reintroduction Project of China was initiated in 1985 when 11 wild horses were imported from overseas. After more than two decades of effort, the Xinjiang Wild Horse Breeding Centre has bred a large number of horses, 55 of which were released into the Kalamely Mountain area. The animals quickly adapted to their new environment. In 1988, six foals were born and survived, and by 2001, over 100 horses were at the centre. , the center hosted 127 horses divided into 13 breeding herds and three bachelor herds. Reintroductions organized by Western European countries started in the 1990s. Several populations have now been released into the wild. A cooperative venture between the Zoological Society of London and Mongolian scientists has successfully reintroduced these horses from zoos into their natural habitat in Mongolia. In 1992, 16 horses were released into the wild in Mongolia, followed by additional animals later. One of the areas to which they were reintroduced became Khustain Nuruu National Park in 1998. Another reintroduction site is Great Gobi B Strictly Protected Area, located at the fringes of the Gobi Desert. In 2001, Przewalski's horses were reintroduced into the Kalamaili Nature Reserve in Xinjiang, China. Since 2004, there has been a program to reintroduce Przewalski's horses that were bred in France into Mongolia. Instrumental to that 2004 reintroduction was Claudia Feh, a Swiss equine specialist and conservation biologist, who led an effort to bring together animals that zoos had conserved to create a breeding population in southern France. Once that population was established, three family groups were relocated to Khovd in western Mongolia. At a site on the northern edge of the Gobi Desert, Feh worked in cooperation with local people to ensure the horses survived and flourished. For this work, Feh received a Rolex Award in 2004. In 2004 and 2005, 22 horses were released by the Association Takh to a third reintroduction site in the buffer zone of the Khar Us Nuur National Park, on the northern edge of the Gobi ecoregion. In the winter of 2009–2010, Mongolia was hit by one of the worst dzud (severe snowy winter) events on record. The population of Przewalski's horse in the Great Gobi B SPA was drastically affected, providing clear evidence of the risks associated with reintroducing small and sequestered species in unpredictable and unfamiliar environments. After reintroduced horses had successfully reproduced, the status of the animal was changed from "extinct in the wild" to "endangered" in 2005, while on the IUCN Red List they were reclassified from "extinct in the wild" to "critically endangered" after a reassessment in 2008, and from "critically endangered" to "endangered" after a 2011 reassessment. In 2011, Prague Zoo started a new project, Return of the Wild Horses. With the support of the public and many strategic partners, yearly transports of captive-bred horses into the Great Gobi B Strictly Protected Area continued. , an estimated total of almost 400 horses existed in three free-ranging populations in the wild.
Prague Zoo has transported horses to Mongolia in several rounds in cooperation with partners (the Czech Air Force, the European Breeding Programme for Przewalski's Horses, the Association pour le cheval de Przewalski: Takh, the Czech Development Agency, the Czech Embassy in Mongolia, and others). The zoo has the longest uninterrupted history of breeding Przewalski's horses in the world and keeps the studbook of the species.
The first reintroduction into the Orenburg region on the Russian steppe occurred in 2016.
In May 2023, a herd of ten Przewalski's horses obtained from the Monts D'Azur Biological Reserve in France was introduced by Rewilding Europe to the Iberian Highlands rewilding landscape in Spain, near Villanueva de Alcorón. Following an acclimatization period, the horses were released into the reserve proper in September. This introduction was intended to address the buildup of dense scrub caused by the decline in traditional sheep grazing that followed rural depopulation. The horses are intended to fill a niche similar to that of the extinct European wild horse and of contemporary domesticated herbivores by opening up the landscape through low-intensity grazing and browsing, thereby enhancing biodiversity and lowering the risk of forest fires. Further introductions are planned.
In June 2024, six mares and a stallion were reintroduced to Kazakhstan from European zoos, ten years after plans to do so were announced. The operation was organised by Prague Zoo, which selected horses from various European programmes; the animals were housed at Tierpark Berlin for some months before being transported to Kazakhstan in Czech army planes.
Assisted reproduction and cloning
In the earlier decades of captivity, insular breeding by individual zoos led to inbreeding and reduced fertility. In 1979, several American zoos began a collaborative breeding-exchange program to maximize genetic diversity. Recent advances in equine reproductive science have also been used to preserve and expand the gene pool.
Scientists at the Smithsonian Institution's National Zoo successfully reversed a vasectomy on a Przewalski's horse in 2007, the first operation of its kind on this species and possibly the first on any endangered species. A vasectomy is normally performed on an endangered animal only under limited circumstances, particularly when an individual has already produced many offspring and its genes are overrepresented in the population; in this case, scientists realized that the animal was one of the most genetically valuable Przewalski's horses in the North American breeding program. The first birth by artificial insemination occurred on 27 July 2013 at the Smithsonian Conservation Biology Institute.
In 2020, the first cloned Przewalski's horse was born, the result of a collaboration between San Diego Zoo Global, ViaGen Equine, and Revive & Restore. The cloning was carried out by somatic cell nuclear transfer (SCNT), in which a viable embryo is created by transplanting the DNA-containing nucleus of a somatic cell into an immature egg cell (oocyte) from which the nucleus has been removed, producing offspring genetically identical to the somatic cell donor. Since the oocyte used came from a domestic horse, this was an example of interspecies SCNT. The somatic cell donor was a Przewalski's horse stallion named Kuporovic, born in the UK in 1975 and relocated three years later to the US, where he died in 1998.
Due to concerns over the loss of genetic variation in the captive Przewalski's horse population, and in anticipation of the development of new cloning techniques, tissue from the stallion was cryopreserved at the San Diego Zoo's Frozen Zoo. Breeding of this individual in the 1980s had already substantially increased the genetic diversity of the captive population, after he was discovered to have more unique alleles than any other horse living at the time, including otherwise lost genetic material from two of the original captive founders.
To produce the clone, frozen skin fibroblasts were thawed and grown in cell culture. An oocyte was collected from a domestic horse, and its nucleus was replaced with a nucleus taken from a cultured Przewalski's horse fibroblast. The resulting embryo was induced to begin dividing, cultured until it reached the blastocyst stage, and then implanted into a domestic horse surrogate mare, which carried it to term and delivered a foal carrying the Przewalski's horse DNA of the long-deceased stallion. The cloned horse was named Kurt after Dr. Kurt Benirschke, the geneticist who developed the idea of cryopreserving genetic material from endangered species; his ideas led to the creation of the Frozen Zoo as a genetic library.
In 2021, Kurt was relocated to the breeding herd at the San Diego Zoo Safari Park. To integrate him into the existing herd, Kurt was partnered with a young female named Holly, a few months older than him, so that he could learn the social and communication behaviors of wild Przewalski's horses. On reaching maturity at three to four years of age, Kurt is intended to become the breeding stallion for the San Diego Zoo herd, passing Kuporovic's genes into the larger captive Przewalski's horse population and thereby increasing the genetic variation of the species.
In 2023, a genetic twin of Kurt, named Ollie, was born from cloning with the help of the San Diego Zoo Global Frozen Zoo. This is the first reported case of more than one clone being successfully produced for any endangered species. Ollie is expected to eventually join Kurt and Holly at the San Diego Zoo Safari Park. Because they were conceived through the transfer of a somatic cell nucleus into an egg cell obtained from a domestic horse donor, Kurt and Ollie both carry the mitochondrial genome of the domestic horse rather than belonging to a Przewalski's horse mitochondrial clade.